Insights into India’s growing AI ecosystem and rising security risks.
Sachinn Adithiya & Tripti Pandey
Insights into India’s growing AI ecosystem and rising security risks.
Sachinn Adithiya & Tripti Pandey
India is rapidly adopting Artificial Intelligence (AI) across banking, retail, healthcare, telecom, and agriculture. In recent years, the Indian AI market has expanded to reach $11.78 billion by the end of 2025, and it is projected to reach $17 billion by 2027 by NASSCOM report. From corporate offices to households, AI is quietly becoming a part of daily life — yet awareness about its security implications remains dangerously low. While AI brings immense opportunities, it also introduces complex cybersecurity challenges that India must address urgently.
AI adoption: Fraud detection, KYC automation, credit scoring, customer support.
AI adoption: Product recommendations, supply-chain prediction, chatbot support.
Retail is one of the fastest - targeted sectors for phishing and credential theft.
Indian consumers are increasingly adopting EVs as AI becomes central to the EV ecosystem. AI now powers advanced Battery Management Systems for predicting battery health, ADAS features for safer driving, predictive maintenance for identifying faults early, and AI-based in-car assistants that improve the overall driving experience.
As of February 2025, India has around 5.6 million registered EVs, according to MoRTH data, and the number is expected to cross 28 million by 2030. A Rhodium report suggests that Indian manufacturers could produce nearly 2.5 million four-wheelers annually by 2030. Tata Motors remains the leading player in the Indian EV sector. As reported by TimesEV, the company plans to invest ?1.6 lakh crore by 2030 and introduce at least 10 EV models.
AI adoption: Medical imaging, diagnosis support, patient monitoring, hospital automation.
AI adoption: Crop forecasting, climate prediction, pest/disease detection.
India added approximately 29.52 GW of renewable capacity in FY2024–25, taking its total non-fossil fuel capacity to ~217–220 GW. AI supports India’s renewable-energy expansion through:
India is targeting 500 GW of clean energy by 2030, with AI playing a pivotal enabling role in achieving grid reliability and climate commitments.
AI adoption: Network optimisation, outage prediction, spam control, threat detection.
Benefits:
Required for EV ADAS, CCTV analysis, medical imaging
Benefits:
Used for energy prediction, stock forecasting, crop yield prediction
Benefits:
Retail, OTT, e-commerce rely heavily on personalization
Benefits:
Artificial Intelligence has transformed India’s digital environment, but it has also intensified cyber threats. Attackers now use AI to scale attacks, automate reconnaissance, bypass filters, and create highly personalized deception. Below are the major AI-enabled threats affecting India today, written in a research-backed manner with citation support.
Phishing has evolved from simple email scams to AI-engineered deception. Attackers now use dark-web LLMs such as WormGPT and FraudGPT to generate polished, grammatically perfect, and personalized messages that closely match an organization’s writing style. These models can produce thousands of unique email variations (polymorphic phishing), making signature-based filters ineffective.
According to StrongestLayer (2025), enterprises witnessed a 1,265% rise in AI-generated phishing attacks, marking it the top email threat for 2025 (StrongestLayer, 2025).
A dangerous trend is deepfake-based phishing, including India’s growing “digital arrest” scams where AI generates realistic video calls impersonating police or government officials, tricking victims into payments.
AI is being used to create malware that adapts dynamically to evade detection. According to CERT-In reports, India recorded 2.2 lakh+ malware incidents in 2024, with 87% originating from phishing emails augmented by AI-crafted messages.
Shadow AI are attacks that occur due to the use of AI tools without approval or security checks by the IT/security team. It means employees are using AI tools like ChatGPT, Sider, Bard, etc. to process company data and create major security risks. In India, over 60% of employees use AI tools, and unintentionally, they leak the company data. Public AI tools store data unless explicitly turned off, and users mostly forget to do this. Banks, IT companies, and startups are the most at risk.
India is among the top 5 ransomware-affected countries worldwide, with attacks increasing by nearly 70% between 2024–2025. AI now enables ransomware operators to perform:
CERT-In data shows that AI-generated phishing emails have become one of the main infection vectors for ransomware campaigns.
Attackers use AI to analyze social media data and craft highly tailored manipulation campaigns. This includes:
AI-driven bots can execute large-scale automated attacks such as:
Unlike traditional bots, AI bots can modify behavior in real time.
AI voice-generation systems can clone a person’s voice with as little as 30 seconds of audio. Attackers use this for:
Voice scams are now common across UPI, fintech, and telecom.
1. DPDP Act (Digital Personal Data Protection Act, 2023)
The DPDP Act is India’s central privacy law and plays an essential role in AI security. It requires:
Since AI systems depend heavily on personal information, the DPDP Act ensures accountability and reinforces privacy as a legal and ethical priority.
2. CERT-In Incident Reporting for AI Threats
CERT-In’s 6-hour reporting requirement should be expanded to include:
3. DLP Tools (Data Loss Prevention) – Protecting Against Shadow AI
DLP solutions help organizations prevent accidental or unauthorized sharing of sensitive data, especially while employees use AI tools like ChatGPT, Copilot, or Gemini. These tools:
Modern solutions like Cyberhaven add deep data intelligence, enabling automatic blocking of high-risk transfers, making them highly effective against Shadow AI threats.
4. XDR (Extended Detection and Response)
XDR platforms detect and respond to cyber threats using machine learning and behavioral analytics. They offer:
As cyberattacks increasingly use AI, XDR provides the speed and intelligence required to keep pace with modern threats.
5. AI Red Teaming –AI Security Testing
AI Red Teaming evaluates AI systems by simulating adversarial attacks to uncover vulnerabilities before they are exploited. It identifies:
This practice is recommended by NIST, CERT-In, and the EU AI Act for securing high-risk AI deployments.
1. Predictive XDR (From Detection to Anticipation)
Future XDR platforms will shift from detecting attacks to predicting them. Through reinforcement learning, predictive XDR will:
This level of anticipation is essential for defending against machine-speed attacks.
2. LLM Firewalls (Protecting Model Integrity)
LLM Firewalls will become mandatory for enterprise AI security. They will:
These firewalls act as dedicated protective layers around AI systems.
3. Prepare for Quantum Risks
Quantum computers can break today’s encryption in future.
Companies handling banking, telecom, and Aadhaar-linked data must start adopting quantum-safe encryption.
This protects long-term data from “harvest now, decrypt later” attacks.
4. Deploy Deepfake Detection Systems
Deepfake scams in India are rising fast — especially financial fraud.
A national-level deepfake detection system can help:
5. Organizational AI Usage Policies & Workforce Awareness
Shadow AI and internal misuse remain major threats. Organizations must enforce:
AI is expanding quickly in India and is now deeply used in banking, retail, healthcare, telecom, and agriculture. But this growth comes with a sharp increase in cyber threats such as spyware attacks, deepfake scams, and large data breaches. Because of this, AI adoption and security must now go together.
India needs stronger defense measures like Zero Trust, strict DPDP compliance, CERT-In readiness, MFA, DLP, and AI-driven threat detection. These are no longer optional — they are essential for protecting people, companies, and national infrastructure.
If India strengthens its security now, AI can grow safely and support long-term progress. But ignoring security will create bigger risks than benefits. The coming years will decide how securely India enters the AI era.
We sincerely thank our mentor, Mr. Prabhat Pathak, CTO & Cyber Security Head, for his continuous guidance and valuable insights. His direction on using updated industry data, structuring the report, and analyzing AI’s impact significantly strengthened our research. His expertise in Core AI Models, AI Security Threats, and Predictive XDR greatly enhanced the clarity and accuracy of our work. We truly appreciate his mentorship and support in ensuring the report meets industry standards.
Tripti Pandey- Cybersecurity Associate – JesperApps (Linkedln)
Sachinn Adithiya – Cybersecurity Associate – JesperApps (Linkedln)