India Moves to Regulate AI and Deepfakes: New Laws, Technology, and Ethical Roadmap Take Shape

As AI-powered misinformation and manipulated media become increasingly prevalent, India is taking decisive steps to regulate deepfakes and protect citizens. While comprehensive legislation is still evolving, recent developments point to a clear direction: balancing innovation with accountability.


The Rise of AI Misuse and Legal Vacuums

Artificial intelligence is no longer just tech talk but an everyday concern, with misuse ranging from cyber fraud to political manipulation. Existing laws, such as provisions of the Indian Penal Code and the Information Technology Act, 2000, cover general cybercrime, but they do not specifically address AI-generated content such as deepfakes, disinformation, or manipulated media.

In high-profile cases such as Ankur Warikoo v. John Doe, Indian courts have granted interim relief using civil-law tools such as injunctions and defamation suits, but these remedies are piecemeal and reactive. The Delhi High Court has repeatedly urged the central government to formulate robust AI-specific guidelines, calling deepfakes “a menace in society.”


Karnataka’s Controversial Anti-Misinformation Bill

In the meantime, Karnataka has proposed the Misinformation and Fake News (Prohibition) Bill, which would impose up to seven years in prison on creators of “fake news,” including misinformation spread through AI-based deepfakes. The bill also targets content promoting superstition and anti-feminist views, but activist groups warn that its vague definitions could result in censorship.


Deepfake-Specific Legislation: The AI TRA Bill

India’s central government is working on broader legislation under the AI TRA Bill, 2024, which incorporates a dedicated Deepfake Prevention Bill, 2023. Key provisions include:

  • Criminal penalties of up to five years’ imprisonment or fines for creating or distributing deepfakes without consent, especially in cases involving sexual content, fraud, or identity theft.
  • A National AI and Technology Regulatory Authority (NAITRA) to oversee enforcement, issue directives, and penalize both individuals and organizations.

Detection Technologies & Institutional Safeguards

India is not only focused on laws but also on detection tools and governance infrastructure:

  • CERT-In, the national cybersecurity agency, issued advisories in late 2024 to help individuals and organizations identify and counter deepfakes.
  • The newly established IndiaAI Safety Institute, set up under MeitY, will focus on standards-setting, risk detection, and safe AI research, backed by collaborations with IITs, tech firms, and UNESCO.

Adding strength on the technology front is Vastav AI, billed as India’s first dedicated deepfake detection system. Developed by cybersecurity firm Zero Defend Security, it reportedly achieves 99% accuracy in identifying AI-manipulated images, audio, and video, and is already available to law enforcement agencies.
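Vastav AI’s internals are proprietary, but most deepfake detectors follow the same broad pattern: media is preprocessed, passed through a classifier trained on both genuine and manipulated samples, and given a manipulation score. The sketch below is purely illustrative of that pattern; it assumes a hypothetical fine-tuned ResNet-18 checkpoint (detector_weights.pt) and is not Vastav AI’s or CERT-In’s actual method.

```python
# Illustrative sketch of a generic image deepfake detector (NOT Vastav AI).
# Assumes PyTorch, torchvision, and Pillow are installed.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# A ResNet-18 backbone with a 2-class head: index 0 = "real", index 1 = "manipulated".
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)
# In practice the model would be fine-tuned on labelled real/fake media, e.g.:
# model.load_state_dict(torch.load("detector_weights.pt"))  # hypothetical checkpoint
model.eval()

# Standard ImageNet-style preprocessing for the backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def manipulation_score(path: str) -> float:
    """Return the model's estimated probability that the image is AI-manipulated."""
    img = Image.open(path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)   # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)
        probs = torch.softmax(logits, dim=1)
    return probs[0, 1].item()              # probability of the "manipulated" class

if __name__ == "__main__":
    print(f"Manipulation score: {manipulation_score('suspect_frame.jpg'):.2f}")
```

Production systems layer far more on top of this, such as frame-by-frame video analysis, audio spectrogram checks, and forensic artifact detection, but the basic classify-and-score workflow is the same.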


Copyright, Oversight, and Legislative Reform

Adding to the reform agenda, a cabinet-appointed panel is currently reviewing whether copyright law, codified in the Copyright Act, 1957, is adequate to govern AI-generated content. The panel, convened by the Commerce and IT ministries, aims to guide reforms or propose new frameworks.

Meanwhile, MeitY and the Bureau of Indian Standards (BIS) are drawing up standards for algorithmic ethics, transparency, and accountability under the “Responsible AI” and “Digital India Act” proposals.


Table: India’s Multi-Pronged Approach to Deepfake Regulation

Focus Area              | Initiatives Underway
------------------------|-----------------------------------------------------------------------------
Legislation             | AI TRA Bill & Deepfake Prevention Bill with criminal penalties & enforcement authority.
Enforcement Mechanism   | Creation of the NAITRA Authority for compliance and legal oversight.
Regional Action         | Karnataka’s draft law on misinformation (critics warn of overreach).
Detection Technology    | Vastav AI: real-time forensic deepfake detector with high reported accuracy.
Cybersecurity Advocacy  | CERT-In advisories and institutional training.
Regulatory Frameworks   | IndiaAI Safety Institute; upcoming standards via MeitY and BIS.
Copyright Reform        | Expert panel evaluating AI’s impact on current IP laws.

Broader Implications and Challenges

  • Accountability vs. Free Speech: Vague definitions in state laws raise free-speech concerns.
  • Detection vs. Enforcement: AI technology is evolving fast, and legislation must keep pace.
  • Awareness Gaps: Public awareness of deepfakes remains low; awareness campaigns are crucial.
  • Policy Coherence: The states and the Centre must coordinate to avoid conflicting rules without diluting the law.
  • Governance Infrastructure: Institutions like NAITRA and IndiaAI must be competent, transparent, and well-funded.

Parting Thoughts

India is rapidly defining its strategy to tackle AI misuse through a mix of legislation, enforcement mechanisms, technology, and oversight bodies. By integrating tools like Vastav AI with legal reforms, the country is striving to protect citizens against deception while fostering AI innovation, but the real test lies in implementation. The efficacy of these measures will depend on how well they handle real-world misuse, global technology shifts, and the fine balance between free expression and societal safety.
