
India Tightens Rules on AI-Generated Content Under Updated IT Intermediary Regulations

The Indian government has taken a significant step toward regulating artificial intelligence–generated content by amending the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The changes, notified through Gazette notification G.S.R. 120(E) and signed by Joint Secretary Ajit Kumar, will come into effect on 20 February 2026. The amendments establish a formal compliance framework for platforms that host or distribute AI-generated content, including deepfake videos, synthetic audio, and altered visuals.

Clear Labelling and Traceability at the Core

At the heart of the new rules is a strong emphasis on transparency. Social media platforms and digital intermediaries will now be required to clearly label all synthetically generated information (SGI) so users can immediately identify whether content is AI-generated or manipulated. In addition, platforms must embed metadata and unique identifiers that allow such content to be traced back to its origin. Once applied, these labels cannot be altered, hidden, or removed, ensuring long-term traceability and accountability.
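To make the metadata requirement concrete, here is a minimal sketch in Python of what an embedded provenance record with a unique identifier could look like. The rules mandate traceable metadata and identifiers but do not prescribe a schema, so the field names, the build_sgi_label function, and the "example-video-model" generator string below are illustrative assumptions, not the notified format.

    import hashlib
    import json
    import uuid
    from datetime import datetime, timezone

    def build_sgi_label(content: bytes, generator: str) -> dict:
        """Build an illustrative provenance record for a piece of
        synthetically generated information (SGI)."""
        return {
            "sgi": True,                                 # marks the content as AI-generated
            "label_id": str(uuid.uuid4()),               # unique identifier for traceability
            "content_sha256": hashlib.sha256(content).hexdigest(),  # hash binds the label to this exact content
            "generator": generator,                      # tool or model that produced the content
            "labelled_at": datetime.now(timezone.utc).isoformat(),
        }

    # Example: label a synthetic video payload and print the record.
    record = build_sgi_label(b"...synthetic video payload...", generator="example-video-model")
    print(json.dumps(record, indent=2))

In practice a platform would also store such a record server-side and bind it to the hosted asset, since the rules require that labels cannot simply be stripped or hidden on the client.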

First Legal Definition of AI-Generated Content

For the first time, the Indian government has formally defined “synthetically generated information.” Under the amended rules, SGI covers any audio, visual, or audio-visual content that is created or modified using computer resources such that it appears real and depicts people or events as authentic.

However, the rules also clarify important exemptions. Routine edits such as colour correction, noise reduction, compression, and translations are excluded, provided they do not alter or distort the original meaning. Similarly, research papers, training materials, PDFs, presentations, and fictional or illustrative drafts are not covered by the SGI definition.

Stricter Responsibilities for Social Media Platforms

Major platforms such as Instagram, YouTube, and Facebook will shoulder most of the compliance burden. Under Rule 4(1A), platforms must ask users at the time of upload whether the content has been generated using AI. Crucially, compliance will not rely solely on self-declaration. Platforms are required to deploy automated verification tools to cross-check content format, source, and nature before it goes live.

If content is flagged as synthetic, it must carry a visible disclosure label informing viewers that it is AI-generated. Platforms that knowingly allow violations may be deemed to have failed their due diligence obligations.
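As a rough sketch of how this two-step check might be wired together, the Python below combines the user's upload-time declaration with an automated detector before attaching a disclosure label. The detect_synthetic stub, the Upload dataclass, and the label text are assumptions for illustration; the rules require automated verification but leave the choice of tooling to platforms.

    from dataclasses import dataclass

    @dataclass
    class Upload:
        content: bytes
        user_declared_ai: bool  # answer to the upload-time question

    def detect_synthetic(content: bytes) -> bool:
        # Placeholder for the platform's automated verification tool
        # (e.g. a deepfake classifier or watermark detector); the rules
        # mandate such a check but do not name a specific technology.
        return False  # stub result for this sketch

    def process_upload(upload: Upload) -> dict:
        # Rule 4(1A): take the user's declaration, but cross-check it
        # with automated tooling rather than relying on it alone.
        flagged = upload.user_declared_ai or detect_synthetic(upload.content)
        return {
            "publish": True,
            "disclosure_label": "AI-generated content" if flagged else None,
        }

    # Example: a user declares their clip as AI-generated.
    result = process_upload(Upload(content=b"...clip...", user_declared_ai=True))
    print(result["disclosure_label"])  # -> AI-generated content

A production pipeline would of course block or escalate uploads where the declaration and the detector disagree; this sketch only shows where the disclosure label enters the flow.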

An earlier proposal requiring AI visuals to occupy at least 10% of screen space for disclosures has been dropped following objections from industry bodies such as IAMAI and companies including Google, Meta, and Amazon. While watermark size requirements have been relaxed, mandatory labelling remains non-negotiable.

Faster Compliance Timelines

The amendments significantly shorten response and compliance deadlines:

  • Deadline to act on certain legal orders: cut from 36 hours to 3 hours
  • Compliance window: cut from 15 days to 7 days
  • Action deadline: cut from 24 hours to 12 hours

These tighter timelines aim to limit the rapid spread of harmful or misleading synthetic content.

Criminal Liability for Harmful AI Content

The new framework explicitly links AI-generated content with existing criminal laws. Synthetic material involving child sexual abuse, obscene content, false electronic records, explosives-related material, or deepfakes impersonating real individuals may attract penalties under laws such as the Bharatiya Nyaya Sanhita, the POCSO Act, and the Explosive Substances Act.

User Warnings and Limits on Safe Harbour

Platforms must also issue mandatory user warnings at least once every three months, in English or any language listed in the Eighth Schedule of the Constitution. These warnings must inform users about penalties associated with the misuse of AI-generated content.

Finally, the government has clarified that intermediaries cannot claim Section 79 safe harbour protection under the IT Act if they fail to act against violations involving synthetic content under the new rules.

Together, these amendments mark a decisive move by India to balance innovation in artificial intelligence with accountability, transparency, and user safety in the digital ecosystem.
