Mandatory AI Labels and 3-Hour Takedown Deadlines Enforced

In a significant step to combat the rising threat of AI-generated misinformation, the Indian government has introduced stricter regulations targeting deepfakes and other synthetic content on digital platforms.

The Ministry of Electronics and Information Technology (MeitY) has amended the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, bringing AI-created or altered material—including deepfake videos, synthetic audio, and manipulated visuals—under a formal regulatory framework for the first time. These changes, notified recently, will take effect from February 20, 2026.

Under the updated rules, social media intermediaries and platforms must ensure that all AI-generated content carries clear, prominent, and visible disclosures to users. Platforms are required to embed persistent metadata and unique identifiers in such material so that its origin and the tools used to create it remain traceable. Once applied, these labels and metadata must not be altered, hidden, or removed.
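
The amendment mandates permanent labels and traceable metadata but, as reported, does not prescribe a schema or a signing mechanism. The Python sketch below shows one hypothetical way a platform could attach a tamper-evident provenance record; the field names, the signing key, and the use of an HMAC are illustrative assumptions, not requirements drawn from the rules.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical platform signing key; a real system would use a managed key
# service or an HSM, never a hard-coded secret.
SIGNING_KEY = b"platform-provenance-key"

def build_provenance_record(content: bytes, tool_name: str) -> dict:
    """Build a persistent, tamper-evident provenance record (schema assumed)."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generating_tool": tool_name,
        "label": "AI-generated",
        "applied_at": datetime.now(timezone.utc).isoformat(),
    }
    # Signing the canonical JSON makes any later alteration of the label or
    # metadata detectable, supporting the non-removal requirement.
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return record

print(build_provenance_record(b"<media bytes>", "example-gen-model"))
```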

Users uploading content must declare whether it is synthetically generated, and large platforms must deploy automated verification tools that check these declarations against the material's format, source, and other characteristics. Content identified as AI-altered must be visibly labelled. Routine edits, such as colour adjustments, noise reduction, compression, or translations that do not alter meaning, are exempt, as are hypothetical or illustrative drafts.
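
The report specifies what automated checks must examine (format, source, and other characteristics) but not how the labelling decision is made. The sketch below is a hypothetical heuristic for that decision; the metadata keys, detector score, and threshold are invented for illustration and stand in for whatever classifiers platforms actually deploy.

```python
def requires_ai_label(user_declared_synthetic: bool,
                      metadata: dict,
                      detector_score: float,
                      threshold: float = 0.8) -> bool:
    """Decide whether content must carry a visible AI label (illustrative)."""
    # A user declaration is taken at face value: declared content is labelled.
    if user_declared_synthetic:
        return True
    # Provenance markers embedded by generation tools (key names hypothetical).
    if metadata.get("label") == "AI-generated" or "generating_tool" in metadata:
        return True
    # Otherwise fall back to an automated detector examining format, source,
    # and other characteristics; the score here is a stand-in for its output.
    return detector_score >= threshold

# Provenance marker found, so the label applies despite no user declaration.
print(requires_ai_label(False, {"generating_tool": "example-gen-model"}, 0.1))
```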

The government has also drastically shortened compliance timelines for removing unlawful content. Platforms must now act on takedown orders issued by courts or competent authorities within three hours, down from the previous 36 hours. Other windows shrink in step: a 15-day period is cut to seven days, and a 24-hour window to 12 hours. For user grievances, platforms must acknowledge complaints within two hours and resolve them within seven days.
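
Because each new deadline is a fixed offset from the triggering event, compliance tracking reduces to simple date arithmetic. The sketch below encodes the timelines reported above; the dictionary keys are illustrative labels for the obligations, not statutory terms.

```python
from datetime import datetime, timedelta, timezone

# Offsets taken from the timelines reported above; key names are illustrative.
DEADLINES = {
    "takedown_order": timedelta(hours=3),        # down from 36 hours
    "shortened_window": timedelta(hours=12),     # down from 24 hours
    "grievance_acknowledgement": timedelta(hours=2),
    "grievance_resolution": timedelta(days=7),   # down from 15 days
}

def compliance_deadline(received_at: datetime, obligation: str) -> datetime:
    """Return the latest permissible time to act on a received obligation."""
    return received_at + DEADLINES[obligation]

order = datetime(2026, 2, 20, 9, 0, tzinfo=timezone.utc)
print(compliance_deadline(order, "takedown_order"))  # 2026-02-20 12:00:00+00:00
```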

Additionally, platforms must periodically warn users—at least once every three months—about the penalties for misusing synthetically generated information, particularly in cases involving child sexual abuse material, obscene content, false electronic records, impersonation using a real person’s identity or voice, or material related to explosives. Such misuse will invite action under various criminal laws.

Oversight and enforcement rest with MeitY, and users can appeal platform decisions to the Grievance Appellate Committee. Responsibility for compliance has been placed directly on platforms and their senior officers.

These measures reflect growing official concern over the potential for AI tools to spread deception, especially in sensitive contexts like elections and public discourse.
