Take down deepfakes within two hours: India’s new IT Rules

New Delhi: The Centre on Tuesday brought in a stricter compliance regime for social media companies such as X, Facebook, Instagram, and Telegram, among others, by formally notifying a law aimed at combating the misuse of artificial intelligence (AI) through deepfakes and other sensitive ‘synthetic’ content.

At the same time, it dropped a contentious proposal to watermark 10% of practically all online content, a move hailed by lawyers, industry stakeholders and policy consultants as a victory for the industry.

Under the amended Information Technology (IT) Rules, 2026, notified on Tuesday, social media intermediaries will only need to label AI-generated or modified content where “such information appears to be real, authentic or true and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as indistinguishable from a natural person or real-world event”.

The watermark proposal was part of the initial draft AI rules that the ministry of electronics and information technology (MeitY) had published on 22 October.

Alongside, the government has sharply tightened enforcement timelines for taking down objectionable material from digital platforms.

Non-consensual sexual imagery — including deepfakes — must now be removed by a platform within two hours, down from 24 hours previously. Any other unlawful content must be taken down within three hours of a user report or a government or court order, down from the previous 36-hour timeline.

Complaints related to content linked with defamation, harassment and other legal violations will need to be resolved within 36 hours, instead of 72 hours.

Further, the timeline for grievance officers of digital platforms to issue a final decision on user reports has been cut from 15 days to seven.

Social media platforms will also need to inform users about privacy laws, policies, prohibited content and the recourse available to them at least once every three months, instead of once a year previously.

Failure to comply with the new timelines will expose companies to criminal liability under the country’s existing social media intermediary laws. Companies falling under the intermediary definition will have to comply with the law starting 20 February, giving them a 10-day window.


Even as some have flagged the stricter timelines as a move that could face industry pushback, the government is firm that compliance is not a challenge.

“Platforms have demonstrated the capacity to act within minutes — tech companies have very clever technical features and resources seen at exhibitions, which allow them to do much more than they sometimes admit,” a senior government official with direct knowledge of the proceedings said. “The three-hour window is a reasonable obligation given their technological capabilities.”

The official, who spoke on condition of anonymity, also dismissed concerns of potential censorship through the new rules. “Government takedown requests constitute a tiny fraction, less than 1-2%, of total takedowns, while the vast majority are done by platforms enforcing their own community guidelines,” the official said.

Queries sent to Meta, Google, YouTube and X were not immediately answered at the time of publication.


Industry feedback

“Prima facie the final set of rules appears to be fairly balanced, and has removed the primary objections that the industry had floated in terms of the 10% watermarking rule,” said Ashish Aggarwal, vice-president and head of public policy at industry body Nasscom. “The timeline revisions, while potentially subject to pushback from certain quarters, are in keeping with the idea that it takes only a few seconds or minutes for AI platforms to generate explicit or unlawful content, which is then disseminated.”

Aggarwal added that most such filtering today is automated rather than done manually. “The operations part should therefore not be a key challenge, but obviously each platform needs to assess the implications, and we need to understand if there are, importantly, challenges as all intermediaries will fall under this definition,” he added.

Others, however, said the latest amendment adds a compliance burden for companies because of how AI content is defined in India.

“While the notified form of AI rules tries to address some of the critical issues that our society has been grappling with, one of the key challenges in compliance arises from the fact that the definition and coverage of intermediary in India has become heavily convoluted and complicated,” said Supratim Chakraborty, partner at law firm Khaitan & Co.

An intermediary is currently defined, in effect, as any platform that hosts user-generated content and enjoys legal protection from liability for that content, provided it complies with the intermediary rules mandated by the Centre and takes the steps prescribed by law to prevent harm.

Chakraborty added that if the intention was to simplify regulation on the ground and implement a progressive regime for cutting-edge technologies, India may consider separate definitions for intermediaries and digital businesses working in the field of AI, and “not offer a blanket definition covering varied kinds of businesses which cannot be necessarily clubbed under the same head”.


Rutuja Pol, partner at law firm Ikigai, added that the 10-day window to comply with the amendments “will likely require all intermediaries to come up with modified or even new workflows and processes in order to comply practically overnight”.

“More importantly, the sweeping classification of all AI tools as intermediaries continues to perpetuate the flaw with intermediary classification that the 2025 AI governance guidelines themselves recognise,” she added.

The final draft of India’s AI law exempted use cases such as applying filters or creative filmmaking, terming them “routine or good-faith editing, formatting, enhancement, technical correction, colour adjustment, noise reduction, transcription, or compression that does not materially alter, distort, or misrepresent the substance, context, or meaning of the underlying audio, visual or audio-visual information”.