India takes the first shot at regulating artificial intelligence
New Delhi: India has taken the first step to regulate artificial intelligence (AI) and curb its misuse on the internet.
New rules proposed by the ministry of electronics and information technology (Meity) on Wednesday require social media platforms to mandate that their users declare any AI-generated or AI-altered content. While the obligation to label content will rest with social media intermediaries, the companies may flag the accounts of users who violate the law.
To label AI content clearly, companies will need to display visible AI watermarks and labels across more than 10% of the content's duration or size. Social media firms may lose their safe-harbour protection if violations are not flagged proactively.
Industry stakeholders have until 6 November to provide feedback on the draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
The draft reflects growing concern about the rise of deepfakes: fabricated content that mimics a person's appearance, voice, mannerisms or other traits. Powerful tools such as OpenAI's ChatGPT and Google's Gemini have made generating such content easier. While Big Tech companies enforce safety guardrails against the use of their platforms to impersonate public figures, Google's 'Nano Banana' image model (technically named Gemini 2.5 Flash Image) has heightened worries because of its ability to create realistic duplicate images of people.
Union IT minister Ashwini Vaishnaw said at a press briefing on Wednesday that the amendment “raises the level of accountability” for users, companies and the government alike, as the volume of deepfake content rises on the internet. “Enforcement of orders with social media intermediaries will now be done by officers at a designation of joint secretary and above at the central government, and DIG and above in case take-down reports are filed by the police bodies,” Vaishnaw added.
“The Centre has already consulted with top AI companies, who have indicated that using metadata to identify AI-altered content is possible. We have notified the rules in accordance, as deepfake content creates social issues at scale,” a top government official further added.
He also said the obligation of identifying and reporting deepfakes will lie with companies, and not users. The new rules will seek to make AI content part of the community guidelines of social media companies, the official said.
On 22 September, consulting firm Gartner said that 62% of the 302 senior enterprise cybersecurity executives it surveyed reported their organizations had faced at least one deepfake attack that cloned an executive's voice or appearance.
“The proposed amendments to the IT Rules are a significant step in India’s evolving approach to AI governance. By formally defining synthetically generated information and mandating labelling norms for AI-generated content, the government is proactively addressing one of the most complex challenges of the digital age—ensuring transparency and trust in online information,” said Dhruv Garg, founding partner at policy think tank India Governance and Policy Project (Igap).
“As highlighted in our recent research on global legal responses to deepfakes, regulatory safeguards must be carefully designed to prevent misuse of such provisions in ways that could inadvertently restrict legitimate expression or artistic, satirical, and creative uses of synthetic media,” Garg said. “Balancing authenticity and accountability with freedom of speech will be key to the success of this framework.”
Meity’s draft comes after the parliamentary standing committee on home affairs, in its 254th report titled ‘Cyber crime: Ramifications, protection and prevention’, said on 20 August that the Centre may need to strengthen the existing legal framework to handle any content generated by AI.
“The committee recommends that to address issues of deepfake or obscene content being uploaded on social media, Meity should consider developing an innovative technological framework mandating all photos, videos and similar content shared on digital platforms to have a watermark as it would help to prove the origin of the content and make it more difficult to edit or manipulate. Further, to make this initiative functional, Meity should set up uniform technical standards for media provenance, while Cert-In (Indian Cyber Emergency Response Team) would act as the coordinator for monitoring and issuing detection alerts,” the committee’s report had observed.
Cases of AI misuse have been rising. On 19 September, the Delhi High Court issued an interim order in favour of film producer Karan Johar, restraining third parties from using AI-generated deepfake content for commercial purposes or any form of impersonation. On 10 September, actor Aishwarya Rai Bachchan won a similar directive from the court against AI-driven misuse of her identity.
Union finance minister Nirmala Sitharaman also flagged concerns about the rising volume of deepfake videos, speaking at a session of the Global Fintech Fest in Mumbai on 7 October.