India’s bet on light-touch AI regulation is pragmatic—but will new norms need legal teeth?

In the absence of a dedicated law to regulate fast-evolving artificial intelligence (AI) technologies, some of which impersonate humans and even operate autonomously, India’s new guidelines for AI governance mark a pragmatic step forward.

With the Digital Personal Data Protection (DPDP) Act yet to be implemented and the Digital India Act still pending, the government’s techno-legal approach contrasts sharply with rigid regulatory models of the West. It seeks to test corporate compliance while encouraging innovation, urging companies to voluntarily embed safety and accountability into AI design.

AI is now seen as pivotal to India’s development goals. According to Niti Aayog’s AI for Viksit Bharat report, AI could add $1-1.4 trillion to India’s GDP by 2035, with productivity gains generating $500-600 billion, innovation contributing another $280-475 billion and other sources making up the rest.

An AI foundation is being laid. Apart from the Centre’s semiconductor thrust, its ₹10,000-crore IndiaAI Mission aims to build AI capacity across infrastructure, data, talent and adoption.

It plans to deploy over 38,000 graphics processing units (GPUs) through a shared network and has identified 12 firms to develop India-specific language models. The AI Kosha repository hosts local datasets for innovators to use. AI integration with UPI and the Account Aggregator system is expected to drive financial inclusion through voice-based access, fraud detection and affordable services.

Yet, AI remains a double-edged sword. Agentic AI systems capable of reasoning, planning and acting on our behalf are advancing rapidly, but experts warn they may soon surpass humans in research, communication and more, even as deepfake threats grow.

India’s AI Governance Guidelines, released this week by the ministry of electronics and information technology (MeitY), aim to strike a balance between innovation and safety. The framework outlines seven key principles: trust, fairness and equity, human-centric design, responsible innovation, design openness, accountability and safety; and proposes six pillars: infrastructure, capacity building, policy, regulation, institutions and risk mitigation.

So far, India has relied on existing laws to address AI misuse. The IT Act of 2000 and its 2021 rules govern online platforms, which, if recently proposed amendments go through, would have to monitor GenAI content for labelling and takedowns, deepfakes included.

The guidelines acknowledge that the IT Act needs an update. Their drafting panel has recommended that roles be defined across the AI value chain (developer, deployer or user). It has also called for the IT Act’s safe harbour provision, which shields platforms from liability for what users post online, to be revisited, as it may not apply to AI systems that generate or modify content on their own.

Further, as rules under the DPDP Act take shape, key questions remain: Can AI models train on publicly available personal data? How will data minimisation and purpose limitation rules apply? What roles will consent managers and dynamic consent play?

To address such concerns, this week’s guidelines propose a high-level AI governance group involving MeitY and regulators such as RBI, Sebi, Trai and CCI. They also urge that various other state agencies be enlisted to help frame and enforce AI safety and technical standards.

All said, the government is hoping for big AI gains with little loss of privacy or public trust. However, if voluntary compliance falls short, we may need a dedicated AI law to make publicly determined norms legally binding.
