Are India’s AI rules future-proof? Let’s test them against 2030 scenarios of an evolving internet
Announcing new AI rules on the eve of a high-profile global gathering allows India to signal that it takes AI harms seriously. But beyond the optics, the substance of these rules deserves attention.
At their core, the amendments focus on “synthetically generated information,” which is content created or materially altered using AI and made available online via digital services like social media. Over the past few years, the world has witnessed an explosion of AI-generated political memes, deepfakes and morphed images, including sexually explicit fabrications involving real individuals. These have done significant harm to India’s political and social fabric.
The government’s initial response was blunt, as most early drafts of technology rules tend to be. Previous proposals sought sweeping restrictions that drew industry concern and even alarm. To the ministry’s credit, much of this has been rationalized. The definition of synthetically generated information, for instance, has been narrowed to exclude routine editing, such as quality enhancement or the use of assistive AI. This carve-out addresses a major industry fear: that everyday tools like photo filters could attract a heavy compliance burden.
Earlier drafts also leaned towards rigid specifications for watermarking all synthetic content. The final rules step back from this approach. Instead of prescribing that a watermark cover at least 10% of the visible display, as an earlier draft did, they require content labels that are “prominent, noticeable and perceivable.” This leaves room for industry-led labelling standards to evolve.
These are welcome changes. But content takedown timelines have been drastically shortened, in some cases from 36 hours to just three after a platform such as a social media service receives a notice from the government or a court. This could mean access cut-offs, content removal or even account suspensions on a mere assumption of wrongdoing. There is a big problem here: the rules do a poor job of imagining or accommodating the future. Let us fast-forward to a projected 2030 scenario to see why.
The internet we inhabit a few years hence is unlikely to look like today’s. It will increasingly be populated not by humans, but by AI agents acting on our behalf. They will draft our posts, respond to messages, generate images, summarize news, argue online and even maintain our digital presence while we sleep. The line between ‘authentic’ and ‘synthetic’ will blur to the point of irrelevance.
We may not be comfortable with that vision. But cognitive outsourcing to machines is already taking place. The idea that most content online could soon be AI-assisted or entirely AI-generated is not science fiction. In fact, the internet as a whole is becoming self-referential—machines learning from machines and generating material for machines. In such a world, what does a 3-hour content takedown window mean?
Such a short window gives platforms little room to assess the legitimacy of requests, particularly those from government departments. Today, platforms can pause, examine context, weigh legal risk or even challenge executive notices. Tomorrow, a delay could mean legal liability. The incentive will be to comply first and scrutinize later.
Governments are not infallible. Decisions on political speech, satire and dissent are not straightforward. In a 2030-internet saturated with AI, the volume of such decisions will likely multiply. So what comes next? There are, broadly, two paths.
The first is one of subtle convergence. Digital platforms and governments become comfortable with each other, and the arrangement suits both. Platforms exercise no judgement on the nuances of AI-generated online speech, particularly on posts that could be construed as political, because the government acts as the final arbiter. The state, for its part, acquires the capacity to monitor and direct online speech at scale in a government-mediated digital environment.
The second path is harder but healthier. Society demands that platforms invest meaningfully in online trust and safety, especially in vulnerable geographies. Platforms respond with better detection tools, stronger contextual review systems and human-in-the-loop safeguards. At the same time, they ask for a regulatory posture that is future-ready and adaptive, rather than one that defaults to command-and-control when faced with uncertainty.
The future-proofing question is not limited to AI rules alone. Consider data protection. By 2030, we will likely live in an IoT-saturated environment, with internet-connected devices embedded in public spaces, workplaces and homes. In such a world, will the model of obtaining granular informed consent for each act of data processing, as is the case under our data protection law, remain workable? We will move through immersive digital environments, surrounded by sensors and digital artefacts. The idea that individuals can meaningfully read, understand and consent to every notice is implausible.
Laws and platforms will not be able to deliver a perfectly secure or just online world. The basic question we face is: What kind of digital society do we want?
The author is a public policy expert and partner at Koan Advisory Group, New Delhi.