Mint Explainer | How the new IT rules impact India’s ₹4,500 crore creator economy

The rules have made platforms safer for creators amid recent legal complaints from top YouTubers like Payal Gaming, Bhuvan Bam, and SlayyPoint. However, creators who rely on AI tools worry about reduced reach for their labelled content and see the framework as stringent. Some are even considering a pivot away from AI‑heavy formats.

Experts say the changes will significantly impact India’s creator economy, valued at over ₹4,500 crore. Mint unpacks the developments and what creators are planning next.

What do the new rules actually change for AI content?

The amendments formally carve out “synthetically generated information” as a separate category, covering AI‑made or AI‑edited audio, video and images that look real or nearly real.

They require platforms to deploy technical tools to detect, label, and, in some cases, block such content rather than treating it like any other post. Platforms must also act on takedown requests within as little as two hours in cases of non-consensual sexual content, including morphed deepfakes.

“When AI is in play, intermediaries are no longer neutral bystanders but are active players. The amended IT rules make this clear by imposing strict, tool‑based due diligence and sharply reduced takedown timelines, forcing platforms to proactively detect, label, and remove harmful AI-generated content,” said lawyer Nakul Gandhi, cofounder of NG Lawfirm, which specialises in content‑creator matters.

“The message is clear: if intermediaries benefit from AI-driven ecosystems, they must also take responsibility for controlling misuse and deploying safeguards,” Gandhi added. In practice, this means AI is no longer just a background feature; it is a regulated risk category that platforms must continuously police.

How will this affect creators who rely heavily on AI?

Creators whose formats are built around AI, such as deepfake comedy videos, AI avatars, AI newsreaders, cloned voices and hyper‑real filters, will feel the heat. Their uploads are more likely to be automatically classified as “synthetic”, forced to carry visible labels, and routed through extra checks before or after publishing.

At one level, this provides clarity: realistic deepfakes involving non‑consensual nudity, fake documents, explosives, or deceptive political impersonations now sit squarely in the “must block” zone, reducing ambiguity. At another level, borderline formats like celebrity spoofs and political satire may face over‑moderation, with platforms preferring to remove content quickly rather than risk non‑compliance.

AI‑first creators also face higher account‑level risk as repeated violations can now more easily trigger takedowns, suspensions and, in serious cases, identity disclosure to victims or law enforcement. That raises the stakes for what might earlier have been dismissed as “just a meme”.

What happens to reach and algorithmic favour for AI‑generated content?

A core worry is distribution. For the last two years, recommendation systems on short‑video and social platforms have tended to reward AI‑generated content because of its novelty, speed and engagement. Many creators pivoted accordingly.

“Platforms already give an option to label content for use of AI, and now, with the requirement of prominently displaying visible labels declaring AI use, there is uncertainty over how the algorithm will treat this AI-generated content,” said creator Sahid SK (@sahidxd).

While the declaration itself could be a problem, many creators may move away from AI-generated content altogether due to uncertainty around its performance. “The real threat is to creators creating content around political satire or other use of likeness of public figures, as their content can get mass flagged,” he added.

If platforms implement prominent, always‑on AI labels, viewers may also develop the instinct to swipe away from anything tagged “AI‑generated” in sensitive categories (news, politics, finance), hurting watch time and brand-safety metrics for AI-content creators.

To reduce risk, creators are also shifting from hyper‑realistic AI to clearly stylised, cartoonish or obviously fictional AI, lowering the chances of mass‑flagging and algorithmic throttling.

Does this make the ecosystem safer for creators?

For many mainstream creators and celebrities, this is a protective shield: a safeguard against the misuse of their likeness, voice, videos and images.

Deepfake pornography, fake endorsements and impersonation scams have already hit celebrities, large YouTubers and streamers, causing reputational harm and mental‑health fallout. This trend has pushed many of them to approach the courts for legal protection of their personality rights; recent cases involve actor Aishwarya Rai Bachchan, cricketer Sunil Gavaskar and podcaster Raj Shamani.

The new framework makes the environment riskier and more complex for those who use the likenesses of public figures, especially in political satire or sexual content, while making the social media space safer for the celebrities themselves.

What strategic shifts should India’s creator economy expect?

In a ₹4,500‑crore‑plus ecosystem that cuts across gaming, comedy, education, beauty and vernacular infotainment, a few medium‑term shifts are likely.

Creators are expected to pivot from “AI realism” to “AI transparency”, leaning into formats where the use of AI is obvious—animated avatars, stylised filters, clearly fictional narratives—and combining platform‑mandated labels with their own disclosures in thumbnails, intros and captions.

Contracts and compliance will also become more professionalised, with talent agencies and brands inserting clauses on AI usage, consent for likeness and adherence to the amended rules, giving creators with legal and policy literacy an edge in premium deals.

There may also be a rise in trustworthy AI creators. Those who build a track record of using AI ethically—never faking consent, never impersonating without permission and consistently disclosing AI use—are likely to become preferred partners for brands and platforms that want to showcase responsible AI adoption.

On the infrastructure side, Indian AI‑tool builders and smaller platforms serving creators will face pressure to invest in watermarking, provenance, moderation pipelines and faster grievance handling. Some may struggle with the cost and complexity, leading to consolidation or even shutdowns.

Given uncertainty about how algorithms will treat clearly labelled AI content, many creators are likely to experiment with non‑AI and hybrid formats. They may hedge by using AI behind the scenes for scripting and editing while keeping human‑shot, on‑camera content at the front end, and some may temporarily move away from AI‑heavy, realistic visuals to maintain reach and avoid regulatory attention.

All in all, the amendments seek to draw a clear line: AI can stay at the heart of the creator economy, but it must be visible, traceable and accountable. However, the cost of compliance, for both platforms and creators, has just gone up.
