Fix ‘below par’ sexual content moderation or face action, govt tells social media platforms

New Delhi: In an advisory issued late on Monday, the ministry of electronics and information technology (Meity) warned social media platforms of legal action for their ‘below par’ reporting of sexually explicit material, demanding stricter measures to identify and flag pornographic content.

“It has been reported and represented from time to time, including through public discourse, representations from various stakeholders and judicial observations, that certain categories of content circulating on social media and other intermediary platforms may not be in compliance with applicable laws relating to decency and obscenity,” Meity said in its notice to significant social media intermediaries, which include Meta’s Facebook and Instagram, Google’s YouTube, and others.

This is the third such advisory or regulatory action by New Delhi this year. In February, the ministry of information & broadcasting issued a notice to over-the-top (OTT) content streaming and social media platforms to strictly enforce rules to block sexually explicit content. In July, it banned more than 20 streaming platforms, circulating a dossier of proof that these platforms knowingly allowed sexually explicit content.

“Such instances have given rise to concerns among different sections of society regarding the responsible use of digital platforms, and the need for continued adherence to the constitutional framework governing freedom of speech and expression, which is subject to reasonable restrictions under law,” the advisory read.

The ministry noted that the prevalence of sexually explicit content online has given rise to “a need for greater consistency and rigour in the observance of due diligence obligations by intermediaries, particularly in relation to the identification, reporting and expeditious removal of content that is obscene, indecent, vulgar, pornographic, paedophilic, harmful to child or otherwise unlawful.”

Failing to do so, Meity said, would put social media platforms in violation of the law, causing them to lose safe harbour protection under Section 79 of the Information Technology Act, 2000 and Rules 3 and 4 of the IT Rules, 2021. Companies may also face criminal charges under provisions of the Bharatiya Nyaya Sanhita, 2023. Safe harbour provisions grant social media platforms legal immunity from liability for content posted by their users, provided they comply with specific “due diligence” and takedown requirements set by the government.

The government is also in talks with Big Tech platforms on labelling AI content, including modified content that is sexually explicit.

What companies are doing

Social media platforms pointed to their respective transparency reports for details of the action they have taken to block unlawful content on their platforms.

Google said in its September-quarter community guidelines enforcement report that it removed 12.1 million YouTube videos worldwide between July and September, 98% of them flagged by automated systems. Child abuse and pornographic content was the biggest reason for these removals: over 62% of all removed videos showed some form of child abuse or nudity.

Meta’s September quarter transparency report noted a change in the company’s content moderation efforts. “On both Facebook and Instagram, prevalence increased for adult nudity and sexual activity and for violent and graphic content, and on Facebook it increased for bullying and harassment. This is largely due to changes made during the quarter to improve reviewer training and enhance review workflows, which impacts how samples are labelled when measuring prevalence,” the report read.

Meta flagged 40.4 million pieces of content across Facebook and Instagram between July and September for child and adult sexual content—down 15% from 47.6 million between April and June.

Emails sent to Meta and Google seeking India-specific figures and responses to Meity’s directive were not immediately answered.
