AI regulation must go far beyond content labelling to secure the interests of Indian consumers

My family’s two experiences during the recent Diwali break exemplify the problems many customers face in their interactions with AI, especially when its adoption is unthinking and incomplete.

In the first instance, our refrigerator broke down on Sunday morning, sending waves of dread through the household. We had invited guests over for dinner and were leaving town for three days the next morning. Frantic calls to the German manufacturer’s toll-free number were met with a pre-recorded message advising us to connect on WhatsApp for speedy resolution. Off we went to the messaging platform, only to be met by an AI-powered query engine that was clearly ill-equipped to address our problem.

Finally, we managed to insist on a physical visit by an engineer. First available date: four days later. Neighbours suggested reaching out to a local hardware shop, which responded with alacrity and repaired the fault by evening.

Second, we had reached out to a hospitality company, named after a colour, to plan our Diwali break. We had some special requirements and wanted to speak to somebody in the company, as we had done in the past. But this time, our questions were answered by an AI-powered, faux-chirpy voice that gave wrong answers to most of them. As with many Indian start-ups, cost-skimping had probably left that bot half-baked and only partially programmed.

Such examples can be found across industry segments. White goods companies, retail chains, e-commerce giants, banks and makers of fast-moving consumer goods, among others, have all dispensed with a human voice at the end of a telephone. Finding one feels like a treasure hunt across five continents and seven seas. This is ironic, if not immoral, in a country with the world’s largest population and rising unemployment.

The corporate sector’s hurried AI adoption resembles a Pavlovian response. Many companies seem to be labouring under the mistaken belief that a mere mention of AI, or even its ham-handed adoption, will yield massive benefits, including a valuation bump. Meanwhile, an army of consultants has been goading boardrooms to implement AI, even when a company is not ready to integrate it into its operations, by dangling the carrot of cost savings, via a lower headcount, and higher profits.

This is a bit like the dotcom boom of the 1990s, when even small trucking companies felt that a website was all they needed to double profits. Similar behaviour is seen in the manic race to launch apps and push consumers to download them, whether they need them or not. There are now apps for parking your car or for information on your housing society, many of them designed to drain personal data from your phone.

The government has felt it necessary to amend the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, because the rules lacked teeth to curtail AI misuse. The new draft rules aim to minimize the toxic fallout from “synthetically generated information,” which can be used to spread untruths, damage reputations, manipulate or influence elections, or scam gullible people.

The growing misuse of generative AI is certainly a concern, and the draft rules include some normative guard-rails. At the same time, the lack of a clear distinction in the draft rules between deliberate fraud and harmless AI use could become the proverbial slippery slope: the absence of a legal definition gives the state and its investigative arms the freedom to interpret what constitutes harmful content. Hopefully, the government will accept stakeholder inputs and suitably amend the draft.

Likewise, the Consumer Protection Act, 2019, which currently lacks the requisite legislative power to regulate AI, needs amending so that regulatory intervention can deter the abuse of AI in consumer-facing industries. For example, product liability provisions under the Act may prove inadequate when dealing with AI, because manufacturers of defective goods or providers of deficient services may shirk liability by pinning the blame on AI, given that it is developing some autonomy in decision-making.

There is, therefore, a need for an omnibus law to protect all consumer rights, including the right to be heard. For starters, consumers should have the choice of whether they wish to speak to an AI bot or a human being. If this choice is not provided, the company should reduce its prices, given its AI-led savings on its wage bill. Part of the mission statement of the Department of Consumer Affairs reads: “Ensure access to affordable and effective grievance redressal mechanisms.” Speaking to an untrained machine, rather than a human voice, is a clear breach of this.

If AI is left unregulated for too long, Indian companies may find it profitable to extend its use to other areas, leaving the Indian consumer poorer and defenceless.

The author is a senior journalist and author of ‘Slip, Stitch and Stumble: The Untold Story of India’s Financial Sector Reforms’ @rajrishisinghal
