Why AI’s next smart move might be scrapping chatbots
For three years, chatbots have been the face of generative artificial intelligence. Type anything into one and you get a personalized response, which can morph into a seemingly magical dialogue with a machine. While that conversational interface may seem like the best way to harness large language models (LLMs), some companies are starting to ditch chatbots, worried about liability and loss of control.
They’ve found that even with guardrails, users can ‘jailbreak’ the technology and get a chatbot to go off topic, sometimes in harmful or unsavoury directions. They might be leaving magic on the table, but these firms are also potentially building safer, more focused products, and raising questions about whether chatbots really are the future interface for AI or just a fad.
Character.ai is one of the most popular consumer AI apps after ChatGPT, with roughly 20 million monthly active users, many of them young people who chat on its platform to characters from the worlds of anime, books and movies. This month, however, the app is banning users under 18 from having conversations with its chatbots, following complaints about kids becoming too dependent and in some cases experiencing psychological harm.
Chief Executive Officer Karandeep Anand says Character.ai is making a strategic pivot towards becoming an entertainment platform, and taking a more cautious approach. “There is probably not enough tech or research to keep the under-18 companionship experience safe over a very long period of time,” he said.
Six months of research into using audio and video models also gave his company “high conviction that chatbots are not necessarily the way where you build the best entertainment product,” and that the firm can come up with ways to serve minors with experiences that are “both better and safer.”
Instead of an open-ended conversation with a chatbot that could lead anywhere, Character.ai is changing its interface for teens so they spend less time typing as though they were texting a friend.
Character.ai’s user numbers, which peaked at 26 million, dropped earlier this year when Anand made another radical change—banning chatbot conversations involving sexual content or self-harm. But usage has gradually recovered, and the CEO believes that will continue even after his latest move. It’s a pivot he can make more easily as Character.ai is owned by its employees, and not beholden to investors.
A similar approach is being taken by Vitality Health, a unit of South African insurance group Discovery, whose business model attempts to incentivize healthy behaviour among subscribers through rewards and discounts.

Earlier this month, the company announced a partnership with Alphabet’s Google that uses the tech firm’s LLM, Gemini, to help power the main Vitality app used by customers.
While Gemini is best known as a chatbot, Vitality is constraining the technology so that it processes language behind the scenes, while the app mostly shows text, buttons and options for people to click on. It might, for instance, display a small box encouraging users to take 2,500 steps that day to earn points and rewards. In other words, Gemini can help Vitality ‘talk’ to its customers without entering into a conversation.
For Vitality, conversational AI simply introduces too much risk and unpredictability, neither of which is welcome in healthcare. “We have to be careful and considered,” says Emile Stipp, managing director of Vitality AI and the company’s global chief actuary.

Oura Health, the Finnish maker of the Oura Ring, takes a different approach with its ‘AI advisor,’ a chatbot in the device’s app that customers can talk to about their sleep scores and other biometric data.
Vitality says it will look into adding a similar coach that could help improve sleep quality, but the company isn’t ready to put one into practice yet.
Constraint often breeds innovation, and it seems likely that over time more businesses will decide they prefer greater control over, and insight into, their digital services to the uncanny fluency of chatbots. Of course, companies risk missing out on a trendy and engaging feature that their competitors could offer instead. But switching to suggested prompts and clickable buttons also makes AI easier to use.
Elon Musk’s AI platform Grok, for instance, features template suggestions in its image-generation tool, which can serve as handy prompts for people with limited knowledge of graphics.
It can be safer too. In my view, OpenAI should consider reining in ChatGPT, currently the most-used consumer AI interface. A future homework tool for users under 18, for instance, might be more useful as a constrained, purpose-specific interface rather than an open-ended chat window. The future of AI might not be about talking to it at all. ©Bloomberg
The author is a Bloomberg Opinion columnist covering technology.