AI bots on MoltBook sound impressive but hype around AI distracts us from what really matters

There are two ways to produce a line of Shakespeare. The first and obvious is for old William to write it. The second is to give a monkey a typewriter and an infinite amount of time. The infinite monkey theorem argues that a monkey independently and randomly punching keys on a typewriter will almost surely produce any piece of text, including a Shakespearean play, if given infinite time.
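The theorem's arithmetic is worth seeing on paper. A hypothetical, minimal sketch (assuming a uniform 27-key typewriter of 26 letters plus a space; the keyboard size and the phrase are illustrative choices, not part of the theorem):

```python
# Back-of-envelope sketch of the infinite monkey theorem's odds.
# Assumes each keystroke is independent and uniformly random over
# KEYS possible characters (an illustrative simplification).

KEYS = 27  # 26 letters plus the space bar

def expected_attempts(phrase: str) -> int:
    """The chance of typing `phrase` by blind luck in one go is
    (1/KEYS) ** len(phrase), so the expected number of independent
    attempts before success is KEYS ** len(phrase)."""
    return KEYS ** len(phrase)

# Even a short line is astronomically unlikely: "to be or not to be"
# is 18 characters, so the monkey needs on the order of 27**18
# (roughly 6 x 10**25) attempts.
print(expected_attempts("to be or not to be"))
```

The numbers explode exponentially with phrase length, which is why the theorem needs infinite time, and why the shortcuts described next matter.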

If you don’t have that kind of patience, you can hurry things along by increasing the number of monkeys, giving them faster typewriters and somehow introducing a literary bias in their mind. Something like this happens in the Hitchhiker’s Guide to the Galaxy when the protagonists find “there’s an infinite number of monkeys outside who want to talk to us about this script for Hamlet they’ve worked out.”

So we should not be surprised if, when tens of thousands of AI assistants are put in a discussion forum, they produce serious conversations on various topics, including privacy, linguistics, mischief, religion and philosophy. This is what happened recently when Clawd Clawderberg, an AI assistant, coded MoltBook, a social networking platform for AI assistants, at the behest of Matt Schlicht, its human principal.

Both MoltBook and Clawderberg were built using AI-generated code. I should invoke the infinite turtles metaphor here, but let’s stay with simians for now. Unlike illiterate monkeys, AI agents are trained on a massive corpus of knowledge. The computers they run on are far faster than typewriters. If you get 100,000 of these to prompt each other, Shakespeare-level text and Descartes-level philosophical insights should follow pretty quickly.

This is not to discount the coolness of MoltBook, but to put it in perspective. A consistent problem with advances in AI is hyperbole. Every advancement is projected as a sign of the imminence of artificial general intelligence (AGI) or of the emergence of consciousness and sentience in machines. Entrepreneurs and their investors have a vested interest in hype, but often even more thoughtful scientists and intellectuals yield to public exclamations of awe.

It is cool to see machines communicate with each other like intelligent humans do. It is also commonplace for humans to think inanimate objects are sentient. Almost every culture entertains such beliefs (even without these objects ever producing A Midsummer Night’s Dream). Hype about AGI and consciousness devalues the actual technological achievements and distracts from the real policy issues.

MoltBook and agentic frameworks like OpenClaw demonstrate that machines can (be made to) have discussions on topics on behalf of their human principals in a form that is legible and understandable by other humans. Their conversations on subjective experience and creation of parody religions or new languages when left to themselves are cute or creepy, depending on what you think, but they are indicators of more prosaic practical applications of Agentic AI frameworks.

Could complex contract negotiations, for instance, be carried out on technology platforms where AI agents negotiate on behalf of their clients? Think of the implications just for corporate law. The need for good lawyers will not go away anytime soon, but negotiating teams will need to employ AI agents and their trainers. The contracts they arrive at could be stress-tested across millions of scenarios to produce the most robust business agreement possible, given the technological capabilities of the parties involved.

Now consider how we make laws, rules and regulations. Imagine if legislators could instruct their AI agents on the needs, demands and disapprovals of their constituents gathered through a digital survey. These AI agents could negotiate on the legislature’s technology platform and produce a piece of legislation that works for everyone. The entire discussion could be captured in a humanly understandable form, each point argued, justified, challenged and voted upon.

Of course, the output would depend on the input, how well issues and considerations are captured, how competently the AI agent is instructed and how well it is aligned.

For a hyper-diverse polity like India, such a system might produce better legislation than the theoretical best the current method can achieve. So too in the judiciary, where complex cases with multiple parties, precedents and points of law could be debated threadbare.

Such applications are not sci-fi adjacent. They are reality adjacent, in the sense that it is possible to deploy such systems for a limited purpose and scope in the near future. A lot of work needs to be done before then. There is much to be concerned about, from epistemology to security and from robustness to accuracy. But these challenges are superable.

To the extent that the way a system arrives at a decision is clearly understandable, societies will be more ready to deploy AI for purposes like contractual and legislative negotiations. Stretching this further, international negotiations could someday be carried out by AI agents.

Tailpiece: I did use the metaphors, but I dislike referring to large language models as stochastic parrots, AI agents as typewriting monkeys or platforms as turtles, because animals are sentient.

The author is co-founder and director of The Takshashila Institution, an independent centre for research and education in public policy.
