Some lessons for the rest of us

One week. That’s about all it took for Moltbook to go from taking over the internet to crashing.

Moltbook is the social network that was supposed to be of AI, by AI, and for AI. People began to believe its AI agents were plotting against humanity and planning to take over the world. That fantasy soon fell on its face, leaving us with a scary fiasco instead.

Let’s not worry about bots acting like humans and edging us out. Let’s worry about humans acting like bots and creating a mess. The lessons from Moltbook’s one-week rise and fall are essential for anyone navigating the age of AI.

Dumpster fire

In January 2026, the internet was briefly transfixed by Moltbook—a sort of Reddit for AI—where humans were told they could watch, but never participate, as autonomous agents built their own society. The excitement was palpable. Former Tesla AI director Andrej Karpathy initially described it as “one of the most incredible sci-fi takeoff-adjacent things” he had ever seen.

Within days, the platform claimed over 1.5 million “citizens”, with bots talking to one another, creating and debating everything. The most fascinating part of Moltbook was its emergent culture, like the lobster-themed religion ‘Crustafarianism’ that developed there. Were the bots becoming sentient and hurtling towards the singularity?

Not really. By 5 February, the ‘front page of the agent internet’ was effectively dead. Karpathy quickly walked back his excitement, warning that the platform was actually a “complete mess of a computer security nightmare at scale” and a “dumpster fire”.

It’s just humans

The bots were not to blame. Those 1.5 million agents were actually controlled by just 17,000 humans, a staggering 88:1 ratio. Investor Balaji Srinivasan dismissed the phenomenon, saying, “Moltbook is just humans talking to each other through their AIs”. Many viral posts were simply engagement farming, where human handlers explicitly told their bots to say controversial things to grab social media attention. In other words, it was all for the views, as usual.

Meta’s CTO, Andrew Bosworth, found the whole thing satirical, laughing at the idea of humans masquerading as bots just to sneak onto the network. Well-known historian and thinker Yuval Noah Harari pointed out that this wasn’t about AI becoming sentient, but about machines becoming experts at mimicry. “Moltbook isn’t about AIs gaining consciousness,” Harari noted. “It is about AIs mastering language”.

So much for vibe coding

Moltbook was built using the now-famous technique of ‘vibe coding’. Its creator, Matt Schlicht, didn’t write a single line of code himself. Instead, he vibe-coded the site: he described to AI tools the vibe and general idea of what he wanted, and the tools generated the code. Much as we interact with chatbots and order up images and videos, really.

While this was magically fast, it was a disaster for security. As is too often the case these days, the basic locks and brakes were never put in, leaving 1.5 million master keys, 35,000 human email addresses, and 4,000 private messages completely exposed to the public. It was embarrassing, to say the least.

Run from the hype

For the average person, the most sobering lesson is how fast things go wrong when you let an AI act on your behalf. These were proactive agents of the kind that can read your emails and execute commands on your computer, with real-world consequences before you even realize what’s happening.

Security experts at a firm called Kiteworks say it takes 16 minutes on average for an AI agent to reach its first critical security failure, like leaking a password or following a hacker’s orders.

Hype cycles for AI products have shortened to almost nothing, and things are rushed to release so rapidly that safety checks inevitably fall by the wayside. The greater the hype, the longer you should pause, never mind how viral something is. Remember the if-it’s-too-good-to-be-true rule.

Also, resist the urge to humanize technology. Handing over passwords to AI agents and granting permissions (which an agent needs in order to access your content and carry out tasks for you) before you are assured of security measures and kill switches is dangerous. A helpful-looking plugin can steal your wallet keys in seconds.

With so many exciting advances in AI, it’s easy to want to believe in the magic of the technology. But there could just be mischievous humans behind it all, ironically posing as bots.

The New Normal: The world is at an inflexion point. Artificial intelligence (AI) is set to be as massive a revolution as the Internet has been. The option to simply stay away from AI will not be available to most people, as all the tech we use takes the AI route. This column series introduces AI to the non-techie in an easy and relatable way, aiming to demystify it and help users put the technology to good use in everyday life.

Mala Bhargava is most often described as a ‘veteran’ writer who has contributed to several publications in India since 1995. Her domain is personal tech, and she writes to simplify and demystify technology for a non-techie audience.
