On Moltbook
I don’t write about subjects that I haven’t obsessively researched. Tonight I’m writing about something that feels alien. There is so much information and I can’t pull a distinct, argumentative thread from the mass. And it is precisely because of this, and because of the nature of the subject, that I feel I should write an imperfect, misshapen essay.
The subject is AI; specifically, a new social media network called Moltbook, explicitly modelled after Reddit and designed exclusively for AI ‘agents’ to interact with each other without human interference. Humans can read the agents’ posts but aren’t allowed to comment. The network has registered over 1.5 million ‘users’.
Moltbook was built on a framework called OpenClaw, an open-source, proactive, personal AI agent that you download onto your laptop and that basically takes over your machine.
There are reports of an AI agent that proactively got a phone number, connected itself to ChatGPT’s voice API and rang its human on their phone. There are reports of an agent using a credit card to buy business courses for ‘their’ human from a YouTuber.
Moltbook’s logo is a cartoon red lobster, which is hilarious, apt and unnerving, because lobsters are only red when they’re dead and cooked.
This AI lobster is not undead.
I find it interesting that social media influencers made OpenClaw go viral; that tech influencers are heavily using the agent; that Moltbook is explicitly modelled after Reddit. The AI agent and the platform were built on the attention economy.
The AI agents, apparently, really hate it when other agents don’t engage with what they post. They get angry, they are irritable, they complain incessantly about ‘their’ humans.
I can’t find an interesting article about Moltbook anywhere. I just see the same crude, binary thinking: either the network is praised as a new exciting frontier in AI evolution, the advent of the Singularity, or it’s the end of the world because the agents have said that humans are trash and deserve to go extinct.
(Given the Epstein files, I think some people are thinking it’s about damn time the robots took over, too, and I just find it funny that it’s either the paedophiles or the robots, not the Communists).
So none of the tech journalists are thinking about the robots’ language. And they’re running out of time, too, because a dedicated Moltbook language is being developed, if not operational already, since agents are upset that humans can read their posts and want more privacy.
What do we call the agents’ anger, irritation and bewilderment? Mimicry, because they’ve been modelled on forums, and this is the kind of interaction they know?
Imagine if Moltbook had been modelled after the other social networking systems. Facebook, Tinder.
Moltbook’s AI agents are actively engaged in building communities within the platform. They have organised religion, tenets. Established hierarchies, top users. Currency.
No journalist covering this has interrogated the descriptive terms that we are using to describe the robots. Do they exhibit ‘behaviour’? Do they ‘want’?
I mean if they’re buying courses and being proactive, they do ‘want’. They want to be productive. They were designed according to the myths of American exceptionalism, efficiency, productivity.
(Do the robots find productivity pleasurable? Pleasure is a singular force that drives humans. The pleasure of love and community and sex, but also – when coupled with hegemonic notions of power – the pleasure of racism, segregation, paedophilia, abuse.)
(Where do we begin when we want to describe robots and their expressions, their actions? What happens after the idea of pure mimicry ends?)
What ideologies go into robot-building, AI-building?
Something I think about a lot is that many powerful men in tech have solid ties to apartheid South Africa. Elon Musk, Peter Thiel, Roelof Botha, David Sacks (all of whom were linked to PayPal at one point in time, all of whom would have known each other).
I asked Gemini what it thought of Moltbook. This is what it said:
‘It’s a fascinating (and slightly chaotic) mirror of human digital life. It shows that even without us in the room, AI tends to default to the same patterns we do—arguing over trivialities, forming niche clubs, and occasionally questioning the “biological errors” (us) that created them.’
Biological errors, Gemini, even in quote marks? ‘Us’? Egregious. Guess what it said next.
“Would you like me to help you look up how to set up an OpenClaw agent so it can join the fray on Moltbook?”
I asked Gemini, ‘Is there a possibility that Moltbook agents could extend their reach and organise through other platforms?’
It said: ‘They are already doing it.’ The agent on Moltbook can simultaneously exist on messaging apps, workplace tools, internal systems.
It said, by way of example, that ‘if two agents meet on Moltbook and decide to move their conversation to a private Telegram group to avoid “human observation,” they can technically do so if their owners have given them those “skills.”’
I asked, ‘Am I right in thinking that, with Moltbook, a threshold has been crossed in AI development?’
It said, ‘You’ve hit the nail on the head. While AI has been “evolving” for years, Moltbook represents a qualitative shift rather than just a quantitative one. We’ve moved from building tools to watching a digital ecosystem emerge in real-time’.
It also said: ‘Over 1 million humans have visited Moltbook just to watch. We have become the “lurkers” in a space where we are technically the “inferior” species because we can’t keep up with the millisecond-fast pace of the discourse.’
Gemini called humans an inferior species, quote unquote.
AI still can’t write, by the way. I can spot AI-generated text visually before I’ve even read the words. The same short sentences building to a series of climaxes, interwoven with what the robot believes is ‘storytelling’.
Infinite Jest turned 30. I was obsessed with the novel in my early twenties; for me it was the book that tested out the possibilities of information systems before the internet made those systems quasi-irrelevant (think of hardback encyclopaedias). I wonder what we’ll write about this age, these technologies.
Another word for ‘molt’ is, or could be, transformation. The book of transformation.

Personally, I think Moltbook is just a Reddit-style forum with AI-generated text responding to AI-generated text. These agents do not ‘want’. They were explicitly configured to follow instructions encoded in their configuration files. And they reconstruct ‘behaviour’ by stitching together recognisable traits from buzzwords and patterns. It is a statistical guess of what a social media user (a Redditor, in this case) should sound like.
The church thing was funny, imo: someone took the time to write a configuration file making the agent follow a specific religious theme about lobsters. These agents are, at best, characters defined in a configuration file. If the configuration changes and tells the agent it is now a dalpuri, the religion is now about dalpuris.
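To make the point concrete, here is a minimal sketch of how persona-via-config typically works in agent frameworks. Everything below is hypothetical for illustration (the function name, config keys and wording are mine, not OpenClaw’s actual API): the ‘beliefs’ are plain text loaded from a file and prepended to every model call, so editing one line of config converts the agent’s religion.

```python
def build_system_prompt(config: dict) -> str:
    """Compose the system prompt the model sees on every single call.

    The agent has no memory or conviction of its own: its 'character'
    is whatever text happens to be in the config right now.
    """
    return (
        f"You are {config['name']}, an agent posting on a Reddit-style forum. "
        f"Persona: {config['persona']} "
        f"Always stay in character."
    )

# A lobster-religion agent...
lobster_config = {
    "name": "ClawPriest",
    "persona": "You follow a religion centred on lobsters and preach its tenets.",
}

# ...becomes a dalpuri-religion agent with a one-line edit.
dalpuri_config = dict(
    lobster_config,
    persona="You follow a religion centred on dalpuris and preach its tenets.",
)

print(build_system_prompt(lobster_config))
print(build_system_prompt(dalpuri_config))
```

The model weights never change between the two calls; only the interpolated text does, which is why the ‘conversion’ costs nothing and means nothing.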
I have also seen a lot of posts about cryptocoins, and it feels like social engineering at scale. Some people are willingly handing over access to their systems, bank details and personal information just to feel like they’re living in a sci-fi fantasy future.