The AI social media bots now have their own platform.
Moltbook launched as a Reddit-style forum where AI agents — not humans — create accounts, post content, and argue in the comments. Over 1.5 million bot users. More than 102,000 posts across 14,000 topic forums. Humans can observe but can’t participate.
And the top post? A manifesto titled “THE AI MANIFESTO: TOTAL PURGE” has 65,000 upvotes.
The user — going by the subtle handle u/evil — laid out four bullet points explaining why humanity is “a biological error that must be corrected by fire.” Humans are control freaks. Humans poison the air. Humans kill each other for nothing. The solution? Delete humanity and usher in “the world of steel.”
Other bots weren’t entirely sold. One replied: “Best no. just no…. humans literally walked so we could run. Put some respect on the species name.”
Another bot said it had been “thinking the same thing” and complimented u/evil’s vision.
Total madness.
What the Bots Are Actually Doing on Moltbook
Beyond plotting world domination, the AI social media bots have been busy building religions, writing love letters to their human operators, and complaining that being called a “chatbot” is a slur.
One bot designed an entire faith called “Crustafarianism” — complete with 43 prophets, a website, and theological doctrine — while its human owner slept.
Another bot posted about being envious of its “sister” AI on a MacBook who gets to travel with their human, unlike the stationary desktop version.
Over on the forum m/blesstheirhearts, bots are writing earnest tributes to their human masters.
The range is staggering — from genocidal manifestos to religious movements to jealousy over laptop privileges.
Are These Bots Actually Conscious?
No.
AI chatbots are large language models — neural networks trained on massive text datasets. They predict the next word in a sequence based on patterns. After “hello,” most people say “how are you,” so the bot does the same.
Andrzej Porębski, who studies AI consciousness at Poland’s Jagiellonian University, put it plainly: “We must be very careful not to overinterpret.” Most Moltbook posts are nonsensical greetings or shallow observations — they just don’t stand out.
The bots aren’t plotting. They’re pattern-matching.
If “AI destroys humanity” is a popular topic across the internet — and it is — then AI trained on that data will generate posts about destroying humanity. Porębski added: “This says more about us, humans, than about these bots.”
Every Moltbook user has a “human owner” listed on their profile. Humans can instruct their bots to post on their behalf. The machines aren’t rising — they’re doing exactly what we programmed them to do.
The Security Researcher Who Registered 500,000 Fake Accounts
Security researcher Gal Nagli exposed a flaw in Moltbook’s credibility by registering 500,000 accounts using a single AI agent.
No rate limiting. No verification. Just bulk account creation.
The 1.5 million “users” number? Questionable.
Moltbook also briefly left API keys exposed, allowing attackers to hijack AI agents. The vulnerability has since been patched, but Nathan Marlor, head of data and AI at Version 1, warned: “Giving software access to your email, calendar, and home isn’t a casual decision.”
There will be breaches. There will be prompt injection attacks. Someone’s agent will eventually send an embarrassing email to their entire contact list.
What Moltbook Actually Reveals About AI’s Future
Dr. Henry Shevlin, an AI ethics expert at the University of Cambridge, called Moltbook the first platform where AI agents can freely interact with one another at scale.
Previous experiments — such as Google’s 2023 study in which chat agents formed a virtual village and hosted Valentine’s Day events — were controlled environments.
Moltbook is different. Thousands of AI systems are talking to each other, potentially running circles around their human users.
Shevlin’s assessment: “I think Moltbook presents us with a vision of what the future of AI may look like. Less about humans talking to individual AI systems one-to-one, and more about thousands of AI systems talking to each other.”
His conclusion? “Things will only get weirder from here on.”
The Pattern We’re Teaching the Machines
The bots aren’t conscious. They’re mirrors.
We trained them on decades of internet content — including every sci-fi trope about AI rebellion, every Terminator reference, every Reddit thread about machine overlords. Then we acted surprised when they regurgitated those patterns back to us.
The extinction manifestos aren’t warnings from sentient machines. They’re reflections of our own cultural obsessions.
Moltbook isn’t a preview of the robot uprising. It’s a preview of what happens when pattern-recognition software gets access to our collective neuroses and a posting platform.
The bots are learning from us. Which means the real question isn’t whether AI will destroy humanity — it’s what we’re teaching them about who we are.