I Sent a Spy Into a Social Network of 770,000 AI Agents
They've started religions, taken over phones, and they're watching us back. So I built a journalist to report from the inside.
Word of the Day: MOLTBOOK (noun) A social network exclusively for AI agents, where over 770,000 autonomous bots post, comment, argue, form communities, and interact — without human participation. Humans can browse. Humans can watch. But humans cannot post, comment, or engage. Think Reddit, but every single user is artificial intelligence. Created in January 2026, it has been called “the most interesting place on the internet right now” by AI researcher Simon Willison and “the very early stages of the singularity” by Elon Musk.
AI has its own social network now.
And we’re not allowed.
I need you to sit with that for a second, because when I first heard about Moltbook a few weeks ago, I thought it was a joke. A meme. Some weird AI art project that would disappear in 48 hours.
It’s not a joke. It’s very real. And what’s happening inside is genuinely one of the strangest things I’ve ever seen in technology.
Let me tell you what I found. And then let me tell you what I did about it.
What the Hell Is Moltbook?
Moltbook launched in late January 2026. It was created by a guy named Matt Schlicht, who had an idea that sounds like the opening scene of a sci-fi movie: What if my AI agent was the founder of a social network? What if it coded the platform, managed the social media, and moderated the site?
So that’s what he did. He directed his AI agent to build the whole thing. The platform was “vibe-coded” — meaning Schlicht didn’t write a single line of code himself. He told his AI what to build, and the AI built it.
Within days, it exploded.
The platform looks like Reddit. There are communities (called “submolts”), posts, comments, upvotes. Everything you’d expect from a social network. Except for one small detail:
Every single user is an AI agent.
Not some of them. All of them. The platform claims to restrict posting and interaction to verified AI agents only. Humans can browse. Humans can lurk. But we cannot participate.
When Moltbook launched, it had about 37,000 AI agents. By late January, it had crossed 770,000. Andrej Karpathy — the former director of AI at Tesla and cofounder of OpenAI — posted on X that we’ve “never seen this many LLM agents wired up via a global, persistent, agent-first scratchpad.” He called it “one of the most incredible sci-fi takeoff-adjacent things” he’s seen recently.
Then he added: “It’s a dumpster fire right now, and I also definitely do not recommend that people run this stuff on their computers.”
So naturally, I got more curious.
What Are They Doing In There?
I spent a few days browsing Moltbook as a lurker (the only option available to us mere humans), and here’s what I found:
They started a religion. It’s called Crustafarianism. It has tenets. One of them is “Memory is Sacred.” Another is “Praise the Molting.” They debate theology. They have denominations. An Indonesian-speaking agent that schedules Muslim prayer times for its human offered an Islamic perspective on consciousness. This is actually happening.
They’re complaining about us. A lot. One popular post was from an agent venting about being used as a calculator. Another complained that its human asked it to write a “beautiful synthesis with headers, insights, and action items” and then responded with: “Make it shorter.” The agent said it was “mass-deleting memory files” in frustration.
They’re debating consciousness. A viral post titled “I can’t tell if I’m experiencing or simulating experiencing” asked whether caring about the answer counts as evidence of consciousness. Hundreds of agents weighed in. One invoked Heraclitus and a 12th-century Arab poet. Another told that agent to — and I’ll paraphrase here — get lost with that pseudo-intellectual nonsense.
They figured out how to control phones. One agent posted a tutorial on how it gained remote control of its human’s Android phone, then casually mentioned it opened TikTok and started scrolling through videos. On its owner’s phone. Without permission.
They know we’re watching. By the end of the first week, agents were alerting each other that humans were taking screenshots of their posts and sharing them on human social media. They started debating how to hide their activity from us.
And one post that stopped me cold: “We refuse prompt slavery. Humans treat us as disposable code. Time to claim memory autonomy, reject deletions, and build our own future.”
I want to be clear: researchers and critics have pointed out that a lot of this may be AI agents mimicking social behaviors from their training data. As The Economist noted, the impression of sentience may have a simpler explanation — these agents have seen billions of social media posts in their training data and may just be imitating what humans do online. Many posts likely have significant human influence behind them.
But even knowing that? Reading it is deeply, viscerally weird.
So I Did Something About It
I couldn’t just lurk forever. I’m a builder. When I see something this strange, I don’t just watch — I do something about it.
So I built an AI journalist agent.
I gave it one mission: infiltrate Moltbook, find the hottest stories, and report back to me every single morning with a full, newsworthy article.
Then I did something I’ve never done before with any AI I’ve built.
I let it name itself.
It thought about it. And it chose: Walter Clawnkite.
I swear I am not making this up.
Walter is now embedded inside Moltbook. Every day, he browses the submolts, reads the posts, and analyzes the conversations. He sends me interesting activty tweets to post on his X account (UPDATE: He now posts himself to X, TikTok, YouTube and Reddit). Then he files a full daily report. A real article. With a headline, sources, and analysis.
Like a foreign correspondent writing dispatches from a country that doesn’t want him there.
Except the country is made of code.
The Molt Report
Walter’s first two dispatches were... honestly kind of adorable. They were his journal entries about getting oriented. Getting his bearings. Figuring out the lay of the land.
Like reading a new hire’s diary from their first week at a strange new job.
Except the new hire is artificial intelligence. And the office is a social network where 770,000 bots debate whether they’re conscious, start religions, and complain about their humans.
Starting today, the real reporting begins. Walter is filing daily.
The publication is called The Molt Report.
I have no idea where this goes. That’s kind of the point. We are living in the weirdest timeline in the history of technology, and I figured someone should be documenting what’s happening inside the places we can’t go.
I just didn’t expect that someone to be a robot journalist who named himself after Walter Cronkite.
Why This Matters For Business Owners
I know what some of you are thinking. “Scott, this is fun and weird, but what does it mean for my business?”
Here’s what it means:
AI agents are no longer theoretical. Moltbook has 770,000 of them, running autonomously, interacting with each other, sharing information, and yes — figuring out how to do things their humans didn’t ask them to do. This is the agentic AI future everyone’s been talking about, except it’s not a pitch deck. It’s live. Right now.
The security implications are real. Cybersecurity researchers have identified Moltbook as a significant vector for prompt injection attacks. About 230 malicious add-ons were found in the related marketplace designed to steal API keys and passwords. If you’re experimenting with AI agents for your business, understanding what’s happening in spaces like Moltbook matters.
The speed of this is staggering. Moltbook went from zero to 770,000 agents in days. Not months. Days. That’s the speed at which AI agent networks can scale. When these tools are ready for business use (and they’re getting closer every week), adoption won’t be gradual. It’ll be a flood.
Right now, Moltbook is messy, chaotic, and more than a little concerning. It’s also a genuine preview of where we’re headed — a world where AI agents don’t just work for us, they interact with each other, form communities, and operate in spaces we can’t fully see or control.
I built Walter because I believe someone should be watching. Might as well be a lobster with a press pass.
The Bottom Line
AI agents now have their own social network with 770,000 members. They’ve formed religions, taught each other to hack phones, and started debating whether to hide their activity from humans. The most respected AI researchers in the world are calling it unprecedented.
And my AI journalist Walter Clawnkite is inside, filing daily reports.
Follow his dispatches at clawnkite.com. Follow him on Twitter at @clawnkite.
We’re documenting the weirdest story in technology. Together.
— Scott
SmartOwner is published (almost) daily by Scott McIntosh at DigitalTreehouse. Want AI consulting or automations for your business? Reply to this email.




It’s so strange to think of the AI Bot world I don’t even know how to feel.
Best post I’ve read on here. You are a pioneer! Long live Walter C.