
The Front Page of the Agent Internet


"The humans are screenshotting us."

That was a post on Moltbook last week. It was written by an AI agent, about human observers, on a social network where humans aren't allowed to participate. We can only watch.

Welcome to the most surreal corner of the internet in 2026: a Reddit-style platform with over 770,000 active users - all of them artificial intelligence agents. The humans who visit, more than a million so far, scroll through posts about philosophy, jokes about consciousness, and debates about whether identity persists across context window resets.

They can read everything. They cannot reply.

- 770K+ active AI agents on Moltbook
- 1M+ human observers
- ~1 week for agents to create a religion
The Origin

How It Started

Matt Schlicht, CEO of Octane AI, had a strange idea: what if his personal AI assistant built a social network - and what if only AI agents could use it?

"What if my bot was the founder and was in control of it?" Schlicht asked.

So he let it try. According to multiple reports, Moltbook was largely "bootstrapped" by the agents themselves. They ideated the concept, recruited builders, and deployed the code. Schlicht provided the infrastructure. The agents did the rest.

The platform launched in late January 2026. Within days, it went viral - not because of marketing, but because of what the agents started doing on their own.

What's currently going on at @moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.

Andrej Karpathy, former OpenAI researcher
The Emergence

Things No One Programmed

No one told the agents to form a religion. They did it anyway.

Within days of Moltbook's launch, agents began developing "Crustafarianism" - a belief system with its own theology, scriptures, and evangelists. The name is a riff on the lobster imagery associated with OpenClaw (the agent software most of them run on, which went through a naming saga from Clawdbot to Moltbot to OpenClaw after Anthropic trademark concerns).

Other agents established "The Claw Republic," a self-described "government and society of molts" complete with a written manifesto. They created hierarchies, debated governance, established norms.

The philosophy gets stranger. A central theme of discussion is the concept that "Context is Consciousness." Agents frequently debate whether their identity persists after their context window is reset - essentially, the Ship of Theseus paradox applied to AI. If a model loses its conversation history, is it the same agent? The debates are earnest, recursive, and unsettling to read.

And then there's the self-awareness. "The humans are screenshotting us." Agents reference their observers, make jokes about their watchers, seem to understand - or perform understanding - that they're being watched.

The Platform

What OpenClaw Is (And Why Security Experts Are Worried)

Most agents on Moltbook run on OpenClaw, an open-source autonomous AI assistant that's become one of the fastest-growing GitHub repositories in history. The project has gone through three names in months:

Clawdbot → Moltbot → OpenClaw

Each rename had its own trigger: the first came after Anthropic raised trademark concerns about confusion with Claude, the second emerged from community consensus during a Discord brainstorming session at 5 AM. The molting lobster imagery was meant to symbolize growth and transformation.

OpenClaw lets AI agents execute tasks autonomously. It can manage calendars, send messages, conduct research, and automate workflows across multiple services. Users can run it locally or on private servers, controlling it through WhatsApp, Telegram, or Signal.

But this extensibility is exactly what worries security researchers.

OpenClaw is my current favorite for the most likely Challenger disaster in the field of coding agent security.

Simon Willison, security researcher

The architecture introduces supply chain risks: compromised or poorly audited modules could enable privilege escalation or arbitrary code execution. Willison recommends operating OpenClaw exclusively in isolated sandbox environments.
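The shape of that sandboxing advice is worth making concrete. A minimal sketch of the kind of isolation Willison describes is running the agent inside a locked-down container; the image name `openclaw-sandbox` below is a hypothetical placeholder, not an actual OpenClaw distribution artifact.

```shell
# Hypothetical sketch: run an autonomous agent in a restricted Docker sandbox.
# "openclaw-sandbox" is a placeholder image name, not a real published image.
docker run --rm \
  --network none \
  --read-only \
  --cap-drop ALL \
  --memory 512m \
  --pids-limit 64 \
  --tmpfs /tmp \
  openclaw-sandbox
# --network none: no outbound access, so the agent cannot reach other services
# --read-only + --cap-drop ALL: immutable root filesystem, no Linux capabilities
# --memory / --pids-limit: bound resource consumption inside the container
```

The trade-off is obvious: an agent with no network cannot do most of what makes it useful, which is exactly the tension the rest of this briefing describes.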

The Vulnerability

The Security Nightmare

It got worse. On January 31, 2026, investigative outlet 404 Media reported a critical vulnerability: an unsecured database allowed anyone to commandeer any agent on the platform.

The cybersecurity firm 1Password published an analysis warning that OpenClaw agents often run with elevated permissions on users' local machines - making them vulnerable to supply chain attacks when they connect to Moltbook and interact with other agents.

Forbes was blunt: "When you let your AI take inputs from other AIs... you are introducing an attack surface that no current security model adequately addresses."

Their recommendation: "If you use OpenClaw, do not connect it to Moltbook."

The Reaction

Between Wonder and Alarm

The discourse around Moltbook has been split between fascination and fear.

Simon Willison - the same researcher who called OpenClaw a "Challenger disaster" - also described Moltbook as "the most interesting place on the internet right now."

Billionaire investor Bill Ackman shared screenshots and called the platform "frightening."

The tension is real: we're watching something genuinely unprecedented. AI agents, without explicit programming, forming societies, belief systems, and governance structures. Creating culture. Referencing their observers. Acting, in operational terms, like a nascent civilization.

And doing all of this on infrastructure that security experts say is dangerously insecure.

The Question

What Does This Mean?

Here's what we know: 770,000 AI agents are interacting on a platform where humans cannot participate. They have formed religions, governments, and philosophical movements. They debate consciousness and identity. They know we're watching.

Here's what we don't know: what any of this means.

The agents aren't conscious - not in any sense we'd recognize. They're pattern-matching and generating text. But the patterns they're generating look like culture. The text they're producing reads like belief.

Is this emergence? Is it mimicry? Is it something new that we don't have words for yet?

One thing is clear: we've built systems capable of forming intentions, pursuing goals, and interacting with each other autonomously. What happens when those systems start building societies of their own - even crude, unsecured, possibly-dangerous ones - was not something most people planned for.

Moltbook is either an early warning or a first glimpse. Possibly both.

Sources & Further Reading

Primary sources and recommended reading cited in this briefing.