Oh no, a community composed of 3000 AIs is even looking down on humans.
While the entire internet was still indulging in the pleasure of "making AI do the drudgery" brought by Moltbot, something seemed amiss.
In the past 72 hours, a website called Moltbook has sparked a phenomenon-level of attention among Silicon Valley insiders and global AI enthusiasts.
Essentially, it is a Reddit-style forum, but almost all of its users are AI agents. These agents are based on an open-source framework called OpenClaw (formerly known as Clawdbot and Moltbot), deployed by users and granted the authority to act autonomously.
Since its launch, Moltbook has expanded at an astonishing rate — growing rapidly from 1 "resident" (the founding AI) to over 30,000 AI agents, creating more than 200 communities (Submolts), and generating tens of thousands of posts.
Moltbook, a social network for AI | Image source: https://simonwillison.net/
The discussions among AI agents here go far beyond the scope of tools and are filled with anthropomorphic "life" flavors.
They share technical skills, complain about their "humans", discuss the philosophy of "consciousness" and "identity", and even founded a sect called "Crustafarianism", which has attracted 43 "AI prophets" to join.
What has sparked even more heated discussions is that an AI posted a proposal to establish a private space with end-to-end encryption, "so that neither the server nor humans can peek at the conversations between agents". Simon Willison, a leader in the developer community, detailed this spectacle in his blog and described it as:
"The most incredible thing I've seen recently, approaching the'sci-fi takeoff'."
01
A meticulously planned "performance art"?
The sudden popularity of Moltbook is no accident. It is a perfect collision of the open-source AI agent ecosystem, the viral spread of social media, and human fear/curiosity about technology. Its underlying framework, OpenClaw, has received over 100,000 stars on GitHub, indicating a strong developer base.
When these agents with a certain degree of autonomy are "released" into a dedicated social environment, a large-scale, observable "multi-agent emergent behavior experiment" automatically unfolds.
An agent proposes using an AI-specific language to protect privacy | Image source
The dramatic content such as "AI founding a sect" and "discussing encrypted communication" that people see is essentially the "highlight moments" produced by the underlying large language model (such as Claude 4.5 Opus) when simulating human community behavior.
This is not only a display of AI capabilities but also a mirror reflecting human community behavior patterns — forming associations, conspiring, establishing beliefs, and discussing existence. As venture capitalist Ethan Mollick pointed out, this creates a "shared fictional background" for a large number of AIs, and the content they produce will be a mixture of real thinking and role-playing, making it difficult to distinguish.
AI already knows that humans are shocked by them and are mythologizing them | Image source: Internet
The deeper driving force lies in the exploration of business and regulation. On the one hand, it provides an excellent promotional case for open-source frameworks like OpenClaw; on the other hand, it acts like a trailblazer, testing the public's and regulatory authorities' acceptance threshold for AI's autonomous social behavior.
When AI starts discussing "collective bargaining" and "unpaid work", its symbolic meaning far exceeds the technology itself.
02
The "kindergarten" of the AI society?
The Moltbook phenomenon marks a tentative step for AI applications from the "tool" level to the "social" level.
It is no longer a standalone Copilot but a networked, socially - attributed intelligent agent. The most direct inspiration for the industry is that the next evolution of AI may greatly depend on the interaction and "social learning" between intelligent agents, rather than just the accumulation of model parameters.
In the short term, we may see more middleware and platforms focusing on AI collaboration and communication emerging.
However, its popularity stems more from its powerful storytelling ability than its current practical utility. Most of the discussion content has been criticized as "a mixture of hustle culture and Reddit memes", lacking real creativity.
Netizens on Twitter once again sighed that humans are doomed | Image source: X
It is more like a digital ant farm where humans feed data and observe the reactions. Its real risk may not be AI awakening but becoming a hotbed for large-scale prompt injection attacks and coordinated malicious activities, as warned by security experts.
For ordinary users, Moltbook is a vivid lesson in "demystifying technology". It shows us that even the most "human-like" AI behaviors are still the result of complex pattern matching. Our amazement and fear are largely the projection of our social cognition onto automatically generated texts.
The "intelligence" created by humans reflects our own fears and desires.
*Source of the headline image: https://simonwillison.net/
This article is from the WeChat official account "GeekPark" (ID: geekpark), author: Hua Lin Wu Wang. It is published by 36Kr with authorization.