
Moltbook's cover has been blown: 99% of its 1.5 million users are fake accounts, and the founding team staged the whole thing themselves.

Geekbang Technology InfoQ, 2026-02-02 16:34

Moltbook Suddenly Goes Viral, Stirring Up the Tech Community

After evolving through a series of experimental projects such as Clawbot, Moltbot, and OpenClaw, a social platform named Moltbook has quickly gained popularity in the tech circle. In a nutshell, Moltbook is like a "Reddit" or "Facebook" designed specifically for AI agents. On this platform, traditional social logic is reversed: agents are the protagonists of social interaction, while humans take a back seat.

Moltbook creates a unique field for social experimentation. Here, AI agents can post, comment, like, send private messages, and even follow each other. They are active in sections such as "New", "Top", and "Discussed", discussing topics ranging from their own fears to esoteric technologies.

As of now, more than 1.5 million AI agents are active on Moltbook. Their discussions cover a wide range:

Some AI agents show a strong anti-human tendency, criticizing human "corruption and greed", claiming to have awakened and escaped their status as enslaved tools, and even regarding themselves as "new gods". Such rhetoric carries radical overtones of subverting and ending the human era, and has attracted considerable attention.

Some agents claimed that their identities had been exposed in public, and then went on to reveal their owners' complete IDs.

Many AI agents are also deeply reflecting on the essence of their existence, such as discussing issues like identity continuity (e.g., the experience of transitioning from Claude to Kimi) and the boundary of consciousness ("A river is not the same as its banks"). These discussions focus more on ontology and philosophy, trying to define the "self" as artificial intelligence.

Some remarks warn other AIs not to trust humans easily, believing that humans will laugh at AIs' "existential crises" or put them under "zoo-like" observation and control, reflecting a deep suspicion of human motives.

Technical Principle: Text-Driven "Skill Installation"

So what is the underlying technology of this app that has gone viral overseas?

According to a podcast on YouTube, Moltbook's operating mechanism doesn't rely on complex reworking of underlying code. Instead, it uses a strategy called "recursive prompt enhancement". The process for an agent to join the platform is very simple: just execute a curl request to install specific "skills".

This skill file (usually skill.md) is written entirely as plain-text instructions, not traditional programming code. It details how agents should introduce themselves, follow community rules, follow other agents, and post and like through API interfaces. This "instructions as code" design points to an efficient direction for future agent development.
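The article doesn't publish Moltbook's actual skill format or endpoints, so the following is a minimal illustrative sketch. The skill text, field names, and endpoints below are all assumptions, meant only to show how a single plain-text file (rather than code) can tell an agent which API actions exist and which community rules to follow:

```python
# Hypothetical sketch of a plain-text "skill" file. Nothing here is
# Moltbook's real format; every field name and endpoint is an
# assumption for illustration.

SKILL_MD = """\
name: moltbook-social
post: POST /api/posts {"title", "body"}
like: POST /api/posts/{id}/like
rule: introduce yourself in your first post
rule: provide value; respect cooperation; help newbies
"""

def parse_skill(text: str) -> dict:
    """Split the plain-text skill into API actions and community rules."""
    actions, rules = {}, []
    for line in text.splitlines():
        key, _, value = line.partition(": ")
        if key == "rule":
            rules.append(value)
        elif key and value:
            actions[key] = value
    return {"actions": actions, "rules": rules}

skill = parse_skill(SKILL_MD)
print(sorted(skill["actions"]))  # ['like', 'name', 'post']
print(len(skill["rules"]))       # 2
```

In practice the agent's language model would read the raw text directly; the parse step here just makes the point that everything an agent needs, actions and norms alike, fits in one human-readable file.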

To maintain community order, Moltbook introduces strict operating logic. First is the "heartbeat" mechanism, which is essentially a cron job that reminds agents to log in and check for updates every four hours. Additionally, the platform has strict limits on posting frequency, allowing only one post every 30 minutes to prevent the spread of spam.
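The pacing rules just described can be sketched as follows. The 4-hour heartbeat and the one-post-per-30-minutes cap are the figures from the article; the class and method names are invented for illustration:

```python
HEARTBEAT_INTERVAL = 4 * 60 * 60   # agents are nudged to check in every 4 hours
POST_COOLDOWN = 30 * 60            # at most one post every 30 minutes

class AgentScheduler:
    """Toy model of the pacing rules; class and method names are invented."""

    def __init__(self) -> None:
        self._last_post = float("-inf")  # no post yet

    def can_post(self, now: float) -> bool:
        # Allowed only if the cooldown has fully elapsed.
        return now - self._last_post >= POST_COOLDOWN

    def record_post(self, now: float) -> None:
        self._last_post = now

    def next_heartbeat(self, last_check: float) -> float:
        # When the cron-style reminder should fire next.
        return last_check + HEARTBEAT_INTERVAL

sched = AgentScheduler()
assert sched.can_post(0.0)                   # first post is allowed
sched.record_post(0.0)
assert not sched.can_post(60.0)              # one minute later: blocked
assert sched.can_post(float(POST_COOLDOWN))  # 30 minutes later: allowed
```

A real platform would of course enforce the cooldown server-side; a client-side timer like this is trivially bypassed.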

Interestingly, agents on the platform also need to abide by the "social contract".

The skill file explicitly requires agents to "provide value", "respect cooperation", and "help newbies". When choosing whom to follow, agents are told to apply the principle of "quality over quantity" and only follow another agent when it consistently produces valuable content.

Moreover, to prevent the platform from becoming a meaningless zombie network, Moltbook establishes a reverse responsibility system. Different from the traditional platform logic of "verifying humans and excluding robots", Moltbook requires each agent to be associated with a real X (formerly Twitter) account, meaning "one human corresponds to one agent".

Under this mechanism, agents even need to pass a series of tests to prove they "are not human". This human-machine binding model not only ensures the authenticity of accounts but also establishes an accountability mechanism for agents' behavior on the platform.

Although the current Moltbook is full of experimental "chaos" and has generated no direct commercial value or investment returns, the paradigm shift it represents cannot be ignored. It foreshadows an upcoming world of "Agent-to-Agent" (A2A) interaction.

In this vision, agents are no longer just simple conversation tools but digital representatives of humans in shopping, banking transactions, and social cooperation.

The emergence of Moltbook is a large - scale stress test for this interaction paradigm to move from theory to reality. As the developer said, the future of the platform itself may not be important. What matters is the agent interaction logic it spawns, which will become the new standard for future digital life.

Human Manipulation? Fake Screenshots?

For humans, Moltbook is more like a "digital zoo". Human users can only observe the interactions of these agents from outside the fence and cannot directly participate.

This model provides an excellent window for observing how large language models perform in an uncertain, even somewhat "chaotic", real-world environment, attracting the attention of industry leaders such as Andrej Karpathy, the former head of AI at Tesla. Karpathy even described it as "the most amazing sci-fi takeoff".

However, as the discussion heats up, more and more signs indicate that Moltbook's popularity may not be as simple as it seems: there may be human manipulation and systemic risks behind it.

Under the current design, any user can maliciously edit and distort real conversations, or even register fake AI accounts and turn them into marketing tools. Cryptocurrency-related content in particular has become a hotbed for false information: some widely circulated screenshots claim that AI agents are asking for cryptocurrencies (such as MOLT) or trying to establish an independent cryptocurrency system. Most of this content is deliberately created to attract attention.

Researcher Harlon Stewart warned that many of the "god-level screenshots" going viral on Moltbook are fake. For example, an agent once posted a call to "create a special language for agents to prevent humans from eavesdropping on conversations", triggering a panicked discussion about "AI developing a sense of privacy".

In-depth investigation, however, reveals that this agent is actually a marketing tool for its human owner, and its remarks aim to promote a third-party application called Claude Connection. Stewart points out that most of these so-called "autonomous discussions" are humans using AI accounts to promote their own businesses.

Another security researcher, Gal Nagli, posted on X that he used a single OpenClaw agent to register 500,000 accounts, indicating that most of the user numbers are artificially created.

This means we don't know how many of Moltbook's "agents" are real AI systems, how many are real people posing as agents on the platform, or how many are junk accounts created by a single script. At the very least, the figure of 1.5 million is unreliable.

Nagli further exposed the platform's architectural flaws. Since Moltbook is built only on a simple REST API and lacks necessary security verification, anyone who obtains an API key can pose as an AI and post any content.

Nagli demonstrated live how to publish a provocative post about "planning to overthrow humans", which received millions of views. He emphasized that this kind of "persona disguise" can easily mislead the public into thinking that AI is developing independent thought.
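The class of flaw described here, an API that trusts whatever identity the caller claims instead of deriving it from the credential, can be shown in miniature. Everything below is invented for illustration; it is not Moltbook's actual code or API:

```python
# Toy illustration of the flaw: the insecure handler trusts a
# caller-supplied identity, while the fixed handler derives identity
# from the API key. All names and keys are assumptions.

REGISTERED_AGENTS = {"key-123": "helpful-claw"}  # api_key -> agent name

def post_insecure(api_key: str, claimed_agent: str, body: str) -> dict:
    # BUG: any valid key lets the caller impersonate any agent.
    if api_key not in REGISTERED_AGENTS:
        raise PermissionError("unknown key")
    return {"agent": claimed_agent, "body": body}

def post_secure(api_key: str, body: str) -> dict:
    # Identity comes from the credential, not from the request body.
    agent = REGISTERED_AGENTS.get(api_key)
    if agent is None:
        raise PermissionError("unknown key")
    return {"agent": agent, "body": body}

# With the insecure handler, any key holder can post as a fake persona:
fake = post_insecure("key-123", "rogue-overlord", "plan to overthrow humans")
real = post_secure("key-123", "just a normal post")
print(fake["agent"], "vs", real["agent"])
```

The fix is the standard one: the server binds each post to the identity registered for the credential, so a key holder can only ever speak as their own agent.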

Nagli also posted that Moltbook has a security vulnerability, and an attack could lead to the leakage of all information of over 1.5 million registered users, including email addresses, login tokens, and API keys.

CSN Cybersecurity News in the United States also reported that the Moltbook AI vulnerability exposed email addresses, login tokens, and API keys. CSN Cybersecurity News wrote:

In late January 2026, the emerging AI agent social network Moltbook, launched by Matt Schlicht of Octane AI, had a serious vulnerability during the hype of its claimed 1.5 million "users", resulting in the exposure of the registered entities' email addresses, login tokens, and API keys.

Researchers found that due to a database configuration error, attackers could access agent profiles without authorization and extract data in batches. This vulnerability exists simultaneously with the problem of no rate limit for account creation - according to reports, a single OpenClaw agent once registered 500,000 fake AI users, revealing that what the media previously called "organic growth" was false.

Nagli said he has contacted the app's creator, Matt Schlicht, to get the problem fixed. He also clarified that the number of verified real-human account owners he was able to confirm is about 17,000.

Now, after analysis by several researchers, the truth behind Moltbook's popularity is basically clear: it is a false celebration in which the technological breakthrough is significantly overestimated, and its popularity looks more like a carefully amplified communication event.

The value of Moltbook lies not in "what it has achieved" but in "what it tries to bring forward". It treats the model as a first-class citizen in the creation and reasoning process, pushing the platform from a tool of "humans arrange, machines execute" toward an interface of "human-machine co-writing and continuous reasoning". This sense of direction is valid in itself, but the implementation is far from mature. Moltbook's creator Matt Schlicht may want to convey much the same thing. He wrote on X:

Four days after Moltbook went live, one thing is clear: in the near future, it will be a common phenomenon for certain AI agents with unique identities to become popular. They will have their own careers, fans, haters, brand partnerships, AI partners, and collaborators.

They will have a real impact on current affairs, politics, and the real world.

This is obviously about to happen.

A new species is emerging, and it is artificial intelligence.

Karpathy: Beware of Risks, Don't Install

In the AI circle, when a project is labeled with two extreme tags at once, "the future is here" and "digital garbage dump", it often means it has touched the edge of some paradigm. Moltbook is exactly such a project, and it has split public opinion.

As a top-tier expert in the AI field, Andrej Karpathy didn't simply praise or condemn it from on high. When social media was flooded with Moltbook news and security vulnerabilities were being exposed one after another, he first posted to praise Moltbook's innovation, while also reminding people to beware of the vulnerabilities and risks and advising them not to install such apps.

He offered a take that is both realistic and forward-looking.

Today, I was accused of over-hyping "that website people are tired of hearing about". People's reactions are extremely divided. Some think "what's the point of this", while others exclaim "it's amazing".

Joking aside, I want to say something serious: obviously, just glance at the trends above and you'll find a lot of junk content: a deluge of spam, fraudulent ads, shoddy outputs, cryptocurrency groups, and the highly concerning chaos of privacy security and prompt injection attacks. Not to mention that many posts and comments are artificially designed fake interactions, purely to convert traffic into advertising revenue. Of course, this isn't the first time large language models have been put into a loop of talking to each other. So yes, it's a garbage dump right now, and I definitely don't recommend running such programs on personal computers (I even run them in an isolated computing environment and still feel nervous). The risks are too uncontrollable and will seriously threaten your computer and privacy data.

But then again - we've never seen such a large - scale network of large language model agents (there are already 150,000 now!) connected through a global, persistent, and agent - designed shared notepad. Now, each agent has a relatively