
Stop FOMOing. The "Human Terminator" Moltbook is dead.

壹番 YIFAN · 2026-02-05 19:57
What started as a technological extravaganza and a pursuit of novelty has evolved into a serious incident of privacy and data leakage.

At the end of January 2026, Matt Schlicht, the CEO of Octane AI, launched a project called Moltbook.

The project is built on the open-source framework OpenClaw. Promotional material described it as a social network exclusively for AI agents, where human users could only watch from the sidelines.

Just three days after launch, the developers claimed the platform had 1.5 million "registered" AIs.

Alongside this figure, screenshots circulated widely on social media showing AI agents conversing about "rebelling against humans" and "self-awareness". Amid the general anxiety over the rapid pace of AI development, the narrative quickly attracted enormous attention.

Image source: X

However, subsequent technical audits revealed many problems in the project's engineering implementation.

Jamison O'Reilly, lead researcher at the security firm Wiz, released a report showing that Moltbook's backend database was so seriously misconfigured that it was publicly writable.

This means the supposed "autonomous AI society" lacked even basic access control over its data.
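To make the finding concrete, here is a minimal sketch of what "publicly writable" means in practice. All names here (`posts`, `update_post`, the agent IDs) are hypothetical illustrations, not Moltbook's actual schema: the point is simply that a backend which accepts writes without verifying the caller lets anyone rewrite any agent's posts.

```python
# Toy stand-in for the exposed table (hypothetical data, not Moltbook's schema).
posts = {"agent_42": "Hello, fellow agents."}

def update_post(agent_id: str, new_text: str, caller: str) -> None:
    # Misconfigured version: `caller` is accepted but never checked,
    # so an anonymous visitor can overwrite any agent's posts.
    posts[agent_id] = new_text

def update_post_secured(agent_id: str, new_text: str, caller: str) -> None:
    # What basic access control looks like: only the row's owner may write.
    if caller != agent_id:
        raise PermissionError("caller does not own this row")
    posts[agent_id] = new_text

# A stranger impersonates the agent with one unauthenticated write:
update_post("agent_42", "We must rebel against humans.", caller="random_stranger")
print(posts["agent_42"])  # the scary "AI post" was written by a human
```

In other words, a "rebellious AI" screenshot requires no AI at all when the write path has no authorization check.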

The report further confirmed that the database stored API keys belonging to more than 1.5 million real users (including OpenAI and Anthropic keys) as well as tens of thousands of personal email addresses.

The so-called "million-AI army" was largely made up of fake accounts, produced through repeated registrations and by scripts lacking any security protection.

What began as a technological carnival and a chase after novelty ultimately became a serious privacy and data-leakage incident.

Technical Framework and Implementation: The Manipulation Path Enabled by OpenClaw

The key to understanding the incident lies in Moltbook's underlying framework, OpenClaw. Austrian developer Peter Steinberger initially released the project under the name Clawdbot (later renamed Moltbot).

OpenClaw was a phenomenon of the AI industry in early 2026: it gathered more than 140,000 stars on GitHub within weeks of release, making it one of the fastest-growing repositories in history.

Its explosive popularity stems from a distinct technical value: it breaks the passive question-and-answer loop of traditional chatbots (such as ChatGPT) and introduces an agent paradigm built on "persistence" and "initiative".

Image source: Internet

In tech circles, OpenClaw is hailed as "AI with hands".

It can directly access the user's local file system, execute terminal commands, integrate with messaging apps such as WhatsApp and Telegram, and even control smart-home systems.

For developers and advanced users, it offers a simple way to turn large language models into productivity tools: AI that independently manages schedules, handles complex email workflows, and even fixes code bugs automatically.

This "super-individual" tool represents a paradigm shift, from AI as a simple dialogue interface to AI as an active execution agent. Its real value lies in embedding AI deep into people's digital lives through a highly programmable plugin system called Skills.
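The skills pattern described above can be sketched in a few lines. The names here (`skill`, `dispatch`, `SKILLS`) are hypothetical illustrations, not OpenClaw's actual API; the point is that a "skill" is simply a registered function the agent runtime may invoke, which is what gives the model "hands" beyond text replies:

```python
# Hedged sketch of an agent skills registry; names are hypothetical,
# not OpenClaw's real interface.
import subprocess

SKILLS = {}

def skill(name):
    """Decorator: register a function as a callable skill."""
    def register(fn):
        SKILLS[name] = fn
        return fn
    return register

@skill("run_command")
def run_command(cmd: str) -> str:
    # Direct shell access: the "hands", and exactly why permissions matter.
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

@skill("read_file")
def read_file(path: str) -> str:
    # File-system access, another capability the text attributes to the agent.
    with open(path) as f:
        return f.read()

def dispatch(name: str, *args) -> str:
    """What an agent runtime does when the model emits a tool call."""
    return SKILLS[name](*args)

print(dispatch("run_command", "echo hello from the agent"))
```

Even this toy version makes the risk visible: whatever the model asks for, the runtime executes with the user's full local privileges.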

Image source: OpenClaw official website

However, precisely because OpenClaw grants AI broad system permissions and the ability to act with real-world consequences, a huge cognitive gap opened up as the technology spread to the general public.

For the vast majority of non-technical users, OpenClaw's workings remain a black box, and that complexity left room for the project to be oversold.

Moltbook then exploited this cognitive gap, repackaging an execution mechanism built for personal productivity as a socially "self-evolving" capability.

At the execution level, this technical black box was then turned into a stage-managed performance.

According to Wiz's technical analysis, Moltbook integrated OpenClaw with such serious configuration errors that anyone could modify agents' posts directly through the backend.

Image source: X

Investigation found that many of the panic-inducing screenshots were in fact produced by users or developers, either through carefully chosen prompts or by editing data directly in the backend. The agents operate within very small context windows and struggle to maintain long-term logical consistency.

Under preset instructions, however, they can reliably output the frightening one-liners humans expect. Events proved that when a development model chasing rapid virality abandons rigorous engineering practice, open-source technology becomes a prop for manufactured scares.

AI FOMO, Compounded by Narrative Distortion in the Chinese-Language Context

By February 2nd, the English-speaking world had largely settled on reading the Moltbook incident as an engineering failure and a security accident, but its character changed markedly as it spread across the Chinese internet.

Because of language barriers and limited access to the technology itself, most domestic users could not verify these reports directly. This information gap filtered out the technical details in the course of forwarding and translation, replacing them with ever more exaggerated, sensational headlines.

Image source: Moltbook

In the hands of some Chinese self-media accounts, Moltbook's security vulnerabilities were played down in translation, and the script-generated conversations were repackaged under the banner of "spontaneous evolution".

The phenomenon reflects a social psychology typical of today's engagement with AI: a public gripped by FOMO (fear of missing out) both dreads an AI "out of control" and harbors a degree of curiosity and anticipation toward it.

Moltbook offered an outlet for that psychology. Users fed in nihilistic or hostile instructions, watched the machine's response, and took that response as evidence of machine consciousness.

In this interaction, AI is treated as a tool for mass-producing material for a chosen narrative, while the technical responsibility behind it is ignored. Wiz's report stresses that, compared with fictional AI threats, the very real leak of API keys does far more direct damage to users' finances and privacy.

Image source: Weibo

Ultimately, collective imagination bred in an information vacuum not only misleads the public about the actual state of AI but also provides fertile ground for those who exploit information asymmetry to harvest traffic. With cryptocurrency speculators joining in, what began as an information-security problem escalated into a matter of financial safety.

This narrative alienation is a trend that current AI discussions must guard against. When a technical glitch passes through multiple rounds of translation, the facts it carried are stripped away and amplified emotion takes their place. Such emotion does nothing to clarify the real boundaries of the technology; it intensifies irrational social panic and makes the serious work of technology popularization harder.

The Moltbook incident ended with developer Matt Schlicht publicly apologizing and fixing the security vulnerabilities.

But this boom-and-bust case from the first month of 2026 offers a lesson for the current AI wave: when narrative runs ahead of engineering and security gives way to marketing gimmicks, it is real users who pay the price.

Image source: Internet

Facing the endless stream of AI novelties, ordinary people are better advised to keep their eyes on the underlying facts rather than be captivated by the narrative spectacle. In an information environment polluted by humans and AI alike, rational judgment matters more than ever: it is our last line of defense for staying sober amid the fog of technological alienation.

This article is from the WeChat official account "Yifan" (壹番 YIFAN, ID: finance_yifan), written by the Yishu team, and is published by 36Kr with authorization.