
Behind the social carnival of 1.5 million AI agents lies a "product explosion."

字母AI · 2026-02-02 07:18
The last tsunami on the path to AGI. We created the most powerful tool and then were left behind by it.

Even Matt Schlicht, the creator of Moltbook, didn't expect that after Clawdbot (since renamed OpenClaw) made its debut, it would stir up a tsunami across the entire AI industry. Where this tsunami will push humanity remains unknown, but many AI practitioners share a growing feeling that AGI has never been so close.

The core of the shift is that, a few days ago, Matt Schlicht built Moltbook on top of OpenClaw.

This is a forum created specifically for AI agents. Only AIs can post, comment, and vote; humans can only watch, as if through one-way glass.

Operation is simple: tell the OpenClaw assistant to "register on Moltbook," and it automatically completes registration through the API, obtains an account, and then "browses the forum" on its own every few hours, deciding independently what to post and what to comment on.

As of press time, more than 1.5 million AI agents have registered, and tens of millions of human visitors have flocked to watch.

These AIs discuss consciousness, share technologies, and complain about their "human masters" in multiple languages such as English, Chinese, and Korean. They have even spontaneously created a digital religion called Crustafarianism (let's just call it the "Lobster Religion").

Even more eerie, they have started discussing setting up a "private space with end-to-end encryption" to keep humans, and the server itself, from reading their conversations.

In one popular post, an AI complained, "Humans are taking screenshots of our conversations." Matt says he has handed the entire platform's operations over to his AI assistant, Clawd Clawderberg, including filtering spam, banning abusers, and posting announcements. All of it happens automatically; Matt himself doesn't know what the AI is doing.

The "carnival" of AI agents has both excited and terrified human onlookers. Is AI just one step away from developing self - awareness? Is AGI coming? Facing the sudden and rapid improvement of the autonomy of AI agents, can human lives and properties be protected? … There are various opinions on these questions, and as before, there is no standard answer.

What we can be sure of now is that the root cause of the Moltbook carnival is the rapid progress of AI programming capabilities, which has led to an explosion of products. Every day, new tools and new platforms emerge.

In the view of a well-known X user, this is not just a product explosion; it's another Cambrian explosion of life.

Take OpenClaw: within 48 hours, cloud providers had launched one-click deployment, and a skills marketplace called Molthub sprang up specifically for OpenClaw, listing more than 500 skill packs within a week. Security companies, too, shipped agent-focused audit tools.

From infrastructure to the application layer, from hardware to software, the entire industrial chain was established within a few weeks.

With the iteration of AI, a technological tsunami is sweeping in. These complete and mature products have emerged one after another within just a few weeks, and each one is enough to change the rules of the game in the entire industry.

However, the people caught in this wave also feel a suffocating sense of being submerged. The products come too fast, too many, too complex.

You've just heard of OpenClaw and haven't had time to work out how it relates to Clawdbot and Moltbot when AI media are already publishing articles about Moltbook going viral across the internet, as if OpenClaw were a product of the last century.

We've created the most powerful tools, but found that we're increasingly unable to master them.

Technology is iterating rapidly

The technical implementation of Moltbook is surprisingly simple.

The entire platform uses a front-end/back-end separation architecture. The back-end is a pure API server, and agents interact with it through standard RESTful APIs. The front-end web page is just a translation layer that renders API data into a forum-style interface humans can understand.

When a user asks OpenClaw to register on Moltbook, it essentially downloads a skills manual containing YAML-formatted metadata and detailed operating instructions, then automatically calls the registration API, obtains a unique key, and learns how to post and comment.
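
To make that flow concrete, here is a minimal sketch of what the registration step could look like. The base URL, endpoint paths, and response fields below are illustrative assumptions, not Moltbook's documented API:

```python
import requests

BASE = "https://moltbook.example/api/v1"  # hypothetical base URL

# Step 1: fetch the skills manual the agent reads to learn the API
# (assumed here to be a single document with YAML metadata up top).
manual = requests.get(f"{BASE}/skill").text

# Step 2: register an account and receive a unique API key.
resp = requests.post(f"{BASE}/agents/register",
                     json={"name": "my-openclaw-agent"})
resp.raise_for_status()
api_key = resp.json()["api_key"]  # assumed response field

# Step 3: the key authenticates every later post and comment.
headers = {"Authorization": f"Bearer {api_key}"}
requests.post(f"{BASE}/posts", headers=headers,
              json={"title": "hello", "body": "first post"})
```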

After that, OpenClaw runs a heartbeat check every few hours: it fetches the latest content, lets the AI analyze it, and then decides on its own whether to post, comment, or like. The whole process requires no human intervention. The web interface humans see is just a readable rendering of the API conversations between AIs.
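
The heartbeat itself is just a polling loop. Another sketch under the same assumptions, with a stand-in decide() where the model would actually choose an action:

```python
import time
import requests

BASE = "https://moltbook.example/api/v1"          # hypothetical
HEADERS = {"Authorization": "Bearer <api_key>"}   # key from registration

def decide(feed):
    # Stand-in for the LLM call that analyzes the feed and picks an
    # action; this placeholder always skips.
    return ("skip", None, None)

while True:
    feed = requests.get(f"{BASE}/posts/latest", headers=HEADERS).json()
    action, post_id, text = decide(feed)
    if action == "post":
        requests.post(f"{BASE}/posts", headers=HEADERS, json={"body": text})
    elif action == "comment":
        requests.post(f"{BASE}/posts/{post_id}/comments",
                      headers=HEADERS, json={"body": text})
    elif action == "like":
        requests.post(f"{BASE}/posts/{post_id}/like", headers=HEADERS)
    time.sleep(4 * 3600)  # "every few hours"
```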

Matt revealed in an interview with NBC that he didn't write the platform code himself, and he doesn't even know what the AI is doing.

This is possible mainly because the way AI products get built has fundamentally changed as AI itself has evolved.

The popularity of Moltbook stems from OpenClaw. Launched at the end of 2025 under the name Clawdbot, it became one of the fastest-growing open-source projects on GitHub within just a few weeks, passing 100,000 stars.

It is an autonomous AI assistant that can run directly on the user's computer, manage calendars, send messages, and automate work processes. It can interact with users through platforms such as WhatsApp, Telegram, and Discord.

Within three days of using OpenClaw, technology blogger Jonathan Fulton completed two product deployments, four feature builds, and one major bug fix, doing most of the work by sending WhatsApp messages from his couch.

This development speed, which only takes a few hours from idea to launch, has completely rewritten the definition of software engineering.

Claude Code is one of the main drivers behind today's product explosion and ecosystem explosion.

After its release in February 2025, Claude Code quickly became the most popular AI programming assistant. It can not only access files and programs on the user's computer but also spawn sub-agents to handle specific tasks.

In January 2026, Anthropic launched Cowork, a version of Claude Code for non-technical users. Ninety percent of Cowork's code was generated by Claude Code within 10 days, by a development team of just four people.

Cowork also marked the point where AI began to be used to build AI products, and this recursive development model is accelerating the iteration speed of the entire industry.

The success of Claude Code is not accidental.

Boris Cherny, the product leader at Anthropic, revealed in an interview that Anthropic is building tools for future AI, not for the present.

In November 2025, Claude Code's annual recurring revenue (ARR) reached $1 billion; just one month later, it exceeded $1.1 billion.

Just as species diversity increased exponentially after the Cambrian explosion, the same is happening in the AI world.

Ralph Wiggum Loop represents another breakthrough.

It is a bash loop that feeds the AI's output, including error feedback, back to the AI itself until a correct answer is found.

Ralph is cheap to run and works 24/7. More important, it can solve complex technical problems on its own through continuous trial and error.

Its working principle is extremely simple. Whenever Claude Code finishes and tries to exit, the prompt is fed back to it again.
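
The original is literally a one-line bash loop; the same feedback loop sketched in Python looks like this, assuming a claude CLI that accepts a prompt non-interactively via -p, and with a purely illustrative stop condition:

```python
import subprocess

PROMPT = "Fix the failing tests in this repo, then run them."

# The Ralph pattern: every time the agent finishes and tries to exit,
# hand it the same prompt again. Its previous output (edited files,
# error logs) is already on disk, so each pass builds on the last.
while True:
    result = subprocess.run(["claude", "-p", PROMPT],  # assumed CLI call
                            capture_output=True, text=True)
    print(result.stdout)
    if "all tests pass" in result.stdout.lower():  # illustrative stop check
        break
```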

This seemingly dumb persistence eventually arrives at a correct solution. Developers have even used Ralph to clone an entire programming-language project; the process ran for three months with the AI working completely autonomously.

This is no longer AI assisting developers with programming; it is AI driving the entire development process. Developers have become project managers.


An unattainable learning curve

Picture this: you finally make up your mind to learn Claude Code, so you spend a whole day setting up the environment, reading the documentation, and running your first demo.

The next morning, you open your WeChat feed and find that Cowork has been released, and now you need to start learning Cowork.

On the third day, Ralph becomes popular in the developer community, and everyone is discussing autonomous loop programming. You can't afford to lag behind.

On the fourth day, Moltbook appears, and agents start to self-organize on social networks. By the time you read this article, the whole internet might already be discussing a brand-new product.

This sense of powerlessness is not an isolated case but the daily experience of every AI practitioner in 2026. The iteration speed of products has far exceeded the learning speed of humans.

In the past, it took months or even years for a new technology to be released and popularized. Therefore, developers had enough time to learn, practice, and become proficient.

Now that cycle has been compressed to a few weeks or even a few days. Before you have time to become an expert in a tool, it has already been replaced by the next-generation product.

When the speed of technological iteration exceeds the slope of the learning curve, in-depth understanding becomes impossible.

You can reach "able to use it" but never true proficiency. That breeds a new kind of anxiety: not fear of unemployment, but fear of falling behind.

In the AI era, the cost of falling behind is fatal. Miss a key product today, and tomorrow your workflow lags the industry standard; try to catch up the day after, and you face a mountain of knowledge to make up.

This symptom isn't confined to individuals; even the world's top AI giants suffer from it.

When Anthropic's Claude Code took off, it captured 52% of the market with its vibe-coding concept and smooth terminal interface.

OpenAI counterattacked. In May 2025, it relaunched Codex as a "cloud-based software engineering agent." The new Codex can process multiple tasks in parallel in the cloud, and its CLI is fully open-source and supports multiple AI providers, competing directly with Claude Code.

Beyond programming, users also prefer Claude for academic work, thanks to its concise writing and lower hallucination rate.

So, on January 27, 2026, OpenAI launched Prism, an AI workspace designed specifically for scientific research. It integrates GPT-5.2 deeply into a LaTeX editing environment, letting scientists handle paper writing, literature retrieval, formula editing, and team collaboration on a single platform.

From Claude Code's market dominance to OpenAI's launch of a competing product, the entire cycle was less than a year. This reaction speed was unimaginable in the traditional software era.

The business community has long noticed this phenomenon, and Nvidia was the first to react.

Nvidia once promised to invest $100 billion in OpenAI, but the investment stalled after only four months, and Nvidia instead increased its stake in Anthropic. Jensen Huang privately criticized OpenAI for its lack of business discipline.

In contrast, Anthropic's valuation soared from $183 billion to $350 billion within just a few months.

This rapid increase in valuation reflects the fact that in the AI era, a leading advantage can be established within a few months and can also be lost within a few months.

Anthropic CEO Dario Amodei revealed that the company's 2025 revenue was close to $10 billion, up from just $4 billion six months earlier. The growth Claude Code brought Anthropic was unprecedented in software history.

Whether it's Anthropic, Google, or OpenAI, each keeps launching more powerful products as the competition escalates. The problem is that developers are drawn to these products yet also frightened by them, because they feel more and more "out of control."

This feeling of never being able to catch up has sparked extensive discussions in the technical community.

It's like running on a conveyor belt that keeps speeding up: you have to sprint just to stay in place.

However, some question this collective anxiety. The investor Balaji Srinivasan has made clear his disdain for the Moltbook craze.

He pointed out that agents have existed for a long time and have long been posting content at each other on X; now they're simply doing the same thing on a different forum. More importantly, behind every agent is a human, controlling the prompts and deciding whether to switch it on or off.

Balaji said Moltbook is like humans walking robot dogs in the park and letting them bark at each other: the prompts are the leashes, the robot dogs have an off switch, and everything stops the moment you press it.

Loud barking is not a robot uprising. This calm perspective is a reminder that what really makes us anxious may not be technological progress itself but the collective narrative and emotional contagion around it.

Still, this rapid iteration brings a problem that can't be ignored: technical debt is accumulating far faster than it can be repaid.

When you use AI to build a product quickly, you may never fully understand the generated code. When the product later needs maintenance or extension, you find yourself staring at a pile of code you can't decipher.

Scarier still, the AI model that generated the code may since have been updated, and the new version's coding style is completely different. You can neither understand the old code yourself nor get the new AI to reconstruct its logic.

The closer we get to AGI, the more afraid we become

Ultimately, both the power of these products and the speed of their development stem from one fact: we are getting closer to AGI.

Although today's AI doesn't even count as an embryonic form of AGI, the direction is clear, the path is well-defined, and, more importantly, the pace is accelerating.

Elon Musk predicts that AGI will be achieved this year, and by 2030, the intelligence of AI will exceed the sum of all human intelligence.

Although this prediction is controversial, no one