
It's been two months. It's time to define OpenClaw.

锦缎 · 2026-03-06 08:45
The old world is one we can never return to.

Two days ago, a "red lobster" quietly crawled onto Weibo.

But long before that, it had already been at the center of the storm.

From its open-source release in January, to being accused by Anthropic and forced to change its name, to being impersonated, then "recruited" by OpenAI and transitioned to foundation governance, OpenClaw tossed and turned for two full months before finally entering the view of Chinese developers in an official capacity.

But domestic netizens couldn't wait for the official release.

Open Bilibili, Douyin, or Xiaohongshu, and the screens are full of "step-by-step tutorials". Some people shout, "Learn it and earn hundreds of thousands a month." On e-commerce platforms, walking people through deployment has become a business worth hundreds of yuan. Strangely, almost every video's comment section asks the same question: "I've installed it. Now what? What can it do?"

On one hand, ordinary users stare blankly at the terminal and programmers wince at tokens burning away. On the other, cloud providers' servers are selling out.

Two months have passed. It's time to define OpenClaw.

01

The 'Computing Power Black Hole' for Users, the 'Inventory Savior' for Models and Cloud Providers

In the past two months, there is no telling how many people were tempted by the overwhelming promotion of OpenClaw.

But when you click into those step-by-step tutorials, ready to deploy an AI assistant yourself, the first hurdle you hit is one no tutorial can solve:

Insufficient hardware.

Unlike ChatGPT's stateless, web-based Q&A, OpenClaw is a full-duplex, stateful daemon process that requires a sandboxed environment. Remember how people described it when it was first born? An assistant that is online 24/7.

This means that it must constantly monitor the message interfaces of Feishu and DingTalk.

It means that the security issues of the past two months have pushed reliable tutorials to recommend running it in a Docker container, packaging a separate operating environment, which consumes a large amount of memory.

It also means that it can do nothing on its own. It must be wired to an underlying large model and equipped with skill plugins to actually get things done. And each plugin you enable adds another thread quietly burning your resources in the background.
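The always-on shape described above can be sketched in a few lines. This is a minimal illustration, not OpenClaw's actual code: all names (`poll_im`, `plugin_worker`, the platform strings) are hypothetical. The point is structural: one loop listens to an IM's message interface, and every enabled plugin is one more background thread, which is why resource use grows with each plugin.

```python
import threading
import time
import queue

messages = queue.Queue()

def poll_im(platform: str):
    """Stand-in for a long-polling or websocket listener on one IM platform."""
    for i in range(3):                      # a real daemon would loop forever
        messages.put((platform, f"msg-{i}"))
        time.sleep(0.01)

def plugin_worker(name: str, stop: threading.Event):
    """Each enabled plugin is an extra thread idling in the background."""
    while not stop.is_set():
        stop.wait(0.05)                     # wake periodically to do plugin work

stop = threading.Event()
threads = [threading.Thread(target=poll_im, args=("feishu",))]
threads += [threading.Thread(target=plugin_worker, args=(p, stop))
            for p in ("calendar", "search")]
for t in threads:
    t.start()

time.sleep(0.1)                             # let the listener drain its messages
stop.set()
for t in threads:
    t.join()
print(f"received {messages.qsize()} messages")  # a real core would feed these to the model
```

Shut the host machine down and all of these threads die with it, which is exactly why a home PC cannot keep such an assistant "online 24/7".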

For ordinary users, OpenClaw is a computing-power black hole. A home computer isn't configured for it, and the moment the machine shuts down, the assistant goes offline. If you want it truly online 24/7, your only option is to rent a cloud server.

So, during the days when OpenClaw became extremely popular, the lightweight servers of major cloud providers were snapped up.

But for cloud providers, OpenClaw means far more than "selling a few more servers": it is long-awaited rain after a drought. Over the past year or two, demand for large-model training has been strong, but computing consumption on the inference side never picked up. Large enterprises build their own server rooms, and small and medium-sized enterprises' adoption of cloud services has fallen short of expectations. Those low-spec lightweight servers sat in warehouses, unsold. An application like OpenClaw, hungry for compute and memory and needing to be always online, became the perfect outlet for digesting that inventory.

For model providers, OpenClaw is even more of a pillow handed to them just as they were dozing off. Domestic large models can all be called via API, but they have never found a consumer scenario that stably consumes tokens. Luring people to download apps for Spring Festival giveaways, only to be uninstalled afterward, is no long-term solution. And OpenClaw's Agent logic is a natural token crusher: completing one task means interacting with the model dozens or even hundreds of times and consuming tens of thousands of tokens. Using an open-source community project to boost the call volume of their own models is a very cost-effective deal.

So, looking back at the past two months of overwhelming praise: on the surface it was a carnival for course sellers, but behind it was the push of cloud providers and model providers.

OpenClaw is not just an application. It is a computing power black hole for users, an inventory savior for cloud providers, and a token feast for model providers.

02

The 'Lobster' Crawls into the Dialog Box, and WeChat, QQ, Feishu May Start to Fade Away

If we examine OpenClaw only through the lens of the marketing bubble, it's easy to conclude that "the software is not a functional success".

Of course, that wouldn't be fair. OpenClaw has in fact achieved a milestone breakthrough in its technical architecture, and its rise may also chart the decline of WeChat, QQ, and Feishu.

People are no strangers to chatbots.

Enterprise WeChat opened its bot API early on, and QQ bots have even become a built-in system feature. But these bots share a common systemic problem: ecosystem fragmentation.

Domestic QQ and Feishu, and foreign Discord and WhatsApp, use completely different development frameworks. A bot built for Platform A must have its code rewritten to run on Platform B; skills developed for Platform C can only be envied from Platform D. Every bot is an isolated island, and every cross-platform migration is a complete rebuild from scratch.

The root cause of this architectural fragmentation is that every IM bot is locked into its platform's API. Developers aren't developing for AI; they're developing for a specific IM platform.

OpenClaw is different.

Based on the MCP protocol proposed by Anthropic, it disassembles the Agent into three standardized levels:

● Core (Core Layer): Responsible for calling the underlying large model for inference and planning. This is the AI's brain and is independent of any IM platform.

● Adapter (Adapter Layer): The bridge to different IM platforms. OpenClaw abstracts all message sending and receiving into unified events: whether it's QQ, WeChat, or Feishu, input and output use standard formats. Platform differences are encapsulated in this layer, so upper-level logic never needs to care which IM it is talking to.

● Skill (Skill Layer): Modules that perform specific tasks. Built on standardized interfaces, a set of Skills, once written, can be reused on every supported IM platform without changing a line of code.
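The three layers above can be sketched in miniature. This is an illustrative model of the decoupling, not OpenClaw's real API: `Event`, `Adapter`, `Core`, and `echo_skill` are all invented names. The key property is that the skill and the core never touch a platform API; only the adapter does.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Event:
    """Unified message event: adapters normalize every IM into this shape."""
    platform: str
    user: str
    text: str

class Adapter:
    """Adapter layer: wraps one IM's API and speaks standard Events upward."""
    def __init__(self, platform: str):
        self.platform = platform
    def receive(self, user: str, raw: str) -> Event:
        return Event(self.platform, user, raw)   # platform quirks hidden here
    def send(self, user: str, text: str) -> str:
        return f"[{self.platform} -> {user}] {text}"

# Skill layer: plain functions over Events; written once, reused on every IM.
def echo_skill(ev: Event) -> str:
    return f"echo: {ev.text}"

class Core:
    """Core layer: routes events to skills (a real core would call an LLM)."""
    def __init__(self, skills: list[Callable[[Event], str]]):
        self.skills = skills
    def handle(self, ev: Event) -> str:
        return self.skills[0](ev)                # trivial "planning" for the sketch

core = Core([echo_skill])
for adapter in (Adapter("feishu"), Adapter("qq")):
    ev = adapter.receive("alice", "hello")
    print(adapter.send(ev.user, core.handle(ev)))
# → [feishu -> alice] echo: hello
# → [qq -> alice] echo: hello
```

Note that supporting a new IM means writing one new `Adapter`; every existing skill works on it immediately, which is the decoupling the next paragraph describes.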

The essence of this architecture is the first complete decoupling of AI capabilities from IM platforms.

From now on, developers no longer "build a bot for WeChat" and then "build another bot for Feishu"; they develop one set of skills for OpenClaw and let it run on every IM automatically.

This means that whether users open WeChat, QQ, or Feishu, they face the same AI assistant: the same memory, the same skills, the same conversation context. A chat (or task) left unfinished on WeChat today can be picked up on Feishu tomorrow, with the AI carrying it over seamlessly.

More importantly, when all IMs become entrances to AI, the logic for users to choose IMs will be fundamentally reversed.

In the past, IMs were containers for relationships and moats for ecosystems. You stayed on WeChat because your friends were all there; you opened Feishu because your work required it. IM platforms controlled the entrances to users, and AI was just an appendage on them.

But when AI truly achieves seamless roaming across platforms, the weight of the entrance starts to tilt towards AI. Users no longer care "in which IM I'm chatting with the AI", but only care "whether I can find my AI anytime and anywhere". IMs gradually degenerate into simple displays and microphones, becoming just pipelines.

Historically, telecom operators have experienced this: when WeChat emerged, text messages and calls were marginalized, and operators became "pipelines". Today, the same scenario may happen to IMs: when AI crosses the boundaries of all IM platforms and truly achieves "one - time access, everywhere available", what flows in the moats of IMs will no longer be users' relationship chains, but AI's dialogue flows and work flows.

03

No BAT in the Hidden War

Domestic AI giants certainly couldn't sit still when OpenClaw caught fire. But those at the forefront are not BAT.

Take a look at the main players in this field, Moonshot AI (Dark Side of the Moon), MiniMax, StepFun (Step Star), and DeepSeek, and you'll notice a fact: in this war, BAT is no longer the obvious protagonist.

A hidden power change is taking place. Why is this so?

To answer this question, we need to first understand the underlying business logic of Agents like OpenClaw.

This kind of product has an inherent characteristic that can be called the "intelligent agent cycle". Unlike the one-shot Q&A of traditional large models, an Agent completing a task goes through a complex recursive process: break down the task → search the web → read materials → find the information insufficient → search again → call tools → feed back results...

In this process, the Agent interacts with the underlying large model anywhere from dozens to hundreds of times, and a single task can consume tens of thousands of tokens. With high-end models like GPT-5.2 or Gemini-3.1 Pro, the inference cost of a complex task can soar to dozens of dollars. This is the root of the complaint that OpenClaw "burns money" too fast.
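The cost arithmetic behind that complaint is simple. The sketch below uses purely illustrative numbers (the call counts, token counts, and per-million-token prices are assumptions, not measured OpenClaw or vendor figures) to show why the same workload is ruinous on a frontier model and trivial on a cut-rate one.

```python
def agent_task_cost(model_calls: int, tokens_per_call: int,
                    usd_per_million_tokens: float) -> float:
    """Total cost of one agent task: calls x tokens-per-call x unit price."""
    total_tokens = model_calls * tokens_per_call
    return total_tokens * usd_per_million_tokens / 1_000_000

# A complex task: ~100 model round trips at ~2,000 tokens each = 200k tokens.
premium = agent_task_cost(100, 2000, 15.0)   # assumed frontier-model pricing
budget  = agent_task_cost(100, 2000, 0.5)    # assumed cut-rate domestic pricing
print(f"premium model: ${premium:.2f} per task")   # → premium model: $3.00 per task
print(f"budget model:  ${budget:.2f} per task")    # → budget model:  $0.10 per task
```

Because cost scales linearly with both the number of calls and the unit token price, a 30x price gap between models translates directly into a 30x gap per task, which is the opening the next paragraphs describe.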

But in China, this "money - burning" pain point has precisely turned into a business opportunity for new players.

The opportunity has two ends. On the supply side, after two years of competition, domestic large-model companies have driven token prices down to rock bottom, yet they still lack a consumer scenario that stably consumes tokens; as noted above, Spring Festival giveaway downloads that end in uninstalls are no long-term solution.

On the demand side, products like OpenClaw naturally need huge volumes of tokens to run their workflows, but calling foreign models is too costly to scale. On one side sits a supply of computing power waiting to be released; on the other, a huge demand for consumption. The gap in between is exactly what domestic model companies fill with "one-click deployment" versions of OpenClaw: self-developed products lower the usage threshold, and cheap models lower the operating cost. Together, this combination closes a perfect business loop.

Data proves this path is viable. According to OpenRouter's statistics on API call volume for OpenClaw's underlying models, the top entries are not from OpenAI, not from Google, and still less from BAT, but Kimi K2.5 from Dark Side of the Moon, MiniMax M2.5, Step 3.5 Flash from Step Star, and DeepSeek V3.2.

In this era of severe AI overcapacity, OpenClaw's unexpected popularity points domestic large models toward a differentiated path: the operating logic of Agents makes this a scenario of high token consumption and high-frequency interaction. In this arena, whether one can match OpenAI's state-of-the-art is no longer the deciding factor; extreme cost-effectiveness is the real core competitiveness.

And a war of cost-effectiveness has never been the exclusive preserve of giants.

Back then, BAT dug their moats with search, e-commerce, and social networking; what flowed through those moats were user relationships, transaction loops, and content ecosystems. But the core elements of this new field, computing-cost control, model inference efficiency, and open-source ecosystem operation, are exactly the capabilities that companies like Dark Side of the Moon have spent the past two years honing. When the rules change from "who has more users" to "whose tokens are cheaper" and "whose code is better", the players at the table naturally change.

No BAT in the hidden war.

04

Conclusion: The Old World, Never to Return

Three years ago, when ChatGPT (then powered by GPT-3.5) first launched, few people believed it would change the world.

Today, with OpenClaw having broken into the mainstream, more people are asking the same question: "What can I actually do with it?"

This scene feels familiar. OpenClaw is retracing the path large language models once took: geeks see the potential to change the world, while ordinary people feel only confusion and alienation. Technology takes a step forward while demand stays put. This is the classic stage of technological oversupply, the no-man's-land every revolutionary product must cross.

History has repeatedly proven this rule. When Henry Ford asked people what they wanted, the answer was a faster horse; when Steve Jobs released the iPhone, people questioned how to type without a physical keyboard. We are always used to enduring the cumbersome status quo and can't imagine a life reconstructed by automation.

The road hasn't been built yet, but OpenClaw is already forcing the car into existence.

But history also proves that once you've ridden in a car instead of walking, you never go back.

This article is from the WeChat official account "Silicon-Based Stardust", author: Siqi. Republished by 36Kr with permission.