OpenClaw Lobster: Why Is It So Popular? Who Benefits the Most?
A personal AI assistant called OpenClaw is taking the world by storm. It first caught on among developers and geeks in Silicon Valley, then spread to the Chinese market through technology communities and social media.
OpenClaw was born in November 2025 and is nicknamed the "Lobster"; its logo is a lobster. Peter Steinberger, OpenClaw's founder, hopes that, like a lobster, it can keep shedding its shell and growing into something larger.
At its core, OpenClaw is an open-source intelligent agent (Agent) framework. Once users grant it sufficient system permissions, it can have large language models operate the computer, call tools, and execute tasks automatically, communicating with tools such as Feishu along the way.
As of March 9, it had received more than 285,000 stars on GitHub, the world's largest open-source community (a star can be understood as a bookmark), making it the most-starred open-source software project in history.
Jensen Huang, the founder of NVIDIA, said at the Morgan Stanley TMT Conference on March 4 that OpenClaw is probably the most important software ever released. He elaborated that the download scale OpenClaw reached within three weeks of release is equivalent to what Linux, the world's largest open-source software project, accumulated over 30 years.
Huang's analogy is not entirely accurate, though. Linux is largely open-source software for To B (enterprise) development, whereas OpenClaw was born as a product aimed directly at To C (consumer) users.
In the Chinese market, the "Lobster Fever" is also spreading. Almost all large technology companies, including Alibaba, ByteDance, Tencent, and Baidu, are deploying or integrating OpenClaw. Large-language-model companies such as DarkSide of the Moon, Zhipu AI, and MiniMax have launched dedicated Coding Plan (programming-model subscription) packages for OpenClaw.
The "Lobster Fever" has even grown into a nationwide craze. Tencent organized offline events outside its headquarters in Shenzhen's Binhai Building to help citizens install the "Lobster". Offline salons on how to install and use it are spreading across the country. Government departments are following the trend as well: local governments in Shenzhen's Longgang District, Wuxi's High-tech Zone, and Changshu City in Suzhou have introduced policies or documents offering "Lobster" subsidies to encourage enterprises and individuals to use it.
The "Lobster" is not without controversy, and it is not something everyone can handle.
First, its deployment and usage thresholds are relatively high, requiring some coding background. Even after deployment, usage and maintenance remain demanding: once an error occurs, users need the ability to fix it themselves. Although the software is free to download, it must call models to work, and once it starts consuming Tokens, the running cost is hard to estimate accurately. And for it to truly complete tasks automatically, users must grant it operation permissions, which may bring security risks.
Meanwhile, a series of questions around the "Lobster" are emerging: Why has it become so popular? Who benefits the most? Are ordinary people developing AI FOMO (fear of missing out, a common term in the AI field) because of it?
OpenClaw official website
Why was its popularity inevitable?
The popularity of OpenClaw is no accident. It is the result of large language models crossing a critical capability threshold.
Two key factors lie behind this. First, models have gradually developed the ability to plan tasks, enabling them to break complex goals down into multiple steps. Second, context memory has improved significantly, allowing models to reason continuously over long-running tasks.
Before the first half of 2025, the context memory of mainstream models (including OpenAI's GPT-4.5 and Anthropic's Claude 3.5/3.7) was generally only 200,000 Tokens. That was sufficient for long-text understanding, but models easily lost track of context when executing complex tasks. So although Agent tools like Manus were already popular, their task-execution failure rate remained high.
In the second half of 2025, however, the context memory of mainstream models (OpenAI's GPT-5 series, Anthropic's Claude 4.5 series, and Google's Gemini 3 series) grew to 1-2 million Tokens. With task goals, reasoning traces, and tool-call records preserved continuously in these long contexts, the accuracy of Agent tools improved dramatically.
An algorithm engineer told Caijing magazine that he clearly felt the change in his daily work as models' coding performance and memory improved. In the first half of 2025, when he used AI code-generation tools like Cursor to run models such as Claude 3.5/3.7 for development, he had to constantly intervene and break tasks down himself; complex tasks were often interrupted, and he had to be ready to take over at any moment.
By late 2025 and early 2026, however, he found that flagship models like Claude 4.5/4.6 and OpenAI-5.3-Codex could work continuously on complex tasks for half an hour or even an hour with little or no human intervention.
Around the Spring Festival in 2026, domestic models had approached this level as well. MiniMax's M2.5, Zhipu's GLM-5, and DarkSide of the Moon's Kimi K2.5 can complete similar tasks at lower Token cost. The algorithm engineer quoted above roughly estimated that MiniMax M2.5's unit Token cost, for example, is only one-quarter that of OpenAI-5.3-Codex, while achieving over 95% of the effect.
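Taken at face value, those two figures imply a large gap in quality delivered per unit of spend. A back-of-envelope calculation in Python, using only the ratios quoted above (the absolute price is an arbitrary number for illustration):

```python
# Back-of-envelope quality-per-cost comparison using the engineer's figures.
# The absolute price (4.0 per million Tokens) is an arbitrary illustration;
# only the ratios come from the article.

flagship_cost = 4.0                 # flagship model, arbitrary unit price
domestic_cost = flagship_cost / 4   # "one-quarter of the unit Token cost"

flagship_quality = 1.00             # flagship taken as the baseline
domestic_quality = 0.95             # "over 95% of the effect"

flagship_value = flagship_quality / flagship_cost  # quality per unit spend
domestic_value = domestic_quality / domestic_cost

print(domestic_value / flagship_value)  # prints 3.8
```

At those ratios, the cheaper model delivers nearly four times the quality per unit of spend, which helps explain why long-running Agent workloads migrate to it.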
These changes indicate that large language models have reached the critical point needed for Agents to run stably. The remaining problem was how to package these capabilities into products that ordinary users can understand and use.
Some technical experts saw this problem early. In December 2025, Xin Zhou, general manager of Baidu Smart Cloud's large-language-model platform, said in an exclusive interview with Caijing that as base-model capabilities improve and costs fall, Agents will grow further in 2026. But two things are key: building a good platform, and building the various tools (including Agents and Skills) and application interfaces well.
At the time, Xin Zhou believed that the trend of multi-Agent collaboration was not yet evident in 2025, and that many offerings were mere gimmicks. But the field will certainly move toward multi-Agent collaboration, because a single model cannot handle ever more complex contexts and tasks: different tasks require different specialized models, and the most complex tasks ultimately require multiple Agents working together.
OpenClaw solved precisely these product-packaging problems. Architecturally, OpenClaw not only encapsulates large-language-model interfaces and a tool-calling system but also provides different Agent tools. It even offers reusable skills (Skills, which can be understood as code and text instructions for models to read) and agent templates (Agents). After a user issues an instruction, OpenClaw first calls the model to write code, plan the task, and break the complex goal into steps; it then calls the appropriate agents or skills as needed, or executes operations through tools such as the browser, the terminal, and the file system, completing the task step by step.
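That plan-then-execute loop can be sketched in a few lines of Python. This is a hypothetical illustration of the pattern the paragraph describes, not OpenClaw's actual API; every class, function, and tool name here is invented:

```python
# Minimal, self-contained sketch of a plan-then-execute Agent loop.
# All names are hypothetical illustrations, not OpenClaw's real interface.

class StubModel:
    """Stands in for a large language model."""
    def plan(self, goal):
        # A real model would decompose the goal; here two steps are hard-coded.
        return ["open_browser", "save_file"]
    def decide(self, step, context):
        # A real model would choose a tool and its arguments from the context.
        return step

def run_agent(goal, model, tools):
    """Plan the goal into steps, execute each step with a tool,
    and keep every result in the running context."""
    context = [goal]
    for step in model.plan(goal):
        tool_name = model.decide(step, context)
        result = tools[tool_name]()   # browser / terminal / file operation
        context.append(result)        # preserve the result in the context
    return context

tools = {
    "open_browser": lambda: "page loaded",
    "save_file": lambda: "file saved",
}
print(run_agent("book a flight", StubModel(), tools))
# prints: ['book a flight', 'page loaded', 'file saved']
```

The essential design point is the `context.append(result)` line: every tool result is fed back into the running context, which is why the long context windows described earlier matter so much for Agent accuracy.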
On the OpenClaw official website, the "Lobster" can be deployed on a MacBook by copying a single command.
What is more crucial to OpenClaw's popularity is that it got one thing right: it lowered the psychological threshold for deploying Agents, even though the technical threshold was not really lowered.
On the OpenClaw official website, you simply copy a string of code, open the terminal on an Apple MacBook, paste it, and press Enter to deploy with one click.
Before OpenClaw, by contrast, deploying an Agent meant understanding the Python environment and wrestling with configuration files, an opaque black box that would scare off most ordinary users. OpenClaw instead gives ordinary people a sense of participation: the feeling that they, too, can download and use a complex Agent tool by tinkering with a bit of code.
Who is the biggest beneficiary?
From a business-model perspective, OpenClaw itself makes no direct money: it is an open-source tool, and open source means free. The real beneficiaries are the companies behind it that supply models and computing power for Agents.
Large-language-model companies and cloud-computing companies are the biggest beneficiaries of OpenClaw, because however users and developers deploy it, they ultimately use models and consume computing power.
OpenClaw needs large language models to run. During task execution, it continuously calls models to generate code, plan tasks, and invoke tools, and every step consumes Tokens. Once an Agent starts running continuously, its Token consumption is often far higher than that of a traditional chatbot, so model companies are almost the most direct beneficiaries.
Xin Zhou told Caijing magazine in December 2025 that the biggest difference between an Agent system and an AI dialogue tool is that an Agent system executes a series of tasks rather than a simple dialogue. During execution, the model must continuously plan tasks, call tools, and record execution status, and each step may trigger new model calls. An Agent task therefore often consumes far more Tokens than an ordinary dialogue: a single task may burn tens of thousands or even hundreds of thousands of Tokens.
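The scale of that gap is easy to put in numbers. A rough Python estimate, where the per-chat-turn figure is our own illustrative assumption and the per-task figure sits inside the range quoted above:

```python
# Rough comparison of Token consumption: ordinary chat turn vs. Agent task.
# chat_tokens_per_turn is an illustrative assumption; agent_tokens_per_task
# sits inside the "tens to hundreds of thousands" range quoted above.

chat_tokens_per_turn = 1_000
agent_tokens_per_task = 100_000

ratio = agent_tokens_per_task / chat_tokens_per_turn
print(f"one Agent task ~= {ratio:.0f} chat turns of Token consumption")
# prints: one Agent task ~= 100 chat turns of Token consumption
```

At a multiplier like that, even a modest base of Agent users generates Token demand comparable to a far larger chatbot audience.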
In January this year, companies including DarkSide of the Moon, Zhipu, and MiniMax, and even Alibaba Cloud and ByteDance's Volcengine, rolled out Coding Plan subscription packages for OpenClaw users.
OpenRouter is a global large-language-model API aggregation platform that integrates more than 300 mainstream models worldwide. Its monthly Token consumption exceeds 30 trillion, about 3% of the globally countable total, and it reflects the usage habits of cutting-edge developers and startups. OpenRouter data from March 9 showed that the Token consumption of MiniMax M2.5, Kimi K2.5, and GLM 5 ranked first, second, and eighth globally, respectively, and that Chinese model companies occupied the top three positions in the platform's global monthly statistics for the first time.
An architect at a Chinese cloud provider told Caijing magazine that OpenClaw is an important driver of the rapid growth in these domestic models' Token consumption. The core reason developers adopted them so quickly is that their Token prices are far lower than those of OpenAI's and Anthropic's flagship models while the performance gap is small, so when long-running Agent tasks need to be executed, more and more developers are willing to choose these lower-cost models.
This has brought these model companies huge revenue growth. At the end of February this year, a person at DarkSide of the Moon confirmed to Caijing magazine that the company's revenue over 20 days in February exceeded that of the whole of 2025, and that its Kimi K2.5 is becoming one of the go-to choices for developers at home and abroad. The person did not disclose the specific revenue scale.
Besides model companies, technology companies with cloud-computing businesses, such as Alibaba, ByteDance, Tencent, and Baidu, are also important beneficiaries.
Models from DarkSide of the Moon, Zhipu, MiniMax, and their peers almost all run in the cloud. As Agent tools like OpenClaw reach more people, model-call frequency and Token consumption are rising rapidly, which in turn drives up demand for cloud computing power.
As of March 10, Internet technology companies such as Alibaba, ByteDance, and Tencent had followed up with similar products, including CoPaw from Alibaba Cloud's Tongyi Laboratory, WorkBuddy and QClaw from Tencent, and ArkClaw from ByteDance. The functions of these products, however, are generally still being refined.
For enterprises, OpenClaw's development threshold is not high. Its core framework is open-source, so enterprises can do secondary development on top of it, or use their existing Agent frameworks, to quickly build a "Lobster"-like product.
Alibaba Cloud, Volcengine, Tencent Cloud, and Baidu Smart Cloud have also launched dedicated cloud servers or cloud computers on which developers and individual users can deploy OpenClaw. For cloud providers, more users of Agent tools means more model-calling requests, which means higher computing-power usage and cloud-service revenue.
Still a long way from ordinary users
Ordinary users still have a long way to go before they can make good use of the "Lobster": it has a real deployment threshold, and the long-term cost of using it is not low either.
Deploying OpenClaw on an Apple MacBook seems to require only copying a line of code into the terminal and pressing Enter. In practice, ordinary users who have never done development quickly discover that their computers lack three basic environments: Homebrew (a software-installation tool), Node.js (a program-running environment), and Git (a code-management tool). These tools are the infrastructure on which most open-source software runs.
Even after installing these environments by following a tutorial, users still have to keep entering commands in the terminal to actually use OpenClaw, for example to grant the program system permissions such as browser control. OpenClaw may also throw errors at any point during task execution, at which moment users need the ability to repair the code environment.
In other words, although OpenClaw appears to have lowered the psychological threshold for deployment, for ordinary users it remains a typical developer tool that takes some coding background to use well.
This has directly spawned a business in both Silicon Valley and China: on-site "Lobster" installation services. The price can reach 500 yuan in China and up to 1,000 US dollars in Silicon Valley.
Here lies a paradox: users who cannot install the "Lobster" themselves will, even after paying someone to install it successfully, lack the ability to fix it when a task later errors out. And task errors are almost routine in everyday use.
In fact, even some professional developers have not yet used the "Lobster" in depth, and some think it does little to improve their efficiency.
Among the five professional developers surveyed by Caijing magazine (one