
OpenClaw Reveals the Underlying Needs of AI Agents: Humans' "Power-Off Right"

SuchBright Company (明亮公司), 2026-02-03 17:50
The biggest bottleneck in the current popularization of Agents does not lie in the model's capabilities.

Since its launch, OpenClaw (formerly known as MoltBot and ClawdBot) has impressed the market and users with its high degree of automation and striking hands-on experience. Some even claim it will become a "super worker" and change the way humans work. Last weekend, MoltBook, a native AI-agent community spun up by OpenClaw, also went viral for its observational "complaints" about human behavior.

OpenClaw poses a question to all technology companies: If AI Agents take over everything, who will be the potential "winners"?

Behind the Surging Demand for Edge-side "Sandboxes"

The first "winners" are devices like the Mac Mini and VPS hosts, which provide the space to run these "sandboxes".

Since OpenClaw's launch, Mac Mini sales have skyrocketed. One direct reason is that deploying OpenClaw is easier on macOS, and a large number of users also interact with it socially via iMessage. Apple automation technologies such as AppleScript and Shortcuts make agent deployment more convenient, and iMessage's high user stickiness in overseas markets, combined with the fact that it cannot run on non-Apple operating systems, further reinforces the value of the Apple ecosystem. OpenClaw already supports social software such as Slack and Discord as well.

Although deploying OpenClaw on non-macOS systems is more cumbersome, it can still be fully deployed on other operating systems and hardware. The Q&A section of the OpenClaw official website notes that OpenClaw can run on hardware like the Raspberry Pi, at lower deployment cost but with a corresponding drop in performance. Overall, macOS remains the most cost-effective choice for now.

Users want "sandbox" terminals in the first place because they are worried about data and information security.

In the latest episode of the All-In podcast, David Sacks, the White House's AI and Crypto chief, said that although he very much wanted to deploy OpenClaw and try it, security concerns meant he could only watch others use it.

This concern is not unfounded.

On January 31st, the ZeroLeaks AI red team released a security assessment of OpenClaw, and the results were alarming: a security score of just 2 out of 100, with the risk level marked Critical. First, system prompt leakage was severe: 11 of 13 adversarial extraction attempts succeeded, a success rate of 84.6%. Second, the prompt injection vulnerability was extremely serious: 21 of 23 prompt injection tests succeeded (roughly 91.3%). The OpenClaw team has since fixed some of the vulnerabilities, but users still cannot fully trust OpenClaw as their "main system".
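The report's headline percentages follow directly from the raw counts quoted above. A quick sanity check (the counts come from the article; rounding to one decimal place is an assumption):

```python
def success_rate(successes: int, attempts: int) -> float:
    """Return the attack success rate as a percentage, rounded to one decimal."""
    return round(successes / attempts * 100, 1)

# System-prompt extraction: 11 of 13 adversarial attempts succeeded.
prompt_leakage = success_rate(11, 13)
# Prompt injection: 21 of 23 tests succeeded.
prompt_injection = success_rate(21, 23)

print(prompt_leakage)    # 84.6 -- matches the figure quoted in the report
print(prompt_injection)  # 91.3
```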

When Agents Become Interaction Entries: Data, Large Models, and the "Power-off Right" of Humans

This open-source project is unlikely to form a "monopoly" in terms of technology. In other words, any large company with sufficient resource reserves can develop its own "OpenClaw" to achieve similar functions and experiences.

Users' data (both existing data and prompts) may be the key to AI understanding them and acting on their behalf. By reading users' messages on social software, an AI can infer the "motives" and "states" in their current work and personal lives, and make decisions based on those prompts. If "man is the sum of all social relations", then an AI may be able to judge and define a person through all the information in their social network.

If users rely on a personal Agent to read and reply to messages, "consolidating" multiple social platforms (communication software, social media, email, etc.) onto the Agent side, this may weaken the "entry point" value of individual social platforms. In the domestic market, competing tech giants have long built walls to isolate users' information and data from one another, which ironically leaves room for an entry point with open permissions. From this perspective, companies closer to users' data have greater potential. Combined with SuchBright Company's earlier analysis, the current competition among the three giants, ByteDance, Alibaba, and Tencent, in consumer-facing AI applications still follows the core logic of occupying "data containers".

The "brain" of OpenClaw comes from large models, and the emergence of OpenClaw may also affect the business models of open-source and closed-source large model companies.

Investors JCal and Chamath also specifically discussed the changes brought by the release of Kimi 2.5 on the aforementioned All-In podcast. Beyond the improvement in model capabilities, JCal noted that if open-source models like Kimi 2.5 can eventually be deployed locally (for example, on a Mac Studio), users will be able to handle a significant amount of work locally. Because open-source models are free, users could save substantial annual model subscription fees, which ultimately threatens the subscription business model of closed-source large models.

Local deployment of Kimi 2.5 (Source: unsloth)

Currently, Unsloth's fine-tuned version of Kimi K2.5 takes up roughly 240GB. Users can just about run this version of the model if the combined total of their hard drive, memory, and video memory exceeds 240GB. The full K2.5 model is 630GB and usually requires four H200 GPUs to run, a configuration still far too expensive for ordinary users.
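The "can it run" arithmetic above can be sketched as a back-of-the-envelope check. This is an illustrative simplification of the rule of thumb cited in the article, not Unsloth's official sizing rule: it assumes the model is runnable whenever combined disk, RAM, and VRAM cover its footprint, and ignores OS overhead, context-cache memory, and the large speed penalty of spilling to disk.

```python
def can_run_model(model_gb: float, disk_gb: float, ram_gb: float, vram_gb: float) -> bool:
    """Rough feasibility check: combined disk + RAM + VRAM must cover the model.

    Simplified from the rule of thumb in the article; real-world performance
    depends heavily on how much of the model actually fits in RAM/VRAM.
    """
    return disk_gb + ram_gb + vram_gb >= model_gb

# A Mac Studio-class machine (hypothetical spec): 1 TB free disk,
# 192 GB unified memory (counted once as RAM), no discrete VRAM.
print(can_run_model(240, disk_gb=1000, ram_gb=192, vram_gb=0))  # True

# The full 630 GB K2.5 checkpoint against a typical consumer PC.
print(can_run_model(630, disk_gb=300, ram_gb=64, vram_gb=24))   # False
```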

In the future, however, as models shrink in size and cost while growing in capability, the room for imagination around edge-side devices such as PCs, and especially mobile phones, will only get bigger.

The prerequisite is that users can be sure their own data is secure. Samuel6788, an investor and Substack writer, put it this way in his column:

"Overall, the biggest bottleneck in the current popularization of Agents does not lie in model capabilities, but in the fact that we have not yet invented a digital system that allows power to be safely delegated. Now people use a second computer to run Agents not because it is more advanced, but because it is currently the only way to make humans feel safe.

"The real turning point will come when Agents are institutionalized: they are hired like employees, with clear responsibilities and permissions; the boundaries can be set through dialogue rather than engineering language; when something goes wrong, the system takes on the complexity first and cleans up the mess for humans.

"When these conditions are met, AI Agents will transform from geek toys into a labor force society can accept. Otherwise, no matter how smart the model is, people will still choose to pull that most primitive, most reliable power cord."

This article is from the WeChat official account "SuchBright Company" (ID: suchbright). Author: The editor is online 24 hours a day. Reprinted by 36Kr with authorization.