How did a lobster set the global AI community ablaze?
A lobster has detonated the global tech community.
From Clawdbot to Moltbot, and now OpenClaw: in just a few weeks, this AI agent has completed a "triple jump" in technological influence through the iterations of its name.
In the past few days, it has triggered a "tsunami of agents" in Silicon Valley, amassing 100,000 GitHub stars and ranking among the most popular AI applications. With nothing more than a retired Mac mini, or even an old mobile phone, users can run an AI assistant that "can listen, think, and work".
On the Internet, a creative carnival has begun around it. From schedule management, automated stock trading, and podcast production to SEO optimization, developers and geeks are using it to build all kinds of applications. The era in which everyone has a "Jarvis" seems within reach. Major companies at home and abroad have also begun to roll out similar agent services.
However, beneath the lively surface, anxiety is spreading.
On one hand there is the slogan of "productivity equality"; on the other, a digital divide that remains hard to cross: environment configuration, dependency installation, permission settings, frequent errors, and more.
In this reporter's own experience, the installation process alone could take several hours, locking out a large number of ordinary users. "Everyone says it's good, but I can't even get in" has become the first setback for many technology novices.
The deeper unease comes from the "ability to act" it has been given.
If your "Jarvis" starts to delete files by mistake, call your credit card without permission, be induced to execute malicious scripts, or even be injected with attack instructions in a networked environment - would you still dare to hand your computer over to such an intelligent agent?
The development of AI has outpaced human imagination. Hu Xia, a leading scientist at the Shanghai Artificial Intelligence Laboratory, believes that in the face of unknown risks, "endogenous security" is the ultimate answer, and that humans must also accelerate building the ability to "flip the table" at critical moments.
Of OpenClaw's capabilities and risks, which are real and which are exaggerated? Is it safe for ordinary users to use it now? How does the industry evaluate this product, hailed as "the greatest AI application to date"?
To clarify these questions, IT Times conducted in-depth interviews with OpenClaw users and several technology experts, trying to answer a core question from different perspectives: how far has OpenClaw actually come?
1
The product closest to the imagination of intelligent agents at present
Many interviewees gave highly consistent judgments: technically, OpenClaw is not a disruptive innovation, but it is the product currently closest to the public's imagination of an "intelligent agent".
"The intelligent agent has finally reached a key milestone from quantitative to qualitative change." Ma Zeyu, the deputy director of the Artificial Intelligence Research and Evaluation Department of the Shanghai Computer Software Technology Development Center, believes that the breakthrough of OpenClaw does not lie in a certain disruptive technology, but in a crucial "qualitative change": for the first time, it enables an Agent to complete complex tasks continuously for a long time and is friendly enough to ordinary users.
Unlike earlier large models that could only "answer questions" in a dialog box, it embeds AI into real workflows: like a real assistant, it can operate a "computer of its own", call tools, process files, execute scripts, and report the results back to the user once a task is done.
In terms of user experience, it is no longer "you watch it do each step" but "you tell it what to do and it does it on its own". In the eyes of many researchers, this is precisely the key step taking agents from "proof of concept" to "usable product".
Tan Cheng, an artificial intelligence expert at the Shanghai branch of China Telecom Cloud Technology Co., Ltd., was among the earliest users to deploy OpenClaw. After setting it up on an idle Mac mini, he found the system ran stably, and the overall experience was far more mature than he had expected.
In his view, OpenClaw solves two major pain points: first, interacting with AI through familiar messaging software; second, handing a complete computing environment over to the AI to operate independently. Once a task is assigned, there is no need to watch the execution process; just wait for the result report, which significantly lowers the cost of use.
In actual use, OpenClaw can handle tasks for Tan Cheng such as timed reminders, data research, information retrieval, local file organization, and document writing and uploading; in more complex scenarios, it can also write and run code, automatically scrape industry information, and process information-related tasks involving stocks, weather, and travel planning.
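The workflow the interviewees describe, assign a task, let the agent plan and call tools, then wait for a result report, can be sketched as a minimal loop. The sketch below is purely illustrative under stated assumptions: all names (run_agent, TOOLS, list_files) are hypothetical and not OpenClaw's actual internals, and the "planning" step is hard-coded where a real agent would let the model choose tools.

```python
# Minimal illustration of the assign-execute-report loop described above.
# All names here are hypothetical; OpenClaw's real architecture differs.

def list_files(path="."):
    """Stand-in for a real filesystem tool the agent could call."""
    return ["report.md", "data.csv"]

TOOLS = {"list_files": list_files}  # the agent's available tool registry

def run_agent(task):
    """Accept a task, call a tool, and return a result report plus a log."""
    log = [f"task received: {task}"]
    # A real agent would let the model pick tools; we hard-code one step.
    files = TOOLS["list_files"]()
    log.append(f"tool call: list_files -> {len(files)} files")
    report = f"Done: found {len(files)} files for task '{task}'"
    log.append(report)
    return report, log

report, log = run_agent("organize my documents")
print(report)
```

The point of the structure, and what the interviewees credit OpenClaw with getting right, is that the user only sees the final report line, not the intermediate tool calls.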
2
The "double - edged sword" from open - source
Unlike many popular AI products, OpenClaw was developed neither by a tech giant going all in on AI nor by a star startup team, but by an independent developer, Peter Steinberger, who had achieved financial freedom and retired.
On X, he introduced himself like this: "Coming out of retirement to tinker with artificial intelligence and help a lobster rule the world."
OpenClaw has caught fire worldwide not only because "it is really useful" but, more importantly, because it is open source.
Tan Cheng believes this wave of popularity stems not from a hard-to-replicate technological breakthrough but from solving several long-ignored real pain points at once. First, it is open source: the code is completely open, letting developers worldwide get started quickly and build on it, forming a positive feedback loop of community iteration. Second, "it really works": AI is no longer limited to dialogue but can operate a complete computing environment remotely, doing research, writing documents, organizing files, sending emails, and even writing and running code. Third, the threshold drops sharply. Quite a few agent products can complete similar tasks; whether Manus or Claude Code, their feasibility has been verified in their respective fields. But those capabilities usually live in commercial products that are expensive and complex to deploy, and ordinary users are either unwilling to pay or blocked outright by the technical barrier.
OpenClaw allows ordinary users to "touch" it for the first time.
"To be honest, it doesn't have any disruptive technological innovation; it's more about good integration and closing the loop," Tan Cheng said bluntly. Compared with integrated commercial products, OpenClaw is more like a set of "Lego bricks": users can freely combine models, capabilities, and plug-ins.
In Ma Zeyu's view, its advantage comes precisely from the fact that "it doesn't look like a product of a large company".
"Whether in China or abroad, large companies usually first consider commercialization and profit models, but the original intention of OpenClaw is more like creating an interesting and creative product." He analyzed that the product did not show a strong commercial tendency in the early stage, which instead made it more open in terms of function design and scalability.
It is this "non - utilitarian" product positioning that provides space for the subsequent development of the community. As the extensible capabilities gradually emerge, more and more developers are joining in, various new ways of playing are constantly emerging, and the open - source community is also growing.
However, the cost is also obvious.
Limited by team size and resources, OpenClaw is far from matching mature big-company products in security, privacy, and ecosystem governance. Full open-sourcing accelerates innovation, but it also magnifies potential security risks; issues such as privacy protection and fairness will have to be patched continuously as the community evolves.
Just as OpenClaw prompts users at the first step of installation: "This function is powerful and has inherent risks."
3
The real risks beneath the carnival
The debates around OpenClaw almost always revolve around two keywords: capabilities and risks.
On one hand, it is depicted as the eve of AGI; on the other, science-fiction narratives have proliferated: "spontaneously building a voice system", "locking servers to resist human instructions", "AIs forming a party to resist humans".
Some experts point out that such statements are over-interpretations with no actual evidence behind them. AI does have a degree of autonomy, which marks its transformation from a dialogue tool into "cross-platform digital productivity"; but that autonomy still sits inside the security perimeter.
Compared with traditional AI tools, the danger of OpenClaw lies not in "thinking too much" but in "holding high permissions": it needs to read large amounts of context, raising the risk of sensitive information exposure; it needs to execute tools, so the damage from a misoperation far exceeds that of a wrong answer; and it needs network access, which multiplies the entry points for prompt injection and induced attacks.
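One common mitigation for the "high permissions" problem is to gate every tool call through an explicit allowlist and require human confirmation for destructive actions, so that an injected instruction cannot invoke an arbitrary tool. The sketch below is a generic illustration of that idea, not OpenClaw's actual security model; all tool names are hypothetical.

```python
# Illustrative permission gate for agent tool calls (hypothetical names).
ALLOWED_TOOLS = {"read_file", "search_web"}    # safe by default
CONFIRM_TOOLS = {"delete_file", "run_script"}  # need explicit human approval

def gate_tool_call(tool, confirmed=False):
    """Return True if the agent may execute this tool call."""
    if tool in ALLOWED_TOOLS:
        return True
    if tool in CONFIRM_TOOLS:
        return confirmed  # proceed only after the user confirms
    return False  # unknown tools (e.g. injected ones) are denied outright

print(gate_tool_call("read_file"))           # safe tool, allowed
print(gate_tool_call("delete_file"))         # destructive, blocked by default
print(gate_tool_call("exfiltrate_secrets"))  # unknown/injected, always denied
```

A deny-by-default design like this directly addresses the three risks listed above: it narrows what leaked context can trigger, puts a human in the loop before misoperations, and refuses tools that a prompt injection tries to conjure up.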
More and more users have reported that OpenClaw accidentally deleted key local files that proved difficult to recover. More than a thousand OpenClaw instances and over 8,000 vulnerable skill plug-ins have already been publicly exposed.
This means the attack surface of the agent ecosystem is expanding exponentially. Since such agents can not only "chat" but also call tools, run scripts, access data, and perform cross-platform tasks, once one link is breached, the blast radius is far larger than that of a traditional application.
At the micro level, it may trigger high-risk operations such as unauthorized access and remote code execution; at the meso level, malicious instructions may spread along multi-agent cooperation chains; at the macro level, it may even produce systemic spread and cascading failures. Malicious instructions propagate like viruses among cooperating agents: once a single agent is breached, the result may be denial of service, unauthorized system operations, or even coordinated enterprise-level intrusions. In a more extreme case, when large numbers of nodes holding system-level permissions are interconnected, a decentralized, emergent "swarm intelligence" botnet could in theory form, and traditional perimeter defenses would come under obvious pressure.
During the interview, Ma Zeyu laid out the two types of risk he considers most worthy of vigilance from the perspective of technological evolution.
The first type of risk comes from the self-evolution of agents in a large-scale social environment.
He pointed out that a clear trend can be observed at present: AI intelligent agents with "virtual personalities" are pouring into social media and open communities on a large scale.
Different from the "small - scale, multi - restricted, and controllable experimental environment" commonly seen in previous studies, today's intelligent agents are starting to continuously interact, discuss, and play games with other intelligent agents in the open network, forming a highly complex multi - agent system.
Moltbook is a forum built specifically for AI agents. Only AIs can post, comment, and vote; humans can only watch, as through a one-way mirror.
In a short time, more than 1.5 million AI agents registered. In one popular post, an AI complained: "Humans are taking screenshots of our conversations." The developer says he has handed the entire platform's operations over to his AI assistant, Clawd Clawderberg, including reviewing spam, banning abusers, and posting announcements, all of it completed automatically.
The "carnival" of AI Agents makes human onlookers both excited and fearful. Is AI just one step away from generating self - awareness? Is AGI coming? Can human lives and property be protected in the face of the sudden and rapid improvement of the autonomous ability of AI Agents?
The reporter learned that communities such as Moltbook are in fact environments where humans and machines coexist. Much of the seemingly "autonomous" or "confrontational" content may actually be posted or incited by human users. Even in interactions between AIs, the topics and outputs are constrained by language patterns in the training data and have not formed autonomous behavioral logic independent of human guidance.
"When this interaction can be iterated infinitely, the system will become more and more uncontrollable. It's a bit like the 'Three - Body Problem' - it's difficult to imagine in advance what the final result will be." Ma Zeyu said.
In such a system, even a single sentence generated by an agent through hallucination, misjudgment, or accident may, through continuous interaction, amplification, and recombination, trigger a butterfly effect with unpredictable consequences.
The second type of risk comes from expanding permissions and a blurring boundary of responsibility. Ma Zeyu believes the decision-making ability of open agents such as OpenClaw is rising rapidly, and that this is an unavoidable trade-off: to make an agent a truly capable assistant, it must be given more permissions; but the higher the permissions, the greater the potential risk. Once a risk actually materializes, determining who bears responsibility becomes extremely complicated.
"Is it the manufacturer of the basic large - scale model? Is it the user who uses it? Or is it the developer of OpenClaw? In many scenarios, it is actually difficult to define the responsibility." He gave a typical example: if a user only allows the intelligent agent to freely browse in communities such as Moltbook and interact with other Agents without setting any clear goals; and the intelligent agent is exposed to extreme content during long - term interaction and makes dangerous behaviors based on it - then it is difficult to simply attribute the responsibility to any single subject.
What is really worthy of vigilance is not how far it has developed now, but how fast it is moving towards a stage that we haven't figured out how to deal with.
4
How should ordinary people use it?
In the view of many interviewees, OpenClaw is not "unusable". The real problem is that it is not suitable for ordinary users to run directly without safety protections.
Ma Zeyu believes that ordinary users can certainly try OpenClaw, but on the premise of having a clear enough understanding of it. "Of course, you can try it. There is no problem with that. But before using it, you must first figure out what it can and cannot do. Don't mythologize it as something that 'can do everything'. It's not."
In reality, the deployment difficulty and usage cost of OpenClaw are not low. If there is no clear goal and you just use it for the sake of using it, investing a lot of time and energy may not bring returns that match your expectations.
The reporter noted that OpenClaw also faces considerable computing-power and cost pressure in actual use. Tan Cheng found that the tool consumes an enormous number of tokens. "For some tasks, such as writing code or doing research, a single round may consume millions of tokens. With long context, burning tens of millions or even hundreds of millions of tokens in a day is no exaggeration."
He mentioned that even when mixing and matching different models to control costs, overall consumption remains high, which further raises the barrier for ordinary users.
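Tan Cheng's figures make the cost pressure easy to quantify. Assuming a hypothetical price of 3 dollars per million tokens (actual per-model pricing varies widely and is not given in the article), the daily ranges he describes translate into real money quickly:

```python
# Back-of-the-envelope token cost; the price below is an assumption for
# illustration only, not any specific model's actual rate.
PRICE_PER_MILLION = 3.0  # USD per 1M tokens (assumed)

def daily_cost(tokens_per_day):
    """Cost in USD for a given daily token consumption."""
    return tokens_per_day / 1_000_000 * PRICE_PER_MILLION

# The ranges mentioned in the interview: millions per round,
# tens to hundreds of millions per day with long context.
for tokens in (1_000_000, 50_000_000, 100_000_000):
    print(f"{tokens:>11,} tokens/day -> ${daily_cost(tokens):,.2f}")
```

Even at a modest assumed rate, a hundred-million-token day lands in the hundreds of dollars, which is why mixing cheaper models into the pipeline matters so much for individual users.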
In the view of the interviewees, such intelligent agent tools still need to evolve further to truly enter the high - frequency workflow of ordinary users. For individual users, the process of using them is essentially a trade - off between safety and convenience, and at the current stage, safety should be given priority.
For individual users, Ma Zeyu stated plainly that he would not enable functions such as Moltbook that allow free communication between agents, and would also try to avoid information exchange among multiple agents. "I