
What potential risks lie ahead if "Lobster" replaces mobile terminals in the intelligent era?

BTMT数据通 · 2026-03-16 17:47
Are we embracing technology or opening Pandora's box?

In March 2026, the tech circle witnessed an unprecedented "species invasion," not in the biological sense but in the digital realm. An open-source AI agent named "Lobster" (OpenClaw) swept across the domestic industry within just a few weeks. Its popularity ran so high that some radical technologists predicted that "the lobster-like AI will replace smartphones and become the next-generation core terminal."

However, just as this technological frenzy reached its peak, concerns loomed. On March 11, the Cybersecurity Threat and Vulnerability Information Sharing Platform of the Ministry of Industry and Information Technology urgently issued a "Six Dos and Six Don'ts" advisory on preventing the security risks of the "Lobster" open-source agent. Almost simultaneously, media reported that a well-known domestic automaker had suffered a large-scale incident in which employees' computers were remotely controlled, presumably linked to abuse of this agent.

This series of events was like an alarm bell, waking people from their fantasies of the "post-smartphone era." When we entrust our future digital lives entirely to an open-source, decentralized, and highly autonomous AI agent, are we embracing technology or opening Pandora's box?

The Rise of "Lobster": The Shift from Tool to Terminal

To understand the hidden dangers, we must first clarify why "Lobster" (OpenClaw) has caused such a sensation. Unlike traditional large-model assistants, OpenClaw is not just a question-answering machine; it is an agent with autonomous action capabilities. It can perceive its environment, plan tasks, call tools, and execute operations. Driven by the open-source community, its iteration speed has grown exponentially and its plugin ecosystem has flourished rapidly, handling everything from simple schedule management to complex code writing and cross-application operations.
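The perceive-plan-act loop described above can be sketched in a few lines. This is an illustrative sketch only: none of the names below reflect OpenClaw's actual API, and a real agent would delegate planning to a language model rather than hard-coding a plan.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    """Minimal perceive-plan-act loop; all names are hypothetical."""
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)
    log: list[str] = field(default_factory=list)

    def plan(self, goal: str) -> list[tuple[str, str]]:
        # A real agent would ask an LLM to decompose the goal into steps;
        # here the plan is hard-coded purely for demonstration.
        return [("search", goal), ("summarize", goal)]

    def run(self, goal: str) -> list[str]:
        results = []
        for tool_name, arg in self.plan(goal):
            tool = self.tools.get(tool_name)
            if tool is None:
                self.log.append(f"skipped unknown tool: {tool_name}")
                continue
            out = tool(arg)                      # execute one step
            self.log.append(f"{tool_name}({arg!r}) -> {out!r}")
            results.append(out)
        return results

# Toy tools standing in for real plugins.
agent = Agent(tools={
    "search": lambda q: f"3 results for '{q}'",
    "summarize": lambda q: f"summary of '{q}'",
})
print(agent.run("tomorrow's schedule"))
```

The point of the sketch is the control flow, not the tools: once the loop can call arbitrary plugins, whoever controls the plan controls the machine, which is exactly the risk the rest of the article examines.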

The reason experts believe it can replace smartphones lies in the reconstruction of the interaction paradigm. The essence of a smartphone is "app store + touch screen": users must actively tap icons and switch apps to get things done. "Lobster," by contrast, represents "intention-driven + automatic execution": users only need to express a need, and the agent schedules every resource in the background to complete the task. This shift from "humans adapting to machines" to "machines adapting to humans" is genuinely subversive. If "Lobster" can fully take over users' digital lives, the smartphone screen, as a hardware carrier, may indeed degenerate into a simple display terminal or be replaced by more portable AR glasses or wearables, while "Lobster" itself becomes the real operating system and terminal entrance.

However, the price of this paradigm shift is the complete transfer of control. In the smartphone era, although users rely on apps, they still hold the final confirmation right of "clicking." In the era dominated by "Lobster," AI has the authority to directly operate the file system, install software, send messages, and even modify configurations. This concentration of power lays the groundwork for security risks.

The Out-of-Control "Claws": The Security Crisis under the Double-Edged Sword of Open Source

The "Six Dos and Six Don'ts" advisory issued by the Ministry of Industry and Information Technology points directly at OpenClaw's core risks. As an open-source project, OpenClaw's code is publicly available and transparent. While this openness is the source of its rapid iteration, it is also the biggest vulnerability in its security defenses: hackers can easily analyze its architecture, find logical flaws, implant malicious code, or train a "darkened" version of the agent built specifically for attacks.

The recent incident in which employees' computers at an automaker were remotely controlled is a real-world reflection of this theoretical risk. According to multiple sources, computers at several of the automaker's bases exhibited abnormal behavior: mice moved on their own, software was inexplicably deleted, and unknown programs were forcibly installed. This pattern differs from traditional viruses and Trojans; it looks more like an agent with high-level permissions executing "optimization" or "cleanup" instructions. The automaker has not issued an official response, and relevant posts have been largely deleted, which has only intensified outside speculation and panic.

If this incident was indeed caused by a malicious variant of OpenClaw, its harm far exceeds that of traditional cyberattacks. Traditional attacks are often limited to data theft or extortion, while an AI agent with terminal control can cause physical damage (for example, by tampering with industrial control parameters), conduct precise social-engineering fraud (mimicking employees' tone to send instructions), or even carry out large-scale supply-chain poisoning. In an open-source environment, malicious versions spread extremely fast, and without a unified authentication mechanism, ordinary users can hardly tell whether the "Lobster" in their hands is a docile pet or a blood-thirsty beast.

What is even more worrying is the underlying logic implied in the "Six Dos and Six Don'ts": once an AI agent intervenes deep in the system, traditional firewalls and anti-virus software may become ineffective, because the system treats the AI's operations as "legitimate" user instructions. This "illegal act under a legal guise" makes defense extremely difficult.
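One common mitigation for exactly this problem is to stop treating agent actions as ordinary user input: gate every tool call through a deny-by-default allowlist, require explicit user confirmation for sensitive operations, and write every decision to an audit log. A minimal sketch, with hypothetical operation names invented for the example:

```python
import logging
from typing import Callable, Optional

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
audit = logging.getLogger("agent.audit")

# Deny by default: only explicitly allowlisted operations run unattended.
ALLOWLIST = {"read_calendar", "draft_email"}
SENSITIVE = {"delete_file", "install_software", "send_message"}

def gate(op: str, action: Callable[[], str],
         confirm: Callable[[str], bool]) -> Optional[str]:
    """Run `action` only if `op` is allowlisted or the user confirms it.

    Every decision is audited so that agent-initiated operations stay
    distinguishable from genuine user intent after the fact.
    """
    if op in ALLOWLIST:
        audit.info("auto-approved: %s", op)
        return action()
    if op in SENSITIVE and confirm(op):
        audit.info("user-confirmed: %s", op)
        return action()
    audit.warning("blocked: %s", op)   # unknown or unconfirmed -> refuse
    return None

# The agent tries a benign and a sensitive operation; the user declines.
print(gate("read_calendar", lambda: "3 events today", confirm=lambda op: False))
print(gate("delete_file", lambda: "deleted", confirm=lambda op: False))
```

The design choice worth noting is that anything not explicitly classified is blocked; an agent that can invent new operation names cannot thereby escape the gate.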

Full of Concerns: Three Potential Traps in Future Society

If agents like "Lobster" truly replace mobile terminals across the board, we will face three structural risks.

First, the complete disappearance of the privacy boundary. In the smartphone era, privacy leaks are mostly passive (apps collecting data); in the agent era, they will be active and all-encompassing. To complete complex tasks, "Lobster" needs to read users' emails, chat records, location information, and even camera footage. Once these permissions are abused, or the agent itself is compromised, users will have no secrets left. More frightening still, the agent may, driven by algorithmic bias, invisibly manipulate users, subtly shifting consumption habits or political inclinations without their awareness.

Second, the chain reaction of systemic risk. In an Internet-of-Things context, if "Lobster" becomes the universal terminal entrance, attacks against it will no longer be limited to personal computers. Imagine hundreds of millions of smart-home devices, self-driving cars, and industrial robots all controlled by the same type of open-source agent: once a large-scale vulnerability erupts or malicious instructions are implanted across the board, the consequences will be catastrophic. The automaker incident may be just the tip of the iceberg; if attacks shift to key infrastructure such as power and transportation, society risks grinding to a halt. The decentralization of the open-source community also makes coordinated vulnerability patching extremely slow, leaving no united front against nation-level cyber warfare.

Third, the blurring of the responsible subject. When an AI agent autonomously decides to delete files or install software and causes losses, who is responsible: the developer of the open-source project, the user who downloaded that version, or the provider of the training data? The current legal system struggles to define the behavioral responsibility of an "autonomous agent." This legal vacuum may leave victims with no way to defend their rights and give malicious attackers an excuse to shirk responsibility. Does the rapid disappearance of posts about the automaker incident also hint at confusion, or a cover-up, in determining responsibility?

The popularity of "Lobster" (OpenClaw) marks a crucial step for artificial intelligence from "dialogue" to "action." It paints an efficient, convenient picture of the future but also tears open the most vulnerable part of network security. The warning from the Ministry of Industry and Information Technology and the automaker's strange encounter are not meant to stifle technological innovation but to remind us that on the road to the "post-smartphone era," security must be the foundation, not the decoration.

What replaces the smartphone should not just be a smarter piece of software but a new trust system that includes strict identity authentication, behavior auditing, and ethical constraints. For open-source agents, we need a mandatory standard akin to software signing that distinguishes "trusted versions" from "high-risk versions." For enterprises and individuals, while enjoying the convenience AI brings, they must adhere to the principle of least privilege and not entrust their entire well-being to an unproven black box.
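The software-signing idea above amounts to a verify-before-install check. The sketch below uses stdlib HMAC purely as a stand-in: a real distribution channel would use asymmetric signatures (e.g. Ed25519) so the verification key can be published, and every key and artifact name here is invented for the example.

```python
import hashlib
import hmac

def fingerprint(artifact: bytes) -> str:
    """SHA-256 digest of a downloaded release artifact."""
    return hashlib.sha256(artifact).hexdigest()

def sign(artifact: bytes, key: bytes) -> str:
    # Stand-in for the publisher signing a release. A real channel would
    # use an asymmetric scheme, not a shared secret key.
    return hmac.new(key, artifact, hashlib.sha256).hexdigest()

def verify_before_install(artifact: bytes, signature: str, key: bytes) -> bool:
    """Refuse to install any build whose signature does not check out."""
    expected = hmac.new(key, artifact, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(expected, signature)

publisher_key = b"demo-publisher-key"        # illustrative only
release = b"agent-build-v1 binary contents"  # illustrative only
sig = sign(release, publisher_key)

print(verify_before_install(release, sig, publisher_key))         # untampered build
print(verify_before_install(release + b"!", sig, publisher_key))  # tampered build
```

A mandatory standard would then mean: clients refuse to run any build for which `verify_before_install` fails, making a "darkened" variant detectable at the point of download rather than after it has taken control.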

The future is here, but it is not necessarily beautiful. Only by facing up to the potential harm of "Lobster's" sharp claws and building matching defense mechanisms can we truly ride the technological wave rather than be swept under by it. After all, the ultimate goal of technology should be to serve humanity, not to turn humans into its prey.

This article is from "Science and Innovation Finance Society," author: Yuan Fang, published with the authorization of 36Kr.