I advise you not to blindly ride this wave of OpenClaw hype.
After coming back from the Spring Festival, I feel like I've missed out on five billion dollars.
It's okay that I missed the gold rush. But the situation with AI is even more outrageous - there are ten major news stories, twenty new products, and thirty money-making opportunities every day. After going back to my hometown and catching up on my WeChat Moments, I was stunned: I felt like I'd missed at least forty opportunities to get rich.
Take the equally popular Seedance 2.0 for example. Everyone says that the video industry is going to change because of it. But when I paid to try it out, I had to wait in line for at least four hours. By then, it was too late.
Fortunately, these trends come and go quickly. You can console yourself by saying that these fads won't last long, that they're just about making quick money, and that they're all hyped up.
But OpenClaw is different.
It was incredibly popular throughout February, and its popularity only increased. Even if you're not interested in the AI industry, it's hard to avoid it on the internet: there are tutorials, case studies of how to make money, and user experiences everywhere. You can get traffic on Xiaohongshu just by adding the #OpenClaw hashtag; post a couple of opinions, and you'll instantly become an AI tech expert.
It's known as the "next-generation AI Agent framework." It's trending on GitHub, developers love it, KOCs recommend it enthusiastically, and everyone is paying attention.
Even more surprisingly, big companies like Tencent, ByteDance, and Alibaba have publicly announced that they're starting to lay out plans to support OpenClaw. Some companies have even developed hardware to go with it.
Interested in AI or not, you can't escape the bombardment of OpenClaw content.
Now you can't sit still. One night, you open Yuanbao and type: What is OpenClaw?
For a moment, it seems that all a company needs to have its own AI digital employees is an engineer who can deploy OpenClaw and a budget.
But is that really the case?
If you're the boss of a consumer company, Lijin, as a professional think tank, will offer you a piece of advice that runs against the current: it's fine to play around with AI personally, but don't roll out OpenClaw company-wide.
Why is OpenClaw so popular around the world?
Before OpenClaw, from ChatGPT to DeepSeek, and then to Yuanbao, AI was an incredibly intelligent advisor.
It could answer your questions, help you analyze stocks, and write articles for you.
But it always stayed on the screen, passively waiting for your next command.
Gradually, companies realized that if AI can't truly integrate into their organizational systems and do the work for them, it will always be just a useful chat partner and won't translate into productivity.
OpenClaw aims to solve this pain point. It enables AI not only to answer questions but also to take action and execute tasks on its own.
OpenClaw took the global internet by storm in February this year.
Its popularity is well-deserved. Because OpenClaw fully meets people's expectations for the next step of AI.
If you ask a regular AI to book a restaurant table for tonight, it will provide you with strategies, suggestions, and relevant steps. It might give you some links to reservation apps, and that's about all it can do.
Even when you're extremely busy, you still have to open the app yourself and get a number.
But entrust this task to OpenClaw, and it will log in to your account on its own and find a restaurant that meets your needs. It will check the app for available tables; if there are none, it won't give up. OpenClaw will download a voice plugin, call the restaurant, and try to negotiate a reservation for tonight.
Its capabilities don't stop there.
It can be a personal negotiation assistant. Someone used OpenClaw to automatically send an email to a car dealership and got the AI to negotiate a discount of over $4,200 for him. And all of this happened while he was sleeping.
It can be an inventory prediction officer. Chain supermarkets are starting to use AI Agents to predict the external environment and inventory trends and place orders autonomously based on the results.
It can be a market intelligence officer, an operations supervisor, a recruitment assistant, a 24/7 customer service coordinator...
By now, you should have realized that the real strength of OpenClaw lies in:
1. The ability to break down tasks on its own;
2. The ability to call relevant tools;
3. The ability to execute step by step until the goal is achieved.
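The three abilities above amount to a plan-and-act loop. Below is a minimal sketch of that loop in Python. Everything here is a hypothetical stand-in: the `decompose` function, the `TOOLS` registry, and the restaurant name are invented for illustration and do not reflect OpenClaw's actual API, which this article does not document. In a real agent, decomposition would be delegated to a language model and the tools would perform real actions.

```python
# Minimal sketch of an agent loop: decompose a goal, call tools,
# execute step by step until the goal is reached.
# All names and logic here are illustrative stand-ins, not OpenClaw's API.

def decompose(goal):
    """Break a goal into ordered (tool, argument) steps.
    A real agent would ask an LLM to produce this plan."""
    if goal == "book a table for tonight":
        return [
            ("search_restaurants", "steak"),
            ("check_availability", "Char & Co"),   # hypothetical restaurant
            ("phone_reservation", "Char & Co"),    # fallback if the app fails
        ]
    return []

# Tool registry: maps tool names to callables the agent may invoke.
TOOLS = {
    "search_restaurants": lambda cuisine: [f"{cuisine} house A", "Char & Co"],
    "check_availability": lambda name: False,  # simulate: app shows no tables
    "phone_reservation": lambda name: f"Reserved at {name} by phone",
}

def run_agent(goal):
    """Execute the plan step by step, stopping early if the goal is met."""
    log = []
    for tool_name, arg in decompose(goal):
        result = TOOLS[tool_name](arg)       # call the relevant tool
        log.append((tool_name, result))
        if tool_name == "check_availability" and result:
            break  # a table was free in the app; no phone call needed
    return log

if __name__ == "__main__":
    for step in run_agent("book a table for tonight"):
        print(step)
```

Because the simulated app reports no free tables, the loop falls through to the phone-reservation step, mirroring the "it won't give up" behavior described above.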
It sounds like a plot from a science fiction novel has come true. Cool, right?
The automated execution ability of OpenClaw combined with cutting-edge AI models makes people see the possibility of AI employees.
Then why don't we recommend that companies use it now?
First, there's the economic aspect.
Since February, as more AI developers and enthusiasts have actually deployed and tested OpenClaw, a pattern has emerged: pair it with high-end models and the cost is extreme; pair it with cheaper models and the results fall short of expectations.
Reports of OpenClaw's high running costs keep piling up.
Second, there's the issue of stability.
We all love Lego, but no one would use it to build a skyscraper.
As a cool new tool, OpenClaw is still evolving at a high speed.
This is exciting news, but it also shows that it's not stable enough.
Let's go back to the earlier example. Suppose it's helping you book a restaurant and finds that every steakhouse is fully booked. What if, instead of trying other approaches, it decides on your behalf that Japanese cuisine is a better choice?
What if the reservation app blocks it for security reasons? What if it deletes your account without your knowledge?
We've all experienced AI hallucinations. AI models can confidently generate information that seems reasonable but is actually wrong, fictional, or unverified.
The current capabilities of AI still have a lot of uncertainty.
Sometimes it can produce amazing results, and sometimes it can make mistakes in judgment.
When this uncertainty moves from the individual level to the corporate level, the risks it brings will increase exponentially.
Finally, there's the issue of security.
With the widespread deployment of OpenClaw, security concerns have started to receive attention. An AI that has root access to the system and can make autonomous decisions is a double-edged sword that's hard to control.
Here is a rundown of the security controversies around the OpenClaw agent project, compiled with the help of WeChat Yuanbao: companies like Google, Anthropic, and Meta have started to ban OpenClaw; Meta's own AI security experts had their work emails wiped by it, and Elon Musk mocked it mercilessly in a tweet featuring a still from "Rise of the Planet of the Apes."
Lijin predicts that corporate SaaS systems will undergo revolutionary changes as AI capabilities evolve.
Corporate organizational systems will gradually shift from fixed, long-running, rigid structures to fluid ones that optimize automatically, learn in real time, and evolve continuously.
This future may come faster than we think.
But judging by OpenClaw's performance at the beginning of 2026, it is not yet suitable for corporate use.
What should companies do now?
Many people stop doing anything the moment they hear "there are risks," "it's not ready," or "don't use it."
This is a dangerous and conservative mindset.
In 2026, it has become a global consensus that AI equals the future. Humanity has invested unprecedented amounts of capital and manpower in AI data infrastructure, large-model research and development, and AI application exploration.
Companies should go with the trend.
Specifically, the advice boils down to three phrases: don't bet all your chips, learn first, and test the waters first.
In the next three to six months, companies shouldn't pour most of their funds into an all-in AI deployment.
Lijin has been helping consumer companies with AI co-creation and exploration for the past six months. We recommend that companies start to build their AI awareness and experimental capabilities.
Lijin's AI empowerment service is divided into three major sections:
Culturally, enhance AI thinking;
Organizationally, establish an AI experimental mechanism;
In terms of application, choose non-core scenarios for implementation, get results, and form a positive cycle.
Step one, enhance the company's overall AI thinking.
We've found that the most effective way is not to set strict targets and KPIs but to improve employees' AI usage abilities through systematic corporate training.
After our customized AI training service, employees can understand what AI can and can't do.
They'll use it voluntarily rather than just following orders passively.
This is a positive cycle: Employees learn AI → Work efficiency improves → Personal abilities are enhanced → Willingness to use increases → Overall corporate efficiency rises.
Just like when online collaborative documents first appeared, companies didn't force employees to use them. Instead, employees found them more useful and naturally switched to using them.
Humans will naturally choose more efficient tools.
So the short - term strategy is not to force deployment but to improve the company's overall AI thinking level.
Step two, establish an AI experimental mechanism.
Many of our clients are from traditional companies. These companies often face a real problem: they don't have a budget for AI innovation, a suitable organizational structure, or a dedicated technical team.
In this case, we don't recommend setting up an in-house AI department.
We offer a more lightweight solution: set up an external AI consulting group.
The group's responsibility is not to make radical, comprehensive changes to the company.
Instead, it's to evaluate AI scenarios that can be implemented in the short term, analyze business processes and risk structures, design low-cost experimental plans, and help the company understand the capabilities and limitations of AI in various ways.
The purpose is to conduct customized verification for the company.
Step three, choose non-core scenarios for verifiable experiments.
Based on our consulting experience, when AI first enters an organization, the biggest resistance often comes from the core system and internal transaction chains. The risks are high, the impact is significant, and the tolerance for error is low.
Therefore, we recommend starting with non-core but quantifiable scenarios for small-scale verification.
The key is not just to use AI but to get results.
Based on Lijin's on-the-ground experience in multiple projects, we've summarized nine verifiable scenarios for corporate AI implementation. These scenarios share common characteristics: controllable risks, predictable cycles, and quantifiable results.
1. AI-generated traffic capture (GEO, generative engine optimization)
2. AI intelligent customer service and service efficiency improvement
3. AI corporate knowledge base
4. AI sales strategy expert
5. AI creative and product R&D consultant
6. AI market intelligence automatic monitoring (to improve decision-making quality)
7. AI overseas risk companion
8. AI vertical professional field expert
9. AI recruitment and talent screening
Many companies have realized that AI is a trend, but the trend itself won't automatically translate into a competitive advantage. What really makes a difference is whether they can break down their understanding of the trend into a series of verifiable, replicable pilot deployments that build the organization's capacity to keep evolving.
We've found in practice that non - AI native companies can also achieve quantifiable optimization results in the short term, as long as the path is well - designed and the pilot scope is controllable.
Here are two real - world AI cases delivered by Lijin, which are specific examples of this path.