Wang Xingxing, Zhu Xiaohu and others shared some candid insights on AI entrepreneurship.
Text by | Zhou Xinyu, Fu Chong
Edited by | Su Jianxun
The Inclusion Bund Conference, which opened on September 11, 2025, gathered AI entrepreneurs, scholars, and investors for a candid exchange.
Commercialization is undoubtedly the topic entrepreneurs care most about right now.
Zhu Xiaohu, the managing partner of GSR Ventures, who once said, "I believe in immediate commercialization," advised entrepreneurs at the conference: If you pursue commercialization, don't use the latest technology. "Use some seemingly unremarkable but more stable technologies."
△ Zhu Xiaohu, managing partner of GSR Ventures. Photo provided by the Bund Conference.
Moreover, if he could look at only one indicator for an AI application, it would not be the impressive-sounding ARR (Annual Recurring Revenue) but user retention.
As for how to pick a direction for AI entrepreneurship, the methodology commonly discussed in the industry is: don't build along the trajectory of model capability iteration, or the application is likely to be "eaten" by the model.
Zhu Xiaohu now has a clear answer about which applications will be replaced: don't invest in no-code or low-code AI applications, and demand for collaborative tools like Figma will decrease in the future.
Among AI entrepreneurs, a consensus on organizational management formed quickly: AI companies should build flat, efficient "small organizations".
△ Wang Xingxing, founder and CEO of Unitree Technology. Photo provided by the Bund Conference.
At the opening-ceremony round table, Wang Xingxing, founder and CEO of Unitree Technology, admitted that expanding the team would actually reduce efficiency.
Wu Yi, a former OpenAI researcher and assistant professor at Tsinghua University's Institute for Interdisciplinary Information Sciences, expressed a similar view: if an organization needs 300 people, is it possible that its intelligence density is not high enough?
△ Wu Yi, assistant professor at the Institute for Interdisciplinary Information Sciences of Tsinghua University. Photo provided by the Bund Conference.
In the eyes of many entrepreneurs, the successive open-sourcing of high-performance models such as DeepSeek V3/R1 and Qwen (Tongyi Qianwen) was the fuse for this year's explosion of AI applications.
At the opening ceremony, Wang Jian, founder of Alibaba Cloud and director of Zhejiang Lab, offered suggestions to base-model makers from the perspective of AI applications' upstream.
What is more valuable to open-source? In Wang Jian's view, while model training remains costly, open-sourcing code is not the key; what matters more is open-sourcing the resources for training models (data and computing power).
△ Wang Jian, founder of Alibaba Cloud and director of Zhejiang Lab. Photo provided by the Bund Conference.
Below is Intelligent Emergence's summary of the main views shared by Zhu Xiaohu, Wang Xingxing, Wu Yi, and Wang Jian at the 2025 Inclusion Bund Conference:
Zhu Xiaohu: If you pursue commercialization, don't pursue the most cutting-edge technology
As long as the Transformer architecture can't solve hallucination, as long as there is even a 1% hallucination rate, complex process-management software can't be replaced by AI.
Simply put, low-code and no-code software will definitely be replaced by AI. This is already happening at scale: many low-code companies that raised large rounds at high valuations during the bubble three or four years ago have basically disappeared.
However, it's unrealistic to rely on AI under the Transformer architecture to solve problems involving complex processes and strict logic.
The emergence of a super entry point in the AI era is inevitable, and the relatively clear form now is voice. Apple's Siri performs poorly, with weak AI capabilities, but the AI capabilities on Android, or from Google, already let us get direct feedback through voice input.
In the future, cameras will be added alongside voice, combining multiple modalities for input.
For example, if I take a picture of a flower I like and tell my phone to buy one for me, I can place the order directly. This is an obvious future trend.
But there is still a chance for agents. We often say that in the US mobile-Internet wave, half of the successful startups did hard offline work that large companies were reluctant to do, such as Uber and Airbnb.
I think there will be similar opportunities in the AI era: scenarios tied to the real world still need agents.
An agent can execute for you and deliver results in real life. Big AI companies and some software companies are reluctant to do that kind of work, so startups may still have opportunities there.
It's already obvious that low-code and no-code software will be completely replaced by large models, and demand for much editing and collaboration software will fall sharply.
For example, we used to invest in software like Figma, where a project required hundreds of people to collaborate.
Now AI can significantly reduce the head count needed, from hundreds of collaborators down to ten, so demand for collaboration software will drop sharply.
This also has a huge impact on the market. That's why we couldn't understand why Figma's stock was bid up so high at its IPO; it has since fallen back.
It's not that the software is replaced by AI; rather, demand and user numbers shrink, and even a 10% drop in users has a huge impact.
So we definitely avoid collaboration software now. The market for it will still exist in the future, but it will be much smaller.
When judging an AI product, we only ever look at one indicator: retention. From the PC Internet to the mobile Internet to AI, it has always been the same retention indicator.
The reason many people mock AI application companies this year is that they have no user retention.
In our experience from the mobile-Internet era, winning back a lapsed user can cost more than ten times as much, which makes it nearly impossible. So whether your retention is good is what proves whether the company has room to grow.
I've noticed that some currently hot companies raising follow-on rounds talk only about ARR (Annual Recurring Revenue), calculated by multiplying a single day's revenue by 365, and never mention retention at all.
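Zhu's complaint can be made concrete with a toy calculation (all numbers below are hypothetical, invented purely for illustration): annualizing one strong day's revenue produces a large ARR figure even while cohort retention is collapsing.

```python
# Toy illustration of the "day x 365" ARR pitch vs. retention.
# All figures are hypothetical.
daily_revenue = 30_000                      # dollars on one good day
extrapolated_arr = daily_revenue * 365      # the number pitched to investors
print(f"Claimed ARR: ${extrapolated_arr:,}")

# Retention tells a different story: the fraction of a signup cohort
# still active N days later.
cohort_size = 10_000
active_by_day = {1: 4_200, 7: 900, 30: 150}  # hypothetical cohort counts
for day, active in active_by_day.items():
    print(f"Day-{day} retention: {active / cohort_size:.1%}")
```

The extrapolated ARR here looks like an eight-figure business, while day-30 retention of 1.5% suggests almost no one stays, which is exactly the gap Zhu says the ARR-only pitch hides.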
Recently, we were discussing future investment directions internally and found that the technologies truly suited to commercialization are "boring technology": not flashy, even rather ordinary.
For example, last year the most commercially successful AI category worldwide was transcription, including various kinds of meeting minutes, both vertical and general-purpose.
I think the best case last year was Plaud, now valued at one billion US dollars, and this year almost all Chinese companies are chasing this direction.
But is there anything particularly hard about this technology? There is no technical difficulty at all, yet it is very easy to commercialize.
This year, we feel voice agents have almost reached the stage of commercial scale-up, with applications in customer-service centers, outbound sales calls, and toys.
So, relatively speaking, if you pursue commercialization, don't use the latest or trendiest technologies. Use some seemingly unremarkable but relatively stable ones.
Wang Xingxing: A good model can improve data utilization
As for AI in real-world physical work, the whole field is still at a barren stage.
Although current language models perform extremely well on text and images, better than 99.99% of people, the eve of large-scale, explosive growth has not yet arrived. For now there are only a few blades of grass in the desert.
The AI era is a very fair era. Even if you're still a student, as long as you're smart and willing to work, you can achieve your goals and grow tall trees in the desert.
On how to break through the data bottleneck of embodied intelligence: some people misread my earlier remarks as denying the importance of data. What I actually mean is that both data and models matter, and a good model improves data utilization.
There is currently a lot of noise in the data. How do we collect truly high-quality data? What are the standards for data quality? What types of data should we collect now, and at what scale? These are still fairly vague questions.
So if the model understands data better, less data can achieve better results.
Secondly, from the model's perspective, we can collect data in a targeted way. For language models, people have found that what's often needed is data with particular characteristics, not sheer quantity. Likewise, in robotics we can evaluate from the model's perspective how to collect data and which actions or scenarios are of higher quality.
Moreover, current models are still weak at multimodal fusion.
For example, if we generate a video of a robot doing housework, the result looks okay, but aligning that generated output with the robot's control modality is very challenging.
Simply put, to better align the robot's movements with video and language models, we need to improve the model architecture.
The hardware is already sufficient; the biggest problem is that the models are not capable enough. Controlling a dexterous hand, for example, remains very difficult.
In the AI era, small teams can wield increasingly powerful capabilities. Especially in pure AI, a team with a few top-notch, innovative people can accomplish a great deal.
After all, organizational and management challenges are huge for large companies and for companies that expand their teams significantly.
As for advice on AI: forget as much as possible of what has happened in the past, and relearn the newest things, even just those of the past six months. I think that brings more fresh inspiration.
Relying on past experience is not good for decisions about the future, because there may be many people with more past experience than you. Making fresh decisions based on what is happening now is more likely to lead to something new.
Wu Yi: There is a lot of noise in the AI era. You can close your eyes first
Wu Chenglin (founder and CEO of DeepWisdom) and I have opposite styles: he says he reads papers on arXiv every day, while I don't.
That's because I think there is a lot of noise in the AI era, and reducing noise is very important now.
Sometimes we need to insist on doing the right thing. For example, in my view, if we do reinforcement learning correctly, we don't need so many modules; if agents are trained with reinforcement learning, the modules can be much simpler, because capabilities can emerge.
I think many current views may not represent the future direction, so we need to persevere, and we can even close our eyes. Manus also said some time ago that when others don't like your product, you can close your eyes first.
Recently, I built an embodied-intelligence "brain" that lets a robot play football with people. It's actually quite fun.
I have a concept of embodied agents. We usually talk about agents as intelligent agents in software; I want to suggest there may be a corresponding concept, embodied agents, in the physical world.
Assuming we solve all the VLA (vision-language-action) and hardware problems, what's next? Will you tell an agent to "help me do something" and have it spend a day completing the task for you?
So I think that one day the concept of agents will also land in the real world as embodied agents.
Looked at in layers, many models and skills in embodied intelligence can be regarded as function calls or tools.
Then, once this more abstract agent has a physical form, it becomes an embodied agent that can complete tasks 24 hours a day. This is my vision.
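Wu Yi's layered picture, in which low-level embodied skills are exposed to a higher-level agent as tools, can be sketched minimally. Everything here is a hypothetical illustration, not any real robot API: the skill names, the registry, and the dispatch loop are all invented for this sketch.

```python
# Minimal sketch of the layered view: low-level embodied skills
# registered as named "tools" that a higher-level agent dispatches.
# All skill names and the dispatch scheme are hypothetical.
from typing import Callable, Dict, List, Tuple

SKILLS: Dict[str, Callable[..., str]] = {}

def skill(name: str):
    """Register a low-level capability under a tool name."""
    def register(fn: Callable[..., str]) -> Callable[..., str]:
        SKILLS[name] = fn
        return fn
    return register

@skill("walk_to")
def walk_to(place: str) -> str:
    return f"arrived at {place}"

@skill("grasp")
def grasp(obj: str) -> str:
    return f"holding {obj}"

def run_agent(plan: List[Tuple[str, dict]]) -> List[str]:
    """The 'agent' here is just a loop issuing tool calls on skills."""
    return [SKILLS[name](**args) for name, args in plan]

# A task decomposed into function calls on embodied skills:
log = run_agent([("walk_to", {"place": "kitchen"}),
                 ("grasp", {"obj": "cup"})])
print(log)  # ['arrived at kitchen', 'holding cup']
```

In a real system the plan would come from a planner or language model rather than being hard-coded, but the structure matches the layering Wu describes: skills below, an abstract tool-calling agent above.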
To be honest, I miss my time at OpenAI from 2019 to 2020. Back then, OpenAI was a very small organization with only dozens of people.
I remember chatting with a senior colleague. I said I really hoped the organization would stay small, just dozens of people. He said, "Have you considered that with 300 or 3,000 people you could achieve far more than with 30?" I think that was completely correct in the Internet era.
But now I have a question: Is this statement still true in the AI era?
Is there a possibility of a radical model where 30 people can do things well?
In the virtual world or the information world of agents, if you need 300 people, is it possible that the agent density or intelligence density is not high enough?
So is there a new type of organization that only needs 30 people and can do what required three to five hundred or a thousand people ten years ago? This may be true in the AI era.
So this is my very radical view, and I want to try it out in my team.
I agree with Xingxing that, first, you need to forget the past. But don't forget history: human history keeps repeating itself, and it's good to know past mistakes.
Wang Jian: Don't just open-source code; OpenAI has admitted its mistake
People understand the term "open source" differently. We are living through a revolutionary shift from opening and open-sourcing code to opening and open-sourcing resources.
Actually, a lot of things have happened in the past year. From the perspective of artificial intelligence, 2025 is destined to be a very extraordinary year.
On January 13, 2025, the United States announced export controls on artificial intelligence. There is an interesting thing, or call it a loophole, in this control order: it only explicitly imposes export controls on "closed-source" weights, and specifically emphasizes that "open-source" weights are not subject to control.
At that time, the world's best base models were owned by several top - tier US companies.
However, on January 31, with the open-sourcing of Qwen (Tongyi Qianwen) and DeepSeek, on