Zhao Jiehui of DeepData Technology: From the Technological Peak to the Ocean of Application Scenarios, the Path for Enterprises to Harness AI | WISE 2025 Kings of Business Conference
From November 27 to 28, the 36Kr WISE 2025 Kings of Business Conference, hailed as the annual bellwether of technology and business, took place at the Conduction Space in Beijing's 798 Art Zone.
This year's WISE is no longer a traditional industry summit but an immersive experience built around "tech-infused short dramas." From AI reshaping the boundaries of hardware to embodied intelligence opening the door to the physical world; from brands going global in the overseas-expansion wave to traditional industries fitting themselves with "cyber prosthetics": what we present is not just trends, but insights honed through real business practice.
In what follows, we dissect, frame by frame, the real logic behind these "thrilling dramas" and survey the distinctive business landscape of 2025.
In 2025, the focus of the large-model craze has shifted from the parameter race to implementation. Yet the AI industry still faces a fundamental question this year: are the products truly usable, and what are the real bottlenecks enterprises hit when applying AI?
"Artificial intelligence is moving from the technological peak into the vast ocean of application scenarios. Productization is the core direction for the next three years." At the WISE 2025 Kings of Business Conference, Zhao Jiehui, founder, chairman of the board, executive director, and CEO of DeepData Technology, pinpointed the industry's core proposition. He explained further: "Inside an enterprise, if a model cannot handle the deconstruction of complex data, high-precision training, and knowledge modeling, it can only be called a sample, not a product."
Dropping a general-purpose large model into an enterprise is like hiring a novice who doesn't understand the business: it can see the blueprints but can't read the formulas; it can hear instructions but can't understand them or mobilize the data.
DeepData Technology completed its listing in 2025. As a representative of the domestic Data + AI field and the first listed company in enterprise-level large-model AI applications, it is focusing further on the digital-transformation needs of real-economy industries, deepening the integration of AI technology with business scenarios through a "productization" mindset, and continuously unlocking the value of technology in empowering industries. In his speech, Zhao Jiehui dissected the path of AI productization.
He argued that the essence of enterprise-level AI is not a general model but an accurate replication of the knowledge system and data permissions of specific positions. Behind this lie three unavoidable challenges: processing "non-standardized" data such as blueprints and process documents, building models across knowledge systems, and ensuring 100% accurate data assembly under complex queries.
Based on its enterprise-level large-model AI application solutions, DeepData Technology has accumulated extensive practice in industries such as manufacturing, consumer retail, transportation, and healthcare. From helping management optimize business decisions to giving professionals such as engineers and doctors judgments grounded in enterprise models, AI has been woven deep into the business chains of these industries, delivering value from core decision support to the optimization of key links.
All of this, however, rests on one premise: the core abilities to deconstruct complex data, build knowledge systems, and accurately assemble real-time data are indispensable. Zhao Jiehui believes that unless an intelligent agent fusing "model, data, and interaction" can be formed, even the most advanced technology remains a non-scalable sample. Industrial AI is ultimately a systematic project centered on precision and productization.
Zhao Jiehui, Founder, Chairman of the Board, Executive Director, and CEO of DeepData Technology
The following is the speech delivered by Zhao Jiehui of DeepData Technology at the WISE conference, edited by 36Kr:
Zhao Jiehui: Thank you all!
This is the third time I've spoken on this stage. Unlike the previous two, this year we have our own stock code: 1384.HK. DeepData Technology completed its listing on the Hong Kong Stock Exchange on October 28, 2025.
Today I'd like to share how artificial intelligence is actually being implemented inside enterprises. Beyond the marketing, advertising, and customer-service applications the previous speakers mentioned, our deployments in many large enterprises go far past those roles. According to the information publicly disclosed in our prospectus, at enterprises such as China Haisum Engineering Co., Ltd. we have made deep progress in replacing on-site technicians in manufacturing processes such as design and construction. In retail, AI has likewise penetrated deep into multiple layers of business decision-making.
Before the cases, a few thoughts. Large-model technology has been in the spotlight for three or four years, and in the IT industry few technologies hold that level of attention for more than five. The core reason it has lasted is that as the technology moves from the "peak" into the industrial "ocean," real productization is a highly systematic project: it demands full-chain collaboration across technology R&D, scenario adaptation, data governance, and knowledge accumulation. In that process, enterprises, industries, and technology practitioners can all reap tangible benefits from value co-creation.
However, not all technology implementations can achieve the expected results.
For any sample or demo to land in a position where it generates real value, systematic productization is indispensable; turning a sample into a product is a deeply systematic undertaking.
Before 2024, conversations about large models centered on parameter counts and compute clusters. This year those voices have faded, and more people are asking: which job functions can this model actually perform?
Have you ever considered the first thing you must do if you want AI to perform a job function in an enterprise? It is to systematically organize the professional knowledge and work logic accumulated by the people in that position, and to inventory the data permissions the position holds. You then use that knowledge and those permissions to continuously post-train a model, so that it precisely matches the position's work requirements and professional scenarios, with a knowledge system and process-adaptation ability rich enough to efficiently support the position's core work.
For blue-collar roles, of course, the model also needs visual and voice capabilities (i.e., a VLM) on top of job knowledge, work logic, and data permissions. Through the collaboration of multimodal technologies, the model can generate a series of precise operating instructions to replace the manual remote control of traditional embodied devices, providing intelligent collaborative support for front-line operating scenarios.
At this point you'll see that no matter what kind of AI lands in an industry, the first step is always to process all the knowledge and data in the position's scenario, and the second is to post-train the model on that data until it is accurate enough to truly penetrate the industry.
Here the differences between industrial models and consumer-grade large models become clear. In the consumer-grade "battle of a hundred models," people train with all sorts of architectures, yet it is hard to differentiate, because most of the training data is Internet data.
Enterprise data, by contrast, may be a pile of blueprints, process documents, or files in formats no tool recognizes. Governing such data is hard, but once done, its tight fit to business scenarios lets it precisely match a position's core needs, making it the core driver for a model to realize AI's industrial value and land in real scenarios. So the first challenge is how to convert these materials into corpora on which the model can be continuously trained for the position.
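As a minimal illustration of this conversion step, the sketch below turns heterogeneous enterprise documents into uniform prompt/target records for position-specific post-training. The schema (`doc_type`, `annotations`, `steps`) and field names are hypothetical, not DeepData Technology's actual pipeline:

```python
# Hypothetical sketch: normalizing heterogeneous enterprise documents
# (blueprint annotations, process documents) into instruction-style
# training records for a specific position.

def to_corpus_records(documents, position):
    """Convert raw document dicts into prompt/target training pairs."""
    records = []
    for doc in documents:
        if doc.get("doc_type") == "blueprint":
            # Blueprint annotations become Q&A pairs about symbols.
            for note in doc.get("annotations", []):
                records.append({
                    "position": position,
                    "prompt": f"In drawing {doc['id']}, what does '{note['symbol']}' mean?",
                    "target": note["meaning"],
                })
        elif doc.get("doc_type") == "process":
            # Process documents become step-recall pairs.
            for i, step in enumerate(doc.get("steps", []), start=1):
                records.append({
                    "position": position,
                    "prompt": f"What is step {i} of process {doc['id']}?",
                    "target": step,
                })
    return records

docs = [
    {"doc_type": "blueprint", "id": "BP-001",
     "annotations": [{"symbol": "t", "meaning": "wall thickness in mm"}]},
    {"doc_type": "process", "id": "PR-7",
     "steps": ["deburr edges", "anodize surface"]},
]
corpus = to_corpus_records(docs, position="process engineer")
print(len(corpus))  # 3 records: 1 blueprint annotation + 2 process steps
```

In a real pipeline each document type would need its own parser (CAD formats, scanned drawings, office files) before anything this uniform could be emitted; the point here is only the shape of the output corpus.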
The second challenge is modeling across knowledge systems. Internet web data tends to be self-contained: a user can complete a full information-acquisition loop on a single page. In an enterprise, however, a formula on a blueprint often needs another knowledge system to explain it, and the relevant parameters are scattered across other documents. The key is to efficiently re-model these intertwined knowledge systems into a coherent knowledge network and inject it into the model's parameters, so that the position's knowledge is trained into them and the model can do the job.
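The cross-document dependency described above can be sketched as a tiny resolvable knowledge network: a blueprint carries a formula whose parameters are defined in other documents. All document names, fields, and values below are invented for illustration:

```python
# Hypothetical sketch: a blueprint formula whose parameters live in other
# documents, linked into one resolvable knowledge network.

knowledge = {
    "blueprint:BP-001": {"formula": "load = q * L**2 / 8",
                         "refs": ["spec:S-12", "table:T-3"]},
    "spec:S-12":        {"defines": {"q": 4.0}},  # distributed load, kN/m
    "table:T-3":        {"defines": {"L": 6.0}},  # span length, m
}

def resolve(node_id, kb):
    """Gather parameter definitions reachable from a node's references."""
    params = {}
    for ref in kb[node_id].get("refs", []):
        params.update(kb[ref].get("defines", {}))
    return params

params = resolve("blueprint:BP-001", knowledge)
load = params["q"] * params["L"] ** 2 / 8  # q*L^2/8 = 4.0 * 36 / 8
print(params, load)  # {'q': 4.0, 'L': 6.0} 18.0
```

Only once such links are made explicit can the joined knowledge be turned into training signal; a model trained on the blueprint alone would see the formula but never its parameter definitions.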
The third challenge is accurate data assembly. Many people assume ChatBI is simple: understand an enterprise's data combinations and analysis patterns, accept natural-language input, and accurate business analysis comes out. In reality, whether the model is open-source or closed-source, the accuracy of cross-table association queries spanning four to five tables inside an enterprise is currently probably no better than 70%.
For example, when we analyze "why sales volume declined in June across 500 stores in a given region," the model must not only understand the enterprise's analysis logic but also assemble the real-time status data with 100% accuracy. That remains a huge challenge.
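To make the assembly step concrete, here is the kind of multi-table query a ChatBI layer must generate correctly before any "reason" analysis can even begin. The schema (`stores`, `sales`), region names, and figures are invented, and a real case would span far more tables:

```python
# Hypothetical sketch: the join a ChatBI system must get exactly right to
# compare each store's May and June sales within one region.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE stores (store_id INTEGER PRIMARY KEY, region TEXT);
CREATE TABLE sales  (store_id INTEGER, month TEXT, units INTEGER);
INSERT INTO stores VALUES (1, 'East'), (2, 'East'), (3, 'West');
INSERT INTO sales VALUES
  (1, '2025-05', 120), (1, '2025-06', 90),
  (2, '2025-05', 100), (2, '2025-06', 80),
  (3, '2025-05', 50),  (3, '2025-06', 60);
""")

# Month-over-month units per East-region store: a single wrong join key or
# filter here silently corrupts every downstream conclusion.
cur.execute("""
SELECT s.store_id,
       SUM(CASE WHEN sa.month = '2025-05' THEN sa.units END) AS may_units,
       SUM(CASE WHEN sa.month = '2025-06' THEN sa.units END) AS june_units
FROM stores s JOIN sales sa ON sa.store_id = s.store_id
WHERE s.region = 'East'
GROUP BY s.store_id
ORDER BY s.store_id;
""")
rows = cur.fetchall()
print(rows)  # [(1, 120, 90), (2, 100, 80)]
```

With only two tables this is easy to verify by eye; the 70% accuracy figure cited above concerns queries across four to five tables, where the space of plausible-but-wrong joins grows quickly.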
Without the abilities to deconstruct complexly structured data, train models to high precision, model knowledge, and assemble data accurately, any model in an enterprise can only be called a "sample," not a "product."
Photo: 36Kr
DeepData Technology became the first listed company in enterprise-level large-model AI applications precisely because we solved these core problems and held to product logic.
Our underlying enterprise-level AI infrastructure, FastData, first solves the processing of enterprise multimodal data: whether blueprints or process documents, we can quickly convert them into corpora, model them, and assemble them. On that basis, FastAGI drives the evolution of open-source models toward enterprise-specific positions, reaching the very high accuracy needed to support those job functions efficiently and deepen their value.
The first type covers business decision-making positions, served by our DataDense product. Grant it data permissions and feed in historical analysis logic, and the model can quickly generate analysis reports following the reasoning the enterprise itself recognizes.
The second type serves professional practitioners. In fields such as architecture and machining, for instance, once project and product design logic has been trained into the model, accurate knowledge can be retrieved quickly without consulting construction workers or engineers.
Take a manufacturing customer of ours. After the equipment is sold, the model can quickly generate process logic from production tasks and convert it into NC (numerical control) code sent directly to the machine head. Since AI can precisely take over engineers' functions, the path to empowering knowledge-intensive professions such as doctors and lawyers is also clearly feasible.
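As a toy illustration of that last conversion step, the sketch below turns a structured process plan into a few NC (G-code) lines. The operation types and code subset are deliberately minimal and have no relation to the customer's actual system; a real post-processor handles tooling, spindle control, safety checks, and much more:

```python
# Hypothetical sketch: converting a structured process plan into a
# minimal NC (G-code) program for a machine controller.

def plan_to_nc(plan):
    """Emit NC lines for a list of rapid-move and cutting operations."""
    lines = ["G21", "G90"]  # millimeter units, absolute positioning
    for op in plan:
        if op["type"] == "rapid":
            lines.append(f"G00 X{op['x']} Y{op['y']}")
        elif op["type"] == "cut":
            lines.append(f"G01 X{op['x']} Y{op['y']} F{op['feed']}")
    lines.append("M30")  # end of program
    return lines

program = plan_to_nc([
    {"type": "rapid", "x": 0, "y": 0},
    {"type": "cut", "x": 50, "y": 0, "feed": 120},
])
print("\n".join(program))
```

The interesting part in the scenario described above is upstream of this function: having the model derive the structured plan itself from a production task, which is where position-specific training earns its keep.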
Beyond these two types, we are also developing enterprise-level large-model AI application solutions for front-line operating positions, aiming to streamline workflows and lower the skill threshold through technology. Please keep following the iteration and rollout of our products.
It is fair to say that once an enterprise's job knowledge has been "poured in," a precise model trained within that knowledge scope can take over the corresponding positions: business decision-makers, professional knowledge workers, and manual workers alike, ultimately yielding a model that can be accurately deployed across large enterprises and industries. That is what we are doing.
Today there are three keys to enterprises realizing AI's value: complex data governance, knowledge modeling, and accurate data assembly. And without precision, the model is meaningless.
To sum it up in one sentence: "AI+" does not simply mean having a base model. For artificial intelligence to land in industry, it must go through productization. Whatever its form, a robot is also a type of agent. Only when model, data, and interaction are deeply fused can it be called a true intelligent agent.
This is our view. Thank you all!