A Ten-Thousand-Word Analysis of "Intelligence +": What to Add, and How?
The wave of large models is sweeping across the globe, and we are standing at the critical juncture of a technological paradigm shift. Intelligence is no longer just a tool but a new driving force for industrial evolution. "Intelligence +" is not only about technological grafting but also a cognitive revolution and ecological reconstruction. Its essence is to implant the genes of the new era into all industries.
China's intelligent economy is on the verge of an explosion. We need not only to clarify what to add (new cognition, new data, new technologies) but also to solve how to add it (cloud intelligence, digital trust, π-shaped talent, full-staff participation, and mechanism reconstruction) in order to reach the singularity of industrial upgrading.
I. What to add?
+ New Cognition: Embrace the Paradigm Revolution, Clarify the Boundaries, and Have Both Confidence and Patience
The core of "Intelligence +" is a cognitive transformation. We can see that the management of various industries is generally highly motivated by this wave of artificial intelligence transformation. Whether it is the relevant policy orientation or the overwhelming articles in the self - media, they have made the management feel the excitement brought by the rapid progress of technology and the anxiety about missing opportunities or being disrupted by technology. In a recent AI + research survey organized by the research institute, many enterprises chose the option "AI is a major trend, and we are worried that we will fall behind the times if we don't transform", indicating that many enterprises have the FOMO (fear of missing out) mentality.
Attitudes towards artificial intelligence also show two typical, contradictory mentalities. On the one hand, senior management is extremely eager to implement AI and hopes for quick deployment and immediate results. The top leaders of some enterprises personally champion large-model projects, listen to special reports weekly, and iterate their industry large model quarterly, full of hope for its application prospects. On the other hand, some enterprises stall and lose enthusiasm after an initial push: current applications are mostly limited to scenarios such as knowledge Q&A and simple customer service, their effects are hard to evaluate, and the gap between expectations and reality easily turns into disappointment.
Therefore, we should neither overestimate "Intelligence +" nor rely on a momentary impulse. Instead, we need a profound upgrade in understanding and a transformation journey involving all employees.
"Intelligence +" means upgrading from relying on human experience for decision - making to human - machine collaboration. It is not about using AI to create new tools but using AI to achieve new cooperation between people and between people and machines. Humans are good at intuitive judgment, ethical trade - offs, and innovative breakthroughs, while AI excels in massive data analysis, pattern recognition, and full - time response. In the medical field, for example, AI can quickly screen imaging data and mark abnormalities, but the final diagnosis still requires doctors to make a comprehensive judgment based on clinical experience and the individual situation of patients. This division of labor is not a simple form of assistance but a reconstruction of the decision - making chain - humans focus on the strategic level (such as value calibration and complex problem definition), and AI executes at the tactical level (data mining, solution generation, etc.). In the future, the ultimate form of decision - making is not for machines to replace humans but for humans to harness the large - scale intelligence of machines. When doctors save more lives with the assistance of AI and managers use AI data to see through business puzzles, human - machine collaboration will not only be an upgrade of tools but also an expansion of cognitive boundaries.
"Intelligence +" means shifting from the pursuit of deterministic thinking to dynamic and continuous optimization under uncertainty. With the continuous upgrading of large - model capabilities, the depth of applications is gradually unlocked. The first wave of large models, represented by ChatGPT, is good at dialogue, giving rise to the rise of new AI search engines like Perplexity, role - playing applications like Character.ai, and Talkie. The second wave, represented by Claude 3.5 Sonnet, is good at programming, driving the popularity of Cursor, valued at billions of dollars, and popular programming stars like Windsurf and Devin. The third wave, represented by Open AI o1, is good at in - depth reasoning, making Agent applications possible. Chinese intelligent agents such as Manus, Genspark, and Lavart have attracted global attention. In the future, the fourth wave will probably be the upgrade of model capabilities such as spatial intelligence and physical AI, which will give birth to more potential applications and bring greater imagination space for improving the quality and efficiency of all industries.
Of course, large models are not omnipotent. We should view the boundaries of AI's capabilities objectively and be more patient with its evolution. Overestimating a technology in the short term and underestimating it in the long term is a mistake humans make repeatedly. Current AI still has limitations: it performs well in creative generation, complex pattern recognition, and multi-modal understanding, yet it struggles with, and hallucinates in, rigorous logical reasoning, presentation of professional knowledge, accurate numerical calculation, real-time dynamic decision-making, and long-term memory. In the creative field, large models can generate a wide variety of pictures and videos, but fine-grained adjustment of the generated content is difficult. In financial scenarios, large models can predict trends and provide investment references but are not competent to carry out actual financial operations. Using large models also incurs corresponding inference costs, and bigger is not always better. Combining large and small models according to the scenario, and letting judgment models and reasoning models share the workload, is a more economical and efficient way to solve problems, as the sketch below illustrates.
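The following is a minimal sketch of the large/small model combination idea: route each request to a small, cheap model or a large reasoning model based on a rough complexity estimate. The model names, the heuristic, and the `call_model` stub are illustrative assumptions, not any specific vendor's API.

```python
def call_model(model: str, prompt: str) -> str:
    """Placeholder for a real inference call (cloud API or local deployment)."""
    return f"[{model}] answer to: {prompt[:40]}..."

def estimate_complexity(prompt: str) -> float:
    """Crude heuristic: long prompts with reasoning keywords score higher."""
    keywords = ("why", "prove", "plan", "compare", "step by step")
    score = min(len(prompt) / 2000, 1.0)
    score += 0.3 * sum(kw in prompt.lower() for kw in keywords)
    return min(score, 1.0)

def route(prompt: str) -> str:
    """Send easy requests to a small, cheap model and hard ones to a large one."""
    model = "small-chat-model" if estimate_complexity(prompt) < 0.5 else "large-reasoning-model"
    return call_model(model, prompt)

print(route("What are your opening hours?"))
print(route("Compare three pricing plans and prove step by step which minimizes cost."))
```

In practice the heuristic would itself be a small classifier or a router model, but the economic logic is the same: reserve expensive reasoning capacity for the queries that need it.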
+ New Data: Dig Deep into Domain Knowledge and Build a Data Flywheel
Data, especially high-quality industry datasets, is the key to the successful implementation of large models. To upgrade data from a production factor into the fuel of innovation, three major problems still need to be solved.
First, break down departmental barriers and let data flow. The real value of data lies in its mobility and timeliness. In the past, enterprises generated large amounts of data in daily operations, but departmental barriers left that data isolated across departments and across the various systems built up over time, creating so-called data silos. Data silos limit information sharing and greatly reduce the efficiency of enterprise decision-making. Some enterprises have made good progress in breaking through them. LexisNexis, a leading global legal information service company, broke down the information isolation among departments in traditional law firms by acquiring the Belgian legal technology company Henchman and applying Retrieval-Augmented Generation (RAG 2.0) technology, which connected tens of millions of contract templates with external legal databases so that lawyers can instantly retrieve accurate clauses and precedents instead of searching page by page. Meanwhile, emerging technologies such as privacy computing and federated learning offer new solutions to the data-silo problem. The medical data platform launched by the Mayo Clinic contains rich data, including 644 million clinical notes from 5.3 million patients over more than 40 years, 3 million echocardiograms, 111 million electrocardiograms, 1.2 billion laboratory test results, 9 billion pathological reports, 595 million diagnoses, and 771 million surgical records. The platform uses privacy-computing methods such as homomorphic encryption, differential privacy, and secure multi-party computation to complete computing tasks while preserving data privacy. Homomorphic encryption, for example, allows calculations to be performed directly on encrypted patient data, protecting patients from data leakage while still enabling analysis and medical model training.
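To make the homomorphic-encryption idea concrete, here is a minimal sketch using the open-source python-paillier library (`phe`), which supports additive homomorphic encryption: an aggregator can sum encrypted values and only the key holder ever sees the result. The lab values are invented, and this is an illustration of the principle, not the Mayo Clinic's actual implementation.

```python
from phe import paillier  # pip install phe

public_key, private_key = paillier.generate_paillier_keypair()

# Each department encrypts its patients' lab values locally before sharing.
lab_values = [5.4, 6.1, 4.9, 7.2]
encrypted = [public_key.encrypt(v) for v in lab_values]

# The aggregator adds ciphertexts directly; it never sees any plaintext value.
encrypted_sum = sum(encrypted[1:], encrypted[0])

# Only the key holder decrypts the aggregate statistic.
mean = private_key.decrypt(encrypted_sum) / len(lab_values)
print(f"Mean lab value across encrypted records: {mean:.2f}")
```

Paillier only supports addition and scalar multiplication; fully homomorphic schemes that support arbitrary computation exist but are far more expensive, which is why real platforms mix homomorphic encryption with differential privacy and secure multi-party computation depending on the task.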
Second, dig deep into "dark data" and make data come alive. Inside enterprises, unstructured data such as text, images, voice, and video accounts for more than 80% of all data, and this largely unmined data is becoming a new input to enterprise decision-making. Epic, a medical information giant, uses GPT-4 to automatically extract key information from medical records and physician orders, enabling doctors to quickly grasp a patient's core medical data. Similarly, Amazon uses large language models to analyze large volumes of user reviews and automatically generate accurate summaries that help consumers make purchase decisions quickly. These applications show the great potential of dark data: it can free enterprises from tedious day-to-day data processing and generate real-time decision-support information that leads to more accurate decisions. Another type of high-value dark data is the experience hidden in the minds of senior employees. In the past this experience was passed down through mentoring and contained a great deal of tacit, hard-to-articulate wisdom; if it can be digitized, it will play a huge role in enterprises. Some enterprises have already captured part of this high-value experience by inviting senior experts to annotate data and write Q&A pairs, but there is still much room for exploration. Perhaps in the future every enterprise will have several digital senior-employee avatars serving as career mentors who coach new employees and accelerate talent growth.
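A minimal sketch of the review-summarization pattern described above: condensing unstructured user reviews into a decision-ready summary with a large language model. It assumes the OpenAI Python SDK and an OpenAI-compatible endpoint; the model name, prompt, and reviews are illustrative, not a description of Amazon's actual pipeline.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

reviews = [
    "Battery easily lasts two days, but the camera struggles in low light.",
    "Great screen; shipping was slow though.",
    "Camera is mediocre, yet the battery life is the best I've ever had.",
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works here
    messages=[
        {"role": "system",
         "content": "Summarize the customer reviews into pros and cons, three bullet points max."},
        {"role": "user", "content": "\n".join(reviews)},
    ],
)
print(response.choices[0].message.content)
```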
Third, form a positive feedback loop and make data spin. The core of the data flywheel is continuous user interaction and feedback driving the continuous optimization and evolution of the intelligent system. Take GitHub Copilot as an example: it learns from and optimizes its programming suggestions through each interaction with developers, gradually forming a self-reinforcing positive feedback mechanism. This not only improves the adaptability and accuracy of the model but also turns the relationship between enterprises and AI from simple tool use into a long-term collaborative partnership.
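Here is a minimal sketch of the flywheel structure: every interaction is logged as a training signal, and the signal feeds back into ranking. The acceptance counter stands in for what, in a real system, would be periodic fine-tuning or reward-model training on the collected feedback.

```python
from collections import defaultdict

class SuggestionFlywheel:
    """Log user feedback on suggestions and rerank future candidates with it."""

    def __init__(self):
        self.shown = defaultdict(int)     # how often each suggestion was offered
        self.accepted = defaultdict(int)  # how often users accepted it

    def record(self, suggestion: str, accepted: bool) -> None:
        self.shown[suggestion] += 1
        if accepted:
            self.accepted[suggestion] += 1

    def rank(self, candidates: list[str]) -> list[str]:
        # Prefer suggestions with the best historical acceptance rate.
        def score(s: str) -> float:
            return self.accepted[s] / self.shown[s] if self.shown[s] else 0.5
        return sorted(candidates, key=score, reverse=True)

flywheel = SuggestionFlywheel()
flywheel.record("use a list comprehension", accepted=True)
flywheel.record("add type hints", accepted=False)
print(flywheel.rank(["add type hints", "use a list comprehension"]))
```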
+ New Technologies: From Knowledge Engines to Intelligent Agents, from Tools to Partners
As the name suggests, the new technologies to be added primarily mean today's hottest generative artificial intelligence, commonly referred to as large models. In industry implementation, however, business scenarios, pain points, and IT maturity vary greatly, so what needs to be added is not limited to large models but also includes traditional AI technologies. Intelligent transformation is the combined result of core enabling technologies (AI, edge computing, federated learning, spatial intelligence, embodied intelligence, etc.), data-layer support technologies (cloud computing, big data, blockchain, etc.), and connection-layer technologies (5G/6G, the Internet of Things, digital twins, etc.).
Among them, large models are the core driving force of this wave of intelligent transformation, and their progress keeps opening new possibilities for "Intelligence +". In the evolution of AI we are witnessing a shift from "tools" to "actors": AI is no longer just a tool that provides information and decision-making support for humans; it is becoming a digital partner that can actively execute tasks and drive the intelligent transformation of industries. Behind this shift lies not only a technological breakthrough but also a profound change in the way humans interact with AI.
The knowledge engine is one of the areas where "Intelligence +" is easiest to implement and delivers the best results, and it is the top choice when building industry-specific large models. Introducing knowledge-engine technology can effectively address problems such as disconnection from enterprise-specific knowledge, long response times, overly broad answers, and poor performance in vertical business scenarios, and it can also significantly reduce large-model hallucinations. Take FAW Toyota as an example. Its traditional customer service responded slowly and its knowledge base was scattered; the robot customer service resolved only 37% of problems independently, and manual answers were costly. Built on a large-language-model + RAG framework, the large-model knowledge engine combines capabilities such as OCR, multi-modality, and long-text embedding to solve the full chain of difficulties in knowledge processing and answer generation, improving the accuracy and efficiency of service. FAW Toyota also uses the knowledge engine to refine information from its historical customer-service knowledge base as an effective supplement to the enterprise knowledge base, further enriching its professional customer-service knowledge system. Since connecting to Tencent Cloud's large-model knowledge engine in January this year, the independent problem-solving rate of the intelligent online customer-service robot has risen from 37% to 84%; it automatically resolves 17,000 customer inquiries per month on average, significantly improving agent efficiency, customer satisfaction, and the overall user experience. Another example is Mindray Medical's intensive-care large model. By constructing a knowledge graph and encoding the mapping relationships among intensive-care knowledge, preset examination and test indicators, drugs, and more, it combines the massive physician experience accumulated in intensive care with high-quality medical literature. It can assist doctors in decision-making by quickly predicting disease progression, and it also helps with medical-record writing, patient-information retrieval, and intensive-care knowledge retrieval. The solution enables intensive-care doctors to respond to a patient's condition within 5 seconds, greatly improving diagnosis and treatment efficiency and freeing doctors from tedious mechanical work, thus "leaving more time for patients".
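A minimal retrieval-augmented generation (RAG) sketch in the spirit of the knowledge-engine pattern above: retrieve the most relevant knowledge-base entry first, then ground the model's answer in it. TF-IDF stands in for a real embedding model, the knowledge-base entries are invented, and the final LLM call is left as a printed prompt; all of these are simplifying assumptions rather than any vendor's actual architecture.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

knowledge_base = [
    "Scheduled maintenance is recommended every 10,000 km or 6 months.",
    "The hybrid battery warranty covers 8 years or 200,000 km.",
    "Roadside assistance is available 24/7 via the owner's app.",
]

question = "How often should I service my car?"

# Retrieve the most relevant entry (a real system would use dense embeddings).
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(knowledge_base)
query_vector = vectorizer.transform([question])
best_doc = knowledge_base[cosine_similarity(query_vector, doc_vectors).argmax()]

# Ground the answer in the retrieved passage to limit hallucination.
prompt = (
    "Answer the customer question using only the reference below.\n"
    f"Reference: {best_doc}\n"
    f"Question: {question}"
)
print(prompt)  # in a real system this grounded prompt is sent to the LLM
```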
AI agents are the most promising area for the future and the key to the multiplier effect of "Intelligence +". Take Microsoft 365 Copilot as an example: it can extract information from emails and schedules, generate meeting minutes and task lists, and even create data reports directly. This reflects AI's evolution from a simple question-answering tool into an intelligent agent that actively takes on tasks and helps humans work more efficiently. Agents push human-machine collaboration beyond simple information provision towards deep integration in task execution. Industries around the world have begun connecting agents to their operations. Hemominas, the largest blood bank in Brazil, worked with Xertica to develop a chatbot agent for finding and scheduling blood donors, simplifying the process and improving efficiency; by attracting more donors and optimizing blood-supply management, it helps save 500,000 lives every year. HomeToGo, a vacation-rental company, created a new AI travel assistant, AI Sunny, which advises guests during the booking process and is planned to grow into an end-to-end intelligent travel companion, Super AI Sunny. AES, a global energy company, uses Google Vertex AI and Anthropic's Claude model to build an agent that automates and simplifies energy-safety audits, cutting audit costs by 99%, shortening audit time from 14 days to 1 hour, and improving accuracy by 10-20%. A Formula E racing team built a driving agent that analyzes the large volume of multi-modal data generated during a race and provides practical driving recommendations to drivers. The data includes text, tables, telemetry charts, and heat-map images, as well as key indicators such as lap time, speed, braking, acceleration, g-force, downforce, latitude and longitude, and steering, helping drivers effectively improve their racing performance.
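What distinguishes an agent from a chatbot is the loop: choose a tool, observe the result, decide whether the task is done. The sketch below shows only that loop structure; the planner is a hard-coded stub standing in for a real LLM, and the tools are toy functions, so none of the names correspond to any product mentioned above.

```python
def search_calendar(query: str) -> str:
    return "Project review on Friday 10:00"      # toy tool

def draft_email(text: str) -> str:
    return f"Draft created: '{text}'"             # toy tool

TOOLS = {"search_calendar": search_calendar, "draft_email": draft_email}

def plan(goal: str, observations: list[str]) -> dict:
    """Stub planner: a real agent would ask an LLM to pick the next step."""
    if not observations:
        return {"tool": "search_calendar", "input": goal}
    if len(observations) == 1:
        return {"tool": "draft_email", "input": f"Reminder: {observations[0]}"}
    return {"done": True}

goal = "Remind the team about the next project review"
observations: list[str] = []
while True:
    step = plan(goal, observations)
    if step.get("done"):
        break
    observations.append(TOOLS[step["tool"]](step["input"]))  # act, then observe

print(observations)
```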
II. How to add? Five steps to crack the code of intelligent implementation
Expand Cloud Intelligence - Cloud adoption is the optimal solution for cost-effectiveness and high performance
As large-model technology gradually shifts from "model competition" to "application implementation", cloud services have become the most critical infrastructure for carrying large-model capabilities. Compared with private deployment, cloud-based large models not only have the advantages of high cost-effectiveness, easy access, and elastic expansion but also support continuous model upgrades and smooth version transitions in the context of rapid technological iteration.
The cloud is the most efficient and economical route to large-model implementation. On price, the token-call prices of mainstream large-model clouds have been falling continuously. Last year, DeepSeek, Alibaba Cloud, Tencent Cloud, Baidu Smart Cloud, and Volcengine successively cut large-model inference prices by more than 90%, with some providers even pushing gross margins into negative territory. Compared with foreign models of the same specification, domestic models generally cost only 5%-20% as much. Many API-call prices have dropped below 10 yuan, and the input and output prices per million tokens of some models have fallen to a few cents. Although providers convert Chinese characters to tokens differently, processing the million-word "Dream of the Red Chamber" basically costs only a few cents.
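A back-of-the-envelope version of that cost claim, with every number an illustrative assumption since token-per-character ratios and prices vary by provider:

```python
# Rough cost of feeding a million-character novel through a large-model API.
chars = 1_000_000                 # roughly the length of "Dream of the Red Chamber"
tokens_per_char = 1.5             # assumed conversion rate for Chinese text
price_per_million_tokens = 0.3    # assumed input price, in yuan

cost = chars * tokens_per_char / 1_000_000 * price_per_million_tokens
print(f"Approximate input cost: {cost:.2f} yuan")  # well under one yuan under these assumptions
```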
The price advantage is only the first step. More importantly, cloud-based models can be continuously upgraded. Traditional models are difficult to update once deployed, especially with private all-in-one machine deployment: after a new model is released, it is almost impossible to upgrade an existing all-in-one machine. Cloud-based models, by contrast, support dynamic updates and hot version switching, which makes them particularly suitable for new AI scenarios that demand high-frequency interaction, real-time generation, and cloud-edge-end collaboration.
In the future, competition among large models will be not only about parameter scale or accuracy but about "cost-effectiveness + sustainable evolution + service ecosystem". In this round of transformation, the deep integration of Chinese large models and cloud services will build a globally competitive digital infrastructure system.
Rebuild Digital Trust - Use service level as the yardstick
Trust is the foundation of commercial society. We have entered the digital society, yet in many cases our trust still operates at the agricultural-era level, based on kinship and geographic ties. In the new era, we need to build a trust mechanism that matches the intelligent age, centered on standards of service capability.
The "digital trust" in the new era should break away from the path dependence on traditional relationship networks and shift to institutionalized trust based on quantitative indicators such as service level, technological transparency, and response efficiency. Its core is no longer "I trust who you are" but "I trust what you can do".
The Service Level Agreement (SLA) is not just a performance-commitment tool in IT operations; it should become the yardstick of digital trust between business partners.