
Lenovo SSG: AI implementation enters an acceleration phase, with scenarios and ROI becoming the core focus

Su Jianxun · 2025-09-03 15:27
Unlike the fiercely competitive, fast-moving ToC market, enterprises in the ToB market need to focus less on chasing ever-larger parameter counts and more on how to use AI to deliver business results. In short, it comes down to "cost-effectiveness."

“The AI industry is entering a new phase: shifting from a parameter race to a return to value.” At a recent media briefing held by Lenovo's Solutions and Services Group (SSG), Hu Guanzhong, Senior Vice President of Lenovo Group and SSG's Chief Information Officer and Chief Technology and Delivery Officer, and Chen Minyi, Vice President of Lenovo Group and Head of Business Application Service Delivery, shared Lenovo's latest thinking and practice on the slowdown in large-model iteration, the challenge of hallucination, ROI orientation, and localization differences.

Application value rises, and ROI becomes the core indicator

This year marks the fifth anniversary of Lenovo SSG. These five years have also seen generative AI move from a "fever phase" to rational, on-the-ground adoption. Hu Guanzhong noted that AI is still developing explosively, with new progress emerging almost every three to five days, yet ROI has gradually become the key yardstick by which enterprises weigh input against output.

Hu Guanzhong emphasized that without a solid digital foundation, rashly promoting AI projects often fails to yield results. “Enterprises with real experience and capabilities should be able to identify suitable tools, find the right scenarios, and use technology where it matters most to truly create value.”

In his view, unlike the fiercely competitive and fast-moving ToC market, the ToB market rewards enterprises that focus less on blindly chasing larger parameter scales and more on using AI to create business results, in short, "cost-effectiveness." The slowdown in large-model iteration does not mean the pace of enterprise AI adoption is constrained.

“In the ToB market, what customers value is not the model itself, but whether it can unlock value in specific scenarios,” he said. “Large models often come with high latency and high cost, while small and medium-sized models are sufficient for many business processes. The real competition is not over technological limits, but over who can find the right scenarios and realize value.”

Taking agents as an example, Hu Guanzhong pointed out that whether it is a single-point agent, a super-agent, or multi-agent collaboration, the overall direction points to stronger intelligence, higher autonomy, and wider coverage. “But ultimately, whether it can land depends on the scenario itself, on whether an agent is really the right way to solve the problem.”

This logic is borne out in Lenovo's "LeXiang" super-agent. The system does not rely on the newest, most powerful large models; instead, through intent recognition, task orchestration, and cross-system execution, it significantly improves customer experience and operational efficiency in retail and e-commerce scenarios.

Data shows that "LeXiang" has delivered a 30% increase in order conversion rate, 15% growth in GMV, and a 30% gain in per-person process efficiency, evidence that "a sufficient model plus the right scenario" is the key to unlocking AI's value.
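The article does not disclose how "LeXiang" is built, but the intent-recognition, task-orchestration, cross-system-execution pattern it describes can be illustrated with a minimal sketch. Everything below — the class and function names, the keyword-based intent rules, and the connector registry — is a hypothetical stand-in, not Lenovo's implementation.

```python
# Hypothetical sketch of an intent -> orchestration -> execution pipeline.
# None of these names or rules come from Lenovo; they only illustrate the pattern.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Task:
    """One executable step produced by the orchestrator."""
    system: str   # target backend, e.g. "order", "payment", "crm"
    action: str   # operation to invoke on that backend
    params: dict


def recognize_intent(utterance: str) -> str:
    """Map a customer utterance to a coarse intent (placeholder rules;
    a production system would use a classification model)."""
    text = utterance.lower()
    if "refund" in text:
        return "refund_request"
    if "where is my order" in text:
        return "order_status"
    return "general_inquiry"


def orchestrate(intent: str) -> List[Task]:
    """Expand an intent into an ordered plan spanning several backend systems."""
    plans: Dict[str, List[Task]] = {
        "order_status": [Task("order", "lookup", {})],
        "refund_request": [
            Task("order", "lookup", {}),
            Task("payment", "refund", {}),
            Task("crm", "notify_customer", {}),
        ],
    }
    return plans.get(intent, [Task("crm", "create_ticket", {})])


def execute(tasks: List[Task],
            connectors: Dict[str, Callable[[str, dict], str]]) -> List[str]:
    """Run each task against the connector registered for its target system."""
    return [connectors[t.system](t.action, t.params) for t in tasks]


if __name__ == "__main__":
    # Dummy connectors standing in for real order/payment/CRM integrations.
    connectors = {name: (lambda action, params, n=name: f"{n}:{action} ok")
                  for name in ("order", "payment", "crm")}
    plan = orchestrate(recognize_intent("Where is my order #123?"))
    print(execute(plan, connectors))
```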

Hallucination is inevitable, and a systematic solution is the answer

Beyond ROI, hallucination remains the other major challenge in large-model applications. Hu Guanzhong pointed out: “Hallucination is a common trait of this generation of architectures and cannot be completely eliminated in the short term. The key lies in how solutions are designed for different scenarios: high-risk processes must be verified by humans, while low-risk links can be gradually handed over to agents.”

Chen Minyi added that tackling hallucination cannot rely on single-point breakthroughs; it requires full-link optimization as a systems-engineering effort. “From model engineering and multimodal interaction to RAG retrieval and multi-model collaboration, every link needs continuous refinement.” She emphasized that Lenovo's advantage lies in integrated full-stack capabilities across hardware, software, and services, which can provide end-to-end reliability guarantees for enterprise customers.
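As a rough illustration of the "high-risk steps go to humans, low-risk steps go to agents" policy combined with retrieval grounding, the sketch below routes each request through a naive retriever and flags high-risk drafts for human review. The function names, the keyword retriever, and the risk labels are assumptions made for illustration, not Lenovo's stack.

```python
# Hypothetical sketch of risk-tiered review with retrieval grounding.
from typing import List


def retrieve_evidence(query: str, corpus: List[str], k: int = 3) -> List[str]:
    """Naive keyword-overlap retrieval standing in for a real RAG retriever."""
    words = query.lower().split()
    scored = sorted(corpus, key=lambda doc: -sum(w in doc.lower() for w in words))
    return scored[:k]


def draft_answer(query: str, evidence: List[str]) -> str:
    """Placeholder for a model call that must ground its answer in the evidence."""
    return f"Answer to '{query}' grounded in {len(evidence)} source(s)."


def route(query: str, risk: str, corpus: List[str]) -> str:
    """Low-risk requests go straight to the agent; high-risk drafts are
    queued for human verification before anything is executed."""
    evidence = retrieve_evidence(query, corpus)
    answer = draft_answer(query, evidence)
    if risk == "high":
        return f"[PENDING HUMAN REVIEW] {answer}"
    return answer


if __name__ == "__main__":
    corpus = ["warranty policy covers 12 months", "refunds require manager approval"]
    print(route("Can I get a refund?", risk="high", corpus=corpus))
    print(route("What is the warranty period?", risk="low", corpus=corpus))
```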

Key industries such as manufacturing and supply chain are becoming the main battlegrounds for accelerated AI adoption. Lenovo's supply-chain agent "iChain" is a representative case. Rather than relying on a single large model, it achieves risk prediction, anomaly warning, and inventory optimization in complex supply-chain scenarios through multi-agent collaboration and real-time data connectivity.

Practical results show that iChain has raised risk-identification accuracy to 90% and shortened the risk-response cycle by a factor of four, markedly strengthening the resilience of the enterprise's supply chain. This end-to-end, scenario-driven approach directly addresses industry concerns about hallucination and accuracy.
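Lenovo has not published iChain's internals, but the multi-agent division of labor described above — risk prediction, anomaly warning, and inventory optimization over a live data feed — can be sketched as independent agents sharing one data snapshot. The agent names, the 85% on-time-delivery threshold, and the safety-stock rule are invented purely for illustration.

```python
# Hypothetical sketch of multi-agent collaboration over a shared supply-chain feed.
from typing import Dict, List


def risk_agent(snapshot: Dict[str, float]) -> List[str]:
    """Flag suppliers whose on-time delivery rate drops below a threshold."""
    return [supplier for supplier, on_time in snapshot.items() if on_time < 0.85]


def alert_agent(flagged: List[str]) -> List[str]:
    """Emit anomaly warnings for downstream planners."""
    return [f"WARNING: supplier {s} at risk of delay" for s in flagged]


def inventory_agent(flagged: List[str], safety_stock_days: int = 7) -> Dict[str, int]:
    """Recommend extra days of buffer stock for parts from flagged suppliers."""
    return {supplier: safety_stock_days for supplier in flagged}


if __name__ == "__main__":
    # A static dict stands in for the real-time data connection mentioned above.
    feed = {"supplier_a": 0.97, "supplier_b": 0.78}
    at_risk = risk_agent(feed)
    print(alert_agent(at_risk))
    print(inventory_agent(at_risk))
```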

From technological breakthrough to value realization

From the slowdown in large-model iteration to the challenge of hallucination, and on to ROI and localization differences, the logic SSG stressed repeatedly at the briefing is this: an AI breakthrough is not just a technological innovation but a systems-engineering effort. Truly moving enterprises toward intelligence requires not only algorithmic capability but also a full-stack layout and hands-on experience spanning infrastructure, platform tools, and industry scenarios.

Comparing the Chinese and overseas markets, he noted that overseas customers prefer to consume AI directly through the SaaS model, while domestic customers emphasize "localization + hybrid deployment" to ensure data security and flexibility. Lenovo follows an "internal development and external expansion" strategy: solutions are first proven in its own operations across more than 180 countries, then offered to customers and flexibly adapted to local needs.

Hu Guanzhong concluded: “Whether it is traditional machine learning, deep learning, or generative AI, they are just tools. The key is to find suitable scenarios and release value in the right way. Blindly chasing large models does not equal long - term competitiveness.”