
Pushing Technology to the Extreme While Implementing Projects: H3C Forges a Dual-Track Path and Rebuilds the Open Ecosystem for AI Computing Power

36Kr Brand Content, 2025-08-12 10:20
New H3C breaks the stagnation in the large-model industry with super-nodes and all-in-one machines, and builds an open ecosystem.

In the past twelve months, large models in China have made a remarkable leap from being a "technological wonder" to an "industrial necessity."

According to a report by the Cyberspace Administration in January 2025, by the end of 2024, over 300 generative large models in China had completed the filing process. However, the "2024 Collection of Large Model Application Cases" by the China Academy of Information and Communications Technology pointed out that less than 15% of the models could operate stably in real-world production environments and provide continuous external services. Most models are still in the internal testing or PoC stage.

The core issue is no longer "whether there are models" but "whether the models can run stably, efficiently, and at low cost." On the training side, the effective computing power utilization rate of thousand-GPU clusters still needs to be improved. On the inference side, many enterprises are forced to turn to public clouds due to long deployment cycles and complex operations and maintenance. Although the short-term cost is low, this poses potential risks to data compliance and business continuity in the long run.

Against this backdrop, the 2025 World Artificial Intelligence Conference (WAIC) has set the theme of "In the Intelligent Era, We Are All in This Together," aiming to call on the industry chain to jointly solve the problems of computing power inclusiveness and engineering implementation.

H3C Group participated in the exhibition with the theme of "Aggregation · Intelligent Leap" and launched a series of important products such as the super-node. Facing the imbalance between supply and demand of computing power in the industry, Xu Run'an, the senior vice-president of H3C Group and the president of the Cloud and Computing Storage Product Line, explained the solution logic to 36Kr: "The AI industry needs to pursue both technological advancement and engineering implementation." In his vision, the super-node represents the ultimate breakthrough in computing power density, while the large-model all-in-one machine serves as an engineering carrier for the inclusive implementation of AI. The two form a closed loop through an open ecosystem, providing scalable computing power solutions for enterprises of different sizes.

Technological Advancement: With 64 GPUs in One Cabinet, How Can Computing Power Efficiency Be Pushed Further?

The H3C UniPoD S80000 super-node, which made its debut at WAIC, is H3C's latest answer to breaking the "single-machine computing power ceiling." A single cabinet can accommodate up to 64 GPUs. Meanwhile, it achieves high-speed GPU interconnection through the Scale-up southbound interconnection architecture based on the Ethernet interconnection protocol. This density is not simply about stacking but involves a complete redesign of the overall machine architecture in terms of communication efficiency, energy consumption, and delivery.

In terms of communication efficiency, H3C uses its self-developed liquid-cooled high-speed backplane, enabling high-density deployment of 64 GPUs in a single cabinet and high-speed interconnection between the cards. Tests show that when running the same large model on a 256-GPU cluster, inference efficiency can be improved by about 15%–20%, and "performance gains can be achieved without additional tuning." The overall liquid-cooling system keeps the PUE below 1.1, and official calculations show significant energy savings under current industrial electricity prices.
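The PUE figure can be turned into a rough cost estimate: total facility energy is simply IT energy multiplied by PUE. The sketch below is illustrative only; the cabinet's IT load, the electricity price, and the air-cooled baseline PUE are assumptions for the sake of arithmetic, not H3C's published numbers.

```python
# Illustrative sketch: how a lower PUE translates into annual energy cost.
# The IT load, electricity price, and baseline PUE below are assumptions.

def annual_energy_cost(it_load_kw, pue, price_per_kwh, hours=8760):
    """Total facility energy = IT energy * PUE, priced per kWh over one year."""
    return it_load_kw * pue * hours * price_per_kwh

it_load_kw = 120.0   # assumed IT load of one fully populated 64-GPU cabinet
price = 0.6          # assumed industrial electricity price, CNY per kWh

cost_liquid = annual_energy_cost(it_load_kw, 1.1, price)  # liquid-cooled, PUE 1.1
cost_air = annual_energy_cost(it_load_kw, 1.5, price)     # typical air-cooled baseline

print(f"liquid-cooled: {cost_liquid:,.0f} CNY/year")
print(f"air-cooled:    {cost_air:,.0f} CNY/year")
print(f"savings:       {cost_air - cost_liquid:,.0f} CNY/year")
```

Under these assumed figures, the gap between PUE 1.1 and 1.5 is on the order of a quarter-million yuan per cabinet per year, which is why the article frames liquid cooling as an economics question rather than a cooling question.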

"Technological advancement addresses the issue of 'whether it can compute,' while engineering implementation addresses the issue of 'whether it can be used boldly,'" Xu Run'an emphasized. H3C is reconstructing the industrial paradigm with the multiplier effect of "computing power × connectivity," which brings a leap in training efficiency for clusters of over a thousand GPUs.

He revealed that a certain intelligent computing center has completed a small-scale verification of 256 GPUs in 4 cabinets and will expand to the thousand-GPU level in the next stage, mainly supporting the iterative training of scenarios such as large models for autonomous driving. The two key indicators customers care most about, effective computing power utilization and unit computing power cost, both beat traditional solutions in the tests. Currently, H3C has established joint laboratories with several domestic GPU manufacturers to continuously promote the prosperity of the computing power ecosystem and create domestic computing power that is both available and easy to use.

Engineering Implementation: The Large-Model All-in-One Machine Reduces the AI Deployment Cycle from Months to Hours

While training computing power is concentrated at high density, inference computing power needs to be "widely distributed." Many customers in universities, manufacturing, and healthcare lack the conditions to build their own intelligent computing centers but hope to quickly integrate large-model capabilities into their businesses. To address this pain point, H3C launched the LinSeer Cube large-model all-in-one machine, which pre-integrates four layers of capability: hardware, framework, model, and application, achieving out-of-the-box usability.

Xu Run'an told 36Kr about the deployment difficulties: "Even in well-known universities, only a very small number of people can independently deploy large models, and the cost for enterprises to recruit relevant talent exceeds one million yuan." The all-in-one machine is meant to change this situation completely. The hardware comes in three specifications, covering usage scenarios from 7B to 671B models. On the model side, mainstream models such as DeepSeek and Qwen come pre-installed, and the model library is updated weekly through the LinSeer Hub platform. Users can complete model switching and API publishing within 30 minutes through the visual interface. The platform ships with more than 20 orchestratable components, supporting application construction within minutes and significantly reducing enterprises' trial-and-error costs.
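To make "API publishing" concrete: on-premises deployments of models like DeepSeek and Qwen commonly expose an OpenAI-compatible chat endpoint, so a published model can be consumed with a standard JSON request. The sketch below only assembles such a request body; the endpoint address and model name are hypothetical placeholders, and the article does not specify the interface H3C's platform actually exposes.

```python
# Illustrative sketch of consuming a published model API, assuming an
# OpenAI-compatible interface. API_URL and the model name are hypothetical.
import json

API_URL = "http://10.0.0.5:8000/v1/chat/completions"  # hypothetical address

def build_chat_request(model, prompt, temperature=0.7):
    """Assemble an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

payload = build_chat_request("deepseek-r1-7b", "Summarize today's maintenance log.")
print(json.dumps(payload, ensure_ascii=False, indent=2))
# A client would POST this JSON to API_URL with an Authorization header.
```

The point of "30-minute publishing" is that switching models changes only the `model` field from the caller's perspective; business code built against the endpoint does not need to change.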

The pricing strategy is also carefully considered. The entry-level model is priced below 200,000 yuan, equivalent to the procurement budget of a commercial vehicle. This pricing precisely targets the annual IT budget threshold of small and medium-sized enterprises, making AI deployment no longer exclusive to large enterprises. Wu Jiachun, the vice-president of the Cloud and Computing Storage Product Line of H3C Group and the general manager of the Product Support and Solution Department, described the positioning of the all-in-one machine as "bridging the last mile." Customers no longer need to piece together servers, networks, storage, and software, nor do they need to recruit scarce AI architects. They only need to focus on business logic. Currently, the product has been included in the national channel sales system and is being promoted first in the education, healthcare, and manufacturing industries.

It is worth noting that the all-in-one machine is not a "low-end version" of the super-node but the result of productizing engineering experience. It uses the same AI acceleration cards and management platform, but the scale is reduced to a single machine or half-cabinet, extending computing power from data centers to building machine rooms, laboratories, and even workshops. Xu Run'an emphasized, "Inclusiveness is not just about low prices but also about reducing the usage threshold to the hourly level."

Openness and Ecosystem: Returning the Choice to Customers and Minimizing Risks

Facing the reality of diverse GPU brands and rapidly evolving technological routes, H3C clearly stated at WAIC that it will not build a closed ecosystem. Instead, it will return the "choice" to customers through standardized interfaces and joint design. As of now, H3C has completed adaptation for more than 80 GPU models and has jointly designed OAM modules with more than 10 chip manufacturers. At the hardware collaboration level, H3C joins several chip manufacturers' specification definition 12 months in advance to ensure that the overall machine architecture stays synchronized with the chip design.

Standardized hardware interfaces are the prerequisite for openness. Tang Tao, the director of the Smart Computing Product Marketing Department of the Cloud and Computing Storage Product Line of H3C Group, told 36Kr that the company has completed the expansion of the interconnection protocol based on PCIe to ensure that different brands of GPUs can be swapped within the same cabinet. The next-generation products will support Ethernet interconnection, allowing customers to upgrade flexibly according to the GPU iteration rhythm without replacing the entire machine.

The other end of the ecosystem is operation and maintenance. The LinSeer ICT Intelligent Agent productizes H3C's more than 20 years of operations experience: network faults can be diagnosed within seconds and repaired within minutes, and the measured prediction accuracy for optical-module degradation scenarios exceeds 90%.
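The article does not describe how the agent predicts degradation, but one simple, commonly used approach is to fit a trend to a module's received optical power and extrapolate to the alarm threshold. The sketch below illustrates that idea only; the threshold, the sample values, and the linear-trend method are assumptions, not the LinSeer agent's actual algorithm.

```python
# Illustrative sketch of optical-module degradation prediction via a
# least-squares trend on daily Rx-power samples. Threshold and readings
# are hypothetical; this is not LinSeer's actual algorithm.

def days_until_threshold(readings_dbm, threshold_dbm=-14.0):
    """Fit a linear trend to daily Rx-power samples (dBm) and return the
    extrapolated number of days until the alarm threshold is crossed,
    or None if the power is stable or improving."""
    n = len(readings_dbm)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(readings_dbm) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings_dbm))
             / sum((x - mean_x) ** 2 for x in xs))
    if slope >= 0:
        return None  # not degrading
    return (threshold_dbm - readings_dbm[-1]) / slope

# 10 days of slowly degrading Rx power (dBm), hypothetical values
samples = [-10.0, -10.1, -10.2, -10.4, -10.5, -10.7,
           -10.8, -11.0, -11.1, -11.3]
print(f"estimated days until alarm: {days_until_threshold(samples):.1f}")
```

In production, a prediction like this would feed the "diagnose in seconds, repair in minutes" workflow by scheduling a module swap before the link actually fails.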

The mismatch between supply and demand of computing power remains the biggest variable in the Chinese AI industry in 2025. H3C's solution is "dual-wheel drive": relying on its own "computing power × connectivity" capabilities to improve training efficiency and lower the inference threshold; using an open architecture and ecosystem strategy to allow customers to switch freely between different technological routes.

Meanwhile, the LinSeer ICT Intelligent Agent will open its API to the entire industry, allowing ISVs to develop industry-specific operation and maintenance plug-ins on the platform. Xu Run'an emphasized to 36Kr, "Domestic computing power is in a stage of upward development. There is a lot of computing power, but not enough of it is easy to use. Only an open ecosystem can bridge this gap."

Only when computing power is as accessible as water, electricity, and gas can large models transform from "technological showpieces" into "productive tools." What H3C needs to do next is to further narrow the gap between "technological advancement" and "engineering implementation": making thousand-GPU and ten-thousand-GPU clusters run more stably, and enabling customers with a budget of 200,000 yuan to dare to buy, use, and iterate.