
36Kr Exclusive | Open-source heterogeneous computing power scheduling platform "Migua Intelligence" raises tens of millions of yuan from Fosun Capital, providing enterprises with efficient, flexible computing power solutions

Hanhai | 2026-01-06 12:30
Heterogeneous computing power pooling technology is not only a tool to improve efficiency, but also the "last mile" for domestic chips to enter the mainstream production environment.

The advent of the large-model era has turned GPU computing power into a hard currency scarcer than gold. Yet while enterprises are starved for compute, computing power resources are being wasted: for lack of efficient virtualization and management tools, average GPU utilization worldwide often hovers between 10% and 20%, with large amounts of video memory and compute sitting idle under the "static allocation" model.

36Kr recently learned that Dynamia.ai, a platform for heterogeneous computing power virtualization, efficient scheduling, and management, has completed its angel round of financing. The round was led by Fosun Capital, with Zhuopu Capital and the seed-round investors following on. The angel round reportedly raised tens of millions of yuan, and the funds will mainly go toward building the HAMi open-source ecosystem and industrializing the heterogeneous computing power scheduling platform.

The "Fragmentation" Dilemma of Heterogeneous Computing Power

As domestic computing power and diverse AI chips continue to develop, enterprises' internal computing environments have grown more varied and complex. GPUs and AI accelerators of different architectures and from different manufacturers now coexist in the same infrastructure, posing new challenges for managing, scheduling, and utilizing computing power resources.

In actual deployments, enterprises generally need to address the difficulty of unified scheduling across heterogeneous computing resources, inefficient resource sharing, and low compute utilization; these have become key open problems in current AI infrastructure construction. Dynamia.ai's core breakthrough is HAMi, the CNCF (Cloud Native Computing Foundation) open-source project it initiated and leads. As the only CNCF project focused on heterogeneous computing power virtualization, HAMi aims to become the "unified language" of computing power scheduling.

(Image: Computing power allocation)

Heterogeneous Computing Power Pooling: From "Static Exclusivity" to "Dynamic Decoupling"

Dynamia.ai has built a deep virtualization and pooling management system on HAMi, decoupling computing power resources from the physical hardware. Its core technical capabilities include:

  • Fine-grained partitioning and video memory oversubscription: a single GPU's video memory and compute can be partitioned at a granularity of 1/10 of a card or finer, and a video memory oversubscription mechanism keeps multiple high-concurrency tasks from interfering with one another when sharing resources, significantly raising per-card task density.
  • Unified cross-vendor adaptation and dynamic MIG: more than nine chip families have been adapted, including NVIDIA, Huawei Ascend, Muxi, Moore Threads, Cambricon, Hygon, and Suyuan, with support for flexible dynamic MIG (Multi-Instance GPU) configuration, so computing power of different architectures can enter a single resource pool under standardized management.
  • Automatic elastic scaling and priority mechanisms: video memory scales elastically with OOM suppression, and a task-priority preemption mechanism ensures core workloads are protected first when resources run scarce.
  • Cloud-native, zero-intrusion integration and a high-performance Turbo mode: the Turbo mode optimizes scheduling efficiency, and native integration with the Kubernetes ecosystem lets users get automatic discovery and allocation of computing power in production without modifying their code.
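To illustrate what fractional, zero-intrusion scheduling looks like in practice, the sketch below shows a Kubernetes Pod requesting a slice of one GPU through HAMi-style extended resources. The resource names and units (`nvidia.com/gpumem` in MB, `nvidia.com/gpucores` as a percentage of a card) follow HAMi's documented conventions for NVIDIA devices, but exact names and units vary by HAMi version and chip vendor, so treat this as an assumption to check against the project's documentation.

```yaml
# Hypothetical Pod spec: request a fractional GPU slice via HAMi
# (resource names assumed from HAMi's NVIDIA device-plugin conventions).
apiVersion: v1
kind: Pod
metadata:
  name: gpu-share-demo
spec:
  containers:
    - name: worker
      image: nvcr.io/nvidia/cuda:12.4.0-base-ubuntu22.04
      command: ["sleep", "infinity"]
      resources:
        limits:
          nvidia.com/gpu: 1        # one virtual GPU
          nvidia.com/gpumem: 3000  # ~3 GB of that card's video memory
          nvidia.com/gpucores: 30  # ~30% of the card's compute
```

Because the limits describe a slice rather than a whole card, the scheduler can pack several such Pods onto one physical GPU, which is the mechanism behind the utilization gains described in the cases below.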

In an application case at SF Technology, Dynamia.ai deployed 19 test services on just 6 GPUs: tasks that previously needed 19 cards now run with 13 fewer, more than doubling resource efficiency. At PREP EDU, a Vietnamese AI learning platform running a mixed heterogeneous fleet of RTX 4070 and 4090 cards, HAMi's vGPU scheduling, combined with extensive workflow optimization by PREP EDU's DevOps team, cut the GPU cluster's operational pain points by 50% and improved the GPU infrastructure by 90%.

Beyond its open-source products, Dynamia.ai also offers paid enterprise-grade products. Within its first quarter of operation, the company signed product order contracts worth 2 million yuan and received active adaptation support for AWS's inference chips.

In practice, HAMi as an open-source project has been adopted by many enterprises and development teams for heterogeneous GPU resource sharing and scheduling.

(Image: Computing power scheduling)

On this foundation, Dynamia.ai has built commercial products and technical services for enterprise customers around HAMi, supplying the engineering depth, stability support, and ongoing operations and maintenance guarantees needed to run heterogeneous computing power scheduling in production. The company already has paid engagements with a number of enterprise customers and is steadily moving from open-source project to commercialized enterprise-grade solution.

From Open-Source Genes to a Commercial Closed Loop

Dynamia.ai's core founding team has long worked in cloud computing, cloud-native infrastructure, and AI infrastructure. CEO Zhang Xiao previously led the container team at DaoCloud, a leading cloud-native company; co-founder and CTO Li Mengxuan headed heterogeneous computing power technology at Fourth Paradigm. Both founders are core contributors to Kubernetes and maintainers of multiple CNCF projects. In recent years, as artificial intelligence has advanced rapidly, cloud-native infrastructure has become the default choice of the AI era, and container management, the cornerstone of any cloud-native platform, has grown into a key technology for bringing AI applications into production. The team explored heterogeneous GPU resource sharing and unified management, and founded Dynamia.ai to carry those capabilities into engineering practice and enterprise-level scenarios.

Zhang Xiao, founder of Dynamia.ai, said: "In the context of computing power autonomy, heterogeneous computing power pooling is not just an efficiency tool; it is the 'last mile' for domestic chips to enter mainstream production environments. Even with this financing, computing power scheduling and ecosystem building require 'patient capital'. We are not chasing aggressive short-term commercialization; we insist on establishing the industry's 'de facto standard' through the HAMi open-source community. Our vision is to make heterogeneous computing power as simple and usable as water and electricity through open-source technology, to build a globally leading computing power scheduling ecosystem, and to power the efficient deployment of the AI industry."

Investors' Views:

Ye Lijuan, executive general manager of investment at Fosun Capital, said that heterogeneity will be the long-term pattern of the computing power market: GPUs and emerging computing chips alike are the most important foundation of AI. Dynamia.ai is an indispensable link between the computing power layer and the application layer of the AI ecosystem, dramatically improving computing efficiency for customers and saving expensive compute costs. Open-source HAMi has built a large developer and user ecosystem, a path highly consistent with the open, collaborative direction of the AI industry. The flexible, elastic, on-demand, and reliable virtualization HAMi provides enables efficient partitioning and scheduling of computing power, significantly raising utilization and delivering a highly competitive return on investment (ROI) for customers worldwide.

Chen Minjie, investment director at Zhuopu Capital, noted in discussions with Dynamia.ai that the CPU-centric cloud computing era produced virtualization giants such as VMware; now, in the GPU-centric AI computing era, there is likewise a huge mismatch between the computing demands of AI workloads and how the underlying hardware is allocated. Virtualization is the key to making AI accessible.

The diverse, heterogeneous state of domestic computing power gives HAMi's open-source approach still deeper significance. Open source is no longer just idealism; it is a necessity for survival and growth, and a reshaping of the current computing power order. HAMi aims to break down hardware barriers and make computing power a public utility as accessible as water, helping diverse, heterogeneous chips resonate with the global ecosystem. On this trajectory, HAMi is expected to become the global de facto standard for heterogeneous computing power scheduling and virtualization.