
A Duel of Giants amid a $10 Billion Capital Wave: The Two "Ways of Survival" of DeepSeek and Yuezhi Anmian

新博弈 · 2026-05-13 20:45
A strategic split under the open-source consensus; valuation divergence between infrastructure and capability services

On May 7th and 8th, China's large language model (LLM) industry sent out financing signals worth over $10 billion within just 48 hours.

Yuezhi Anmian was first to announce the completion of a new financing round of about $2 billion, lifting its post-financing enterprise value past the $20 billion mark (about 135.8 billion yuan). Just one day later, insiders told several media outlets that DeepSeek is advancing its first external equity financing since its founding, targeting up to 50 billion yuan (about $7.3 billion). If this round, the largest single financing in the history of Chinese artificial intelligence companies to date, closes successfully, DeepSeek's enterprise value could rise to 350 billion yuan ($51.5 billion).

At the same time, news of Jieyue Xingchen's financing of nearly $2.5 billion spread quickly through the market. Together, the three financings have lifted this wave of capital to the order of magnitude of $10 billion.

Five years ago, the enterprise value of Chinese AI start-ups was still confined to the "10-billion-yuan club". Today, hundred-billion-yuan valuations are already standard for the leading players. The financings of DeepSeek and Yuezhi Anmian mark a real watershed for the industry, not only because of the amounts themselves but because of the two completely different survival strategies they represent.

Yuezhi Anmian pursues the classic strategy of heavy financing, rapid commercialization, and global expansion: since its founding more than two years ago it has raised a total of more than 37.6 billion yuan, the highest among Chinese LLM start-ups. Over the past six months its revenue structure has shifted from consumer-subscription-dominated to enterprise-API-driven, and since the launch of the Kimi K2.5 model, overseas revenue has exceeded domestic revenue. DeepSeek, in contrast, is moving from a stance of no financing, no commercialization, and no publicity toward capital-backed corporate operation.

Yet the two financings reflect the same underlying signal: the valuation of Chinese LLM companies is shifting from pricing "technological vision" to pricing "strategic assets".

Against the combined backdrop of tightening global GPU controls, more than 10 billion downloads across the open-source community, and the deep involvement of industrial capital, the two companies stand at the same watershed. On one side is the market test of whether Chinese LLMs can truly close a commercial loop; on the other is the industrial floor question of whether domestic computing-power autonomy can sustain the continued scaling of trillion-parameter models. Together, their financing stories rewrite the value anchors of the Chinese AI race.

The Strategic Split under the Open-Source Consensus: Valuation Divergence between Infrastructure and Services

In the spring of 2026, the sharply different technological strategies of DeepSeek and Yuezhi Anmian directly shaped how the capital market values the two companies. The core of this financing round lies not in parameter counts or benchmark rankings, but in the revenue structure and cost model that each technological strategy produces. That is the basic framework for investors' valuations.

The preview version of DeepSeek V4 went online on April 24th and was simultaneously released as open source. With its self-developed CSA + HCA hybrid attention architecture, the computing power required for million-token contexts was reduced to one-tenth that of its predecessor. One day after the release, DeepSeek cut prices by 75%; V4-Flash now costs only $0.0029 per million tokens, marking the start of the era of affordable million-token AI.
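
At that flat rate, even very large workloads stay cheap. A minimal back-of-envelope sketch: the per-million-token price is the article's figure, while the request and workload sizes below are purely illustrative assumptions.

```python
# Cost illustration at the quoted V4-Flash rate.
# PRICE is from the article; the workload sizes are made-up assumptions.
PRICE_PER_MILLION_TOKENS = 0.0029  # USD per million tokens (V4-Flash, as reported)

def api_cost(tokens: int) -> float:
    """Cost in USD for a given number of tokens at the flat rate."""
    return tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

# A single full million-token context costs well under a cent,
# and even a hypothetical 10-billion-token monthly workload stays tiny:
print(f"1M-token request:  ${api_cost(1_000_000):.4f}")
print(f"10B tokens/month:  ${api_cost(10_000_000_000):.2f}")
```

The point of the arithmetic is that at this price level, token volume alone can no longer carry the revenue story; margins must come from scale and value-added services.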

The valuation story behind these numbers is that DeepSeek positions itself as the infrastructure layer for LLMs. Open source means giving up direct monetization at the model level; the extreme price cut means locking in the ecosystem through scale.

The capital market's valuation logic for this model resembles that of operating systems or cloud platforms: the open-source model spreads for free, the API is sold cheaply in huge volumes, and a long-term moat is built on developers' dependence on the toolchain and the upsell potential of enterprise services. Growth in users and API calls supports the valuation more strongly than short-term revenue figures.

But the "infrastructure" path has its costs. Open source means a continuous erosion of technological barriers: competitors can distill, fine-tune, or even directly commercialize the open-source model. To keep its lead in the ecosystem, DeepSeek must always stay ahead in model iteration speed, and behind each iteration lies an enormous computing-power bill.

At the same time, although rock-bottom API pricing is very effective for user acquisition, it squeezes per-unit gross margin razor-thin. Whether economies of scale can cover the fixed costs remains an open question.

The capital market is nonetheless prepared to assign a high enterprise value to this model. The implicit assumption is that once a sufficiently large developer ecosystem is in place, DeepSeek can gradually monetize through value-added services such as toolchains, enterprise deployments, and vertical-industry solutions. Investors are betting that the classic Internet playbook of "seize the land first, harvest later" can be repeated in the LLM field.

Yuezhi Anmian's technological trajectory points to a different valuation standard. After releasing the Kimi K2.6 model, the company abandoned its earlier price-cutting strategy and structurally raised API prices: enterprise customers with high cache hit rates are barely affected, while individual customers face a significant increase. The price adjustment is itself a clear signal that Yuezhi Anmian is shifting from "selling models" to "selling capabilities".

The K2.6 model uses a trillion-parameter MoE architecture, activating 8 of 384 experts per token, is equipped with the MLA attention mechanism, and ranked first among open-source models worldwide in coding and visual ability in the LMArena evaluation. But what really underpins the confidence behind the price increase is the transition toward agent clusters: K2.6 supports coordinated clusters of 300 agents and has evolved from a simple dialogue model into a multi-agent scheduling system for executing complex projects.
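
That sparsity ratio is what makes such a large model economical to serve: only 8 of 384 experts run for each token, so roughly 2% of the expert pool is active at a time. A toy sketch of top-k expert routing follows; the expert count and top-k are the article's figures, while the dimensions and the random gating network are illustrative, not K2.6's actual design.

```python
import numpy as np

# Toy top-k expert routing as used in sparse MoE layers.
# N_EXPERTS and TOP_K follow the article; D_MODEL and the gate are made up.
N_EXPERTS, TOP_K, D_MODEL = 384, 8, 16

def route(token: np.ndarray, gate_w: np.ndarray) -> np.ndarray:
    """Return the indices of the TOP_K experts with the highest gate scores."""
    scores = gate_w @ token              # one scalar score per expert
    return np.argsort(scores)[-TOP_K:]   # indices of the 8 best-scoring experts

rng = np.random.default_rng(0)
gate_w = rng.standard_normal((N_EXPERTS, D_MODEL))  # toy gating matrix
token = rng.standard_normal(D_MODEL)                # toy token embedding

active = route(token, gate_w)
print(f"{len(active)} of {N_EXPERTS} experts active "
      f"({len(active) / N_EXPERTS:.1%} of the expert pool)")
```

Each token only pays the compute cost of its 8 selected experts, which is why total parameter count and per-token inference cost can diverge so sharply.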

This means the commercial value of the model is no longer priced by token consumption but by the "execution result" of solving complex tasks. The capital market values this model along the premium logic of enterprise SaaS: customers pay for business outcomes, and the stronger the model and the more manual processes it can replace, the greater the pricing headroom.

But deploying agent clusters depends heavily on the digital maturity of enterprise customers and their willingness to restructure business processes. The delivery chain is far longer than for simple API calls, and the cash burn is much higher than for pure model companies.

It follows that the real valuation differences will play out at the application and ecosystem levels. Yuezhi Anmian is already planning the K3 model with a 2.5-trillion-parameter MoE architecture, raising the standard context length to about 1 million characters, though given computing power costs and operating expenses it remains uncertain whether it can be fully opened to users. DeepSeek, for its part, retains more room for price cuts thanks to cost advantages from its adaptation to Chinese computing power.

Which of the two models ultimately finds the better balance between revenue and cost is the key variable in whether this round of valuations will hold up in the secondary market.

The Coopetition and Growth of Computing Power Ecosystems: From Chip Adaptation to a Closed Commercial Cycle

Today, the global LLM market is undergoing a structural reshuffle with "value density" at its core. Data from Counterpoint Research shows that in the first quarter of 2026, global LLM monthly active users exceeded 3.8 billion and quarterly revenue reached about $2.07 billion.

The most striking signal is that Anthropic, with a 31.4% share, has overtaken OpenAI's 29% to take the lead. Its monthly active users number less than one-seventh of OpenAI's, yet its average revenue per user is as high as $16.20 per month. Size is no longer the sole basis for valuation; the ability to extract more commercial value per user is what confers market power.
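
A quick sanity check with the Counterpoint figures quoted above shows just how far that ARPU sits above the global average. This is a rough illustration only, assuming revenue is spread evenly across the quarter and the user base:

```python
# Back-of-envelope ARPU from the reported Q1 2026 figures.
# Both inputs are the article's numbers; the even-spread assumption is ours.
quarterly_revenue_usd = 2.07e9   # global LLM revenue, Q1 2026 (as reported)
monthly_active_users = 3.8e9     # global LLM MAU (as reported)

avg_monthly_arpu = quarterly_revenue_usd / 3 / monthly_active_users
print(f"global average ARPU: ${avg_monthly_arpu:.2f}/month")
print(f"Anthropic's reported $16.20 is roughly "
      f"{16.20 / avg_monthly_arpu:.0f}x that average")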

This trend is also reflected in leading Chinese AI companies. Investors no longer simply pay for model parameters or user counts; they care more about whether a company can build sustainable computing-power cost efficiency and a closed commercial revenue loop.

When rumors of DeepSeek's financing first circulated in early April, its enterprise value was about $10 billion. Just one month later, it had risen to $51.5 billion. Behind this sharp jump lies the fact that DeepSeek has, for the first time, closed an industrial loop of "model optimization + adaptation to Chinese computing power". The V4 series was fully adapted to Huawei Ascend chips on launch day, with inference latency held to the order of 10 milliseconds, a first for a trillion-parameter MoE model.

More importantly, this adaptation is not a laboratory demonstration but directly underpins the drastic reduction in inference costs. While competitors still depend on NVIDIA's high-performance GPUs, DeepSeek has already achieved an equivalent or even better cost structure on Chinese chips.

For primary-market investors, this makes for two very attractive stories: first, the company can survive in an environment of restricted access to high-performance chips; second, its cost structure allows a more aggressive pricing strategy for capturing market share.

According to the rumors, the National Integrated Circuit Industry Investment Fund will lead DeepSeek's round. The entry of semiconductor-sector investors is an endorsement of this model-plus-chip cooperative ecosystem, marking the emergence of genuine commercial synergy between algorithm companies and computing-power hardware companies.

Yuezhi Anmian's computing power strategy shows a different kind of industrial integration. With its trillion-parameter MoE architecture activating 8 of 384 experts per token, the K2.6 model's demands on graphics memory and inference resources are enormous.

The company's $2 billion round is led by Meituan Longzhu, with China Mobile, Shuimu Capital, CPE Yuanfeng, and others participating. As one of China's largest telecommunications operators, China Mobile commands a nationwide computing power system and deep government and enterprise customer resources. Its entry as a strategic investor means Yuezhi Anmian is deeply coupling "model capabilities + the operator's computing power system", a cooperation model that directly serves its enterprise-market expansion: the K2.6 model's 300-agent clusters require reliable, timely computing power scheduling.

At the same time, access to the operator's resources not only lowers the marginal cost of serving at scale but also opens channels to large customers in industries such as finance, manufacturing, and energy. In essence, industrial capital stitches computing power costs, customer acquisition, and service delivery into one complete commercial chain.

Viewed globally, this wave of financing for Chinese LLM companies arrives exactly as OpenAI's valuation story faces challenges.

SoftBank originally planned to borrow $10 billion against its OpenAI shares, but lenders could not settle on an appropriate enterprise value and ultimately reduced the loan to $6 billion. OpenAI's 2025 revenue was about $13.1 billion and growth remains impressive, but its share of the coding and enterprise markets is gradually being eroded by Anthropic, and ChatGPT's share of web traffic has dropped from about 86% a year ago to about 64% at the start of 2026.

This phenomenon reveals a harsh reality: Even for a...