The "Fog" of Domestic Computing Power: Is It a Receding Demand or the Eve of "Dawn"? | Zhike
Author | Ding Mao, Fan Liang
Editor | Zheng Huaizhou
In mid-May, the strong rally in the A-share AI sector came to an abrupt halt, and market sentiment shifted sharply.
Looking at the underlying reasons, the direct trigger was the AI-related capital expenditure disclosed in the Q1 2025 earnings reports of Tencent and Alibaba, the two leading cloud providers. The figures not only fell well short of the market's earlier optimistic expectations but also showed signs of sequential contraction. This "expectation gap" quickly sparked investor concern about the sustainability of the industry's boom, triggering a broad correction in the sector.
Unlike last year's logic of simply tracking the "Nvidia chain", the A-share market has now set "AI localization innovation" as the new anchor for valuing the domestic AI industry. So when capital expenditure from the leading internet giants, a crucial link in verifying demand, wavers, the blow to confidence across the entire industrial chain is magnified many times over.
However, after the violent swings in sentiment, some notable questions emerge: Is a single quarter of capital expenditure enough to judge the industry's long-term prosperity? What exactly is the market worried about? And is there still an opportunity ahead for domestic computing power?
What exactly is the market worried about?
To answer this, we first need to understand the similarities and differences between the AI investment cycles in China and the United States. The U.S. market has broadly evolved along a clear path of "model breakthrough → hardware first → infrastructure expansion → application implementation", with a traceable rotation of value. In contrast, the main thread of AI investment in the A-share market has constantly oscillated between "mapping overseas trends" and an "independent narrative"; its path is more tortuous and more susceptible to short-term data and sentiment.
The internet giants' lower-than-expected AI capital expenditure was the direct trigger for the recent correction in domestic computing-power valuations. The deeper reason is the market's worry that the pace of domestic AI application deployment may be slowing; in other words, participants' confidence in the near-term development of AI applications, and in actual investment in computing-power infrastructure, is not as strong as it appears.
Previously, the underlying logic behind the explosion of the domestic computing-power sector was that after DeepSeek emerged, the practical barrier to using large models dropped precipitously, amounting to an "equalization" of AI. The market expected this to spur enterprises to invest in AI application development, driving continued growth in computing-power demand, which would show up in the earnings of computing-power companies and in rising capital expenditure. Eventually, as in the U.S. market from 2023 to 2024, the entire domestic computing-power sector would enjoy a double boost to both earnings and valuations.
However, looking first at the actual situation over the past few months: according to AI product ranking data, after a sharp jump in the monthly active users of various AI applications in February, month-on-month growth slowed markedly in both March and April, and some applications even saw their monthly active users decline. In other words, the actual pace of AI application adoption has fallen well short of expectations. Much of users' experimentation in February was driven by curiosity about DeepSeek's capabilities rather than by genuine need, which also explains why, as the first wave of momentum from DeepSeek faded, new users and monthly active users of the related AI applications fell in tandem.
As for why AI applications are landing slowly: for one thing, although today's leading large models can handle some problems fairly accurately, in most cases they play only a supporting role rather than a decisive one. A key constraint is that mainstream models essentially answer questions on the basis of correlation rather than causation. For example, the "rain" and "umbrella" an AI understands are linked because the two words frequently co-occur in its training data; the model cannot truly grasp the causal relationship "because it is raining, one carries an umbrella". As a result, large models often fail to fully understand the essence of a human question and formulate an effective solution, and answers involving facts or history are especially prone to serious "hallucination" problems.
For another, although rapid model iteration has effectively brought down the cost of using models, the cost of deploying some AI applications in real scenarios is still far higher than the benefits they deliver. After the United States tightened restrictions on high-end AI chips, the actual cost of domestic AI computing power rose significantly, greatly increasing the uncertainty around the ROI of AI applications.
Second, from the perspective of the domestic computing-power sector itself, the emergence of DeepSeek has not created a shortage of computing power. On the contrary, the market has at times reported a structural surplus of domestic computing power: a severe shortage of advanced computing power on one side, and an oversupply of low-end, ineffective computing power on the other. Against the market's urgent demand for advanced computing power, TrendForce data show that domestic AI chips covered less than 40% of incremental demand in 2024.
This is mainly because domestic AI chips are still constrained by factors such as technical adaptability, capacity bottlenecks, and an immature software ecosystem. So even though some domestic chips approach or even exceed Nvidia's H20 on paper specifications, they are difficult to roll out quickly and widely in actual use.
Is capital expenditure the only certainty we can grasp?
Since what the market actually worries about is AI application deployment and the progress of domestic computing power, why is it swayed by the capital expenditure of the big internet companies? And is their capital expenditure an effective forward-looking indicator of domestic AI progress?
The current mainstream view in the market is that AI applications will start by reconstructing internet companies' existing businesses and gradually spread to AI-native applications. But this kind of non-breakthrough innovation is a gradual process, so it is hard to build high-frequency tracking data for it in the short term.
As J.P. Morgan pointed out in a research report, generative AI will pass through four stages: the development of large language models, the adoption of generative AI by existing applications and services, growth in internet-service consumption, and the emergence of killer native AI applications.
The emergence of DeepSeek marked China's move from the first stage, model R&D, into the second stage, in which AI penetrates existing applications and services. The main feature of this stage is that AI functions empower existing mainstream applications, chiefly by cutting costs and improving efficiency. But because enterprises are not yet fully sure how AI will create incremental value for their businesses, this stage involves repeated trial and error. And given their more complete content ecosystems and more comprehensive AI layouts, internet companies are the most likely drivers of this stage.
In this context, lacking forward-looking indicators of that progress and with underlying computing-power resources relatively concentrated, the market has taken internet companies' capital expenditure as a forward-looking indicator of the prospects for AI applications. And since big-tech capital-expenditure expectations feed directly into shipment expectations for domestic computing-power chips, servers, and the like, it is also treated, indirectly, as a forward-looking indicator of the domestic computing-power sector's prosperity.
There is a problem with this, however. The chain of reasoning that runs from lower-than-expected capital expenditure to a slowdown in domestic AI progress, and from there to weaker prosperity in the domestic computing-power sector, may not be rigorous.
From the data, Alibaba's capital expenditure in Q1 2025 was 24.612 billion yuan: down from 31.775 billion yuan in Q4 2024, but more than double the 11.153 billion yuan of Q1 2024. Tencent shows a similar pattern, with capital expenditure of 27.476 billion yuan in Q1 2025, down from 36.578 billion yuan in Q4 2024 but nearly double the 14.359 billion yuan of Q1 2024.
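To make that contrast explicit, here is a minimal arithmetic sketch of the quarter-over-quarter and year-over-year changes, using only the figures cited above (in billions of yuan); the code itself is illustrative and not drawn from any company filing:

```python
# Illustrative sketch: sequential (QoQ) vs year-over-year (YoY) capex change,
# using the reported figures cited in the text (RMB billions).
capex = {
    "Alibaba": {"Q1_2024": 11.153, "Q4_2024": 31.775, "Q1_2025": 24.612},
    "Tencent": {"Q1_2024": 14.359, "Q4_2024": 36.578, "Q1_2025": 27.476},
}

for company, q in capex.items():
    qoq = q["Q1_2025"] / q["Q4_2024"] - 1  # change vs the prior quarter
    yoy = q["Q1_2025"] / q["Q1_2024"] - 1  # change vs the same quarter last year
    print(f"{company}: QoQ {qoq:+.1%}, YoY {yoy:+.1%}")

# Approximate output:
# Alibaba: QoQ -22.5%, YoY +120.7%
# Tencent: QoQ -24.9%, YoY +91.4%
```

Both companies pulled back by roughly a quarter sequentially while roughly doubling year over year; the divergence between those two readings is what the following paragraphs try to explain.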
This shows that since 2025 the domestic internet giants' attention to AI has increased markedly, and their spending has strengthened accordingly. As for the sequential decline, one key reason may lie in the "chip hoarding" of Q4 2024 combined with the "chip shortage" of Q1 2025.
As is well known, the H20 is an AI accelerator card that Nvidia launched for the Chinese market at the end of 2023. Although its performance is heavily restricted relative to the H100 and other models, it was previously the most advanced Nvidia chip that could be legally purchased for large-model training and inference.
According to the H20 timeline disclosed by foreign media, after the chip was approved in June 2024, small-scale supply tests were run with major enterprise customers in July and August, and volume supply began in Q4 2024. At the same time, with Q4 of last year also a critical period around the U.S. presidential election, domestic internet giants such as Tencent and ByteDance hoarded chips at scale in the fourth quarter to hedge against the uncertainty of Trump taking office, directly boosting that period's capital expenditure.
Q1 2025 was originally supposed to remain a period of large-scale H20 supply. The market previously reported that H20 shipments into China rose 50% sequentially in Q1 2025, and some distributors even resold the chips at a premium.
In reality, however, after Trump took office the H20 was added to the new export-restriction list in early April. Some H20 chips that had been ordered but not yet shipped may therefore have become undeliverable under the new restrictions. In other words, some internet giants may have planned to book that capital expenditure in Q1, but because of the restrictions it never materialized and was instead reversed on the books, directly reducing the reported capital-expenditure figure.
This is also corroborated by the 8-K that Nvidia filed on April 15, which stated that the company's results for the first quarter of fiscal year 2026 (ending April 27, 2025) are expected to include charges of up to approximately $5.5 billion associated with H20 products for inventory, purchase commitments, and related reserves.
Therefore, short-term swings in the internet giants' reported capital expenditure may not accurately reflect how these companies actually view AI's future progress; they are closely tied to short-term corporate behavior under policy volatility. Compared with one quarter's data fluctuation, firms' hard capital-expenditure commitments may better reflect their confidence in AI's future progress.
Even so, as noted above, in the absence of clear quantitative indicators, capital expenditure is one of the few observable gauges the market has for judging AI's development prospects and the attitudes of the leading players. So even if the data does not accurately reflect reality, trading is likely to keep revolving around it, especially in the absence of bigger positive surprises.
What is the outlook from here?
As noted above, the pace of big-tech capital expenditure remains a key reference for the market in observing and judging AI's progress. In the short term, a fresh surge in big-tech capital expenditure may mark the starting point of the next upswing in the domestic computing-power sector.
Earlier, Reuters reported that after the H20 ban in April, Nvidia plans to launch a new Blackwell-architecture product this year, internally code-named B40, to replace the H20. The product will reportedly not use TSMC's CoWoS packaging and will drop HBM in favor of GDDR7 memory, with a memory bandwidth of 1.7 TB/s and an NVLink interface bandwidth of 550 GB/s. It will continue to support CUDA, and the expected price is between $6,500 and $8,000, markedly below the H20. According to the disclosed schedule, B40 production is expected to begin in June, with broad availability in the Chinese market possible in the third or fourth quarter.
This means that once the B40 reaches volume supply in China in the third or fourth quarter, big-tech capital expenditure may see a new round of rapid growth, pulling market sentiment back into optimistic territory and providing fresh positive guidance and support for the domestic computing-power sector.
Looking at the long term, amid continued Sino-U.S. trade friction, the repeated restrictions on Nvidia chips have not only raised the risk of supply disruption for advanced computing power, undermining the stability of domestic computing-power development, but also significantly eroded the real price-performance of the repeatedly cut-down, China-specific chips.
Against this backdrop, to reduce supply-chain risk, big tech's AI investment path has in fact begun shifting from relying solely on Nvidia toward a multi-pronged approach spanning Nvidia, in-house chip development plus domestic substitution, computing-power leasing, and overseas data centers.
With strong market demand and continued policy support, domestic AI chip suppliers represented by Cambricon and Huawei are therefore expected to accelerate supply-chain self-sufficiency and further expand domestic computing power's market share.
TrendForce forecasts that the share of chips purchased from Nvidia, AMD, and others in China's AI server market will fall from about 63% in 2024 to about 42% in 2025, while domestic chip suppliers' share rises to over 40% in 2025, roughly reaching parity with purchased foreign chips.
This means that over the medium to long term, domestic computing power still has considerable growth potential and room for imagination, which should strongly support the industrial chain's long-term earnings improvement and valuation repair.
*Disclaimer:
The content of this article only represents the views of the author.
The market carries risk, and investment requires caution. Under no circumstances do the information or opinions expressed in this article constitute investment advice for anyone. Before making an investment decision, investors should, where necessary, consult professionals and decide carefully. We do not intend to provide underwriting services, or any services requiring specific qualifications or licenses, to the trading parties.