
The "price butcher" AMD has stabbed Intel, but it can't beat NVIDIA.

2025-11-06 16:29
AMD rings the opening bell on a shift in the AI landscape.

Author | Ding Mao

Editor | Zhang Fan

On November 5th, Advanced Micro Devices (AMD) released its financial report for the third quarter of 2025.

In the quarter, the company posted revenue of $9.25 billion, up 35.6% year on year and far above market expectations. The data center business drew even more attention: lifted by the ramp of the Instinct MI350 series of GPUs and growing server market share, its revenue reached $4.34 billion, up 22.3% year on year.

Since October, AMD has received a string of positive news. First, it struck a strategic partnership with OpenAI covering 6 GW of computing capacity. Then it won a blockbuster order from Oracle for 50,000 MI450-series GPUs.

On the earnings call that followed the report, AMD said OpenAI's first gigawatt deployment will begin in the second half of 2026 and is expected to contribute over $100 billion in revenue over the next few years, greatly improving the visibility of future growth.

More importantly, as AMD noted on the call, adoption by leading players such as OpenAI and Oracle signals that the Instinct platform and the ROCm ecosystem have reached maturity on both performance and cost, marking AMD's entry into a new phase of rapid growth and share gains in AI accelerator chips and the data center market.

The market responded positively: AMD's stock closed up 2.5% on the day. Over a longer horizon, the shares have gained 56% since October 6th, adding over $100 billion in market value.

This inevitably recalls AMD's do-or-die counterattack against Intel in the CPU field. Only this time the battlefield is the more profitable AI compute chips, and the incumbent is no longer the CPU hegemon Intel but the GPU hegemon NVIDIA.

So, in this crucial battle, can AMD pull off its familiar counterattack again? And how will the competitive landscape of the AI chip industry change?

Cost-effectiveness is the biggest weapon

As noted above, the super orders from large-model and cloud giants show that AMD's Instinct GPUs are becoming a credible, scalable alternative to NVIDIA, establishing AMD as a key challenger in the AI era.

The root of AMD's breakthrough against NVIDIA's monopoly lies in precisely targeting two pain points in the AI compute market: NVIDIA's monopoly pricing, and demand shifting toward inference.

For the past few years, NVIDIA has held a near-absolute monopoly in AI training with its high-performance chips, producing a one-company-dominates pattern. Wells Fargo data shows NVIDIA's share of the AI accelerator market has long stayed between 80% and 90%.

Chart: Changes in the market share of data center GPUs. Data source: Wells Fargo, compiled by 36Kr

Benefiting from the high pricing and high gross margins that the monopoly affords, NVIDIA's fundamentals have expanded at an accelerating pace over the past two years. Since the second half of 2023, its data center revenue has kept growing rapidly; as of FY26 Q2 (the quarter ended in late July 2025), single-quarter revenue reached $41.1 billion, the ninth consecutive quarter of high growth.

In contrast, GPU revenue at AMD and other competitors is still ramping. The latest report shows that, despite rapid expansion, AMD's data center revenue in Q3 2025 was only $4.3 billion, a wide gap in scale.

Chart: NVIDIA's business composition and growth rate. Data source: Wind, compiled by 36Kr

Chart: Comparison of data center business revenues between NVIDIA and AMD. Data source: Wind, compiled by 36Kr

This industry structure not only heightens supply-chain risk for mid- and downstream cloud computing and large-model vendors but also imposes heavy cost pressure, especially while downstream AI applications are landing slowly and enterprises' return on investment remains low.

Facing this pain point, mid- and downstream players urgently need more cost-effective alternatives to cut the total cost of ownership (TCO) of their infrastructure and diversify their supply chains.

Meanwhile, as the pace of large language model iteration slows, demand for compute has shifted from high-precision, power-hungry model training toward low-latency, large-scale model inference. This structural change means chip requirements are no longer purely about raw precision but increasingly about memory bandwidth, capacity, and energy efficiency.

It is precisely these underlying reasons that provide a practical breakthrough for AMD's substitution.

On the one hand, AMD has optimized system-level cost for inference workloads. Its Instinct chips reduce the need for multi-card interconnect through larger memory capacity and bandwidth, improving inference efficiency. Take the MI300X: its single-card memory capacity is 192GB of HBM3, far above the H100's 80GB. During inference, one MI300X can therefore host models that would require 2-3 H100 cards, saving system-level costs in servers, CPUs, rack space, and power.
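The card-count arithmetic above can be sketched as a quick calculation. The 192 GB and 80 GB capacities come from the text; the 140 GB model footprint is a hypothetical example of an inference workload, not a figure from the article.

```python
import math

def cards_needed(model_gb: float, card_gb: float) -> int:
    """Minimum number of cards whose combined HBM capacity fits the model."""
    return math.ceil(model_gb / card_gb)

# Hypothetical ~140 GB inference footprint (weights plus KV-cache headroom).
model_gb = 140
print(cards_needed(model_gb, 192))  # MI300X (192 GB): 1 card
print(cards_needed(model_gb, 80))   # H100 (80 GB): 2 cards
```

This is the mechanism behind the "2-3 H100 cards" claim: once a model no longer fits in one card's memory, every extra card drags in server, interconnect, and power costs beyond the GPU itself.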

Chart: Comparison of AI chips between NVIDIA and AMD. Data source: Compiled by 36Kr

On the other hand, an aggressive pricing strategy yields a high Tokens/Dollar premium. On single-card price, early market estimates put the NVIDIA H100 GPU above $25,000, spiking to $30,000-$40,000 during shortages. By contrast, the AMD MI300X is estimated at around $10,000-$15,000, half or less the price of NVIDIA's comparable products.

Combining the hardware cost advantage with targeted performance optimization, AMD's chips are more cost-effective in inference scenarios. According to data from cloud service provider RunPod, the MI300X's Tokens/Dollar (tokens served per dollar) shows a clear advantage over the H100 at both the low-latency and high-throughput ends, by up to roughly 33%.
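A minimal sketch of how the Tokens/Dollar metric combines throughput and price. The card prices echo the estimate ranges above; the throughput figures are hypothetical placeholders, since the article reports RunPod's conclusion but not the underlying measurements.

```python
def tokens_per_dollar(tokens_per_sec: float, card_price_usd: float,
                      amortization_hours: float = 3 * 365 * 24) -> float:
    """Tokens served per dollar of card cost, amortized over a 3-year life."""
    total_tokens = tokens_per_sec * amortization_hours * 3600
    return total_tokens / card_price_usd

# Hypothetical throughputs; prices drawn from the article's estimate ranges.
mi300x = tokens_per_dollar(tokens_per_sec=2500, card_price_usd=12500)
h100 = tokens_per_dollar(tokens_per_sec=3000, card_price_usd=30000)
print(f"MI300X serves {mi300x / h100:.1f}x the tokens per dollar")
```

The metric's logic: a card that is somewhat slower can still win decisively on Tokens/Dollar if its price is less than proportionally lower, which is the trade AMD is offering.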

A Successful Counterattack in CPUs

Overall, AMD's rapid progress in GPUs rests on extreme cost-effectiveness and differentiated strengths, which squarely meet cloud giants' urgent needs for supply-chain diversification and cost efficiency in the inference era.

Against this backdrop, AMD, armed with a more attractive total cost of ownership (TCO), has breached NVIDIA's monopolistic barrier and begun rapidly eroding its market share.

The strategy runs in three steps: first, break NVIDIA's entrenched monopoly and customer stickiness by trading price for volume, quickly winning share and an ecosystem foothold; next, close the gap in high-end competition through R&D and technology iteration; finally, drive a virtuous cycle of revenue and profit through scale effects and higher-margin products.

This is exactly the script AMD followed when it challenged Intel.

In 2017, AMD launched the Zen architecture, offering processors with more cores and stronger performance at prices far below Intel's comparable models.

In 2019 especially, the Zen 2-based Ryzen and EPYC product lines, built on TSMC's advanced process, surpassed Intel comprehensively in performance, energy efficiency, and core count, quickly eating into its market share.

In 2016, AMD held less than 18% of the CPU market. By 2019 it had recovered to a 30% share, and its latest share is around 39%, sustaining a long-running duopoly.

Chart: Changes in CPU market share. Data source: Wind, compiled by 36Kr

After seizing share on cost-effectiveness, AMD has kept pushing into the high-end market on the strength of TSMC's advanced process.

In average selling price (ASP), AMD's chips have been catching up since 2012. By 2024, AMD's average product price had nearly doubled, while Intel's rose only about 30% over the same period.

Chart: Changes in AMD's chip prices. Data source: Mercury Research, Bank of America, compiled by 36Kr

The move upmarket has also markedly improved profitability.

After 2017, AMD's gross margin climbed from around 35% to 52% as of Q3 2025. Intel's, by contrast, has declined steadily from its peak; AMD overtook it in 2022, and Intel's gross margin now sits at only about 30%.

Chart: Changes in AMD's gross margin. Data source: Wind, compiled by 36Kr

The divergence also shows up in the capital markets: AMD's market value has risen steadily since 2017 and is now 2.5 times Intel's.

Overthrowing NVIDIA is Harder than Expected

Now, AMD's GPUs, with their inference advantages and lower prices, are trying to replicate that success. But pulling NVIDIA off its pedestal looks far harder.

First, although AMD GPUs carry a lower single-card cost and deliver system-level savings through targeted optimization, the gap between software ecosystems carries hidden costs that can quietly raise customers' total cost of deploying AMD products.

Although the ROCm platform has made notable progress in inference, its maturity, stability, and developer-community support still trail CUDA. Official figures put the CUDA ecosystem at nearly 6 million developers, more than 300 acceleration libraries, and over 600 pre-optimized AI models. For customers, migrating to the AMD platform means spending time and resources re-adapting and validating models, imposing substantial switching costs.

That said, recent adoption by leading cloud providers such as Oracle, Meta, and Microsoft is a promising start for the rapid development of the ROCm ecosystem.

Second, it is worth noting that in the earlier CPU battle, Intel's stagnant R&D, lack of innovation, and rigid IDM model left AMD a precious window for its successful disruption.

From 2005 to 2020, Intel's R&D investment lagged rivals such as AMD. During its most dominant stretch, 2008 to 2013, its R&D expense ratio even fell below 15%, while AMD kept its own above 20% for years.

Chart: Comparison of R&D expense ratios between AMD and Intel. Data source: Wind, compiled by 36Kr

NVIDIA, by contrast, is still riding a strong product cycle powered by heavy R&D and continuous technology iteration. Per its financial reports, NVIDIA's FY2025 R&D spend reached $12.914 billion, up nearly 50% year on year; in the first half of FY2026, R&D spend was $8.6 billion, still growing over 40% year on year, far outpacing AMD and Intel over the same period.

Chart: Comparison of R&D expense growth rates among AMD, Intel, and NVIDIA. Data source: Wind, compiled by 36Kr

Based on high-intensity R&D and capital expenditure, NVIDIA always leads its competitors in product generations and has built a moat through its powerful