1.15 million wafers will determine the "chip war" in 2026. Apple, MediaTek, and OpenAI are rushing to enter the game.
"Data center revenue will reach $500 billion in the next six quarters." Jensen Huang said at GTC25.
At CES 2026, which opened on January 6, Huang went further and claimed that 90% of ASIC projects would fail. This was a thinly veiled jab at ASICs (application-specific integrated circuits), a category epitomized by Google's TPU. A full-scale hunt for ASICs has quietly begun.
Many people may wonder how the competition between GPUs and ASICs will end. The answer hinges on the ultimate ammunition depot of the semiconductor war: TSMC's CoWoS advanced-packaging capacity.
This means that by analyzing, at a granular level, how TSMC's CoWoS capacity is reserved and allocated, we can calculate the 2026 shipment pattern of AI compute chips with considerable accuracy.
It can be said that the "chip war" in 2026 hinges on TSMC's 1.15 million CoWoS wafer capacity.
Confrontation between the GPGPU and ASIC camps. The picture is generated by AI.
01 Origin of the War
Let's first provide some background for the war between GPUs and ASICs (those with industry knowledge can skip this part).
It is a consensus that artificial intelligence's demand for computing power keeps expanding. What must be made clear is how that demand gets met: more advanced compute architectures, process technology, and advanced packaging are the three key paths.
Regarding architecture, the most frequently mentioned is GPGPU (General-Purpose Graphics Processing Unit). NVIDIA, with the 20-year foundation of the CUDA ecosystem, has become the absolute king in general parallel computing.
At the hardware level, NVIDIA has two core weapons: the extremely high bandwidth of HBM memory and the large-scale stream-processor arrays of its GPGPUs. From the H200 and GB200 to "Vera Rubin," launched in January 2026, all are products of this path, and their performance gains are tied directly to HBM bandwidth and the scale of the NVLink interconnect.
Besides GPGPU, ASIC chips, represented by Google's TPU, have explored a more precise, customized architectural path: as workloads on the cloud inference side become increasingly fixed, ASICs customized for specific algorithms (such as the Transformer) can deliver a crushing energy-efficiency advantage, that is, superior performance per watt and total cost of ownership (TCO).
Google's TPU and Amazon's Trainium are the pioneers of this path. Design companies such as Broadcom, Marvell, and Alchip have torn open a gap in the trillion-dollar AI chip market by customizing ASIC chips for these cloud giants.
Compared with architecture competition, the path of process technology is easier to understand. From 7nm, 5nm, 3nm to the mass production of 2nm by the end of 2025, each leap in process technology means an increase in transistor density and energy efficiency.
However, process technology is a path with high thresholds: evolution is getting slower and costs keep rising. The foundry price of a 2nm wafer is as high as $30,000, and not all players can afford the entry fee. In addition, continued process shrinks also run into the "power wall" and the "memory wall".
Besides architecture and process technology, the third key path is advanced packaging. Advanced packaging, represented by CoWoS (Chip on Wafer on Substrate), is the jewel in the crown that TSMC created for high-performance computing.
Conceptual diagram of CoWoS packaging. Source: TSMC
The essence of CoWoS lies in heterogeneous integration. Multiple small chips, such as computing dies (GPU/ASIC cores), high-bandwidth memory (HBM), and I/O dies, are interconnected with ultra-high density and ultra-high bandwidth through an interposer and integrated into one package.
Table 1: Trend of CoWoS interposer area over time
This method breaks through the reticle size limit of a monolithic chip. The interposer area can now reach 2,800 mm², which directly translates into more transistors and more HBM capacity per package.
In addition, since CoWoS uses a silicon interposer, the pitch of the microbumps (μBump) on it is extremely small, which leads to a sharp increase in the communication bandwidth between dies and a significant reduction in latency and power consumption.
Therefore, whether it is NVIDIA's GPUs pursuing extreme performance or the ASICs of cloud giants pursuing the best total cost of ownership, as long as they are involved in top-level AI computing power, they cannot do without CoWoS.
So, by 2026, with process technology entering the costly deep waters of 2nm and the architecture routes fundamentally divided, the allocation of CoWoS advanced-packaging capacity has become the single most decisive variable shaping the computing-power landscape.
02 Capacity Map: Supply Pattern of TSMC's CoWoS
Table 2: Ramp of TSMC's CoWoS capacity
According to the information we have, in the past three years, TSMC's CoWoS capacity has gradually climbed from 12K wafers per month to 80K wafers per month by the end of 2025, and the estimated target by the end of 2026 is about 120K wafers per month.
Taking an annual effective average of 96K wafers per month, TSMC's total effective CoWoS capacity in 2026 is approximately: 96K wafers per month × 12 months = 1.15 million wafers. This is the total ammunition base for the AI chip war.
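The capacity arithmetic above can be sketched as a quick back-of-the-envelope check, using only the ramp figures quoted in the text:

```python
# Back-of-the-envelope check of TSMC's effective 2026 CoWoS capacity,
# using the figures quoted in the text (all numbers in wafers per month).
end_of_2025 = 80_000          # capacity reached by end of 2025
end_of_2026_target = 120_000  # estimated target by end of 2026
effective_monthly = 96_000    # annual effective average assumed in the text

annual_capacity = effective_monthly * 12
print(f"Effective 2026 CoWoS capacity: {annual_capacity:,} wafers")
# 1,152,000 wafers, i.e. the "1.15 million" figure used throughout
```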
Capacity Allocation Principles
How to allocate these 1.15 million wafers is a complex game based on technology, business, and geopolitics.
In terms of priority, as the earliest and boldest co-definer of and investor in CoWoS, NVIDIA has an architecture (such as NVLink) deeply coupled with TSMC's CoWoS process. Without a doubt, it gets the largest share.
In terms of customer levels, since Apple, NVIDIA, and AMD are TSMC's top three VVIP customers, their large prepayments and long-term agreements have locked in the basic capacity. However, Apple will not have its self-developed AI chips until 2028. In addition, Broadcom and Marvell have entered the ranks of top VIP customers because they have undertaken a large number of ASIC orders from cloud giants such as Google, AWS, and Meta.
In addition, for TSMC, besides ASICs, AMD, Intel, and even Chinese customers are important forces to balance NVIDIA and diversify customer risks, and they will also be allocated a part of the capacity.
Capacity Allocation Details
Overall, NVIDIA, with the strongest product demand, the highest unit price, and the most advanced technology, is expected to get nearly 60% of the capacity; AMD's pre-order quantity is about 90K, accounting for nearly 8%, with a 64% increase compared to 2025, and the increase rate is almost the same as that of NVIDIA.
Of course, part of the surge in any single customer's CoWoS orders reflects the enlarged interposer (fewer packages per wafer), though the larger interposer also contributes positively to per-chip performance. It should likewise be emphasized that the initial yield of more complex, highly integrated packages (more HBM stacks, larger interposers) is low, so actual effective output must be discounted.
Table 3: Overall reservation and allocation of CoWoS capacity
The entire ASIC camp can be roughly divided into several companies, including Broadcom, Alchip, Marvell, and MediaTek. Among them, Broadcom is the leader.
Broadcom's pre-order quantity in 2026 has increased significantly to 200K, a year-on-year increase of 122%, mainly driven by the external supply of Google's TPU. Broadcom is mainly responsible for TPU v6p and v7p, while the inference-oriented v7e is the responsibility of MediaTek and will be launched in the second half of 2026. In the future, TPU v8 will still follow the model of v7, and both Broadcom and MediaTek will place CoWoS orders.
Broadcom's pre-order quantity of 200K can be roughly split according to the customer reservation situation as follows:
The first major customer, Google's TPU, is expected to take 60-65% of the 200K.
The second major customer, Meta's MTIA, accounts for about 20% of Broadcom's pre-order quantity.
The third major customer, OpenAI, will launch its internally codenamed Titan chip at the end of the year, using TSMC's N3 process. It is expected to account for 5 - 10% of Broadcom's pre-order quantity this year and will reach more than 20% in 2027.
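The three-way split above can be made concrete by taking the midpoint of each quoted share range (the percentages are the article's estimates, and the midpoints are an illustrative assumption):

```python
# Rough split of Broadcom's 200K-wafer 2026 CoWoS pre-order,
# using the midpoint of each share range quoted in the text.
broadcom_total = 200_000
shares = {
    "Google TPU (v6p/v7p)": 0.625,  # midpoint of 60-65%
    "Meta MTIA": 0.20,
    "OpenAI Titan": 0.075,          # midpoint of 5-10%
}
for customer, share in shares.items():
    print(f"{customer}: ~{int(broadcom_total * share):,} wafers")
# Google lands at roughly 125,000 wafers, Meta 40,000, OpenAI 15,000,
# with the small remainder unallocated in the quoted figures.
```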
In 2028, Apple's AI ASIC chip, Baltra, will also be launched. Currently, Broadcom is responsible for high-speed interconnection, SerDes IP, and back-end wiring. It is expected to enter the tape-out stage in the first half of 2026.
Table 4: Reservation and allocation of CoWoS capacity in the ASIC camp
In contrast, Marvell is in a weaker position, since AWS's next-generation Trainium 3 has switched to Alchip; its main business remains AWS's Trainium 2. Fortunately, a new customer, Microsoft's Maia 200 on the N3E process, has joined, averting a decline: Marvell's CoWoS pre-order quantity is flat versus 2025.
Alchip has increased its CoWoS pre-order quantity to 60K due to obtaining the AWS Trainium 3 order, a year-on-year increase of 200%. Most of the reserved capacity is for Trainium 3 Anita using the N3 process, plus Inferentia 2, Microsoft's Maia 100, and a small amount of Intel's Gaudi 3.
Annapurna, as a subsidiary of AWS, has always been responsible for the development of AI ASICs. It also directly reserves CoWoS capacity from TSMC. The Mariana version of Trainium 3 is different from the Anita version of Alchip and is also taped out at TSMC.
MediaTek is a new CoWoS customer for TSMC in 2026. It has allocated substantial headcount to support the ASIC business, which it intends to make a key segment going forward. In the second half of 2026 it will mainly ship the inference-oriented TPU v7e, with 2027 as the main shipment year; it will also take orders for TPU v8e in 2027, with a chance of a 600% year-on-year increase in CoWoS orders.
According to the information we have, MediaTek currently regards AI ASIC as its core business in the future. As an industry giant, its layout in AI chips will have a great impact on the current industry pattern of ASIC design.
TSMC's remaining CoWoS customers each have an order volume of fewer than 10,000 wafers. Among them, the early design and tape-out of Microsoft's self-developed ASIC, Athena, are still being advanced in small batches by Microsoft's own team.
With the capacity allocation data and based on the area of the silicon interposer, we can roughly calculate how many GPU/ASIC chips each company can produce in 2026.
We assume that of NVIDIA's 660,000 wafers, 10% is allocated to the Hopper architecture, i.e., 66,000 wafers. At 29 chips per wafer, overall H200 output this year is estimated at roughly 1.9 million chips.
Looking back at the overall capacity allocation of TSMC, the GPGPU camp (NV + AMD), which has obtained a total of 750,000 CoWoS wafers, still has an absolute firepower advantage when facing the ASIC camp, which only has 370,000 wafers of capacity. Even the firepower of NVIDIA alone exceeds the sum of other enterprises in the world.
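The arithmetic in the last two paragraphs can be reproduced in a few lines (the wafer allocations and chips-per-wafer figure are taken directly from the text):

```python
# Chip-count estimate for NVIDIA's Hopper line and the camp-level wafer
# comparison, reproducing the arithmetic stated in the text.
nvidia_wafers = 660_000
hopper_share = 0.10      # share of NVIDIA wafers assumed for Hopper
chips_per_wafer = 29     # H200-class dies per CoWoS wafer, as stated above

hopper_wafers = int(nvidia_wafers * hopper_share)  # 66,000 wafers
h200_chips = hopper_wafers * chips_per_wafer       # ~1.9 million chips

gpgpu_camp_wafers = 750_000  # NVIDIA + AMD
asic_camp_wafers = 370_000   # Broadcom, Alchip, Marvell, MediaTek, etc.

print(f"Estimated H200 output: ~{h200_chips:,} chips")
print(f"GPGPU vs ASIC wafers: {gpgpu_camp_wafers:,} vs {asic_camp_wafers:,}")
```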
03 Computing Power and Revenue: GPGPU Has a Crushing Advantage
CoWoS is a key variable, but only comparing CoWoS may lead to a