The semiconductor IP market has changed.
In the past year, three major upheavals in the semiconductor IP market have laid bare the logic by which generative AI is reshaping the industry's underlying architecture. The first shock came from Rambus, a giant in the memory interface field: its stock price nearly doubled in 2025, making it an indispensable "golden node" in the AI server supply chain. The second came from Synopsys, one of the two giants of EDA and IP, which decisively spun off its star processor IP business, ARC, selling it entirely to the foundry GlobalFoundries. The third was Alphawave Semi, which once vowed to disrupt the high-speed interconnect landscape but ultimately accepted a full acquisition by Qualcomm at the end of 2025.
One was re-evaluated, one was sold off, and one was acquired. These three scenarios reflect the harshest truth of the semiconductor industry in the AI era: the throne of computing power is shifting.
Rambus' Resurgence
For a long time, Rambus did not enjoy a good reputation in semiconductor circles. It was once labeled a "patent troll" for collecting "toll money" from major memory manufacturers through waves of patent litigation. The arrival of the AI era, however, has allowed this company, long steeped in high-speed interface technology, to stage a striking transformation.
As of early 2026, Rambus' stock traded in the range of roughly $115-$125 and at one point touched a high of $135. Its cumulative gain over the past year was nearly 100%, making it one of the market's standouts. Investors have seen not only the rising demand for its technology in the AI era but also a significant shift in the market's expectations for its future growth path.
[Figure: overview of Rambus' stock price trend]
Along the way, investment bank analysts, including those at Baird, have repeatedly raised Rambus' price target to the range of roughly $120-$130, arguing that it has not only moved beyond its old model of relying solely on patent licensing but also taken up a more central position in AI and data center infrastructure.
One important turning point was Rambus' strategic adjustment of its product line and business focus over the past two years. In 2023, for example, the company sold some of its SerDes and memory interface PHY IP assets to Cadence, freeing up resources to focus on high-performance memory subsystem solutions and security IP.
Why has the market started to re-evaluate Rambus?
The core reason is that the biggest pain point in AI computing today is not that GPUs are too slow, but that data cannot move fast enough. Training an AI model is essentially a massive matrix computation: taking GPT-4 or the more advanced o1 model as an example, data must be exchanged extremely frequently between processors (GPU/NPU) and memory (HBM/DDR5). Because of the memory wall, data movement has long been the industry's bottleneck.
Amid the current shift in system architecture requirements, Rambus has seized three key points:
DDR5 RCD Interface Chip: As AI servers transition fully to the DDR5 standard, Rambus' RCD (Register Clock Driver) chip is an indispensable component. It acts as a "traffic commander" on the memory module, ensuring that trillions of bits of data arrive intact at high frequencies. Rambus has long led this niche market, with a share of over 40% according to industry statistics.
HBM4 Controller IP: HBM (High Bandwidth Memory) is the soul of chips such as NVIDIA's H100/H200. Rambus was the first to launch HBM4 physical-layer and controller IP, which means any company developing an AI accelerator (such as AWS's or Google's in-house chips) will find it hard to bypass its licenses.
MRDIMM (Multiplexed Rank DIMM): This will be the breakout point for 2026-2027. MRDIMM can double server memory bandwidth once again, and Rambus expects to keep more than 40% of this new niche market, worth $600-$700 million.
The 2025 financial report shows Rambus' product revenue growing a striking 42%, driven mainly by demand for DDR5 memory interface chips (RCD) in AI servers. Rambus' HBM3E/4 interface IP delivers throughput above 1.2 TB/s, which has become the de facto standard for AI accelerators. And as the value of AI model assets soars, Rambus' Root of Trust technology has become a necessity for cloud providers protecting model privacy.
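The 1.2 TB/s figure is easy to sanity-check with back-of-envelope arithmetic. The sketch below assumes a JEDEC-style HBM3E configuration (1024-bit interface, 9.6 Gb/s per pin); these numbers are illustrative, not Rambus' product specification:

```python
# Back-of-envelope HBM3E per-stack bandwidth, assuming JEDEC-style
# parameters: a 1024-bit-wide interface at 9.6 Gb/s per data pin.
BUS_WIDTH_BITS = 1024   # data pins per HBM3E stack
PIN_RATE_GBPS = 9.6     # Gb/s per pin

def stack_bandwidth_gbs(width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one stack in GB/s (divide by 8: bits -> bytes)."""
    return width_bits * pin_rate_gbps / 8

bw = stack_bandwidth_gbs(BUS_WIDTH_BITS, PIN_RATE_GBPS)
print(f"{bw:.1f} GB/s per stack")  # 1228.8 GB/s, i.e. ~1.2 TB/s
```

The per-pin data rate, not the pin count, is where interface IP vendors compete: the bus width is fixed by the HBM standard, so every generation's bandwidth gain must come from faster, harder-to-design PHYs.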
ARC's Farewell
If Rambus' rise represents the leap in the value of connectivity, then Synopsys' sale of its ARC processor business represents the equalization of general-purpose computing.
The ARC processor was once a leader in the embedded field and long ranked near the top in global IP shipments. In the era of microcontrollers (MCUs) and early SoCs, ARC's low power consumption and customizability were its killer features. ARC is a typical general-purpose CPU architecture, and for two decades this "one-size-fits-all" approach was chip designers' first choice. Under the pressure of generative AI, however, its efficiency has become inadequate.
In AI chips, 90% of the transistors are allocated to tensor processing units (TPUs/NPUs) and caches. Traditional CPUs such as ARC have been demoted from decision-making core to "foremen" handling task scheduling and I/O. When the "brain" is no longer the performance bottleneck, customers' willingness to pay for general-purpose CPU IP declines.
What really suffocated closed-source processor IP like ARC is the equalization brought by the RISC-V architecture. AI vendors need to customize instruction sets around their own operators (kernels), and RISC-V's open-source nature lets developers freely add matrix extensions without paying high architecture licensing fees. Maintaining a large processor IP ecosystem means funding a sizable compiler team and middleware support team while also weathering the price pressure of the RISC-V open-source wave.
Caught between ARM's near-monopoly and RISC-V's rapid catch-up, Synopsys realized that rather than struggling in a red ocean, it was better to "make room for new opportunities" and invest in the new infrastructure of AI. It has poured its funds and R&D into two directions. The first is AI-enhanced EDA (DSO.ai), using AI to design chips; this is Synopsys' most profitable future pillar. The second is the mega-merger with Ansys: in 2025, Synopsys completed its landmark acquisition of the simulation giant Ansys to strengthen its system-level simulation capabilities, aiming to let NVIDIA and Google simulate the heat dissipation, stress, and signal integrity of tens of thousands of computing nodes in the digital world before placing orders with TSMC.
This upheaval reveals an industry truth: value in the IP market is shifting from the core to the periphery. In the past, people bought CPU cores; now they compete for the protocols (UCIe/PCIe 7.0) that connect those cores, the interfaces that solve data throughput, and the simulation tools that optimize overall power consumption.
For GlobalFoundries, the world's third-largest foundry, taking over the ARC processor IP is not about picking up scraps but about filling a survival gap. GlobalFoundries knows it has withdrawn from the "money-burning race" of 7nm and beyond; its core battlefield is the mature/specialty processes from 22nm down to 12nm. GlobalFoundries needs ARC: with its own processor IP, it can offer "one-stop" services to low- and mid-end customers with limited budgets and weak R&D capabilities. Customers no longer need to buy licenses from third parties and can close the loop from design to tape-out entirely at GlobalFoundries.
This also helps GlobalFoundries attract long-tail customers. Makers of smart home appliances, low-end automotive sensors, and industrial controllers do not need 3nm AI computing power; what they need is a cheap, stable, mature turnkey solution.
It looks more like a handover within a stratified hierarchy: Synopsys moves up to earn the software premium and system access fees at the top of the AI era, while GlobalFoundries holds its ground in mature processes by providing more comprehensive, "nanny-style" services.
Alphawave's Acquisition
Alphawave Semi was originally called Alphawave IP. As the name suggests, it was a pure IP provider whose core product was SerDes (serializer/deserializer), a key technology enabling ultra-high-speed data transmission on and off the chip. At the time it was seen as the "Lamborghini of interface IP," directly challenging the territories of Synopsys and Cadence.
In 2022 it changed its name to Alphawave Semi, dropping the "IP." This signaled that it was no longer content just to sell blueprints: by acquiring OpenFive (the custom-chip business under SiFive), it gained the ability to design complete SoCs. Qualcomm completed its acquisition of Alphawave Semi at the end of 2025, officially ending its identity as an independent IP company and making it the core connectivity technology arm of Qualcomm's push into the data center and AI infrastructure markets.
Alphawave Semi attracted Qualcomm for two main reasons. First, as chip area approaches the reticle limit, the era of monolithic mega-chips is coming to an end. AI vendors must use chiplet technology to assemble different functional units together, and the most critical "glue" in this process is UCIe (Universal Chiplet Interconnect Express), Alphawave's specialty.
Second, as AI models explode in scale, switch speeds are evolving from 800G to 1.6T, and 224G SerDes is the ticket into the 1.6T era of AI switches. Alphawave's leading position in this field made it a welcome guest at every major cloud provider (AWS, Google, Meta).
Qualcomm's acquisition of Alphawave is a long-planned strategic move. Qualcomm has long been trapped in its mobile-era glory and has struggled to enter the high-performance computing (HPC) and AI data center markets. AI chips are no longer used alone but in clusters of thousands, and the UCIe chiplet-interconnect standard and 2nm/3nm high-speed interface technology Alphawave possesses are exactly the missing pieces for Qualcomm's HPC ambitions. The acquisition instantly gave Qualcomm the ability to compete with Broadcom and NVIDIA in data center interconnect.
Alphawave's disappearance proves that the threshold for high-speed interconnect IP is extremely high. In the arms race of the AI era, mid-sized IP companies built on a single technology are gradually losing the ground to survive independently and must become the sharpest weapons in the hands of vertical giants.
Equalization of Computing and Rise of Connectivity
In the evolution of computer architecture over the past few decades, from CPU clock-speed competition to multi-core architectures to GPU parallel computing, the industry long followed a "computing-centric" paradigm: whoever had the stronger computing core controlled the ceiling of system performance and thus held the industry's voice.
However, the emergence of generative AI is shaking this long-established premise.
Under large-model workloads, the system's performance function is undergoing a structural change: in the past, system performance was approximately equal to computing power; now it must weigh multiple metrics at once and is determined by the slowest link among computing power, memory bandwidth, interconnect bandwidth, system latency, and energy-efficiency constraints.
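The "slowest link" intuition can be made concrete with a roofline-style back-of-envelope calculation, a minimal sketch with made-up numbers rather than a model of any specific chip: attainable throughput is the minimum of the compute peak and what the memory system can feed.

```python
# Roofline-style sketch: delivered throughput is capped by the
# slowest link. All numbers below are illustrative, not real chips.

def attainable_tflops(peak_tflops: float,
                      mem_bw_tbs: float,
                      arithmetic_intensity: float) -> float:
    """min(compute roof, memory roof); intensity is FLOPs per byte moved."""
    return min(peak_tflops, mem_bw_tbs * arithmetic_intensity)

# Hypothetical accelerator: 1000 TFLOP/s peak, 3 TB/s memory bandwidth.
# Low arithmetic intensity (e.g. LLM decoding): memory is the wall.
print(attainable_tflops(1000, 3, 50))    # 150  -> memory-bound
# High intensity (large matrix multiplies): compute is the wall.
print(attainable_tflops(1000, 3, 500))   # 1000 -> compute-bound
```

In the memory-bound case, doubling peak FLOPs changes nothing while doubling memory bandwidth doubles delivered throughput; this is the arithmetic behind the shift in value toward the data path.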
With the maturity of process technology, EDA toolchains, compiler technology, and software stacks, more and more manufacturers can now design usable AI accelerators. The technical barrier around a single computing unit is falling. This does not mean computing is unimportant, but rather that computing power is changing from a decisive competitive advantage into an infrastructure-type capability. In other words, computing power is moving toward "equalization."
It should be emphasized that "equalization of computing" does not mean that computing has become unimportant. On the contrary, computing is still the foundation of the AI system, but it is no longer the only source of power.
As computing power becomes more like infrastructure, a new power center is shifting to the data path. In an AI system, the data flow paths include the memory path between the processor and HBM/DDR, the interconnect between dies, the communication between chiplets, high-speed SerDes at the board level, and the network switching between nodes. Together these paths form the "vascular system" of the AI system. Once any of them becomes a bottleneck, even the most powerful computing core will sit idle. Companies that master the key technologies and standards of these data paths are therefore beginning to occupy a high position in the system's power structure.
Within IP niche markets specifically, value is shifting from the core to the periphery, and in recent years it is the interface IP vendors that have advanced fastest. According to market monitoring by IPnest, the semiconductor IP landscape is undergoing a structural reshuffle: processor IP (CPU/GPU cores) has seen its share fall continuously from 57.6% in 2017 to under 45% in 2025, while interface IP's share has risen against the trend from 18% and is expected to exceed a quarter of the entire IP market by 2026. While the traditional IP market grows at a steady 8%-10%, interface IP segments such as high-speed SerDes, PCIe 6.0/7.0, and HBM controllers are surging at a compound annual growth rate (CAGR) of over 20%.
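The share shift follows directly from the growth-rate gap. A back-of-envelope projection using the figures above (roughly 20% CAGR for interface IP, roughly 9% for the overall market, an 18% starting share) shows how quickly the quarter mark is crossed; this is only an approximation, since interface IP is itself part of the total it is divided by:

```python
# If a segment grows ~20%/yr while the whole market grows ~9%/yr,
# its market share compounds at the ratio of the two growth factors.
# Rough approximation: treats total-market growth as independent of
# the segment's own growth.

def projected_share(start_share: float, seg_cagr: float,
                    market_cagr: float, years: int) -> float:
    return start_share * ((1 + seg_cagr) / (1 + market_cagr)) ** years

share = projected_share(0.18, 0.20, 0.09, 4)
print(f"{share:.1%}")  # roughly 26%: past the one-quarter mark
```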
In the longer term, competition in the semiconductor IP market will increasingly concentrate on interfaces, interconnects, and system-level capabilities rather than the performance of any single computing core.
This article is from the WeChat official account "Semiconductor Industry Observation" (ID: icbank), author: Du Qin DQ. It is published by 36Kr with authorization.