
AXERA Technologies Co., Ltd. is listed on the Hong Kong Stock Exchange, targeting the trillion-yuan blue ocean of edge AI.

Xiaoxi · 2026-02-10 16:02
Physical AI is driving a major trend of computing power sinking. AXERA (Aixin Yuanzhi), with its full-stack self-developed technology and leading commercialization capabilities, has secured the position of "the first Chinese edge AI chip stock," laying a computing-power foundation for AI to land in the real world.

Dr. Qiu Xiaoxin (right), the founder and chairman of AXERA, and Sun Breeze (left), the CEO of AXERA, ring the bell at the listing ceremony.

On February 10, AXERA (0600.HK) was listed on the Hong Kong Stock Exchange, earning the title of "the first Chinese edge AI chip stock." Its shares opened at HK$29.18 on the first day of trading, for a total market value of more than HK$17 billion.

In recent years, AI has been booming. Competition among tech giants has expanded from cloud models and large-scale GPU clusters to the broader space of physical AI. Silicon Valley figures such as Elon Musk and Jensen Huang have repeatedly stated that "the ultimate form of AI will be realized in the physical world."

From the accelerating mass production of Tesla's Optimus to the rising output of Meta's smart glasses and Apple's strategic shift toward edge-side AI, global investors' attention is rapidly converging on edge-side intelligence.

However, implementing edge-side intelligence is not easy. It requires AI to complete the closed loop of perception, decision-making, and execution within milliseconds in a constrained physical space, which places higher demands on the energy efficiency of computing power. Against this background, "computing power sinking" (pushing compute down from the cloud to the devices themselves) has become the key to solving the problem, and edge-side inference chips are growing ever more important.

Although the potential is huge, few companies have reaped the rewards so far. As a leader in the edge AI chip sector, AXERA is one of the few to have achieved large-scale shipments. At the opening of its first day of trading in Hong Kong, the company's market value reached HK$16.6 billion, with a static P/S ratio of over 30x, approaching that of leading companies such as Horizon Robotics, a sign of the market's recognition of its scarcity and future growth potential.

So, as physical AI reaches its explosive inflection point, how large is the market for edge AI chips? And how will the investment value of AXERA, now in the secondary market, unfold?

01

Physical AI Drives Computing Power Sinking, Creating a Trillion-Yuan Blue Ocean

In the past two years, most market discussion of AI has focused on cloud-based large models and data centers. The common feature of these scenarios is that data is mainly generated in the form of text and images, computing occurs in the cloud, and the results remain at the "information level."

However, in 2026 the industrial logic has changed: AI is stepping out of the laboratory and having its "iPhone moment." AI is no longer just a "digital brain" for remote conversation; it must become physical intelligence that can control machines and act on the world.

But as AI tries to enter the physical arena, the drawbacks of traditional cloud-based deployments are fully exposed:

First, physical interaction requires real-time feedback, where millisecond-level delay can be a matter of life and death. When an autonomous vehicle is travelling at high speed, for example, a 100-millisecond lag translates into several meters of uncontrolled distance. The long transmission chain to the cloud introduces obvious latency and cannot support real-time feedback.
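To make the latency point concrete, here is a small illustrative calculation (the 120 km/h speed is an assumed figure, not from the article): the distance a vehicle covers while waiting on a delayed result is simply speed times latency.

```python
# Illustrative back-of-the-envelope check of cloud round-trip cost.

def blind_distance_m(speed_kmh: float, latency_ms: float) -> float:
    """Metres travelled while the vehicle waits on a delayed result."""
    speed_ms = speed_kmh / 3.6          # km/h -> m/s
    return speed_ms * (latency_ms / 1000.0)

# At an assumed 120 km/h, a 100 ms round trip to the cloud
# costs about 3.3 m of uncontrolled travel.
print(round(blind_distance_m(120, 100), 2))  # -> 3.33
```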

Second, smart devices generate exponentially growing volumes of perception data. Shipping all of this raw data back to the cloud demands enormous bandwidth and is highly uneconomical as a business proposition. At the same time, for most edge-side devices operating under tight power budgets, sustaining long-distance high-bandwidth transmission means high energy consumption, a direct challenge to the battery life of mobile devices.

Finally, uploading raw data to the cloud for processing risks leakage through vulnerabilities in transmission or storage. This privacy red line is especially sensitive in fields such as healthcare and finance.

Against this background, computing power sinking is no longer an option but the only way out. Meanwhile, since 2024, the popularization of lightweight AI models and the development of open-source LLMs have also made computing power sinking possible.

The rigid demand for "local perception, local computing" has directly triggered the explosion of edge and edge-side AI, and edge and edge-side AI inference chips have become the key bridge between the digital brain and the physical world.

By deeply integrating AI models with intelligent perception technology, these chips build a complete closed loop of perception, computing, and execution. They work directly on the physical data generated by devices to complete AI-driven analysis and decision-making, delivering real-world value while significantly reducing dependence on cloud resources.

This shift in the computing paradigm is opening up a potential market worth trillions of yuan. According to CIC data, the global market for edge inference and edge-side inference chips was RMB 379.3 billion in 2024 and will expand to RMB 1,612.3 billion by 2030, a CAGR of roughly 27%.
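The quoted growth rate can be sanity-checked directly from the two CIC endpoints (2024 and 2030, six compounding years):

```python
# Verify the ~27% CAGR implied by the CIC figures cited above
# (RMB billions, 2024 -> 2030, six compounding years).
start, end, years = 379.3, 1612.3, 6

cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # -> 27.3%
```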

It can be said that whoever can successfully seize the opportunity will master the entrance to the post - AI era.

02

Reconstructing the Value Landscape of Edge AI

Although the demand is huge, for AI to truly enter the physical world, it faces a much harsher survival environment than in the cloud.

Edge and edge-side AI is not simply a matter of "installing" AI into robots, cars, or smart glasses. It means empowering these terminal devices with AI so that they have not only a "brain" for processing information but also sharp visual perception, deep reasoning logic, and millisecond-level reaction capability.

This complete closed loop of tightly coupled perception, computing, reasoning, and feedback carries far higher technical complexity and engineering thresholds than the cloud. As a leading player, AXERA has broken through these industry barriers, making edge-side AI deployment possible through a proprietary "integrated perception and computing" technology platform and a differentiated platform operating model.

In the closed loop of physical intelligence, the moment a terminal device captures complex environmental signals, it must accurately extract features and quickly convert them into computing instructions to make correct decisions in the ever-changing physical world.

To meet this demand for tight synergy between "perception" and "computing," AXERA starts from the underlying architecture, shortening the reflex arc from perception to decision through a deeply coupled design.

At the perception level, to address the pain points edge-side devices face in harsh conditions such as near-total darkness and strong backlight, AXERA has replaced the linear pipeline of a traditional ISP: in its AXERA AI-ISP, NPU computing power intervenes from the moment the signal is generated. This not only achieves extreme "night as bright as day" imaging but also converts physical signals into high-fidelity "data truth," ensuring that perception accuracy reaches the computing core instantly and without loss.

At the computing-core level, in response to the edge side's physical constraints on power consumption, area, and computing power, the AXERA NPU adopts a multi-threaded, heterogeneous multi-core design. Its core competitiveness lies in supporting dynamic scheduling of mixed precisions such as INT4, INT8, and INT16. The design makes the AXERA NPU like a racing engine that shifts gears automatically with the road: within the "narrow lanes" of tight power budgets it runs far more efficiently than traditional GPUs, while remaining natively compatible with both Transformer and CNN architectures.
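To illustrate what mixed-precision inference trades off, the sketch below applies plain symmetric per-tensor quantization at INT4/INT8/INT16 widths and measures the reconstruction error. This is a generic textbook scheme, not AXERA's NPU scheduler; the function names and the sample tensor are invented for the example.

```python
# Illustrative sketch (not AXERA's implementation): symmetric quantization
# at the mixed precisions the NPU is said to schedule dynamically.
import numpy as np

def quantize(x: np.ndarray, bits: int):
    """Map float values onto a signed integer grid of the given width."""
    qmax = 2 ** (bits - 1) - 1                 # e.g. 127 for INT8, 7 for INT4
    scale = np.max(np.abs(x)) / qmax or 1.0    # guard against all-zero input
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int32)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

x = np.array([0.9, -0.31, 0.07, 0.52], dtype=np.float32)
for bits in (4, 8, 16):                        # INT4 / INT8 / INT16
    q, s = quantize(x, bits)
    err = np.max(np.abs(dequantize(q, s) - x)) # precision vs. footprint trade-off
    print(f"INT{bits}: max error {err:.4f}")
```

Fewer bits shrink the memory footprint and energy per operation but enlarge the error, which is why per-layer precision selection matters on power-limited hardware.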

Meanwhile, by optimizing the neural-network data path and memory hierarchy, the AXERA NPU minimizes unnecessary data movement, allowing complex models to sustain millisecond-level responses at power consumption of just a few watts and supporting real-time closed-loop inference inside physical devices.

Having cleared the technical thresholds of "seeing clearly" and "computing accurately," AXERA confronts the extreme fragmentation of edge AI application scenarios with a differentiated platform operating model and software-hardware synergy, achieving efficient reuse across the entire chain from underlying IP cores to terminal products.

First, AXERA unifies the general architecture on its proprietary technology platform and, through the Pulsar2 toolchain and a mature SDK, gives developers a standardized entry point for deep software-hardware integration. This system not only enables rapid migration and scaling of algorithm models but also lets developers reuse R&D results across chip platforms, greatly lowering customers' development cost and entry threshold and making large-scale deployment possible.

Second, AXERA organically integrates market insight, product development, and marketing into a standardized "trinity" collaborative process. This operating model ensures that market demands are quickly fed back into technology R&D and engineering, so that each SoC addresses industry pain points precisely from the moment it is defined, significantly shortening the cycle from technological innovation to deployed solution.

Finally, with a stable and scalable global supply chain, AXERA has continuous access to advanced process technologies and reliable production capacity.

Thanks to the resilience of this platform-based architecture, the company can, while consolidating the visual terminal market, enter high-growth blue-ocean sectors such as intelligent driving and edge computing at low marginal cost and rapidly unlock economies of scale, reconstructing the value landscape of edge AI.

03

Performance Enters the Growth Phase

As a leading domestic AI inference SoC supplier, AXERA has built three business lines centered on terminal computing, intelligent vehicles, and edge AI inference.

Riding the rise of edge and edge-side AI, the company's revenue expanded from RMB 50 million in 2022 to RMB 470 million in 2024, a compound annual growth rate (CAGR) of 207%. Revenue in the first three quarters of 2025 reached RMB 270 million, evidence of both high growth potential and the ability to monetize its technology.

Behind this high growth, three underlying drivers resonate:

First, terminal computing provides the performance cornerstone. In terms of product structure, visual terminal products are the core source of revenue growth. The company currently offers dozens of commercial terminal-computing SoCs covering the high-end, mid-range, and entry-level markets to meet diverse needs.

On the strength of this complete product matrix, cumulative shipments of the company's terminal-computing SoCs have exceeded 157 million units (as of end-September 2025). According to CIC data, the company is the world's fifth-largest supplier of visual edge-side AI inference chips, and in the most competitive mid-to-high-end segment AXERA ranks first globally with a 24.1% market share. That leading share indicates strong bargaining power and a brand moat in its core business.

Driven by steadily growing shipments, revenue from terminal-computing products expanded from RMB 45 million in 2022 to RMB 448 million in 2024, a CAGR of 216%. Its share of total revenue has long remained high, giving the company a stable foundation and the cash flow to expand into other sectors.

Second, horizontal breakthroughs create multiple growth engines. While terminal-computing products provide a stable base, the company has grown rapidly in intelligent vehicles and edge AI inference through efficient reuse of underlying IP cores and platform-based operation, opening new growth curves and adding performance upside.

In intelligent vehicles, the company now has three automotive-grade SoCs (M55H, M57, and M76H) in full commercial use and has won contracts from several automobile OEMs and Tier 1 suppliers covering L2 to L2+ ADAS scenarios. It is also developing new in-vehicle SoCs such as the M97 to support higher levels of intelligent driving. According to CIC data, AXERA has become China's second-largest domestic supplier of intelligent-driving SoCs.

Meanwhile, in edge AI inference, building on its experience in terminal computing and intelligent vehicles, the company continues to push the technological boundaries of edge AI. Through its 8850 series it has reached into niches such as edge AI boxes and AI inference acceleration cards. Since the 8850 series SoCs launched in 2023, shipments have grown from over 21,000 units to over 100,000 units in 2024, making the company the third-largest domestic supplier of edge AI inference chips with a 12.2% market share. During the same period