A $1 billion offer to Nokia, and Jensen Huang wants to make $200 billion.
At GTC 2025, Jensen Huang dropped a bombshell: NVIDIA will invest $1 billion in Nokia. Yes, that Nokia, the company behind the Symbian phones that were all the rage in China 20 years ago.
Jensen Huang said in his speech that telecom networks are undergoing a major transformation from traditional architectures to AI-native systems, and that NVIDIA's investment will accelerate this process. Through the investment, NVIDIA and Nokia will jointly build an AI platform for 6G networks, bringing AI to the traditional RAN.
Specifically, NVIDIA will subscribe to approximately 166 million new shares of Nokia at a price of $6.01 per share, which will give NVIDIA approximately 2.9% equity in Nokia.
When the partnership was announced, Nokia's share price jumped 21%, its largest gain since 2013.
01
What is AI-RAN?
RAN stands for Radio Access Network, and AI-RAN is a new network architecture that directly embeds AI computing power into wireless base stations. The traditional RAN system is mainly responsible for transmitting data between base stations and mobile devices, while AI-RAN adds edge computing and intelligent processing capabilities on this basis.
It enables base stations to apply AI algorithms to optimize spectrum utilization and energy efficiency, improving overall network performance. At the same time, idle RAN capacity can host edge AI services, creating a new revenue source for operators.
Operators can run AI applications directly at the base station site without having to send all data back to the central data center for processing, greatly reducing the network burden.
Jensen Huang gave an example: nearly 50% of ChatGPT users access it through mobile devices, and monthly mobile downloads of ChatGPT exceed 40 million. In an era of explosive growth in AI applications, the traditional RAN system cannot handle mobile networks dominated by generative AI and agents.
AI-RAN provides distributed AI inference capabilities at the edge, making AI applications such as agents and chatbots respond faster. At the same time, AI-RAN also prepares for the integrated sensing and communication applications in the 6G era.
Jensen Huang cited the prediction of analyst firm Omdia. The firm expects the RAN market to exceed $200 billion in cumulative value by 2030, and the AI-RAN segment will be the fastest-growing sub-sector.
Justin Hotard, President and CEO of Nokia, said in a joint statement that this partnership will put an AI data center in everyone's pocket and enable a fundamental redesign from 5G to 6G.
He specifically mentioned that Nokia is collaborating with three different types of companies: NVIDIA, Dell, and T-Mobile. As one of the first partners, T-Mobile will conduct field tests of AI-RAN technology starting in 2026, focusing on verifying performance and efficiency improvements. Hotard said the tests will provide valuable data for 6G innovation and help operators build intelligent networks that meet AI requirements.
Based on AI-RAN, NVIDIA launched a new product called Aerial RAN Computer Pro (ARC-Pro), an accelerated computing platform for 6G. Its core hardware pairs two NVIDIA processors: the Grace CPU and the Blackwell GPU.
This platform runs on NVIDIA CUDA, and RAN software can be directly embedded into the CUDA technology stack. Therefore, it can not only handle traditional radio access network functions but also run mainstream AI applications simultaneously. This is also NVIDIA's core method to realize the "AI" in AI-RAN.
Given the long history of CUDA, the biggest advantage of this platform is actually programmability. Moreover, Jensen Huang also announced that the Aerial software framework will be open-sourced and is expected to be released on GitHub under the Apache 2.0 license starting in December 2025.
The main difference between ARC-Pro and its predecessor ARC lies in the deployment location and application scenarios. The previous ARC was mainly used for centralized cloud RAN implementation, while ARC-Pro can be directly deployed at the base station site, enabling the actual implementation of edge computing capabilities.
Ronnie Vasishta, head of NVIDIA's telecom business, said that in the past, RAN and AI required two different sets of hardware to be implemented, but ARC-Pro can dynamically allocate computing resources according to network requirements. It can either prioritize radio access functions or run AI inference tasks during idle periods.
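The dynamic allocation Vasishta describes can be pictured as a simple priority scheme: RAN traffic always gets the compute it needs, and leftover capacity is backfilled with AI inference. The sketch below is purely illustrative (our own hypothetical model, not NVIDIA's scheduler), with made-up units and function names.

```python
# Illustrative sketch (not NVIDIA's actual scheduler): split a fixed GPU
# budget between RAN signal processing and AI inference, always giving
# RAN traffic priority and backfilling idle capacity with inference jobs.

def allocate(total_gpu_units: int, ran_demand: int, ai_backlog: int) -> dict:
    """Return GPU units granted to each workload class."""
    ran_grant = min(ran_demand, total_gpu_units)  # RAN always has priority
    spare = total_gpu_units - ran_grant           # capacity left over
    ai_grant = min(ai_backlog, spare)             # backfill with inference
    return {"ran": ran_grant, "ai": ai_grant, "idle": spare - ai_grant}

# Peak hour: RAN nearly saturates the platform, inference gets the scraps.
print(allocate(100, ran_demand=95, ai_backlog=40))  # {'ran': 95, 'ai': 5, 'idle': 0}

# Off-peak: most capacity can be re-sold as edge AI inference.
print(allocate(100, ran_demand=20, ai_backlog=40))  # {'ran': 20, 'ai': 40, 'idle': 40}
```

The point of the sketch is the ordering: because RAN demand is granted first, inference can never degrade radio access, which is why idle-period monetization is the pitch rather than full-time AI hosting.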
ARC-Pro also integrates NVIDIA's AI Aerial platform, a complete software stack that includes CUDA-accelerated RAN software, Aerial Omniverse digital twin tools, and the new Aerial Framework. The Aerial Framework can convert Python code into high-performance CUDA code to run on the ARC-Pro platform. In addition, the platform also supports AI-driven neural network models for advanced channel estimation.
Jensen Huang said that telecom is the digital nervous system of the economy and security. The cooperation with Nokia and the telecom ecosystem will ignite this revolution and help operators build intelligent and adaptive networks to define the next generation of global connectivity.
02
Looking back at 2025, NVIDIA has been spending heavily.
On September 22, NVIDIA and OpenAI announced a partnership: NVIDIA plans to invest up to $100 billion in OpenAI in stages, accelerating its infrastructure buildout.
Jensen Huang said that OpenAI sought NVIDIA's investment long ago, but the company had limited funds at the time. He joked that he was too poor back then, and that he should have given them everything he had.
Jensen Huang believes that the growth of AI inference is not 100 times or 1000 times, but 1 billion times. Moreover, this cooperation is not limited to hardware but also includes software optimization to ensure that OpenAI can efficiently use NVIDIA's systems.
This may be because, after learning of OpenAI's deal with AMD, he worried that OpenAI would abandon CUDA. If the world's largest AI foundation-model maker stopped using CUDA, other large-model makers could reasonably follow suit.
Jensen Huang predicted in the BG2 podcast that OpenAI is very likely to become the next company with a trillion-dollar market value, and its growth rate will set a new record in the industry. He refuted the theory of an AI bubble, pointing out that the global annual capital expenditure on AI infrastructure will reach $5 trillion.
Partly because of this investment, OpenAI announced the completion of its capital restructuring on October 29, splitting itself into two parts: a non-profit foundation and a for-profit company.
The non-profit foundation will legally control the for-profit part and must also consider the public interest. However, it can still freely raise funds or acquire companies. The foundation will own 26% of the shares of this for-profit company and hold a warrant. If the company continues to grow, the foundation can also obtain additional shares.
In addition to OpenAI, NVIDIA also invested in Elon Musk's xAI in 2025. xAI's current financing round has been expanded to $20 billion: approximately $7.5 billion raised through equity, and up to $12.5 billion through debt via a special purpose vehicle (SPV).
The SPV works like this: it uses the raised funds to purchase NVIDIA's high-performance processors, then leases those processors to xAI.
These processors will be used in xAI's Colossus 2 project. The first-generation Colossus is xAI's supercomputing data center in Memphis, Tennessee. The first-generation Colossus project has deployed 100,000 NVIDIA H100 GPUs, making it one of the largest AI training clusters in the world. Now, xAI is building Colossus 2, which plans to expand the number of GPUs to hundreds of thousands or more.
On September 18, NVIDIA also announced that it will invest $5 billion in Intel and establish a deep strategic partnership. NVIDIA will subscribe to newly issued common shares of Intel at a price of $23.28 per share, with a total investment of $5 billion. After the transaction is completed, NVIDIA will hold approximately 4% of Intel's shares and become an important strategic investor.
03
Of course, Jensen Huang covered much more at this GTC.
For example, NVIDIA launched several open-source AI model families, including Nemotron for digital AI, Cosmos for physical AI, Isaac GR00T for robotics, and Clara for biomedical AI.
At the same time, Jensen Huang launched the DRIVE AGX Hyperion 10 autonomous driving development platform. This is a platform for Level 4 autonomous driving, integrating NVIDIA's computing chips and a complete sensor suite, including lidar, cameras, and radar.
NVIDIA also launched the Halos certification program, the industry's first system for evaluating and certifying the safety of physical AI, specifically targeting autonomous vehicles and robotics technology.
The core of the Halos certification program is the Halos AI system, the industry's first laboratory recognized by the ANSI certification committee. ANSI is the American National Standards Institute, and its certification has high authority and credibility.
The system's task is to use NVIDIA's physical AI to verify whether autonomous driving systems meet the standards. Companies such as AUMOVIO, Bosch, Nuro, and Wayve are the first members of the Halos AI system inspection laboratory.
To promote Level 4 autonomous driving, NVIDIA released a multimodal autonomous driving dataset collected from 25 countries, which contains 1700 hours of camera, radar, and lidar data.
Jensen Huang said that the value of this dataset lies in its diversity and scale. It covers different road conditions, traffic rules, and driving cultures, providing a basis for training more general autonomous driving systems.
However, Jensen Huang's blueprint goes far beyond this.
He announced a series of collaborations with US government laboratories and leading companies at the GTC, aiming to build the US AI infrastructure. Jensen Huang said that we are at the dawn of the AI industrial revolution, which will define the future of every industry and country.
The highlight of this cooperation is the collaboration with the US Department of Energy. NVIDIA is helping the Department of Energy build two supercomputing centers, one at Argonne National Laboratory and the other at Los Alamos National Laboratory.
Argonne Laboratory will receive a supercomputer called Solstice, which is equipped with 100,000 NVIDIA Blackwell GPUs. What does 100,000 GPUs mean? This will be the largest AI supercomputer in the history of the Department of Energy. There is also a system called Equinox, equipped with 10,000 Blackwell GPUs, which is expected to be put into use in 2026. Together, these two systems can provide 2200 exaflops of AI computing performance.
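The 2,200-exaflop figure is easy to sanity-check. Assuming roughly 20 petaflops of low-precision (FP4-class) AI throughput per Blackwell GPU — our assumption, not a number from the announcement — the two systems' 110,000 GPUs line up with the quoted total:

```python
# Back-of-envelope check of the quoted 2,200 exaflops, assuming roughly
# 20 petaflops of low-precision AI throughput per Blackwell GPU
# (the per-GPU figure is our assumption, not from the announcement).

solstice_gpus = 100_000
equinox_gpus = 10_000
pflops_per_gpu = 20  # assumed FP4-class AI throughput per GPU

# 1 exaflop = 1,000 petaflops
total_exaflops = (solstice_gpus + equinox_gpus) * pflops_per_gpu / 1000
print(total_exaflops)  # 2200.0
```

In other words, the headline number is consistent with sparse low-precision AI math, not with traditional FP64 scientific-computing flops.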
Paul Kearns, director of Argonne Laboratory, said that these systems will redefine performance, scalability, and scientific potential. What will they use this computing power for? From materials science to climate modeling, from quantum computing to nuclear weapon simulation, all require this level of computing power.
In addition to government laboratories, NVIDIA is also building an AI factory research center in Virginia. The special thing about this center is that it is not just a data center but an experimental field. NVIDIA will test something called Omniverse DSX here, which is a blueprint for building a gigawatt-scale AI factory.
An ordinary data center may require only dozens of megawatts of power, while a gigawatt is comparable to the output of a medium-sized nuclear power plant.
The core idea of the Omniverse DSX blueprint is to make the AI factory a self-learning system. AI agents will continuously monitor power, cooling, and workload and automatically adjust parameters to improve efficiency. For example, when the grid load is high, the system can automatically reduce power consumption or switch to energy storage battery power supply.
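The closed-loop policy described above can be sketched as a tiny decision rule. This is a hypothetical illustration of the idea, not the Omniverse DSX implementation; the thresholds and mode names are invented:

```python
# Illustrative sketch of a grid-aware power policy (hypothetical, not the
# Omniverse DSX implementation): an agent samples grid load and battery
# charge, then picks how the AI factory should draw power.

def power_policy(grid_load: float, battery_charge: float) -> str:
    """Choose a power mode from grid load (0-1) and battery charge (0-1)."""
    if grid_load < 0.8:
        return "grid-full-power"   # grid has headroom: run flat out
    if battery_charge > 0.3:
        return "battery-power"     # grid stressed: draw down storage
    return "grid-throttled"        # grid stressed, battery low: shed load

print(power_policy(0.5, 0.9))    # grid-full-power
print(power_policy(0.95, 0.9))   # battery-power
print(power_policy(0.95, 0.1))   # grid-throttled
```

A real controller would of course optimize continuously over power, cooling, and workload placement rather than switch between three discrete modes, but the structure — sense, decide, actuate — is the same.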
This intelligent management is crucial for gigawatt-scale facilities because electricity and cooling costs will be astronomical.
The vision is grand, and by Jensen Huang's own account it will take three years to realize. The AI-RAN tests will not start until 2026, autonomous vehicles based on DRIVE AGX Hyperion 10 will not hit the road until 2027, and the Department of Energy's supercomputers will also come online in 2027.
NVIDIA holds the trump card of CUDA and controls the de facto standard of AI computing. From training to inference, from data centers to edge devices, from autonomous driving to biomedicine, NVIDIA's GPUs are everywhere. The investments and collaborations announced at this GTC further consolidate this position.
This article is from the WeChat official account “Facing AI”. Author: Miao Zheng, Editor: Wang Jing. Republished by 36Kr with permission.