
Jensen Huang is going all out: NVIDIA wants to send AI computing power into space, and even Elon Musk can't sit still.

智能Pro · 2026-03-24 08:26
AI is reshaping the real world once again.

On March 16 local time, NVIDIA held its GTC conference in California, where CEO Jensen Huang delivered a three-hour keynote. Huang unveiled a supercomputer built specifically for AI agents, along with 7 AI chips and 5 rack systems. He also spoke at length about the currently popular OpenClaw, noting that NVIDIA has launched multiple open-source models covering different niches.

However, among everything revealed in Huang's speech, what most piqued Xiaolei's interest was the "Space AI" project. Yes, you read that right: NVIDIA plans to move AI hardware into orbit. And Huang isn't just talking. NVIDIA's Thor chip has passed radiation-resistance certification, and the company is developing the NVIDIA Space-1 Vera Rubin space computer.

(Image source: NVIDIA)

At this point many readers are probably full of questions: what does AI have to do with space, and what exactly can an AI data center in orbit accomplish?

Once AI hardware reaches orbit, large models can run in space

NVIDIA's Space AI project is not actually new. In 2025, Starcloud, a space startup backed by NVIDIA, launched a satellite named Starcloud-1 into orbit. The satellite carried a stock commercial H100 GPU. The H100, released by NVIDIA in 2022, was not customized for the space environment; its expected lifespan in orbit is 5 years.

The satellite is compact, weighing only 60 kilograms. Yet it powered on successfully in orbit and ran Google's open-source large model Gemma, the first time a large model has run in space.

(Image source: Starcloud)

This experiment clearly gave NVIDIA more confidence, and Huang's plan to send AI to space is accelerating. The next chip NVIDIA plans to send up is Thor, a powerful System-on-a-Chip (SoC) that integrates the latest Blackwell GPU; its strong single-chip compute makes it well suited to edge computing. Thor was originally designed for cars and robots, and its high level of integration gives it an advantage in weight and volume, which also makes it a good fit for spaceflight.

(Image source: NVIDIA)

For a chip to work properly in space, however, it must tolerate the space environment, which is far harsher than the ground: solar flares, high-energy cosmic rays, charged particles, and so on. Thor passing the radiation-resistance test means it meets this prerequisite for going to space.

In addition, NVIDIA's IGX Thor and Jetson Orin platforms have also completed space-grade adaptation. The former is small and low-power, suitable for small satellites; the latter offers very high compute and a high-reliability design, suitable as the computing brain of satellites and even space stations.

Why send AI to space?

This is the question many people instinctively ask on hearing about NVIDIA's space project. To answer it, let's look at the concrete advantages and disadvantages of sending AI to space, and the application scenarios space AI can serve.

Let's start with the advantages.

First, building an AI data center in space has a very direct benefit: continuous access to effectively unlimited clean energy. In certain orbits, free of atmospheric attenuation and weather, satellites can stay in sunlight for long stretches, even 24 hours a day, harvesting solar energy without worrying about power shortages.
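A rough back-of-the-envelope comparison illustrates why orbit is attractive for solar power. All figures below are illustrative public estimates (the ~1361 W/m² solar constant, a ~20% ground capacity factor), not numbers from NVIDIA:

```python
# Illustrative comparison of daily solar energy reaching a 1 m^2 panel
# in a continuously sunlit orbit versus on the ground.
# All constants are rough public estimates, not figures from the article.

SOLAR_CONSTANT = 1361.0        # W/m^2, irradiance above the atmosphere
GROUND_PEAK = 1000.0           # W/m^2, typical clear-sky peak at sea level
GROUND_CAPACITY_FACTOR = 0.20  # night, weather, sun angle (~20% is typical)
ORBIT_SUNLIT_FRACTION = 1.0    # dawn-dusk orbits can stay lit ~24 h/day

orbit_daily = SOLAR_CONSTANT * ORBIT_SUNLIT_FRACTION * 24 / 1000   # kWh/m^2/day
ground_daily = GROUND_PEAK * GROUND_CAPACITY_FACTOR * 24 / 1000    # kWh/m^2/day

print(f"Orbit:  {orbit_daily:.1f} kWh/m^2/day")
print(f"Ground: {ground_daily:.1f} kWh/m^2/day")
print(f"Advantage: ~{orbit_daily / ground_daily:.1f}x")
```

Under these assumptions an orbital panel collects roughly six to seven times the daily energy of the same panel on the ground, before accounting for panel degradation or pointing losses.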

(Image source: NVIDIA)

Second, heat dissipation in space is, in some respects, easier. Ground data centers invest heavily in cooling, such as continuous fresh-water cooling; that is why many domestic data companies put their servers on the Yunnan-Guizhou Plateau, using the cool local climate to cut cooling costs, with "Guizhou on the Cloud" a typical example. In orbit, the side facing away from the sun is extremely cold, which in principle helps reject heat.

The disadvantages are just as obvious, though. Space is a vacuum: with no air there is no convection, so fans are useless. The sun-facing side of a device carrying an AI chip gets very hot, and large radiator panels are needed, with pumped loops similar to liquid cooling moving heat from the hot side to the radiators, which shed it by radiation.
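The scale of the radiator problem can be sketched with the Stefan-Boltzmann law, the physics behind radiative cooling: a surface in vacuum rejects heat at P = εσAT⁴. The emissivity, radiator temperature, and GPU power below are illustrative assumptions, not a real spacecraft design:

```python
# Rough radiator sizing for a space data center via the Stefan-Boltzmann law.
# In vacuum the only way to shed heat is radiation: P = eps * sigma * A * T^4.
# All parameter values here are illustrative assumptions.

SIGMA = 5.670e-8    # W/(m^2 K^4), Stefan-Boltzmann constant
EPSILON = 0.90      # emissivity of a typical radiator coating (assumed)
T_RADIATOR = 300.0  # K, radiator surface temperature, about 27 degrees C

def radiator_area(power_w: float) -> float:
    """One-sided radiator area (m^2) needed to reject power_w,
    ignoring absorbed sunlight and Earth's infrared glow."""
    return power_w / (EPSILON * SIGMA * T_RADIATOR ** 4)

# An H100-class GPU dissipates very roughly 700 W at full load.
print(f"1 GPU  (700 W):  {radiator_area(700):.1f} m^2")
print(f"1 rack (100 kW): {radiator_area(100_000):.1f} m^2")
```

Even under these optimistic assumptions, a single 100 kW rack needs on the order of a couple of hundred square meters of radiator, which is why heat rejection, not power, is often the harder engineering constraint for orbital data centers.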

Moreover, equipment in space is essentially unmaintainable. Once a satellite malfunctions in orbit, repair is basically impossible, so AI hardware sent to space needs a highly redundant design, which significantly increases cost.

Now let's look at the application scenarios.

First, it is worth noting that even before AI went to space, humans had already developed and exploited space to a considerable degree. The services closest to ordinary people are satellite positioning and satellite communication. Almost everyone uses satellite navigation, long an indispensable smartphone function, and satellite communication is also developing and spreading rapidly, gradually becoming a key way to fill ground-signal blind spots.

Satellite communication is also expanding from emergency, low-frequency use to routine, high-frequency use. Companies such as SpaceX even plan to let ordinary smartphones connect directly to low-orbit satellites acting as base stations. In addition, large numbers of remote-sensing satellites continuously beam captured data back to the ground.

(Image source: SpaceX)

In Xiaolei's view, the core purpose of sending AI to space is to shift space equipment from "transmitting data" to "transmitting conclusions". In the traditional model, satellites mainly send raw data down to Earth, where ground staff analyze and process it. If a satellite carries an AI chip, it can process the data in orbit and send back only filtered data or conclusions. This on-board processing greatly reduces the space-to-ground bandwidth pressure and markedly improves the efficiency of space-ground communication.

Sending AI chips to space essentially gives space equipment AI edge-computing capability. Similar applications are already common on the ground: operators add AI chips to base stations to support edge computing, so some of the data a base station collects can be processed locally instead of being sent back to the server center.
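The "transmit conclusions, not data" idea can be sketched in a few lines. In this toy model, an on-board classifier scores each captured frame and only flagged results are downlinked as short summaries; `run_model` is a hypothetical stand-in for a real on-board inference call, and the frame sizes and threshold are made up for illustration:

```python
# Toy sketch of on-board filtering: downlink short conclusions for flagged
# frames instead of every raw image. `run_model` is a hypothetical placeholder.

from dataclasses import dataclass

@dataclass
class Frame:
    frame_id: int
    raw_bytes: int  # size of the raw captured image

def run_model(frame: Frame) -> float:
    """Hypothetical on-board classifier; returns a detection score in [0, 1]."""
    return (frame.frame_id * 37 % 100) / 100  # deterministic placeholder

def downlink_plan(frames, threshold=0.8, summary_bytes=256):
    """Compare bytes downlinked for (raw imagery) vs (conclusions only)."""
    raw_total = sum(f.raw_bytes for f in frames)
    edge_total = sum(summary_bytes for f in frames if run_model(f) >= threshold)
    return raw_total, edge_total

frames = [Frame(i, raw_bytes=25_000_000) for i in range(100)]  # 25 MB images
raw, edge = downlink_plan(frames)
print(f"raw downlink:  {raw / 1e9:.1f} GB")
print(f"edge downlink: {edge / 1e6:.3f} MB")
```

With 100 frames, only the frames the model flags generate traffic, shrinking the downlink from gigabytes of imagery to kilobytes of conclusions, which is exactly the bandwidth saving the paragraph above describes.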

In recent years, with AI's rapid development and popularization, the trend of AI deeply influencing every industry has become very clear. As more and more devices and scenarios are rapidly AI-enabled, it is only natural for AI to enter space and endow space equipment with AI capabilities. As space AI finds wide application in the future, we will see more new scenarios explored.

Not only NVIDIA: more players are entering the space AI field

NVIDIA is not the only one sending AI to space. Where space is concerned, Elon Musk is never far away. One of Musk's major moves this year was to merge SpaceX and xAI, and he has floated a very ambitious plan: an orbital data-center constellation of up to one million satellites.

Recently, a filing SpaceX submitted to the U.S. Securities and Exchange Commission revealed a commercial price of $90 million per Starship launch. At a payload of 150 tons, that works out to about $600,000 per ton, or roughly $600 per kilogram. Sharply lower launch costs have fed Musk's even greater ambitions.
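The per-ton figure follows directly from the two quoted numbers, and dividing once more by 1000 gives the often-cited per-kilogram price:

```python
# Launch cost per ton and per kilogram from the quoted Starship figures.
STARSHIP_QUOTE_USD = 90_000_000  # commercial price per launch (from the filing)
PAYLOAD_TONS = 150               # payload per launch assumed by the article

cost_per_ton = STARSHIP_QUOTE_USD / PAYLOAD_TONS
cost_per_kg = cost_per_ton / 1000
print(f"${cost_per_ton:,.0f} per ton")  # -> $600,000 per ton
print(f"${cost_per_kg:,.0f} per kg")    # -> $600 per kg
```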

(Image source: SpaceX)

Internet giant Google has also announced a space AI project, though it is still at the concept stage. Google says it plans to put its self-developed AI chips on solar-powered satellites and use inter-satellite laser links for data interconnection, building a scalable, distributed AI data center out of many small satellites. Progress will not be fast, however: Google expects to launch two test satellites in 2027.

Domestic brands have joined in as well. The well-known home-appliance maker Dreame has announced its own space plan. At this year's AWE, Xinji Chuangyue, a chip company under Dreame, released the "Tianqiong" series of chips and said they were already in mass production. Xinji Chuangyue also announced its space computing plan: launching the "Yaotai" series of space computing boxes into low Earth orbit and building a supercomputing center there.

On March 16, Xinji Chuangyue's first Yaotai computing base station was launched aboard the Kuaizhou-11 Y7 carrier rocket from the Jiuquan Satellite Launch Center. The station is reportedly deployed on an optical remote-sensing satellite, and this launch is mainly meant to stress-test it in the space environment.

(Image source: Dreame)

Judging from the full-scenario computing product matrix Xinji Chuangyue has previously announced, Dreame plans to equip all its intelligent terminals with compute, covering phones, AI PCs, robots, and home appliances. In Dreame's vision, terminal devices should not rely solely on cloud AI but also have native compute on the device itself.

(Image source: Dreame)

Sending AI to space also serves Dreame's planned space-air-ground integration and rounds out its hardware-ecosystem network. Notably, Dreame's ambitions for the Yaotai space computing centers are enormous: it says the system will consist of two million computing satellites, an even bolder vision than Musk's.

One can imagine that once Dreame's space AI center is built, it could link up with satellite communication and connect to Dreame's own AI phones and other hardware, expanding its ecosystem, opening new business lines, and strengthening its overall competitive position.

Overall, although many brands are entering the space AI field, their approaches and motives differ. NVIDIA's high-profile announcement is, at bottom, about selling AI chips: the company has repeatedly stressed that its various chips meet space-grade standards and have passed radiation-resistance tests, showcasing its space AI solutions.

For SpaceX and Musk, building an AI data center in space is a natural step: SpaceX has low satellite-launch costs, the maturing Starlink meets the conditions for efficient space-ground communication, and xAI supplies the AI technology and applications. Dreame's push into chip-making and computing satellites aims to further strengthen its hardware-ecosystem advantage and prepare for an all-encompassing AI era.

How will space AI reshape the future world?

We can't yet draw a clear blueprint of the future space AI world, but Xiaolei believes it will have a profound impact in at least the following two respects.

1. Completely break the shackles of AI computing power.

As AI's demand for compute keeps exploding, the energy consumption of AI facilities is growing exponentially. In 2025, OpenAI CEO Sam Altman said in an interview that meeting the compute demands of AGI would eventually require building 17 dedicated nuclear power plants just to cover its energy consumption.

Huawei likewise stated in its 2025 report "Intelligent World 2035" that global demand for compute will soar 100,000-fold over the next decade, and that by 2035 global data centers will consume up to 1.5 trillion kilowatt-hours of electricity.

In other words, if AI keeps developing at its current pace, drawing ever more energy from Earth will gradually become unsustainable. Continuous solar energy in space then becomes the best new energy channel. Moving energy-hungry AI training and inference workloads that need no direct human intervention into space, and tapping the effectively infinite solar energy there, is one path for humanity into the AGI era.

2. Reshape satellite communication and inject a brain into the future 6G network.

In the future, satellites will no longer be mere mechanical signal-forwarding pipes but space edge-computing nodes packed with AI chips. This will not only greatly ease the bandwidth pressure of satellite-ground communication but also give rise to space applications we can't yet imagine. For example,