Earth can hardly support AI anymore. Google and NVIDIA are turning to space, and the biggest beneficiary may be Elon Musk.
Google has just officially launched its data-center moonshot, aiming to move its computing power into space, and gave the project a suitably cool name: Project Suncatcher.
Google's idea is simple: instead of competing for increasingly scarce resources on Earth, tap solar energy directly in space. The brand-new moonshot has a single goal: to build scalable, solar-powered AI infrastructure in space.
A few days ago, OpenAI CEO Sam Altman and Microsoft CEO Satya Nadella said on a podcast:
My problem today isn't the chip supply; the fact is, I don't have enough warm shells to plug them into.
It sounds like a humblebrag. After all, before this AI wave, we assumed computing power was everything.
Altman and Nadella
But as Altman said on the podcast, the future of AI depends on breakthroughs in energy. Ordering mountains of AI chips is useless if the data centers and power supply needed to host them can't keep up.
How extreme is AI's power consumption? According to the International Energy Agency (IEA), by 2030 global data-center electricity consumption is expected to rival that of the entire country of Japan.
And it's not just electricity but water, too. World Economic Forum data show that a 1-megawatt data center consumes as much water per day as roughly 1,000 residents of a developed country.
The maximum power consumption of a single NVIDIA H100 chip can reach 700 W
Over the past five years, demand for data centers has soared, growing far faster than new power-generation capacity can be planned and built.
Google's answer to the same energy problem is to launch a constellation of solar-powered satellites carrying its self-developed TPU chips (Google's AI accelerators, analogous to NVIDIA's GPUs) and assemble an "orbital AI data center" in space.
Is space really cheaper and more efficient than Earth?
Why space? Google's reasons are straightforward.
8x the efficiency: in the right orbit, a satellite's solar panels can be up to 8 times as productive as the same panels on Earth.
24/7 uninterrupted power: in that orbit there is essentially no night and there are no clouds, so unlike ground-based panels, the arrays generate electricity continuously.
Zero land and water consumption: in space, a data center occupies none of Earth's limited land and needs none of the large volumes of cooling water.
Musk posted on X that AI satellites in space can protect the Earth
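The 8x figure is easy to sanity-check with rough numbers. The values below are illustrative assumptions, not figures from Google's paper: the solar constant above the atmosphere, near-continuous sunlight in a dawn-dusk orbit, and a good ground site averaging about 4 equivalent full-sun hours per day once night, clouds, and the atmosphere are accounted for.

```python
# Back-of-envelope check of the "8x" solar claim (illustrative assumptions).
SOLAR_CONSTANT_W_M2 = 1361   # irradiance above the atmosphere
ORBIT_SUN_HOURS = 24         # dawn-dusk orbit: almost no eclipse
GROUND_PEAK_W_M2 = 1000      # standard test-condition irradiance
GROUND_SUN_HOURS = 4         # assumed equivalent full-sun hours/day on the ground

orbit_wh_per_m2_day = SOLAR_CONSTANT_W_M2 * ORBIT_SUN_HOURS    # daily energy in orbit
ground_wh_per_m2_day = GROUND_PEAK_W_M2 * GROUND_SUN_HOURS     # daily energy on Earth

ratio = orbit_wh_per_m2_day / ground_wh_per_m2_day
print(f"orbit/ground daily energy ratio ~ {ratio:.1f}x")  # ~ 8.2x
```

Under these assumptions the same panel collects roughly eight times as much energy per day in orbit, which is consistent with Google's headline number.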
Companies like Apple, Huawei, Tencent, and China Mobile have set up data centers in Guizhou. Image source: Xinhua News Agency
Data centers on Earth are already nearing an energy bottleneck. They are built in Iceland and Norway for the cold climate and in the Nevada desert for cheap electricity; in China, most large companies site theirs in places like Guizhou and Zhongwei, Ningxia, relying on the local environment for cooling.
The space environment, however, is far harsher than Earth's. Google's research paper lays out the current difficulties, and its proposed workarounds, in detail.
To send AI "to the sky", Google has to solve three major problems
Problem 1: A "local area network" in space?
AI training requires massive numbers of chips working in concert, with extremely demanding requirements on inter-chip bandwidth and latency. On Earth we have fiber-optic cables for high-speed data transmission. What about in space?
Google's solution: Formation flight + laser communication.
They plan to fly the satellites in very tight formation, only a few kilometers apart or less.
In a simulated 81-satellite constellation, each satellite carries a solar array, a radiative cooling system, and a high-bandwidth optical communication module, and the inter-satellite distance varies dynamically between just 100 and 200 meters.
At such close range, the satellites can interconnect at high speed via free-space optical inter-satellite links (FSO ISLs). Google revealed in its paper that a demonstration has already achieved a bidirectional transmission rate of 1.6 Tbps.
Problem 2: Cosmic "radiation"?
The space environment is brutally harsh. The same sun that provides energy also emits high-energy particles (radiation) that can be devastating to cutting-edge chips.
Image source: NASA
Google's solution: Endurance.
They sent their Cloud TPU v6e (Trillium) chips to the laboratory and bombarded them with a 67 MeV proton beam.
The result was "surprisingly radiation-hard": the TPU's most sensitive component, its high-bandwidth memory (HBM), only began showing anomalies after an accumulated dose of 2 krad(Si), nearly 3 times the expected dose for a 5-year mission (750 rad(Si)).
This suggests Google's TPUs can run continuously in low-Earth orbit for 5 years without permanent radiation damage.
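The margin behind that claim is simple arithmetic on the two doses quoted above:

```python
# Radiation margin for the TPU's HBM, using the article's two figures.
tolerated_dose_rad = 2000   # 2 krad(Si): dose at which HBM anomalies began
mission_dose_rad = 750      # expected dose for a 5-year low-Earth-orbit mission

margin = tolerated_dose_rad / mission_dose_rad
print(f"margin ~ {margin:.2f}x the expected 5-year dose")  # ~ 2.67x
```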
Google plans to collaborate with Planet to launch two prototype satellites before 2027 to test the actual operating environment.
The official website of Planet, a company specializing in satellite imagery and Earth-data analysis
Problem 3: Data transmission back to Earth
Inter-satellite links make chip-to-chip transmission in space fast and efficient. But once the computation is finished in orbit, how do the results get back to Earth at high speed?
This is a major challenge that Google admitted in its paper and is yet to be solved.
Latency problem: the dawn-dusk sun-synchronous orbit Google chose maximizes solar exposure but, as the paper admits, increases latency to some ground locations.
Bandwidth bottleneck: the current record for space-to-ground optical communication, set by NASA in 2023, stands at 200 Gbps.
200 Gbps sounds fast, but for a space AI data center, this "pipe" is far from enough.
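A rough illustration of why. Assuming, purely for illustration, that a space data center needed to move 1 PB per day (a modest figure for a large AI cluster), a single record-setting 200 Gbps link would be saturated for hours:

```python
# Illustrative downlink arithmetic; the 1 PB/day workload is an assumption.
LINK_GBPS = 200          # NASA's 2023 space-to-ground optical record
daily_data_pb = 1.0      # assumed daily data volume to move

bits_to_move = daily_data_pb * 8e15              # 1 PB = 8e15 bits
seconds_needed = bits_to_move / (LINK_GBPS * 1e9)
hours_needed = seconds_needed / 3600
print(f"~ {hours_needed:.1f} h of continuous 200 Gbps per petabyte")  # ~ 11.1 h
```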
Yet beneath all these hard technical challenges (the orbital network, radiation, ground links) lies a more fundamental obstacle, one that decides whether the rest is worth solving at all: the cost of getting to space.
Cost used to be the biggest obstacle of all: sending one kilogram of anything into space once cost more than a kilogram of gold.
Comparison of launch costs for a series of low-Earth-orbit satellites
Google calculated in its paper that if SpaceX's launch cost falls to $200/kg (expected around 2035), the unit power cost of a space data center could reach parity with ground-based ones, about $810/kW/year, squarely within the $570–3000/kW/year range of U.S. data centers.
In other words, when rockets become cheap enough, space will be more suitable for building data centers than Earth.
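How $200/kg could translate into roughly $810/kW/year can be sketched with back-of-envelope arithmetic. The mass-per-kilowatt and amortization figures below are illustrative assumptions chosen to show the shape of the calculation, not values from Google's paper:

```python
# Launch-cost component of a space data center's power cost (illustrative).
price_per_kg = 200        # assumed SpaceX launch price around 2035 ($/kg)
kg_per_kw = 20            # ASSUMED satellite mass per kW of delivered power
lifetime_years = 5        # ASSUMED amortization period

launch_cost_per_kw_year = price_per_kg * kg_per_kw / lifetime_years
print(f"${launch_cost_per_kw_year:.0f}/kW/year from launch alone")  # $800/kW/year

# Lands inside the quoted $570-3000/kW/year range for U.S. ground data centers.
assert 570 <= launch_cost_per_kw_year <= 3000
```

The point of the sketch is the sensitivity: the result scales linearly with launch price, which is why the whole economic case hinges on cheap rockets.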
The reality, however, is that today's launch prices are still more than ten times that ideal.
Who can make this happen? SpaceX
Google explicitly adopted SpaceX's learning-curve assumption in its paper: every doubling of cumulative launched mass cuts unit launch cost by 20%.
SpaceX launch cost per kilogram of payload across rocket generations since the first successful Falcon 1 flight, based on the lowest prices achieved
From the Falcon 1 to the Falcon Heavy, SpaceX has brought launch costs down from $30,000/kg to $1,800/kg; Starship's target is $60/kg with 10x reuse, potentially as low as $15/kg in the extreme case.
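Under that learning-curve rule, each doubling of cumulative launched mass multiplies cost by 0.8, so one can ask how many doublings separate today's roughly $1,800/kg from the $200/kg target:

```python
import math

# Sketch of the learning-curve assumption: each doubling of cumulative
# launched mass cuts unit cost by 20% (cost *= 0.8 per doubling).

def cost_after_doublings(c0, n):
    """Unit launch cost after n doublings of cumulative launched mass."""
    return c0 * 0.8 ** n

# Doublings needed to go from ~$1,800/kg (Falcon Heavy) to $200/kg.
doublings_needed = math.log(200 / 1800) / math.log(0.8)
print(f"~ {doublings_needed:.1f} doublings of cumulative mass")  # ~ 9.8

assert abs(cost_after_doublings(1800, doublings_needed) - 200) < 1e-6
```

Roughly ten doublings of total mass launched, about a thousandfold increase, which is why the 2035 timeline assumes Starship flying at very high cadence.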
This means that SpaceX is very likely to be the company that supports Google's economic model for the space data center.
If NVIDIA monopolizes the GPU market on Earth, SpaceX may one day monopolize orbital computing infrastructure.
On Earth, NVIDIA sells GPUs; in space, SpaceX sells orbits.
A few days before Google published its paper, on November 2, an NVIDIA H100 GPU was sent into space for the first time.
The H100 flew on a satellite from a startup called Starcloud, and its on-orbit computing power is reportedly 100 times that of any previous space computer.
Starcloud, founded in 2024, has been dedicated to building data centers in space from day one and counts NVIDIA and Y Combinator among its investors.
Their pitch is more immediate: real-time data processing in orbit. Starcloud's CEO gave an example: a SAR (synthetic aperture radar) satellite generates enormous volumes of raw data. Instead of downlinking hundreds of gigabytes of raw imagery, process it in orbit with the H100 and transmit back a result only about 1 KB in size, such as "a ship is at this location, moving at this speed".
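The payoff of that example is easy to quantify. The raw-scene size and radio downlink rate below are illustrative assumptions, with 500 GB standing in for the article's "hundreds of gigabytes":

```python
# Illustrative payoff of on-orbit SAR processing (assumed figures).
raw_bytes = 500e9          # assumed raw SAR scene size (~"hundreds of GB")
result_bytes = 1e3         # ~1 KB result ("ship at location X, speed Y")
downlink_mbps = 500        # assumed conventional radio downlink rate

reduction = raw_bytes / result_bytes
raw_seconds = raw_bytes * 8 / (downlink_mbps * 1e6)
print(f"data reduced {reduction:.0e}x; raw downlink would take {raw_seconds / 3600:.1f} h")
```

Under these assumptions the downlink shrinks by eight orders of magnitude, and hours of transmission become effectively instantaneous.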
When asked how any of this becomes feasible, Starcloud's CEO also pointed to Musk: their vision depends entirely on "the cost reduction brought by SpaceX's Starship".