
New Narrative: Space Computing Power

Jinduan, 2025-12-16 08:32
The ultimate utopia of computing power?

According to a December 5 report by The Wall Street Journal, SpaceX is preparing a new round of stock issuance that could value the company at a staggering $800 billion, double its valuation of just five months ago.

Elon Musk's response to this market rumor was strategically ambiguous. He denied the reported fundraising, but emphasized SpaceX's sustained positive cash flow and its twice-a-year stock-buyback program, sending the market a signal of financial health.

When discussing the core drivers of the valuation, Musk linked it explicitly to progress on SpaceX's two pillar projects, Starship and Starlink. In particular, he pointed out that securing global rights to radio spectrum for direct-to-device (D2D) satellite-to-phone communication will be the key to unlocking a potential market worth trillions of dollars.

This valuation expectation has jolted the capital markets. If realized, SpaceX would not only surpass OpenAI as the world's most valuable "unicorn"; its scale would rival that of a technology sovereign wealth fund. Ranked among the components of the S&P 500, SpaceX would sit in 13th place, and its market value would exceed the combined value of the six largest US defense contractors, including Lockheed Martin and Raytheon, underscoring how capital now treats commercial spaceflight as a national-level strategic industry.

Even more worthy of close analysis is that this valuation narrative clearly outlines a grand blueprint that goes beyond traditional satellite internet:

Orbital computing, or "space computing power".

01 Musk's Latest Ambitious Plan

Musk revealed that SpaceX is planning to enter the field of orbital data centers. The logic behind this points to a growing bottleneck on the ground: it is becoming increasingly difficult to find cheap, sustainable, and massive power resources to run large AI models. So, space has become the new promised land.

In Musk's vision, deploying a large number of AI computing units directly in space is expected to become the "fastest and most feasible way to expand computing power" in the next three to four years.

He offered a striking quantitative outlook: if SpaceX can launch millions of tons of payload into low-Earth orbit each year, with each satellite carrying about 100 kilowatts of dedicated AI compute, the computing power added each year could reach 100 gigawatts (GW), several times the combined capacity of hundreds of today's largest data centers.
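A quick back-of-the-envelope calculation shows how these figures hang together. The annual payload mass and the per-satellite mass below are illustrative assumptions (only the "millions of tons" and "100 kW per satellite" figures come from the article):

```python
# Sanity check of the quoted outlook; inputs are assumptions, not SpaceX figures.
annual_payload_tons = 2_000_000   # "millions of tons" per year, assumed 2 million
satellite_mass_tons = 2.0         # assumed mass of one compute satellite
power_per_sat_kw = 100            # ~100 kW of AI compute per satellite (from the article)

satellites_per_year = annual_payload_tons / satellite_mass_tons
added_power_gw = satellites_per_year * power_per_sat_kw / 1e6  # kW -> GW

print(f"{satellites_per_year:,.0f} satellites/year -> {added_power_gw:,.0f} GW/year")
# With these assumptions: 1,000,000 satellites/year -> 100 GW/year
```

Under these assumed numbers the arithmetic does reach the 100 GW/year the article quotes; a heavier satellite or smaller launch cadence scales the result down proportionally.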

Although this long-term model omits many engineering details, the theoretical advantages it points to are extremely attractive: orbital data centers require almost no manual maintenance; their energy comes from the inexhaustible, stable solar flux available in space; and waste heat can be radiated passively into the near-absolute-zero cosmic background, eliminating the roughly 40% of energy that ground-based data centers spend on cooling.

In addition, these compute-carrying satellites can form an intelligent network through inter-satellite laser links, creating a distributed, dynamically schedulable "orbital AI cloud" that integrates seamlessly with the existing Starlink communication network, building a space-based infrastructure that unifies computing and communication.

02 The Ultimate Utopia for Computing Power?

Space provides a physical environment for large - scale computing that is difficult to replicate on Earth. The background temperature is about -270 degrees Celsius, close to absolute zero, which allows the waste heat generated by electronic devices to be efficiently radiated directly into deep space.

In contrast, ground-based data centers rely on large-scale air conditioning, chilled-water units, and fan systems for cooling, which typically account for 30% to 40% of total energy consumption. In space, the additional energy cost of passive radiative cooling is almost zero. Some analyses (such as Starcloud's projection) suggest that the all-in energy cost of a space data center could fall to one-tenth of the ground-based figure.

Of course, space heat dissipation is not free. To reject the heat generated by computing chips such as GPUs as infrared radiation, large-area radiator panels must be deployed: a hyperscale orbital data center with exaFLOP-level computing power may require several square kilometers of radiator area. This poses an epic challenge for the satellite's structural design, materials technology, and orbital deployment.
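The "several square kilometers" figure can be checked with the Stefan-Boltzmann law, which governs how much heat a surface radiates. The emissivity and radiator temperature below are illustrative assumptions, not engineering data from any of the companies mentioned:

```python
# Rough radiator sizing via the Stefan-Boltzmann law: P = eps * sigma * A * T^4.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W/(m^2 K^4)
emissivity = 0.9      # assumed radiator emissivity
T_radiator = 300.0    # assumed radiator temperature in kelvin (~27 C)
waste_heat_w = 1e9    # reject 1 GW of waste heat

# Area needed, radiating from one side and neglecting the ~3 K sky background:
area_m2 = waste_heat_w / (emissivity * SIGMA * T_radiator**4)
print(f"~{area_m2 / 1e6:.1f} km^2 of radiator per GW rejected")
```

At these assumed values the answer is roughly 2.4 km² per gigawatt, consistent with the article's square-kilometer scale; running the radiator hotter shrinks the area with the fourth power of temperature, but forces the chips to run hotter too.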

However, even accounting for these factors, space retains a huge inherent advantage in heat-dissipation efficiency.

An even greater energy dividend comes from the sun. In low-Earth orbit, solar energy density is a stable ~1361 watts per square meter, unaffected by atmospheric attenuation, day-night cycles (with suitable orbit design, illumination can be nearly continuous), or weather. By contrast, even in the best desert regions on Earth, the annual average effective solar flux is only about one-fifth of the orbital figure.

In terms of the application paradigm, deploying computing power on a satellite constellation orbiting the Earth essentially creates a globally covered, low-latency edge-computing platform in space.

Because the constellation continuously covers the globe, users anywhere, including traditional network dead zones such as the open ocean and the poles, can always reach a nearby computing node. Data no longer needs to shuttle through thousands of kilometers of ground-based fiber, so end-to-end latency could drop by an order of magnitude. This would not only eliminate signal dead zones but also open new possibilities for latency-critical applications such as autonomous driving, remote surgery, immersive metaverse experiences, and high-frequency financial trading.
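The latency argument reduces to light travel time. The sketch below compares the round trip to an overhead Starlink-class satellite with a long terrestrial fiber route; the fiber distance and refractive index are illustrative assumptions:

```python
# One-way and round-trip signal times: overhead LEO satellite vs long fiber route.
C_VACUUM = 299_792_458        # m/s, speed of light in vacuum
C_FIBER = C_VACUUM / 1.47     # light slows in glass (assumed refractive index ~1.47)

leo_altitude_m = 550e3        # Starlink-class orbit, satellite directly overhead
fiber_route_m = 6_000e3       # assumed 6,000 km intercontinental fiber path

leo_rtt_ms = 2 * leo_altitude_m / C_VACUUM * 1000
fiber_rtt_ms = 2 * fiber_route_m / C_FIBER * 1000
print(f"LEO round trip:   {leo_rtt_ms:.1f} ms")
print(f"fiber round trip: {fiber_rtt_ms:.1f} ms")
```

Under these assumptions the overhead-satellite round trip is a few milliseconds against tens of milliseconds over long fiber, which is the order-of-magnitude gap the article describes; a satellite near the horizon, or multi-hop laser routing, would narrow it.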

Currently, SpaceX firmly dominates global satellite-launch capacity thanks to its reusable rockets, accounting for about 90% of launched mass. As competitors such as Blue Origin (New Glenn) and Rocket Lab (Electron/Neutron) mature, and as China's commercial spaceflight sector develops rapidly, the global launch market is entering a new growth cycle. Economies of scale should keep driving down the cost per kilogram to orbit, removing the economic obstacles to deploying ever-larger compute-satellite clusters.

03 The Thorny Road under the Bright Ideal

However, to turn the blueprint into reality, space computing must clear severe challenges spanning technology and governance.

Technical feasibility is the primary hurdle:

The problem of radiation hardening: space is filled with high-energy cosmic rays and charged particles that can cause bit flips, latch-ups, and even permanent damage in unprotected integrated circuits. Dedicated radiation-hardened (rad-hard) chips solve the reliability problem, but their manufacturing processes often lag consumer-grade chips by several generations, leaving them far slower and far more expensive. Striking a balance between commercial high-performance computing hardware (such as NVIDIA's H100) and the necessary radiation protection is the core engineering problem.
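One classic way to tolerate radiation upsets with commercial chips, rather than rad-hard silicon, is triple modular redundancy (TMR): run the computation three times and take a bitwise majority vote, so a single flipped bit in one copy cannot reach the output. A minimal illustrative sketch (not any company's actual flight software):

```python
# Triple modular redundancy: a single radiation-induced bit flip in one
# of three redundant results is masked by a bitwise majority vote.

def majority_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority of three redundant integer results."""
    return (a & b) | (a & c) | (b & c)

clean = 0b1011_0101
flipped = clean ^ (1 << 3)   # simulate a single-event upset in one copy

recovered = majority_vote(clean, clean, flipped)
assert recovered == clean
print("single upset masked:", bin(recovered))
```

The cost is the trade the article alludes to: triple the hardware (or triple the runtime) in exchange for using fast commercial parts, whereas rad-hard chips pay in performance and price instead.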

On-orbit maintenance and reliability: once a satellite fails, manual repair is currently all but impossible. This demands either very high system reliability or a replaceable, modular design. After large-scale deployment, managing the end-of-life de-orbiting of satellites to avoid creating space debris is another major challenge.

Energy and heat management at scale: as noted above, gigawatt-level computing power means gigawatt-level power consumption and waste heat. Designing lightweight, high-efficiency, ultra-large deployable solar arrays and radiators is complex systems engineering spanning materials science, structural mechanics, and orbital dynamics.

Network interconnection and latency: although inter-satellite laser links can provide high bandwidth, their dynamic networking, routing optimization, and the stability of satellite-to-ground communication, especially under adverse space weather, still require extensive validation.
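The routing problem above can be modeled as a graph whose nodes are satellites and whose edges are laser links weighted by propagation delay, with paths chosen by a shortest-path algorithm such as Dijkstra. The five-satellite topology and delays below are entirely made up for illustration:

```python
# Toy model of inter-satellite routing: Dijkstra over a hypothetical
# 5-satellite ring with one cross-link; edge weights are delays in ms.
import heapq

def shortest_delay(links, src, dst):
    """Dijkstra over an adjacency dict {node: [(neighbor, delay_ms), ...]}."""
    best = {src: 0.0}
    queue = [(0.0, src)]
    while queue:
        delay, node = heapq.heappop(queue)
        if node == dst:
            return delay
        if delay > best.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, hop in links.get(node, []):
            cand = delay + hop
            if cand < best.get(nbr, float("inf")):
                best[nbr] = cand
                heapq.heappush(queue, (cand, nbr))
    return float("inf")

links = {
    "S1": [("S2", 4.0), ("S5", 4.0)],
    "S2": [("S1", 4.0), ("S3", 4.0), ("S4", 6.0)],  # cross-link S2-S4
    "S3": [("S2", 4.0), ("S4", 4.0)],
    "S4": [("S3", 4.0), ("S5", 4.0), ("S2", 6.0)],
    "S5": [("S4", 4.0), ("S1", 4.0)],
}
print(shortest_delay(links, "S1", "S4"))  # 8.0 ms, via S5
```

The real difficulty the article points to is that in an actual constellation this graph changes every few seconds as satellites move, so routes must be recomputed or predicted continuously, which static shortest-path tools do not address on their own.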

Regulation and governance are another difficult area:

Competition for spectrum resources: both direct-to-device (D2D) links and inter-satellite communication consume scarce radio spectrum. The International Telecommunication Union (ITU) coordination process is lengthy, and regulatory policies vary by country. In particular, coordinating spectrum compatibility and interference with existing terrestrial 5G/6G networks will be a long-term contest.

Orbital and space safety: low-Earth-orbit resources are finite. With tens of thousands of computing satellites added, collision risk rises sharply, placing unprecedented demands on space traffic management (STM). International rules on who may deploy, how collisions are avoided, and how liability is defined are still lacking.

Data sovereignty and security: when data is stored and processed in a "space cloud" that crosses national borders, questions of judicial jurisdiction, data privacy protection (such as GDPR), and the regulation of cross-border data flows touching national security become thorny international political and legal issues.

04 The Competition Begins and the Ecosystem Emerges

Despite the huge challenges, capital and technology giants have already taken action. Institutions such as Morgan Stanley have begun to identify the major players in this emerging field in their research reports, and an initial ecosystem centered around "space computing power" is taking shape.

Pioneering startups:

Starcloud is SpaceX's most direct potential competitor. The company raised more than $20 million in seed funding in 2024 and launched the Starcloud-1 technology-validation satellite in November this year. The satellite carries an NVIDIA H100 GPU and Google's lightweight open-source model Gemma, with the goal of training a NanoGPT model in space. On December 11, the company announced the successful completion of the first large-language-model training run in orbit, a crucial step in concept validation.

Axiom Space, drawing on its experience operating commercial space stations, has added orbital data centers to its product roadmap and aims to launch its first free-flying nodes by the end of 2025. Its advantage lies in the potential to use future commercial space stations as larger-scale, maintainable hosting platforms for compute modules.

Lonestar Data Holdings has chosen a more imaginative route: a lunar data center. The company has completed multiple storage tests on the International Space Station and, in February this year, sent a small data-storage payload to the moon aboard an Intuitive Machines lunar lander. Although the mission ended prematurely after landing, it demonstrated that the relevant technology can operate, at least briefly, in a deep-space environment. Lonestar's vision is to make the moon the "ultimate offshore backup center" for Earth's data.

Technology giants' layout:

Google officially unveiled Project Suncatcher in November 2025, planning to build a space-based AI computing cluster around its in-house tensor processing units (TPUs). Its roadmap calls for launching two prototype satellites in early 2027 for technology validation, and envisions that by the early 2030s, once fully reusable rockets drive launch costs down sharply, the cost of space computing will be comparable to that on the ground.

NVIDIA, as a core compute supplier, is deeply involved. Its high-performance GPUs (such as the already-launched H100) are the preferred hardware for space computing. Through its Inception accelerator program and other ecosystem collaborations, it works closely with companies like Starcloud to jointly define a standard hardware architecture for space computing. Whether its CUDA ecosystem and full AI software stack can adapt to the space environment will also be the software foundation on which the whole industry develops.

Overall, although more players are entering the field, the space-computing industry remains in an extremely early "technological exploration" stage. The exceptionally high technical, capital, and regulatory thresholds of the aerospace sector mean a fiercely competitive market is unlikely to emerge in the short term.

The curtain has been lifted on space computing, but this is far from a simple business competition. It reflects a deeper logic of civilizational evolution: as Earth's physical limits grow ever more apparent, humanity is extending its infrastructure blueprints toward the stars. SpaceX, Google, NVIDIA, and China's aerospace cluster are all pointing not merely to a far-reaching computing network, but to another extension of human intelligence and will on a cosmic scale.

This article is written based on publicly available information and is for information exchange only. It does not constitute any investment advice.

This article is from the WeChat official account "Jinduan" (ID: jinduan006). The author is Siqi, and it is published by 36Kr with permission.