The Global Marathon of Space Computing Power: Three Routes, One Cornerstone
Recently, Yiwei Aerospace (Beijing) Technology Co., Ltd. (hereinafter "Yiwei Aerospace") completed an angel round of financing worth tens of millions. Investors in the round include Zifeng Capital and Tsinghua Holdings Jixin, with existing shareholder Linge Venture Capital increasing its investment. Proceeds will mainly fund the R&D iteration and engineering verification of core products.
As the first institutional investor in Yiwei Aerospace (Linge Venture Capital was the sole investor in the company's earlier seed round, also worth tens of millions), Linge Venture Capital has been tracking the space computing power field closely. This observation article draws on our in-depth industry research and extensive conversations with Dr. Xing Ruolin, founder and CEO of Yiwei Aerospace.
A Global Competition Unfolding
In 2025, space computing power is rapidly moving towards engineering verification globally.
In November, Starcloud launched its Starcloud-1 test satellite, carrying an NVIDIA H100 GPU, into low Earth orbit, a milestone that can be regarded as the first publicly disclosed demonstration of language-model training on a data-center-class GPU in orbit. In the same month, Google announced Project Suncatcher, proposing an in-orbit computing constellation of 81 TPU-equipped satellites, with prototype verification planned for 2027. Meanwhile, Elon Musk floated on X a technical route that builds space computing power on Starlink V3 satellites; the heated market discussion around SpaceX has drawn further outside attention to the route of expanding space computing power through communication constellations.
Domestically, the Three-Body Computing Constellation led by Zhejiang Lab completed deployment of its first batch of 12 satellites in May, publicly disclosing an in-orbit interconnected computing power of 5 POPS. In the same month, the first satellites of the second phase of Beijing University of Posts and Telecommunications' Tianyuan Constellation entered orbit. The space server v1, hyperspectral cameras, and 100 Gbps-class inter-satellite laser communication payloads carried on the Beiyou-2 and Beiyou-3 satellite platforms have completed in-orbit verification with phased results, opening up an integrated sensing-transmission-computing pipeline. The second phase plans to deploy 24 satellites in total for in-orbit technology verification and application exploration in aerospace computing, 6G networks, and intelligent remote sensing.
On May 17, 2025, the Beiyou-2 and Beiyou-3 satellites were successfully launched into space
The underlying demand for space computing power stems from a law established in the Internet era: data growth always outpaces transmission capacity. Computer networking textbooks carry a classic line: "Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway." It captures a counter-intuitive economic fact: beyond a critical data volume, physical transport becomes more cost-efficient than network transmission. AWS executives once offered an example: transmitting 1 EB of data over a 10 Gbps dedicated line would take about 26 years, while 10 Snowmobile trucks could do it in half a year.
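The arithmetic behind that example is easy to verify. A minimal back-of-the-envelope check in Python (our own calculation; the 100 PB-per-vehicle Snowmobile capacity is AWS's published figure, the rest is unit conversion):

```python
# Back-of-the-envelope check: 1 EB over a 10 Gbps line vs. trucks.
data_bits = 1e18 * 8              # 1 EB expressed in bits
line_rate = 10e9                  # 10 Gbps dedicated line, fully utilized
years = data_bits / line_rate / (365 * 24 * 3600)
print(f"Network transfer: {years:.1f} years")   # ~25.4 years, i.e. "about 26"

snowmobile_bytes = 100e15         # 100 PB per Snowmobile (AWS's stated capacity)
print(f"Snowmobiles needed: {1e18 / snowmobile_bytes:.0f}")   # 10 vehicles
```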
In space, this contradiction is pushed to the extreme. Satellites generate data far faster than they can transmit it: high-resolution remote sensing payloads have reached Gbps-class downlinks, yet daily data volumes reach the terabyte level or more. Meanwhile, the satellite-to-ground link is bounded by hard constraints of power, spectrum, and ground-station windows and cannot scale linearly with constellation size. As constellations grow from dozens to tens of thousands of satellites, the gap between data generation and downlink capacity will keep widening. There is only one way to break the deadlock: move the computing to the orbital nodes and process the data where it is generated.
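The same arithmetic applied to a single satellite shows how quickly the gap opens. A minimal illustration with hypothetical but representative numbers (the data volume, link rate, pass count, and pass duration below are our assumptions, not any operator's figures):

```python
# Illustrative daily downlink budget for one remote-sensing satellite.
# All figures are assumptions for illustration.
daily_data_tb = 5.0        # raw data generated per day, TB (assumed)
downlink_gbps = 1.5        # link rate during a ground-station pass (assumed)
passes_per_day = 8         # usable ground-station passes per day (assumed)
pass_seconds = 8 * 60      # usable contact time per pass (assumed)

downlink_tb = downlink_gbps * passes_per_day * pass_seconds / 8 / 1e3
print(f"{downlink_tb:.2f} TB downlinkable of {daily_data_tb} TB generated")
# ~0.72 TB, under 15% of what was produced; the rest must be dropped,
# buffered, or processed in orbit.
```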
This communication bottleneck is giving rise to demand for space computing power along two dimensions.
The first dimension is the endogenous demand of constellation operation (Computing for Space). As satellite networks evolve from simple forwarding systems into mobile distributed networks, with topology rearranged every 90 minutes and link switching occurring on a timescale of seconds, capabilities such as routing, resource allocation, and fault self-healing become indispensable if the network is to run as stably as a terrestrial operator's.
If the control plane remains locked entirely on the ground, every routing decision and every link switch requires a satellite-to-ground round trip, with communication delays ranging from tens of milliseconds to seconds. More seriously, once constellation scale passes a critical point, the dominant cost of the system shifts from launch and manufacturing to operations. SpaceX's Starlink constellation is approaching 10,000 satellites in orbit; if every satellite's anomalies had to be sent to the ground for manual judgment, the required operations and maintenance team would grow to an unbearable size.
Moving computing power to in-orbit nodes, upgrading satellites from passive relays into network nodes with autonomous decision-making, is the inevitable way to break this systemic constraint: complete status perception, routing decisions, and fault diagnosis in orbit, and send only compressed data and key decision results back to the ground, turning the satellite-to-ground downlink from a system-limiting bottleneck into a controllable resource. This trend is expected to accelerate with 6G space-air-ground integration: as satellites become native network elements of the mobile network, more protocol processing, edge computing, and intelligent scheduling will inevitably migrate on board.
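To make "downlink only the compressed results" concrete, here is a minimal sketch, our own illustration rather than any operator's flight software, of on-board telemetry triage; the field names and thresholds are invented for the example:

```python
# Minimal sketch of on-board telemetry triage: classify status locally and
# downlink only anomalies, not raw telemetry. Thresholds are invented.
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class Telemetry:
    bus_voltage: float     # volts
    panel_temp: float      # degrees C

def triage(t: Telemetry) -> Optional[dict]:
    """Return a compact downlink record only when the ground is needed."""
    alarms = []
    if t.bus_voltage < 26.0:
        alarms.append("LOW_BUS_VOLTAGE")
    if not -20.0 <= t.panel_temp <= 80.0:
        alarms.append("PANEL_TEMP_OUT_OF_RANGE")
    if not alarms:
        return None                          # nominal: downlink stays free
    return {"alarms": alarms, "snapshot": asdict(t)}

print(triage(Telemetry(bus_voltage=25.1, panel_temp=30.0)))
# Only this compact record goes to ground; a healthy satellite sends nothing.
```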
The second dimension is the extension of terrestrial applications into space (Space for Computing). Parallel to the endogenous demand of constellation operation runs the active pull of external applications on space-based computing power. The driving force is again the communication bottleneck, but the scenario logic differs:
First is the migration of cellular networks into space. Terrestrial mobile networks are approaching the Shannon limit: base-station density in urban cores has reached dozens per square kilometer and the marginal benefit of further densification is falling sharply, while nearly 3 billion people worldwide have never used mobile Internet services. The space-based network is a natural extension of the terrestrial cellular architecture: low-orbit satellites cover the globe from altitudes of 500 to 1,200 kilometers, and a single satellite's coverage diameter can reach thousands of kilometers.
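The coverage claim follows from simple spherical geometry. A short check (standard spherical-Earth formula; the altitudes and the 25-degree minimum elevation mask are our assumed inputs):

```python
# Coverage diameter from spherical-Earth geometry. The 25-degree minimum
# elevation mask is an assumed input; lower masks give larger footprints.
import math

R = 6371.0   # mean Earth radius, km

def coverage_diameter_km(altitude_km: float, min_elev_deg: float) -> float:
    """Ground diameter visible above a minimum elevation angle."""
    e = math.radians(min_elev_deg)
    # central angle from the sub-satellite point to the edge of coverage
    theta = math.acos(R * math.cos(e) / (R + altitude_km)) - e
    return 2 * R * theta

for h in (500, 1200):
    print(h, "km altitude ->", round(coverage_diameter_km(h, 25.0)), "km")
# 500 km -> ~1,700 km; 1,200 km -> ~3,400 km
```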
However, this extension is not simply a matter of moving base stations into space. Once satellites become native nodes of the mobile network, if all the signaling processing, session management, and edge-computing demands generated by mass terminal access were hauled back to the terrestrial core network, the satellite-to-ground link would again become the bottleneck. The space-air-ground integrated architecture being explored in the 6G standard essentially requires moving some network functions onto satellite nodes, giving the space-based network processing capabilities equivalent to a terrestrial 5G network's.
Second is the penetration of AI into the broader physical world. Ocean-going ships, mining equipment, agricultural drones, emergency rescue: the AI applications in these scenarios move with the assets, and their operating areas lie precisely outside terrestrial network coverage. If all remote sensing imagery and IoT sensor data were sent back to terrestrial clouds for processing, bandwidth costs could reach tens of cents or even several dollars per MB, and response latency could stretch to seconds or tens of seconds once satellite-to-ground round trips are combined with cloud inference. A typical solution is to complete perception and inference in orbit: compress TB-scale raw remote sensing imagery into MB-scale target annotations, or interpret SAR imagery in orbit to track targets in real time, transmitting only decision results rather than raw data.
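The "annotations instead of pixels" idea can be sketched in a few lines. The detector and its outputs below are placeholders invented for illustration, not a real on-board model:

```python
# Sketch of "downlink annotations, not pixels": run a detector in orbit and
# transmit compact results. The detector below is a placeholder.
import json

def detect_ships(image_tile) -> list:
    """Stand-in for an on-board model; a real payload would run a quantized
    detector on a radiation-tolerant accelerator."""
    return [{"cls": "ship", "lat": 31.22, "lon": 121.48, "conf": 0.91}]

def downlink_packet(image_tile, tile_size_bytes: int) -> bytes:
    msg = {"tile_bytes": tile_size_bytes, "detections": detect_ships(image_tile)}
    return json.dumps(msg).encode()

packet = downlink_packet(image_tile=None, tile_size_bytes=2_000_000_000)
print(f"{len(packet)} bytes downlinked instead of a 2 GB raw tile")
```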
Third is the longer-term exploration of energy and data sovereignty. Energy constraints and data sovereignty pressures in some markets are driving exploration of moving computing infrastructure into space, where solar energy supply is nearly unlimited and passive heat dissipation comes naturally. Launch costs remain high today, but as reusable rockets scale up, migrating large-model inference and even training to space should gradually become economically feasible over the next 10-15 years. This is a more distant vision, but exploration of the technical route is already underway.
The above driving forces have given rise to three main technical routes for space computing power.
The first is the space data center route, represented by US startup Starcloud and Europe's Horizon Europe-funded ASCEND project. The idea is to build GW-scale computing infrastructure in orbit. Starcloud (formerly Lumen Orbit) holds a long-term vision of a 5 GW space data center, has received backing from institutions such as Y Combinator and NFX, and is advancing in-orbit computing verification as a member of the NVIDIA Inception ecosystem. The ASCEND project, led by Thales Alenia Space, is running a feasibility study and proposes a modular in-orbit data center concept in a sun-synchronous orbit at about 1,400 km, targeting 1 GW-scale deployment before 2050. In our view, this route's advantages are sheer computing scale and full use of space's solar energy and heat dissipation; its challenges are the difficulty of in-orbit construction, the scale of investment required, and a long commercialization cycle.
The second is the distributed constellation route, represented by Google's Project Suncatcher. Unlike a centralized data center, this route distributes computing power across a constellation and computes collaboratively over inter-satellite laser links. Google plans to deploy 81 TPU satellites in a dawn-dusk orbit, with prototype satellites expected to launch in 2027; its inter-satellite optical link has demonstrated 1.6 Tbps in the laboratory. In our view, this route's advantages are flexibility and scalability, but the technical challenges of inter-satellite communication, orbit control, and power management are substantial.
The third is the communication constellation expansion route, represented by SpaceX. The idea is to layer computing power onto existing constellation networking capabilities, evolving satellites from communication relays into nodes with edge-processing capability. Starlink's official site discloses that launches of its third-generation satellites are planned to begin in the first half of 2026, with a designed per-satellite downlink capacity exceeding 1 Tbps, more than ten times that of the second generation. In our view, this route's advantage is practical feasibility: it rides on existing satellite network infrastructure with no need to wait for in-orbit construction technology to mature, achieving edge intelligence through efficient hardware-software co-design under tight power and heat dissipation budgets, much in the spirit of Tesla's in-vehicle computing. Meanwhile, Elon Musk's November claim that solar-powered AI satellites may become the lowest-cost form of AI computing within 4-5 years has triggered further discussion of this route's long-term computing power ceiling.
The differences among the three routes largely reflect strategic choices under different market conditions. The United States' heavy investment in space data centers has a particular backdrop: acute terrestrial energy bottlenecks, highly concentrated AI computing demand, and the emerging launch-cost advantage of SpaceX. Europe's push additionally weighs data sovereignty. By contrast, domestic terrestrial power supply is relatively ample, and large-scale space data centers are still in a critical period of technology verification and cost-curve decline; as reusable rockets scale up, this route should gradually release commercial potential over the next 10-15 years. The more urgent domestic issue is the satellite industry's lack of in-orbit autonomy: satellites still rely heavily on ground measurement and control, on-board intelligent processing capacity is limited, and constellation collaboration efficiency is constrained by the traditional satellite-to-ground architecture.
This suggests that space computing power in China will more likely follow a logic of gradual evolution: first solve single-satellite intelligence, then advance constellation collaborative computing, and finally consider larger-scale computing deployment once the technology is mature and costs are controllable. These three stages are a progressive accumulation of capability; the engineering verification of each stage lays the foundation for the next, and they may ultimately converge: whether the endpoint is a distributed space computing network or a centralized space data center, neither can do without a stable network backbone and a mature operations system, and those foundational capabilities usually begin with single-satellite intelligence.
Three Stages in the Evolution of Space Computing Power
As noted above, single-satellite intelligence is the core proposition of the first stage. Its essence is to embed Agent capabilities into satellite operations, upgrading satellites from terminals that passively execute commands into autonomous nodes with a closed loop of perception, decision, and execution. This requires deep integration of three subsystems: communication, computing, and control. The communication side perceives link status, inter-satellite topology, and network load; the computing side runs lightweight AI models for situation assessment and task planning; the control side drives thrusters, antennas, payloads, and other equipment to execute decisions. The three must cooperate under a unified software framework to close the perception-decision-execution loop, as the sketch below illustrates.
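A conceptual sketch of that loop, our own abstraction rather than Yiwei Aerospace's actual software; the states, thresholds, and actions are invented for illustration:

```python
# Conceptual perception-decision-execution loop spanning the communication,
# computing, and control subsystems. All values are invented.

class SatelliteAgent:
    def perceive(self) -> dict:
        # Communication side: sample link state, topology, and load
        return {"link_quality": 0.62, "neighbor_visible": True, "queue_load": 0.9}

    def decide(self, state: dict) -> str:
        # Computing side: a lightweight policy maps situation to action
        # (a real system might run a small on-board model here)
        if state["queue_load"] > 0.8 and state["neighbor_visible"]:
            return "OFFLOAD_TO_NEIGHBOR"
        if state["link_quality"] < 0.3:
            return "REPOINT_ANTENNA"
        return "HOLD"

    def act(self, action: str) -> None:
        # Control side: drive antennas, thrusters, or payloads
        print(f"executing: {action}")

agent = SatelliteAgent()
agent.act(agent.decide(agent.perceive()))   # one closed cycle
```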
To appreciate the value of this stage, first consider the system cost of the traditional architecture. In the traditional transparent forwarding mode, satellites only forward, while complex protocol processing, scheduling, and business logic stay in ground gateways and the core network. For single satellites or early constellations, this route is clear and the risk controllable. At scale, however, the system comes to resemble a dumbbell: more satellites can be added in space and more servers on the ground, but the hardest and most expensive part to expand is the satellite-to-ground downlink window in the middle, bounded by the hard limits of orbital geometry, ground-station resources, and spectrum reuse, and unable to scale linearly with the number of satellites. The value of single-satellite intelligence is to upgrade satellites from relays to network nodes, moving part of the protocol processing, routing, and business closed loop on board and reducing the downlink from the bearer of all complexity to a controllable resource constraint.
From a technical standpoint, the value of single-satellite intelligence is clear: on the communication side, it raises net throughput and operability; on the remote sensing side, it raises the information density delivered per unit of downlink window; on the platform side, it improves operations efficiency and autonomous response.
Single-satellite intelligence
The second stage is constellation collaborative computing. When single satellites have in-orbit autonomy, the next proposition is to make the