In just five years, space computing power could exceed Earth's. Elon Musk spent three hours finally laying out his case for space AI.
Yesterday, Elon Musk recorded a nearly three-hour podcast with Dwarkesh Patel and John Collison, the co-founder of Stripe.
During this interview, Musk systematically explained for the first time a judgment: why he has started to fully promote the space data center.
To get the Colossus cluster online, xAI had to build power plants across states, and even began considering developing some key equipment in-house. Chip production capacity is scaling exponentially, but power supply is stuck in the long cycle of approval processes, cooling requirements, and equipment delivery.
This led him to conclude that the path on the ground won't work.
In Musk's view, within the next 36 months, the cheapest place to deploy AI will not be on Earth, but in space.
For this purpose, SpaceX is preparing for an extreme goal: raising Starship's launch cadence to 10,000-30,000 flights per year, each carrying a payload of 100-150 tons. This is a prerequisite for realizing space computing power at scale.
His prediction is even more radical: within five years, space AI will be adding hundreds of gigawatts of new computing power annually, and the AI computing power launched into and operated in space each year will exceed the total historical accumulation of all AI on Earth.
From that moment on, the main battlefield of the AI computing power competition will no longer be on the ground.
This is not the end. According to Musk's judgment, the annual newly added power on Earth can only reach about 1 terawatt at most, which is a hard ceiling. To expand further, we must break out of the Earth's system.
His idea is to directly target the Moon. About 20% of the lunar soil is silicon, and it is also rich in aluminum resources. Solar panels and heat dissipation structures can be manufactured locally; the truly complex chips will be transported from Earth.
In this system, the lunar base will use a mass driver to shoot AI satellites into deep space at a speed of about 2.5 kilometers per second, with a theoretical transportation capacity of up to 1 petawatt (1 million gigawatts) per year. This is what he calls "true large-scale development".
Although SpaceX's ultimate goal is still Mars, Musk also admitted that currently, each step must first achieve commercial returns before moving on to the next stage. So, Starship will first serve the orbital data center.
So, what other judgments did Musk make during this interview? Next, let's follow Silicon-based Jun to find out.
01 Earth's energy expansion can't keep up with the development speed of AI
To understand this, we must first look at the reality of global power supply.
Outside China, the power generation in most countries either remains flat or has only a slight increase, and the overall situation is approaching a plateau. Only China is still rapidly expanding its power-generation capacity. This means that if large-scale data centers are built anywhere outside China, power will become a bottleneck.
The chip production capacity is growing exponentially, while the power supply remains almost unchanged. This is why space has been reconsidered in the discussion framework.
In a sense, space is a "shortcut" in terms of regulations and physical conditions. It is already difficult to expand data centers on the ground, and the larger the scale, the higher the difficulty. In space, however, there are fewer restrictions.
The key lies in the energy conditions.
Take power generation for example. Space solar energy has three advantages: it operates at full power 24 hours a day; without clouds or the atmosphere in the way, light intensity is about 30% higher; and there is no need for batteries.
Musk calculated: "Chinese solar panels are already as cheap as $0.25 per watt. In space, the power-generation efficiency is five times that on the ground, and the battery cost is also saved. Overall, the cost per kilowatt-hour of electricity is one-tenth of that on the ground."
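The arithmetic behind that quote can be sketched in a few lines. Only the $0.25/W panel price and the five-fold output figure come from the quote; the battery cost and panel lifetime below are illustrative assumptions:

```python
# Rough sketch of the per-kWh comparison in Musk's quote. Battery cost
# and lifetime are illustrative assumptions, not quoted figures.
HOURS_PER_YEAR = 8760

def cost_per_kwh(cost_per_watt, capacity_factor, lifetime_years=10):
    """Amortized electricity cost: hardware dollars per rated watt divided
    by the kWh that watt produces over its lifetime."""
    kwh = capacity_factor * HOURS_PER_YEAR * lifetime_years / 1000
    return cost_per_watt / kwh

# Ground: ~20% capacity factor (night, weather, atmosphere), plus battery
# storage to run 24/7; assume the batteries cost as much as the panel.
ground = cost_per_kwh(0.25 + 0.25, capacity_factor=0.2)
# Space: continuous full-power sunlight (5x the output), no batteries.
space = cost_per_kwh(0.25, capacity_factor=1.0)

print(f"ground ${ground:.4f}/kWh vs space ${space:.4f}/kWh "
      f"-> {ground / space:.0f}x cheaper in space")
```

Under these assumptions the ratio comes out to exactly the "one-tenth" in the quote; different battery or lifetime numbers would shift it, but the direction of the comparison holds.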
In addition, there are also objective physical limitations. On Earth, it takes about 30-36 months for a new data-center project to be implemented.
Even if solar energy is widely used, the Earth itself cannot support such expansion.
The current average power consumption in the United States is about 0.5 terawatts, so reaching 1 terawatt means doubling it. What does that require? Building large numbers of data centers, power plants, and the supporting transmission and distribution systems all at once.
On the one hand, the entire process will be restricted by approval, regulations, and public utility commissions at multiple levels.
Even signing an interconnection agreement often takes a year of study, and when the report finally arrives, it can turn out that the meter's power data cannot even be accurately determined.
On the other hand, equipment is a more realistic dilemma.
On the surface, we just need to build more turbines, but anyone actually involved discovers that turbine blades are the biggest bottleneck: only three foundries in the world can produce them, and orders are already booked through 2030. Other components can be procured 12-18 months in advance, but not the blades.
This is not a secret. Call any turbine manufacturer, and they will tell you the same thing.
The conclusion is becoming clear: the speed of energy expansion on the ground may not be able to keep up with the demand curve of AI. So, at least within 36 months, space will be the cheapest place to deploy AI.
02 Space AI computing power will exceed that of Earth in just 5 years
In five years, there may be a structural reversal in installed AI computing-power capacity between Earth and space.
Musk's judgment is that the AI computing power launched and operated in space each year will exceed the total historical accumulation of all AI computing power on Earth. Calculated in terms of power, in five years, the newly added space AI computing power each year may reach hundreds of gigawatts.
To understand this prediction, we must start from physical constraints rather than technological imagination.
First is the launch capacity.
On Earth, before launch actually hits a rocket-fuel bottleneck, it can theoretically support about 1 terawatt of AI computing power.
But if the goal is to deploy 100 gigawatts of space AI per year within five years, the problem becomes system-level specific power: solar arrays, radiators, structural components, chips, everything has to be counted together.
Crudely estimated, that works out to about 10,000 Starship launches. Compressed into a single year, that is roughly one Starship launch every hour.
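That crude estimate can be checked directly. The ~80 W/kg system specific power below is an assumed figure chosen to make the numbers line up; only the 100 GW target and the 100-150 ton payload range come from the text:

```python
# Back-of-envelope check of "10,000 launches ~ one per hour".
target_power_w = 100e9           # 100 GW of space AI added per year
payload_kg = 125_000             # midpoint of the 100-150 ton payload
specific_power_w_per_kg = 80     # assumed: panels, radiators, structure,
                                 # and chips all counted together

mass_to_orbit_kg = target_power_w / specific_power_w_per_kg
launches_per_year = mass_to_orbit_kg / payload_kg
hours_between_launches = 8760 / launches_per_year

print(f"{launches_per_year:,.0f} launches/year, "
      f"one every {hours_between_launches:.2f} hours")
```

At 80 W/kg this gives exactly 10,000 flights a year, one every 0.88 hours; a lighter system design (higher W/kg) would cut the launch count proportionally.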
It sounds extreme, but compared with the aviation industry, it is still a low - frequency system. The key is not whether a polar orbit is needed, but the altitude.
That is to say, as long as it flies high enough, it can gradually get out of the Earth's shadow, and a geosynchronous orbit is not a necessity.
Fortunately, the launch system is being planned in this direction.
SpaceX is preparing for 10,000, or even 20,000-30,000 launches per year. The goal is to turn launch capacity itself into a piece of large-scale infrastructure and sell it externally.
If this rhythm is achieved, a very radical conclusion will emerge: in 5 years, the AI computing power launched and operated by SpaceX each year may exceed the sum of all other systems on Earth.
This is not the end.
The annual newly added power on Earth tops out at about 1 terawatt, a hard ceiling. Beyond that scale, launches will have to be made from the Moon.
The lunar soil contains 20% silicon and sufficient aluminum. Solar cells and radiators can be manufactured locally, and the chips can be transported from Earth.
Musk's idea is that the lunar base will use a mass driver to shoot AI satellites into deep space at a speed of 2.5 kilometers per second, with a transportation capacity of up to 1 petawatt (1 million gigawatts) per year. This is "true large-scale development".
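The scale of that claim is worth a sanity check. The specific-power figure below is an assumption; the 2.5 km/s exit speed and the 1 PW/year capacity are the article's numbers:

```python
# Sanity check on the lunar mass-driver figures.
LUNAR_ESCAPE_V_MS = 2380.0   # m/s; the Moon's escape velocity
launch_v_ms = 2500.0         # m/s; claimed mass-driver exit speed

# 2.5 km/s exceeds lunar escape velocity, so payloads leave the Moon
# entirely rather than falling back or staying in lunar orbit.
assert launch_v_ms > LUNAR_ESCAPE_V_MS

target_power_w = 1e15             # 1 petawatt (1,000,000 GW) per year
specific_power_w_per_kg = 80      # assumed, as in the orbital estimate

mass_per_year_kg = target_power_w / specific_power_w_per_kg
seconds_per_year = 365 * 24 * 3600
kg_per_second = mass_per_year_kg / seconds_per_year

print(f"{mass_per_year_kg:.2e} kg/year, "
      f"a continuous stream of {kg_per_second:,.0f} kg/s")
```

Under the assumed 80 W/kg, 1 PW/year means moving on the order of 10^13 kilograms a year, a continuous flow of hundreds of tons per second, which is exactly why Musk frames it as manufacturing on the Moon rather than launching from Earth.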
Ultimately, everything returns to SpaceX's underlying logic. The end goal is Mars, but each step must generate real - world cash flow to drive the next stage.
The Falcon 9 created Starlink, and Starship is likely to first serve the orbital data center.
When the power problem of the space AI data center is solved, it also brings another problem: when the power constraint is removed, the limiting factor becomes the chips again.
On the chip side, Musk's biggest concern is not logic chips but memory. The evolution path of logic chips is relatively clear, while memory supply is less elastic, which is why DDR prices have risen first.
In Musk's view, although chip manufacturers are expanding production at full speed, it is still not fast enough. It takes five years from breaking ground on a fab to mass production at high yield.
The root cause, according to Musk, lies in the industry's collective memory.
If you have worked in the storage or semiconductor industry for thirty or forty years and experienced multiple cycles of prosperity and collapse, you will understand that this caution is not short - sighted, but a response to historical costs. During the prosperous period, the demand seems infinite, but collapse often follows, and the primary goal of enterprises becomes "avoiding bankruptcy".
At the same time, the workforce structure in chip manufacturing is often misunderstood. Wafer fabs do employ thousands of PhDs with a deep understanding of the process details, but the real large-scale engineering work relies not on PhDs but on skilled technicians, and that kind of workforce is even harder to replicate quickly.
So, Musk plans to build a memory-chip factory covering three areas: memory, logic processing, and packaging integration. The goal is to raise capacity to one million wafers per month by 2030.
03 In the Sino-US manufacturing competition, the US can only win with robots
If relying solely on human labor, the United States cannot win in the long - term competition.
The reason is not complicated. China's population is about four times that of the United States, and more importantly, the per-capita work intensity is not low.
The long-term leading side often develops complacency and reduces its level of effort; this is common in professional sports and in industrial competition. My observation is that China's overall work-input level is at least not lower than that of the United States, and may even be higher.
Even if human resources are rearranged through organizational optimization and education upgrades, the United States' disadvantage in total workforce still cannot be made up.
Even assuming productivity improvements deliver a four-fold leverage, an assumption that is likely too optimistic, China is not necessarily at a disadvantage in per-capita output.
This means that in the traditional human-labor competition framework, the United States is at a structural disadvantage.
The population structure further magnifies this gap. Since 1971, the birth rate in the United States has long been below the replacement level. The number of retirees keeps increasing, and deaths are approaching or even exceeding births. In the long-term trend, the United States' labor supply is shrinking, not expanding.
Therefore, the United States cannot win on the human-labor front.
But there is still an opportunity on another front, that is, robots.
This is the strategic significance of humanoid robots (such as Optimus). In the past, there were many things that were technically feasible but could not be implemented due to being too labor - intensive or too costly.
Now, this constraint is changing. Robots mean that those manufacturing and infrastructure projects that were once abandoned can be re - examined.
Tesla has started making moves in this direction. In Corpus Christi, Texas, it has built and commissioned a lithium refinery, the largest lithium-refining facility in the United States and one of the largest outside China.
Also in Texas, it has built a nickel and cathode-material refining facility, currently the largest cathode-refining plant in the United States.
The common prerequisite for these projects is high - level automation.
If relying on human labor, it is difficult for the United States to replicate such refining capabilities on a large scale. On the one hand, this kind of work has high labor intensity and complex working environments; on the other hand, in reality, few Americans are willing to engage in refining work for a long time.
Robots change this constraint. Through Optimus, more refineries can be expanded, and the United States' self-sufficiency in key materials can be improved without relying on limited human-resource supply.
This leads to a more fundamental question: why rely on robots now, but not in the past?
The answer lies in the scale constraint. The United States has only about one-fourth of China's population. If humans are used to do these things, it means that other key tasks cannot be completed simultaneously. Robots provide a "parallel expansion" ability, rather than a replacement for human labor.
Looking at the global perspective, this gap is even more obvious. BYD is approaching Tesla in terms of production and sales volume. As China's production capacity continues to grow, the global manufacturing pattern is being reshaped.
This competitiveness is not accidental but comes from extremely profound basic capabilities. China's refining capacity is about twice the sum of that of the rest of the world.
From energy, mining, and refining down through tier-4 to tier-1 suppliers, almost every basic link enjoys a scale advantage. Any complex product will ultimately contain components made or refined in China.
Energy data further confirms this. This year, China's power generation is expected to exceed three times that of the United States. Electricity is a fundamental indicator of the real economy: factory operations, infrastructure construction, and manufacturing activities all rely on electricity. If the power scale is three times, the industrial potential is also roughly at this level.
Without the "recursive productivity leap" brought by humanoid robots, countries like China, which have a complete manufacturing, energy, and raw-material system, will dominate the large-scale manufacturing of AI, electric vehicles, and robots themselves.
A possible division of labor is emerging: the United States is responsible for breakthrough innovation, while China dominates large - scale manufacturing.
So, where is the United States' path?
The answer is not to confront the scale directly, but to continue to be the source of breakthrough innovation.
And the ultimate breakthrough vision points to a more distant space. If we want to expand AI in space, we need real-world AI, humanoid robots, and million-ton-level space infrastructure. That will be a system project on a completely different scale.
If one day, the lunar mass driver can operate, the problems of energy, materials, and scalability will be fundamentally rewritten. By that time, the competition logic itself will change.
If we can reach that point, I will consider it a victory.
04 The self-replication ability of robots is the key to victory
If the United States wants to match China's large-scale, low-cost manufacturing of humanoid robots, it must confront and solve two fundamental problems: real-world intelligence and a scalable manufacturing system.
Let's first look at the hardware itself.
So far, no robot system on the market has truly demonstrated hand-like dexterity. Optimus's design goal is to cover as many of the human hand's degrees of freedom as possible. The difficulty lies not in the appearance but in the actuator system.
The hand is the most electromechanically complex part of the entire humanoid robot, and its complexity even exceeds the sum of the rest of the robot's components.
To achieve this ability, we must start from the first principles of physics and redesign the overall coordination of motors, gears, transmission mechanisms, electronic control systems, and sensors, rather than relying on the assembly solutions of the existing supply chain.
In reality, such a supply chain does not exist. It has to be built from scratch and designed for large - scale production from the beginning.
However, hardware dexterity does not mean that the robot is useful.
What really determines the upper limit of a humanoid robot's value is real-world intelligence. In this regard, Tesla does not start from scratch. The intelligent system used for autonomous driving is essentially a mature "real-world control framework": centered on vision, integrating multi-source sensor data such as inertial measurement and positioning, and compressing high-dimensional, continuous environmental inputs into stable and executable control instructions.
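As a purely illustrative toy (not Tesla's actual architecture), the loop shape described here, many raw sensor inputs compressed into a compact state and then into a small executable command, looks like:

```python
# Illustrative only: the perceive-then-act loop shape described above.
# The functions, thresholds, and gains are all invented for this sketch.

def perceive(camera_frames, imu_tilt):
    """Stand-in for a vision-centric model fusing multi-source sensor
    data into a compact world-state estimate."""
    # Placeholder "embedding": average brightness of the frames.
    return {"brightness": sum(camera_frames) / len(camera_frames),
            "tilt": imu_tilt}

def act(state):
    """Map the compressed state to a stable, executable command."""
    return {"steer": -0.1 * state["tilt"],          # counteract tilt
            "throttle": 0.5 if state["brightness"] > 0.2 else 0.0}

# One tick of the loop: high-dimensional input in, two numbers out.
cmd = act(perceive(camera_frames=[0.3, 0.4, 0.5], imu_tilt=2.0))
print(cmd)
```

The point of the shape is the compression step: however rich the input, what leaves the loop each tick is a small, bounded command the actuators can execute.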