
Elon Musk Is Wildly Burning 14 Trillion Yuan: Computing Power Equal to 50 Million H100s to Come Online Within Five Years, with an Eventual Leap to Billions

New Intelligence Yuan (新智元) · 2025-08-27 09:51
Elon Musk plans to invest two trillion US dollars over five years to build a computing node with 50 million H100 chips.

Elon Musk has announced an audacious plan: achieve computing power equivalent to 50 million H100 GPUs within five years. What does this mean? What impact will it have on humanity? Can ASI emerge from this all-in effort?

The world's richest man, Elon Musk, has decided to go all-in on AI this time.

The goal: computing power equivalent to 50 million H100 GPUs within five years.

Bear in mind, he already owns Colossus, the world's most powerful supercomputer cluster, whose AI computing power is equivalent to about 200,000 H100 GPUs.

What on earth does he want to do with so many GPUs?

What kind of miracle can $2 trillion buy?

Currently, the wholesale price of each H100 GPU is as high as $20,000.

For 50 million H100 GPUs, the chips alone would cost $1 trillion.

In today's most advanced supercomputer clusters, GPUs account for only about 50% of the total build cost.

That means the final cost would exceed $2 trillion (more than 14 trillion yuan).
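The arithmetic behind these headline figures can be sketched in a few lines. This is a back-of-envelope calculation using only the numbers the article itself cites ($20,000 per H100, GPUs as ~50% of total cost); the exchange rate of 7 yuan per dollar is an assumption for illustration.

```python
# Back-of-envelope sketch of the article's cost estimate.
H100_UNIT_PRICE_USD = 20_000   # wholesale price per H100 (from the article)
GPU_COUNT = 50_000_000         # 50 million H100-equivalents (from the article)
GPU_COST_SHARE = 0.5           # GPUs ~50% of total cluster cost (from the article)
USD_TO_CNY = 7.0               # rough exchange rate (assumption)

gpu_cost = H100_UNIT_PRICE_USD * GPU_COUNT   # GPU spend alone
total_cost = gpu_cost / GPU_COST_SHARE       # total cluster cost
total_cost_cny = total_cost * USD_TO_CNY     # converted to yuan

print(f"GPU cost:   ${gpu_cost / 1e12:.1f} trillion")       # $1.0 trillion
print(f"Total cost: ${total_cost / 1e12:.1f} trillion")     # $2.0 trillion
print(f"Total cost: {total_cost_cny / 1e12:.0f} trillion yuan")  # 14 trillion yuan
```

The division by the GPU cost share is the key step: if GPUs are half the bill, the full bill is double the GPU spend.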

What does $2 trillion mean?

The United States' total military expenditure last year was about $997 billion, already 37% of global military spending.

This means that AI has become a new key area that can compete with traditional arms races.

Elon Musk's net worth is about $400 billion.

The market value of Tesla is about $1.1 trillion.

Adding SpaceX, X, and xAI, the total market value of the companies under Elon Musk's control is about $1.6 trillion.

If Moore's Law stops holding for GPUs over the next five years, costs will not fall exponentially.

Elon Musk is staking his own and his shareholders' entire fortunes on AI, aiming to build another Tesla for the coming era.

In addition, power supply is a major problem.

This envisioned supercomputer cluster could require the output of more than a dozen nuclear power plants.

But Elon Musk thinks it's not enough.

His ambition is to have an AI supercomputer cluster with the computing power of billions of H100 GPUs.

Expand the scale by a hundred times.

Elon Musk's techno-optimism tacks yet another order of magnitude onto these astronomical figures.

Grok is already very powerful, but it's still far from enough


Whether it's xAI and Tesla training models, or Neuralink and SpaceX pushing breakthroughs in hard-tech fields, all of them require massive computing power.

For this reason, Elon Musk built the world's most powerful supercomputer cluster, Colossus.

When it was first launched, it used 100,000 H100 GPUs and was built in just 19 days under extreme conditions.

Subsequently, it was expanded to 200,000 GPUs.

Grok 3 was famously trained on 200,000 GPUs, with ten times the training compute of Grok 2.

Grok 2, by comparison, had been trained on just 15,000 GPUs.

Just last month, the most powerful model yet, Grok 4, launched and once again topped the charts, showing everyone what an LLM can do with enormous computing power behind it.

It not only topped every leaderboard but also beat human PhD-level experts on the HLE (Humanity's Last Exam) benchmark.

At the launch event, Elon Musk also previewed the "Easter eggs" coming over the next few months.

The coding model will be released in August; the multimodal agent will launch in September; the video generation model will be unveiled in October.

Training next-generation models alone has become a bottomless pit for computing power.

On top of that, xAI's chatbot product strategy, such as the launch of the AI girlfriend Ani, has drawn in many users.

Grok Ani's character illustration

Without strong computing power support, xAI cannot expand more applications.

Having the most powerful Colossus I is not enough.

Elon Musk also wants to use astronomical computing power to force his competitors to back off.

After all, he once boasted that "Google will not be his match in the future."

Colossus II was born amidst high expectations, carrying such a grand mission.

Colossus II is under construction

Currently, Colossus 2 is coming online in phases.

The supercomputer center's first phase is expected to bring online 550,000 GB200 and GB300 GPUs, all liquid-cooled and purpose-built for AI training.

In Elon Musk's words, Colossus 2 will become "the world's first gigawatt-plus AI training supercomputer."

Last month, he posted about the wiring of the GB200 in the supercomputer center, and the density was quite spectacular.

As early as February this year, xAI purchased a roughly one-million-square-foot campus on Tulane Road in Memphis, Tennessee, as the site for the second phase.

Like the first generation (which uses 156 units), Colossus 2 will also be powered by Tesla Megapacks, this time 208 of them.

Moreover, Elon Musk also plans to relocate a power plant from overseas to supply power for it.

The power supply for Colossus 2 will draw on a mix of measures, including building or upgrading substations, battery storage, and relocated external power sources.

Stargate is still far off. If Colossus 2 can continue the construction legend of the first generation, it will surely break world records again!

NVIDIA's Jensen Huang has more than once praised Elon Musk's deep understanding of engineering systems.

References

https://x.com/elonmusk/status/1947704195844608094

https://x.com/elonmusk/status/1959383653256962378

https://x.com/xAIMemphis/status/1947724711968051414

https://x.com/elonmusk/status/1947701807389515912

https://x.com/teslaownersSV/status/1924684020107116709

This article is from the WeChat official account "New Intelligence Yuan" (新智元), by Allen and Peach. Republished by 36Kr with permission.