
Anthropic's largest-ever training run exposed. Was Ilya wrong? A CEO laments: startups will be destroyed.

新智元 2026-03-30 14:50
The performance of Anthropic's new model has doubled. It's the eve of an AI revolution!

[Introduction] Has the wild rumor from three weeks ago finally been confirmed as Mythos? Anthropic may have completed the largest-scale training run in history, and the new model's performance is said to be twice internal expectations, blowing past Scaling Law predictions. A disruptive transformation is approaching: computing power and energy are becoming the ultimate bargaining chips, and startups may be facing an overwhelming blow from a higher dimension!

A rumor three weeks ago now seems increasingly credible: a top laboratory has completed the largest-scale training in history.

The performance of the new model has far exceeded internal expectations and even broken the predictions made based on the Scaling Law.

If all this is true, we may be standing on the eve of a disruptive transformation!

Today, a post on X by AI influencer Andrew Curran has sparked extensive discussions in the industry.

He believes that the legendary laboratory is Anthropic.

They may have achieved a breakthrough in architecture: when training above a certain scale or in a specific way, the model has produced capabilities far beyond previous levels!

When the Mythos rumor first surfaced, cybersecurity stocks had already fallen. Is something even more alarming coming next?

Is it the eve of a storm in Silicon Valley?

Mythos (also rumored under the candidate name Capybara) is very likely Anthropic's next-generation flagship model.

What's even more chilling is that the actual performance of the model is said to have reached twice the internal expectation.

Bear in mind that in AI, a two-fold performance jump means a generational leap in logical reasoning ability.

If GPT-4 is an excellent college student, then Mythos is probably a think tank composed of tens of millions of geniuses.

Anthropic's chief scientist once predicted that "fully automated AI research may be achieved within a year." Now it seems that he was not looking into the future but describing a demo already running on the server.

Claude 5.0 tests have leaked, and its coding and reasoning are unrivaled!

Just yesterday, netizens found that Anthropic is quietly testing its new-generation flagship model, Claude Mythos 5.0 Beta!

In the Claude interaction interface, Mythos 5.0 (Beta) is prominently listed, and the official calls it "larger in scale and more intelligent."

In the Claude Code terminal, Mythos 5 is directly titled "Next-generation model."

Previously, according to insider reports, Mythos 5.0 is an absolute monster in coding, reasoning, and offensive security, so powerful that it's hard to believe. The first leak event led to a sharp drop in cybersecurity stocks.

On March 27, Fortune exclusively reported that Anthropic's most powerful model, Claude Mythos, comprehensively outperforms its current strongest model, Claude Opus 4.6, and has powerful cyber "attack" and "defense" capabilities.

Internal tests show that Mythos would bring unprecedented security risks. Anthropic has held back the release, knowing full well that once this "beast" is unleashed, the consequences could be unpredictable!

In cyber attacks, Mythos far outperforms any other model in the world, making it highly likely to be abused by hackers to launch large-scale, highly destructive attacks.

The reason why Fortune got this exclusive scoop is that they found a draft blog post.

What netizens have since discovered only adds credibility to that leaked draft.

Was Ilya wrong?

Obviously, if this breakthrough is real, Ilya's claim from half a year ago that the Scaling Law had hit a wall will look a bit embarrassing.

In the Reddit thread on the topic, the core of the netizens' debate is: is Claude 5.0's progress the result of piling on computing power, or of an architectural revolution?

According to this rumor, Anthropic has discovered a new training method above a certain scale.

If it is just a matter of piling on graphics cards, performance follows diminishing marginal returns; but if there is a genuine "architectural breakthrough," the performance curve takes off vertically like a rocket!
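The "diminishing returns" side of that debate comes from the empirical power-law scaling fits (in the style of Kaplan et al., 2020), where loss falls as a fixed power of training compute. A minimal sketch with illustrative constants (none of these numbers are Anthropic's) shows why pure compute-piling yields ever-smaller absolute gains per doubling:

```python
def loss(compute, c0=1.0, alpha=0.05):
    """Illustrative power-law scaling fit: L(C) = (C0 / C) ** alpha.

    c0 and alpha are made-up constants for demonstration only.
    """
    return (c0 / compute) ** alpha

# Each doubling of compute multiplies loss by the same factor 2**-alpha,
# so the absolute improvement per doubling keeps shrinking.
gains = [loss(2 ** k) - loss(2 ** (k + 1)) for k in range(4)]
print(gains)  # a strictly decreasing sequence of per-doubling gains
```

An "architectural breakthrough," in these terms, would mean the fitted curve itself no longer applies: the observed loss at a given compute budget lands well below what the power law predicts.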

Perhaps Ilya, the former chief scientist of OpenAI, did not expect that the "wall" he thought the field would hit could be broken by a new mechanism akin to "recursive self-improvement."

This also fits Karpathy's pattern of eating his own words. In October 2025 he was still dismissing AI-written code as garbage, yet two months later AI was already writing 80% of his code. Even top experts often cannot intuitively grasp what "exponential" progress feels like.

Revisiting OpenAI's puzzling moves: why did Sora fall out of favor?

According to Andrew Curran's analysis, if Mythos is real, then OpenAI's recent series of puzzling decisions suddenly makes sense.

Previously, the most puzzling move of OpenAI was "shutting down Sora."

As a video generation model that once shocked the world, why did Sora suddenly fall out of favor? The answer may lie in the game between cost and computing power.

If Mythos proves that ultra-large-scale training is the only ticket to AGI, then every H100/GB200 becomes a strategic resource, and every major company faces a computing-power famine!

Video generation is cool, but it consumes a staggering amount of inference compute. In the endgame of the AGI race, OpenAI must direct all its computing power toward large models that can produce breakthroughs in underlying logic, rather than worrying about whether the water splashes in a video look real enough.

This is the "accelerated escape" OpenAI has chosen. When the leader finds a narrower path to a god's-eye view, it naturally discards every irrelevant burden.

Second-order effect: ordinary people can no longer afford cutting-edge models!

At the same time, Andrew Curran put forward a frustrating but realistic view: "Cutting-edge intelligence has become increasingly expensive, so expensive that most humans can't afford it."

For a long time, we have been used to the narrative that AI is getting cheaper and the API prices are constantly being halved. But the emergence of Mythos has directly shattered this illusion.

When model scale breaks through a certain threshold, compute, memory, and energy become paramount. This is no longer a problem that can be solved by writing a few lines of Python; it has escalated into a hardcore industrial war over substations, power-grid loads, and liquid-cooled racks!

And the eternal winner is Jensen Huang of NVIDIA.

No matter who wins the AGI war, NVIDIA is the only big winner.

100,000 GB200s are the physical threshold for achieving "human - scale" intelligence. The Vera Rubin architecture can double the memory, which means an improvement in energy efficiency.

But at the same time, the "free lunch" is about to disappear.

The public will face a cruel reality: due to the high inference cost, the future "strongest model" will come with extremely strict rate limits and high subscription fees.

The class stratification of AI is quietly accelerating.

Entrepreneurs and the middle class face a life-or-death moment

Alex Finn, the founder of Creator Buddy, also posted on X, saying that the release of Mythos