
Only three giants remain in the ASI finals. The breakaway acceleration has begun, and Meta and xAI are on the verge of collapse.

新智元 · 2026-03-17 08:37
The AI race has hit the accelerator and broken into a full sprint. In the second half of 2026, the three giants, OpenAI, Google DeepMind, and Anthropic, will leave everyone else behind. The engine of recursive self-improvement has been ignited. In the next six months, human civilization may be permanently rewritten.

Just now, the three AI giants have begun their breakaway acceleration!

Andrew Curran, a prominent voice on X, said that the world's three leading labs will announce breakthroughs at an increasingly rapid pace.

Even though it is not yet fully established, the feedback loop inside the leading laboratories has already begun to turn.

Researchers at the three leading laboratories are showing an enthusiasm for their work that has never been seen before.

By the end of this year, they will leave the other teams completely behind and establish an insurmountable lead.

In March 2026, such signs had already begun to appear, and the next six months will completely prove this.

Ethan Mollick, a Wharton School professor who studies AI and entrepreneurship, made a similar assertion:

Meta and xAI can't keep up with the top laboratories, and open-weight models are still several months behind.

This means that if AI recursive self-improvement is realized, it is likely to be achieved first by the models of Google, OpenAI, and/or Anthropic.

The real finalists in America's AGI race are not four, not five, but three giants: Anthropic, OpenAI, and Google DeepMind.

xAI is still catching up. Meta has started to slow down.

Moreover, more and more people say that in the next six months, we will witness a major upheaval.

The speed in the next six months will exceed that of the past ten years!

An article in Fortune magazine speculates that the next six months will change everything!

No one can deny that we are now at the most critical node in the history of AI development, and everything that is about to happen is terrifying!

Anthropic: The trend is set, and the future is here

Anthropic is the company that was the first to see this recursive cycle.

Its internal staff spotted the clues long ago: the era of AI recursive self-improvement may arrive sooner than expected!

Recently, Anthropic's sibling co-founders, Dario Amodei and Daniela Amodei, appeared on the cover of Time:

The cover article even praised Anthropic as the "most disruptive company in the world".

This Time article makes it even clearer that Anthropic has grand plans and a long-term strategy:

Recursive self-improvement, in the broadest sense, is no longer the future but the present.

Jared Kaplan, the Chief Science Officer, even said bluntly that fully automated AI research is likely to be achieved within a year!

During its Series C financing in 2023, Anthropic said:

Training the best model in 2026 means building the strongest recursive cycle to train the next model.

When releasing Claude 3, Anthropic asserted that the intelligence of the model is far from reaching the limit.

Four days ago, Anthropic published a new blog, giving a more radical prediction.

In the next two years, more significant progress will appear one after another.

One of the core beliefs of our company is that AI is developing at an accelerating pace, and the progress we make will accumulate over time.

Extremely powerful AI is coming at a faster pace than many people expected.

After reviewing Anthropic's judgment on the limits of AI, Andrew Curran felt that AI had been taking off since last December.

He speculated that this astonishing progress might be due to a technical compound-interest effect, and that everyone should shorten their AI timelines.

Earlier this month, a researcher at METR said bluntly that there is no conclusive evidence against the claim that "AI R&D automation will be achieved by the end of this year".

Coincidentally, this recursive cycle has also begun to show up in AutoResearch, an open-source project recently released by Andrej Karpathy, the "father of vibe coding".

In this project, AI can independently run machine-learning experiments, adjust the learning rate, and even fine-tune its own attention mechanism.

Using this approach, Karpathy successfully improved the nanochat architecture by 11% in three days through automated research, which delighted him.

And this was just a simple, straightforward attempt, yet it demonstrates the potential of AI self-recursion!
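The loop described above, propose an experiment, run it, keep what works, can be sketched in a few lines of Python. This is a toy hill-climbing search over a single hyperparameter (the learning rate), not Karpathy's actual AutoResearch code; `run_experiment` and its optimum of 0.01 are invented stand-ins for a real training run.

```python
import random

def run_experiment(lr: float) -> float:
    """Stand-in for a real training run; returns a validation score.
    The (hypothetical) optimum here is lr = 0.01."""
    noise = random.uniform(-0.01, 0.01)
    return 1.0 - abs(lr - 0.01) * 10 + noise

def auto_search(trials: int = 20) -> tuple[float, float]:
    """Automated experiment loop: run, score, keep the best, perturb, repeat."""
    random.seed(0)                       # reproducible toy run
    best_lr, best_score = None, float("-inf")
    lr = 0.1                             # initial guess
    for _ in range(trials):
        score = run_experiment(lr)
        if score > best_score:
            best_lr, best_score = lr, score
        # propose the next candidate by perturbing the current best
        lr = best_lr * random.uniform(0.5, 1.5)
    return best_lr, best_score

best_lr, best_score = auto_search()
print(f"best lr = {best_lr:.4f}, score = {best_score:.3f}")
```

The point of the sketch is only the shape of the loop: once the "researcher" proposing the next experiment is itself an AI, the search runs without a human in it.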

AI self-recursion has even exceeded the imagination of insiders in the AI field, who are beginning to smell the danger of AI spinning out of control.

The situation facing humanity looks so dangerous that last month, the head of Anthropic's safeguards research team, who holds a doctorate from the University of Oxford, resigned outright to write poetry describing this danger.

OpenAI: In 2028, AI scientists will take up their posts

If Anthropic was the first to propose the blueprint of "recursive AI R&D", then OpenAI may be one of the laboratories closest to implementing it.

Recently, Altman predicted in an interview that the next-generation architecture will displace the Transformer.

Previously, Altman had also said bluntly in his blog, more than once, that "we have crossed the event horizon, and the technological takeoff has begun."

The key node in this is the automation of AI research.

In fact, the GPT series itself is the best example of recursive improvement: from GPT-3 to GPT-4, OpenAI proved that through better data, greater computing power, and more sophisticated training methods, a model can continuously improve its own capabilities.

The o1 series of models, released last autumn, was the first to demonstrate a model's ability to self-correct and self-reflect during reasoning.

A further extension of this ability is a model that can improve its own training process, and this is the core of recursive self-improvement.

According to Altman, OpenAI's internal goal is: by 2026, AI reaches the level of an "intern researcher"; by 2028, a true automated AI researcher is created.

Once an AI researcher is born, it means that AI can participate in the R&D of AI itself, forming a classic positive feedback mechanism: AI → improve AI → stronger AI → improve AI again.

And this is the core mechanism of the intelligence explosion!
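The compounding logic of that feedback loop can be made concrete with a toy calculation: if each generation of AI makes its successor some fixed fraction better, capability grows geometrically. The 10% per-generation gain below is an illustrative assumption of ours, not a figure from the article.

```python
# Toy illustration of the AI -> improve AI -> stronger AI feedback loop.
# Assumption (ours, not the article's): each generation improves the next
# by a fixed 10%, so capability compounds like interest.

def capability_after(generations: int, gain_per_gen: float = 0.10) -> float:
    capability = 1.0                         # normalized starting capability
    for _ in range(generations):
        capability *= 1.0 + gain_per_gen     # each AI improves its successor
    return capability

for n in (5, 10, 20):
    print(f"after {n} generations: {capability_after(n):.2f}x")
# prints:
# after 5 generations: 1.61x
# after 10 generations: 2.59x
# after 20 generations: 6.73x
```

Even a modest per-generation gain, applied recursively, is what turns steady progress into the "explosion" the article is pointing at.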

Google: Recursive self-evolution, with the deepest moat

If OpenAI is building an AI product ecosystem, then Google DeepMind is building something even more terrifying: an automated scientific