
Altman Declares Transformer Dead, AGI to Arrive in Two Years, Next-Generation Architecture on the Way

新智元 · 2026-03-17 08:44
Shocking revelations from Stanford's actual records

The architecture that will end the Transformer is about to be born! In a recent interview, Altman boldly claimed that the next-generation AI architecture will completely displace the Transformer, and that LSTM's fate may repeat itself.

The biggest beneficiary of Transformer has personally pronounced its death sentence!

In recent days, Sam Altman returned to Stanford and dropped a bombshell in front of a group of sophomore students —

In the future, a brand-new underlying architecture will surely emerge, with a performance leap no less significant than the Transformer's overwhelming advantage over LSTM back then.

You know, the GPT empire is built on Transformer.

ChatGPT, GPT-4, o1, and Codex are all fruits of this architecture.

Now, the person who reaps these fruits says in person: The lifespan of this tree is almost over.

Moreover, Altman said bluntly, the AGI we are pursuing may just be a "warm - up"!

The breakthrough of the next-generation architecture is on the way: today's frontier LLMs already have sufficient cognitive ability to serve as a lever for human intelligence, opening the door to another technological paradigm.

Use AI to find the next Transformer

People say that brute force can create miracles, but brute force itself has its limits.

The Transformer has an inherent compute black hole: self-attention scales quadratically with sequence length, so when the text gets 10 times longer, the computation grows roughly 100 times.

This is why running a model at the GPT-5.4 level today costs an astronomical amount of money.
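The quadratic scaling above can be sketched with a rough FLOP count for a single self-attention layer. The numbers here are illustrative assumptions (the `d_model` width of 4096 is hypothetical), not anything reported by OpenAI:

```python
# Illustrative sketch of why attention cost explodes with context length.
# This is the standard back-of-the-envelope FLOP count for the two big
# matrix multiplies in one self-attention layer (Q @ K^T, then attn @ V).
def attention_flops(seq_len: int, d_model: int) -> int:
    """Rough FLOPs for one self-attention layer over seq_len tokens."""
    # Two (seq_len x seq_len x d_model) matmuls, ~2 FLOPs per multiply-add.
    return 2 * 2 * seq_len * seq_len * d_model

short = attention_flops(1_000, 4096)   # 1k-token context
long_ = attention_flops(10_000, 4096)  # 10x longer context
print(long_ / short)  # -> 100.0: 10x the text, 100x the attention compute
```

The quadratic term dominates at long contexts, which is exactly the wall (and the motivation for a new architecture) the article describes.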

Altman clearly sees this wall. But he doesn't think it's a dead end; on the contrary, he believes the tool to tear it down is already in hand.

There is a crucial sentence in the interview: Now the models are finally smart enough to assist humans in conducting scientific research at this level.

This means that AI can now help us find the next-generation architecture.

Using the current AI to discover a new architecture that can replace it, the logical chain is clear:

The stronger the model → the higher the scientific research efficiency → the greater the probability of discovering a new architecture → the new architecture in turn makes the model stronger.

A self - accelerating flywheel is thus formed.

Altman's confidence in making this judgment is related to his unique sense of paradigm shift.

During his freshman summer vacation, he went to work in an AI lab at Stanford. His conclusion was "these things have no future", and then he went to start other businesses.

However, his attention to AI has never ceased. In Altman's own words, this is a habit of "looking at the big picture" to avoid being shortsighted.

In 2012, when AlexNet emerged, like most people, he thought it was "cool" but didn't take it seriously.

In the following years, as deep-learning models became larger and stronger, Altman kept watching. Then, at a certain critical point, the situation completely changed: this thing was like an approaching asteroid, extremely crazy, yet few people in the world took it seriously.

So in 2015, OpenAI was founded. The core belief was simple: push the scale of deep learning to the limit and see what would happen.

But at that time, when they said they were going to build an AGI lab, the veterans in the industry thought they were crazy and even called them frauds.

But as we all know, the results are there for all to see.

GPT-2 made Altman see, for the first time, a computer do something unprecedented. GPT-3 amazed the world, and GPT-4 took it to the next level. When you stick to the right paradigm, the rewards are exponential.

Now, the same intuition is projected onto the next paradigm.

Transformer is not the end, just as LSTM was not the end.

Altman even gave specific advice:

If you are a researcher now, you should focus on this direction to find "where a nuclear-level breakthrough can be made", and rely heavily on large models as scientific research assistants.

The whiteboard in Greg's apartment

A night that changed the world

The most interesting part of this interview is Altman's recollection of the early days of OpenAI.

On the first day of OpenAI, everyone gathered in the apartment of co-founder Greg Brockman.

Between about 9:30 and 10 in the morning, eight or nine people arrived one after another and sat on the sofa, looking at each other.

Then someone said, "Well, what should we do?"

Someone proposed writing some papers. Another said they needed a whiteboard first. So someone placed a rush order for one on Amazon.

Altman said he felt a moment of panic at that time: This won't work. It's neither like a proper startup nor like any organization that can achieve something.

But then he said a very "Altman-like" thing: At that moment, you just need to take a deep breath and believe that if you are surrounded by the best people, things will always work out.

He bet right.

During that first week, most of the ideas that would anchor OpenAI's first four years were written on that whiteboard, even though at the time they themselves thought the ideas were shaky.

They didn't even think about making products at the beginning.

Altman repeatedly emphasized that they thought they were just a pure research lab and only needed to publish papers.

But later, two things became clearer:

  • First, the economic value of this path far exceeds imagination;
  • Second, the required funds are not in the billions but in the trillions.

The real turning point that cemented Altman's faith was GPT-2.

He said he doesn't remember the specific date when GPT-2 was released, but he will always remember the night when he first talked to that model.

It did things that I had never seen a computer do before.

At that moment, he thought, "This is it."

As for why GPT-2's release was postponed? Altman admitted that in hindsight it was a bit over-cautious, but he believes a little extra caution at each new level of AI capability is no bad thing.

Of course, you can't be too timid. If a company doesn't embrace AI fast enough, it will be defeated by fully autonomous AI companies, which would be a real disaster.

The full view of the Stanford interview

Altman's 10 judgments

Besides the architecture prediction and entrepreneurial stories, Altman also shared a large number of views in this interview, and almost every one of them is worth discussing separately.

1. AGI will arrive within two years.

Altman directly told the sophomore students in the audience:

By the time you graduate, you will enter a world where AGI already exists.

Of course, the underlying driving forces of human beings won't change. You still have to move, find a job, and consider starting a family.

But scientific research will be highly automated, and the meaning of starting a startup or working for a big company will be completely rewritten.

2. Programming agents are the next ChatGPT moment.

What will be the next big thing? Altman didn't hesitate: Programming AI agents.

Close behind, though it hasn't fully broken out yet, is AI's ability to match humans at tasks across all knowledge-based jobs.

However, this day is not far away.

3. One person can do the work of a medium - sized company.

In the future, there will be a large number of micro-startups with one person or six partners, and their influence and revenue can even compete with today's medium- and large-sized enterprises.

Altman said that the emergence of the iPhone was the last such opportunity, and this time it's even more powerful.

Not only can you do things that were previously unthinkable, but you can also build products and companies extremely quickly with very little manpower.

4. An AI CEO? It's not impossible.

When talking about the impact of AI on society, Altman said something thought - provoking:

He will never deceive himself into thinking that, in the not-too-distant future, there couldn't be an AI CEO better suited to lead OpenAI than he is.

If some companies or countries embrace AI while others don't, the competitiveness gap will be overwhelming.

He admitted that he hasn't fully figured out the political, economic, and social impacts behind this.

5. Don't panic. Human adaptability is seriously underestimated.

Altman is not an AI doomsayer.

He repeatedly emphasized a point: Although AGI sounds like it will completely subvert society, the experience of being in it won't be as terrifying as it sounds. At most, you'll feel a bit confused in the first few days.

Humans long to be valuable to each other, to compete, to create, and to express themselves. These underlying driving forces won't disappear.

Maybe the occupations 100 years from now will be completely different from today's, but people will always have something to do and will always care about the connection between people.

6. Don't be afraid to compete with OpenAI.

Someone asked what if OpenAI becomes an ultimate giant?

Altman's answer was unexpectedly honest: Back then, everyone said it was impossible to compete with Google, but we did it.

One day, there will be a company bigger and more successful than OpenAI, and they definitely won't take the same path.

He even said that if Google hadn't been so "poor" back then, OpenAI would never have emerged.

Big companies have their common problems.

7. It burns money fast, but don't panic.

Facing the pointed question that "OpenAI burns money at a terrifying speed", Altman was very calm: it does burn money fast, but if spending 1 billion this year can earn 3 billion next year, there is plenty of capital in the world queuing up to make this deal.

8. Self-developed chips are serious, but building data centers is out of the question.

OpenAI has a huge plan for custom chips and is extremely excited about its own inference chips.

As for building its own data centers, in Altman's own words, he really doesn't want to do this hard work at all.

He'll do it if forced, but it's better to design the server racks to the extreme and let others do the dirty work.

9. There will be a breakthrough in social products.

Altman believes that the opportunities of AI are far more than just "putting an AI into existing software".

He gave an example of social products: Imagine a bunch of AI agents representing their respective users chatting and exchanging information autonomously in a virtual space. This is a subversion of the underlying logic.

10. Knowing is easy, but doing is harder.

This is what Altman wrote in his first blog post.

Does it still hold true in the AI era? He said it holds even more than before.

It's getting easier to acquire knowledge, and it's also getting easier to achieve things, but this applies to everyone — you have to compete with the whole world.

He said that the top experts he knows who are most proficient in using AI tools all feel that their work has never been more difficult than now.

The tools are incredibly powerful, but it's also extremely difficult to use them well to maintain top - level competitiveness.

Sam, are you really happy?

The last unexpected moment in the interview was the soul - searching question from a student.

You know, this is a CEO whose life spins completely out of control after 8 am every day.

He works for a few hours, spends an hour with his kids, and then goes to the company. After that, it's just chaos.

In his words, no company runs as fast, is as chaotic inside, and is under as much pressure as OpenAI.

But Altman said that he is one of the happiest people he knows.

He shared a life - changing cognitive shift.

Most people think the opposite of a bad experience is a good experience, so they suffer when they encounter bad things. But he reframed the problem. The opposite of a bad experience is actually the complete loss of the ability to experience.