
GPT-5 Can Reason for the First Time. OpenAI Co-founder Reveals the Secret of AGI. Supercritical Learning Consumes Computing Power. Will Money Be Useless by 2045?

新智元 2025-08-18 07:49
GPT-5 is a watershed moment as it has finally learned to "reason". In a recent interview, co-founder Greg Brockman talked at length about OpenAI's path to AGI. In the future, AI will be able to learn while in use and deduce Nth-order consequences in a supercritical mode.

“GPT-5 is a watershed.”

Yesterday, Greg Brockman, co-founder of OpenAI, gave a high-level assessment of GPT-5 in an exclusive interview with the Latent Space team.

This one-hour interview is extremely valuable.

From the significance of GPT-5 and the turn toward reasoning and reinforcement learning, to computing-power bottlenecks and AI engineering practice, and on to predictions about future society, Greg Brockman's conversation revealed OpenAI's latest strategic thinking.

He also said, "When we finished training GPT-4 internally, we knew the next step had to be the reasoning paradigm. This is not a new idea, but it is the only way to make the model reliable."

Here are the core highlights of the interview:

· GPT-4 can hold continuous conversations but is not reliable enough; GPT-5 begins to truly learn to "reason"

· In the future, models will no longer follow "one-time training + infinite reasoning" but will learn while being used

· Supercritical learning: AI not only learns answers but can also deduce chains of consequences

· Using AI is a management science; one should act as the manager of multiple agents

· The only scarce resource is computing power

GPT-5: A Watershed

When talking about GPT-5, Greg emphasized that it is OpenAI's first "hybrid model," which automatically switches between a reasoning model and a non-reasoning model through a router.

This design reduces complexity for users and spares them from agonizing over which version to choose.
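The routing idea can be pictured with a small sketch. OpenAI has not published how its router works; the model names and the keyword heuristic below are purely illustrative assumptions standing in for what would in reality be a learned classifier.

```python
# Illustrative sketch of a hybrid-model router. The model names and the
# keyword heuristic are assumptions, not OpenAI's actual implementation.

REASONING_KEYWORDS = {"prove", "derive", "debug", "optimize", "why"}

def route(prompt: str) -> str:
    """Pick a backend model for a prompt.

    A real router would likely be a learned classifier; a trivial
    keyword heuristic is used here purely to show the control flow.
    """
    words = set(prompt.lower().split())
    if words & REASONING_KEYWORDS:
        return "reasoning-model"   # slow, deliberate chain-of-thought
    return "fast-model"            # cheap, low-latency completion

print(route("Prove that the sum of two even numbers is even"))
print(route("Summarize this email in one sentence"))
```

The user-facing benefit is exactly what the interview describes: the caller sends one request, and the dispatch decision happens behind a single entry point.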

In terms of performance, GPT-5 has shown a qualitative leap on high-intelligence tasks such as mathematics, programming, and physics.

In this regard, Greg drew a sharp comparison with previous generations of flagship models.

When GPT-3 first appeared, its text ability was still shallow; it could not even handle basic tasks like sorting numbers.

By GPT-4, practicality had improved greatly and the model became the basis for wide commercial use, but it still fell short in real depth of intelligence.

"And GPT-5 is a watershed."

GPT-5 can write proofs comparable to those of the best humans in extremely difficult arenas such as the IMO and IOI international competitions.

Problems that used to be a great challenge can now be solved by a small team.

What is even more striking is that physicists have reported that the reasoning process given by GPT-5 can reproduce insights they obtained only after months of research.

This means that the model is no longer just an “auxiliary writing tool”, but a real scientific research collaborator.

He also mentioned that after GPT-4, OpenAI made a key judgment:

Relying solely on a large amount of pre-training data cannot make the model truly reliable.

Early experiments showed that although GPT-4 could hold continuous conversations, it often "went off track" and was not reliable.

Therefore, the team concluded that the model must follow a loop of testing ideas, getting feedback, and reinforcement learning to narrow the gap with AGI.

Greg explained that they hoped the language model could, like the Dota AI before it, start from a randomly initialized neural network and eventually learn complex, stable behaviors.

Reinforcement learning can amplify reliable intelligence from limited human task designs.

This is also the biggest paradigm shift behind GPT-5: from static training to dynamic reasoning.
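The "test ideas, get feedback, reinforce" loop can be made concrete with a toy example. A multi-armed bandit stands in for a model choosing among candidate strategies; the reward values, the epsilon-greedy rule, and every name here are assumptions for illustration and reflect nothing about OpenAI's actual training code.

```python
import random

# Toy illustration of the "test ideas -> get feedback -> reinforce" loop.
# A three-armed bandit stands in for a model choosing strategies.
random.seed(0)

ideas = ["idea_a", "idea_b", "idea_c"]
true_reward = {"idea_a": 0.2, "idea_b": 0.9, "idea_c": 0.5}   # hidden from the learner
scores = {i: 0.0 for i in ideas}   # learned preference per idea
counts = {i: 0 for i in ideas}

for step in range(1000):
    # Explore occasionally; otherwise exploit the best-scoring idea.
    if random.random() < 0.1:
        idea = random.choice(ideas)
    else:
        idea = max(ideas, key=lambda i: scores[i])
    # Environment feedback: noisy reward around the idea's true value.
    reward = true_reward[idea] + random.gauss(0, 0.1)
    # Reinforce: incremental-average update toward the observed reward.
    counts[idea] += 1
    scores[idea] += (reward - scores[idea]) / counts[idea]

best = max(ideas, key=lambda i: scores[i])
print(best)  # the loop should settle on the highest-reward idea
```

The point of the toy is the shape of the loop, not the algorithm: a small amount of human task design (three ideas, one reward signal) is amplified into reliable behavior through thousands of cheap attempts, which is exactly the trade of human effort for compute described above.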

Supercritical Learning

Humans rely on "sleep replay" when learning, and AI is likewise exploring a "reason, then retrain" cycle.

OpenAI's models have shifted from "offline training + massive reasoning" to "reasoning + retraining on the reasoning data," gradually approaching the way humans learn.

Greg said, "We are moving from the era of 'one-time training, infinite reasoning' to a new era of 'reasoning while training.'"

In this process, humans only need to design a small number of tasks; the model can learn complex behaviors through thousands of attempts, though this consumes a huge amount of computing power.
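The "reason, then retrain on the reasoning data" cycle can be sketched as a pipeline of stubs. The model, the verifier, and the training step below are all hypothetical placeholders; the only thing the sketch claims is the shape of the loop: generate candidate traces, keep the ones that earn positive feedback, fold them back into training.

```python
# Hypothetical sketch of a "reason -> filter -> retrain" cycle.
# All three components are stubs; only the data flow is the point.

def model_generate(task):
    # Stand-in for sampling several candidate solutions from the model.
    return [f"{task}:attempt{i}" for i in range(3)]

def verifier(candidate):
    # Stand-in for feedback (unit tests, a grader, human review, ...).
    return candidate.endswith("attempt2")

def retrain(model_params, accepted):
    # Stand-in for a gradient update on the accepted reasoning traces.
    return model_params + accepted

params = []           # "weights," represented here as the traces trained on
tasks = ["sort", "prove", "plan"]

for generation in range(2):          # reasoning -> retraining, repeated
    accepted = []
    for task in tasks:
        for cand in model_generate(task):
            if verifier(cand):       # keep only traces with positive feedback
                accepted.append(cand)
    params = retrain(params, accepted)

print(len(params))
```

Each pass through the outer loop is one "day" of the cycle: the model's own verified reasoning becomes the next round's training data, which is the contrast with one-time training the interview draws.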

When computing power increases 10-fold or 10,000-fold, the model will undergo "supercritical learning."

This means the model not only masters the current task but also learns to deduce second- and third-order effects.

Looking ahead to future applications, Greg's experience at a biology research institute convinced him that DNA, like a language, can be learned by neural networks.

He said that, to a neural network, there is no essential difference between human language and the language of biology, and that DNA modeling has already reached roughly the GPT-2 level.

Greg also mentioned that his wife suffers from a rare genetic disease, and the breakthrough of AI in medicine has more personal significance for him.

Best Engineering Practices: Building a Prompt Arsenal

With such a powerful model, how can developers make the most of it?

“To fully unleash the potential of the model, some special skills are indeed required.”

This requires an almost obsessive tenacity to truly understand the boundaries of the model's capabilities and the contours of its flaws.

For this reason, Greg proposed a set of engineering best practices:

1. Build an AI-friendly codebase: clear modules, complete unit tests, and detailed documentation;

2. Decompose tasks and let multiple agents complete them in parallel;

3. Maintain a prompt library: accumulate your own arsenal of prompts and continuously probe the boundaries of the model.

However, these prompts often have no single correct answer; they are probes that let the model show creativity and diversity.
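A "prompt library" can be as simple as a versioned map of named templates. The template names and wording below are invented for illustration; the interview does not describe Greg's actual prompts.

```python
# Minimal "prompt library" sketch. Template names and wording are made up;
# the pattern is: store reusable templates, fill them per task.

PROMPTS = {
    "code_review": (
        "You are a senior engineer. Review the following diff for bugs, "
        "missing tests, and unclear names:\n{diff}"
    ),
    "decompose": (
        "Break this task into independent subtasks that separate agents "
        "can work on in parallel:\n{task}"
    ),
}

def render(name: str, **fields: str) -> str:
    """Fill a stored prompt template with task-specific fields."""
    return PROMPTS[name].format(**fields)

msg = render("decompose", task="Add dark mode to the settings page")
print(msg)
```

Keeping templates in one place makes it cheap to A/B-test wording against the model, which is the "continuously probe the boundaries" habit the practice list recommends.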

In the interview, Greg said, “I always regard the model as a development team, rather than a single tool.”

It can complete tasks remotely and asynchronously, or collaborate in real time like a pair programmer.

More importantly, AI does not mind being completely "micromanaged" and can be replicated without limit, which human developers cannot be.
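The "manager of multiple agents" stance boils down to a fan-out/gather pattern. In this sketch, `call_agent` is a stub standing in for a slow remote model call; the subtask names are invented, and only the concurrency pattern is the point.

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of managing multiple agents: fan work out, gather results.
# call_agent is a stub; in practice it would be a slow, remote model call.

def call_agent(subtask: str) -> str:
    return f"done: {subtask}"

subtasks = ["write tests", "update docs", "refactor parser"]

# Threads suit this pattern because real agent calls are I/O-bound:
# the manager mostly waits on the network, not on the CPU.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(call_agent, subtasks))

for r in results:
    print(r)
```

Because each agent call is independent, the manager's job reduces to decomposing the task well and reviewing the gathered results, which is the "management science" framing from the bullet list above.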

GPT-5 performs outstandingly on front-end tasks, but developers should not "overfit" to its strong suits; they should learn to let AI move between different modules to form a complete workflow.

Greg gave an example. He usually outsources non - critical tasks to the model to reduce risks while maintaining information flow.

He even claimed that OpenAI is building the largest intelligence machine in human history, next to which projects like the Apollo Program pale in comparison.

Even if some work is automated, excellent engineers are still scarce.

Regarding the current state of AI research, Greg pointed out that the laboratories are not homogeneous in orientation; each has its own focus.

OpenAI's focus is the next paradigm shift, with priorities including the reasoning paradigm, multimodality, and applications.

Computing Power: The Eternal Bottleneck

In the near future, computing power will become the most sought-after resource.

Inside OpenAI, researchers can take on larger projects and achieve more results only when they have more computing resources.

Recently, Altman said that OpenAI has a more powerful model internally but cannot release it due to insufficient computing power.

When talking about the limit of AI, Greg hit the nail on the head: “The bottleneck is always computing power.”

"If you give us more computing power, we can turn it into a stronger model."

He also compared computing power to a kind of energy: pre-training converts it into the potential energy of intelligence, while inference releases that intelligence as kinetic energy for tasks in the real world.

For this reason, OpenAI began building the "Stargate" super-cluster this year to continuously expand its infrastructure.

In Greg's view, the allocation of computing power will become a core issue for future society; computing power may be even scarcer than wealth.

In his words, “The only resource that will definitely be scarce in the future is computing power.”

Greg believes that with the scaling of computing power, the depth of AI reasoning will increase exponentially.

2045: AI Generates Everything. Is Money Useless?

In the interview, when the host asked what note he would send to 2045, Greg Brockman said that it would be an amazingly abundant world. The progress of AI may allow us to realize the dreams of science fiction and even move toward a multi-planetary civilization.

The application space of AI is extremely broad. Whether in medicine, education, or other industries, there are countless “unpicked fruits” waiting to be explored.

However, how to build a fair and efficient society to allocate computing resources will be a question that needs to be carefully considered in the future.

But he also seriously emphasized:

If AI can generate all materials for free, money may lose its meaning;

But computing power will become a new scarce resource. Those who can get more computing power can do more things.

At the end of the interview, Greg recalled that when he was young, he often felt that he “had missed the era”.

He said, “I used to think that by the time I was ready, all the cool problems must have been solved long ago... It turns out that this idea was completely wrong. The number of problems will increase over time rather than decrease.”

In other words, now is still the best time to enter the field of AI.

Reference Materials:

https://www.youtube.com/watch?v=35ZWesLrv5A&t=1s

https://x.com/slow_developer/status/1956741490170106288

This article is from the WeChat public account “New Intelligence Yuan”. Author: New Intelligence Yuan. Editor: Taozi. Republished by 36Kr with authorization.