OpenAI's timeline goes public: from 2026 to 2028, the strategy has completely changed.
On October 28, 2025, Sam Altman and OpenAI Chief Scientist Jakub Pachocki held a rare livestream about the company's roadmap.
They revealed a clear timeline:
By September 2026, an intern-level AI research assistant.
By March 2028, a fully automated AI researcher.
This is not empty talk; a complete engineering path comes with it:
At the infrastructure level, Altman proposed a compute factory producing 1 gigawatt per week;
On safety, Jakub announced a five-layer architecture built around value alignment;
At the product level, ChatGPT is to evolve from a dialogue tool into an "AI platform";
At the organizational level, OpenAI has completed a restructuring. Behind the new architecture are $25 billion in new foundation commitments and a binding partnership with Microsoft valued at about $135 billion.
Altman said, "We are no longer relying solely on releasing new models to drive the future. Instead, we want the world to build more on top of the platform."
OpenAI's approach has indeed changed.
This time, they are no longer trying to define "what AGI is". Instead, they have drawn clear milestones and told everyone: the future does not arrive suddenly; it is a systematic process that can be built in advance.
Section 1 | Clear Goal: AI Researchers on Duty by 2028
"We are trying to build a system that can independently complete research projects." Jakub Pachocki's opening remarks were straightforward.
The timeline OpenAI gave is quite specific: an intern-level AI research assistant by September 2026, and by March 2028 a real AI researcher capable of completing independent scientific research projects.
This is no longer a question of whether the model can write papers, but of when AI can become a formal employee in the laboratory.
In Jakub's view, the path is clear: they believe deep-learning systems may reach superintelligent levels in less than a decade and outperform humans in many key fields.
Why are they so bold in making such a judgment?
Jakub provided a very simple but effective measurement standard:
A good way to judge is to see how complex tasks the model can solve and how long it can work continuously.
In the GPT-3 era, the model could only handle tasks lasting tens of seconds;
in the GPT-4 era, it could already handle complex tasks lasting minutes, even up to five hours;
now, they are evolving toward a new ability: mobilizing an entire data center and thinking continuously for days.
That is to say, AI is evolving from a problem-solving machine into a real scientific research system.
Altman added, "We used to think that AGI would suddenly appear at some magical moment in the future. But now we find that it is more like a process, and you are already on this journey."
This judgment is changing the way they train models.
In the past, models were improved by scaling up the parameter count. Now they pay more attention to two variables:
the "thinking" time allotted during training (letting the model spend more compute reasoning), and
whether the inference time after training is long enough.
The underlying logic: if we want AI to truly help humans make major discoveries, we must give it enough time and sufficient computing resources to think. Sometimes it is worth pointing an entire data center at a single scientific problem.
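To make "more thinking time" concrete, here is a toy sketch, entirely our own illustration rather than anything OpenAI described: the well-known self-consistency pattern, where extra inference-time compute buys reliability by sampling several independent attempts and majority-voting the answers. The `noisy_solver` function is a hypothetical stand-in; a real system would call a model instead.

```python
import random
from collections import Counter

def noisy_solver(a: int, b: int, error_rate: float = 0.3) -> int:
    """Hypothetical stand-in for one model sample: usually returns the
    right sum, but sometimes slips by one (a simulated reasoning error)."""
    slip = random.choice([-1, 1]) if random.random() < error_rate else 0
    return a + b + slip

def solve_with_budget(a: int, b: int, n_samples: int) -> int:
    """Spend more inference-time compute: draw several independent
    attempts, then majority-vote the answers (self-consistency)."""
    answers = [noisy_solver(a, b) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(solve_with_budget(17, 25, n_samples=1))    # wrong ~30% of the time
print(solve_with_budget(17, 25, n_samples=101))  # almost always 42
```

The same logic scales up: point more compute at one question for longer and the answer distribution sharpens, which is exactly the bet behind dedicating a data center to a single problem.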
Meanwhile, this also means the reconstruction of the organizational work mode.
OpenAI has started using the model as an internal research "intern", with the goal of amplifying researchers' computing leverage, helping them advance research faster, and ultimately forming a system that can ask questions, find paths, and run experiments on its own.
What changes will this bring?
First, for universities and research institutions: those who adopt AI for research earliest will be the first to enter a new scientific research cycle.
Second, for AI companies: product design can no longer focus only on outputting text; it has to consider how to let AI complete whole tasks.
For ordinary people, the way of using AI is also changing: from asking questions to delegating tasks. The tasks you hand to the model determine what future it can build for you.
Section 2 | No Longer Just ChatGPT, but Building an AI Cloud Platform
Altman mentioned Bill Gates' definition of a "platform" in this live broadcast:
When the value created by people on the platform exceeds the value of the platform itself, it means you have really built a platform.
This statement is not about the future but about what OpenAI is currently doing.
They no longer regard ChatGPT as merely a "super assistant". Instead, they state plainly that ChatGPT is becoming a platform on which everyone can build their own AI services.
How to do it specifically?
Altman drew a complete product blueprint.
At the bottom layer are data centers, power, and computing chips; above that sit the trained models; further up are first-party applications such as ChatGPT, Sora, and Atlas; and at the top is the part that excites him most: users creating new services on top of AI.
This means:
Enterprises can access OpenAI's technology through APIs to build their own applications (see the sketch after this list).
Developers can create application-style agents inside the ChatGPT app ecosystem.
In the future, new hardware form factors will let AI serve you anytime and anywhere, not just on the web.
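As a concrete illustration of the first path, here is a minimal sketch of calling the OpenAI API from Python. The model name, prompts, and use case are placeholders of ours, so check the current documentation before relying on any of them.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a support agent for Acme Inc."},
        {"role": "user", "content": "Summarize the return policy in one paragraph."},
    ],
)
print(response.choices[0].message.content)
```

Everything else in the platform story, from agents to hardware, layers on top of this basic request-response loop.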
Altman said, "We hope that OpenAI will become an 'AI cloud platform', not just a place for our released products, but a place where others can build the future."
ChatGPT is the "starting point", not the "ending point".
To support this platform, they are building a complete set of AI interaction systems: web pages, the Atlas browser, mobile devices, application markets, plugin ecosystems, and enterprise platforms. These components are gradually opening up interfaces for others to access and develop.
For example, the Atlas browser can be seen as an AI version of Chrome, giving you real-time AI assistance as you browse. Sora can not only generate videos but may also become a new entry point for content distribution.
Behind this lies a deeper judgment:
Traditional products do what you tell them to do; an AI platform anticipates what you haven't thought of yet.
In the past, you bought a computer, installed software, and shut it down after use;
in the future, you will have an AI that actively learns your preferences, understands your tasks, and becomes part of you.
This is no longer just product evolution but a complete change in the usage relationship.
Section 3 | Five-Layer Security Architecture: Making AI's Thinking Traceable
As AI becomes more and more powerful, "Will it get out of control?" has become an inevitable question for everyone.
Jakub made it very clear in this broadcast: "We believe the most central issue in the long-term safety of AI is value alignment."
That is to say, what does AI really care about? How will it choose when goals conflict? These questions determine whether AI can truly integrate into human society.
OpenAI announced an internal security framework divided into five layers:
Value alignment: Where do AI's values come from? Can it understand human high - level goals?
Goal alignment: When given a specific task, can AI accurately understand and execute it?
Reliability: Can it provide stable output for simple tasks? Will it admit uncertainty when encountering complex problems?
Adversarial robustness: Can the model remain stable when facing malicious attacks?
System security: At the entire system level, are there clear boundaries for AI's data access and device control?
These five layers are progressive: from the model's "internal thinking" to "interaction with humans", then to "anti - attack ability", and finally to "the boundary of the entire system".
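Purely as an illustration, and emphatically not OpenAI's implementation, the taxonomy could be encoded as a simple ordered checklist. The Python below is our sketch of the inner-to-outer review order the framework implies.

```python
from enum import Enum

class SafetyLayer(Enum):
    """The five layers, ordered from the model's inner life outward."""
    VALUE_ALIGNMENT = 1         # where do the AI's values come from?
    GOAL_ALIGNMENT = 2          # does it understand and execute the given task?
    RELIABILITY = 3             # stable on simple tasks, honest about uncertainty?
    ADVERSARIAL_ROBUSTNESS = 4  # stable under malicious attack?
    SYSTEM_SECURITY = 5         # hard boundaries on data access and device control?

# Review progresses layer by layer, mirroring the inner-to-outer structure.
for layer in sorted(SafetyLayer, key=lambda l: l.value):
    print(f"Layer {layer.value}: {layer.name.replace('_', ' ').title()}")
```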
These security mechanisms are not post-hoc repairs; they are embedded from the very beginning of the design. Among all these levels, the most core and most challenging is the first: value alignment.
Most notably, they spent a lot of time introducing a new research direction called "chain-of-thought faithfulness".
To put it simply: When solving problems, the model should not only get the correct answer but also show how it thinks step by step.
Jakub explained the idea vividly: it's not about getting AI to produce good-looking ideas, but getting it to faithfully record its real thoughts. It is like keeping a draft: the point is not to write a perfect answer but to preserve the entire thinking process.
OpenAI wants to make the model's internal "draft" readable, analyzable, and understandable.
They also raised a very real challenge: if the model's "thinking process" is made public at all times, it may start to cater to its audience. Over time, it is no longer "really thinking" but thinking in order to perform.
So they adopted a "restrained design" approach:
Don't force the model to present a "perfect idea"; let it reason on its own first and review afterward.
In product design, use a "summarizer" to surface the chain of thought indirectly rather than exposing it completely.
In this way, you can understand what it is thinking without interfering with how it thinks.
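Here is a minimal sketch of that summarizer pattern, assuming the standard OpenAI Python SDK and a placeholder model name. It mirrors the design described above in spirit only; it is not OpenAI's actual internal mechanism.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o"   # placeholder; substitute whatever model you actually use

def answer_with_summarized_thinking(question: str) -> tuple[str, str]:
    """Two-pass sketch of the 'summarizer' idea: the raw chain of thought
    stays private, and the user sees only a distilled account of it."""
    # Pass 1: let the model reason freely. This draft is never shown directly,
    # so the model has no incentive to 'perform' its thinking for a reader.
    draft = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user",
                   "content": f"Think step by step, then answer:\n{question}"}],
    ).choices[0].message.content

    # Pass 2: a separate summarizer call distills the draft for display.
    summary = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user",
                   "content": f"In two sentences, summarize the reasoning below:\n{draft}"}],
    ).choices[0].message.content
    return draft, summary  # show the summary; keep the draft for analysis
```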
Jakub said:
We believe that preserving the privacy of the model's chain of thought is actually one of the best ways to understand it.
The entire design is based on a fundamental concept: The more powerful AI is, the more traceable, explainable, and controllable each of its judgments and actions should be.
This is not about control but a prerequisite for coexistence.
This time, OpenAI has for the first time turned safety into an implementable architecture rather than an abstract ethical principle. This is what they mean by preparing for superintelligence.
Because when a system becomes smart enough and runs long enough, exhaustive instructions stop being reliable. Principles are the only thing that still works.
Section 4 | $1.4 Trillion to Build an AI Factory Producing 1 Gigawatt per Week
Altman gave the most concrete goal of this broadcast: they are building a factory that can produce 1 gigawatt of computing power per week.
It's not a virtual platform but a real infrastructure: with land, workers, energy, and cooling systems.
This is what they call the "Stargate" data center. The first one is under construction in Abilene, Texas, with thousands of people working on it every day.
If AI is really going to advance science, cure diseases, write code, and serve as everyone's assistant, what we need is not a handful of graphics cards but a complete industrial system of energy and compute.
Therefore, OpenAI has set a goal: to bring the cost of each gigawatt of computing power down to $20 billion within five years.
What sounds like an engineering budget is actually Altman's vision for the "popularization of AI": computing power produced and dispatched at scale, like electricity. Whoever cuts the cost deeply enough turns AGI into a tool for the general public rather than a toy for the elite.
Behind it is a complete cooperation network:
"We are cooperating with multiple parties such as AMD, Broadcom, Google, Microsoft, Nvidia, and Oracle, covering various aspects such as chip manufacturing, data center construction, land procurement, and energy acquisition."
This cooperation involves cross-industry supply-chain linkages, contracts worth hundreds of billions of dollars, and a construction horizon of more than a decade.
OpenAI estimates the infrastructure investment committed so far at more than 30 gigawatts (i.e., 30 billion watts) of computing power, with financial obligations close to $1.4 trillion.
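A quick back-of-envelope calculation, ours rather than OpenAI's, shows how far the $20 billion target sits from the cost those commitments imply today:

```python
committed_gw = 30             # gigawatts of compute committed so far
committed_usd = 1.4e12        # financial obligations, ~$1.4 trillion
target_usd_per_gw = 20e9      # five-year target: $20B per gigawatt

current_usd_per_gw = committed_usd / committed_gw
print(f"Implied cost today: ~${current_usd_per_gw / 1e9:.0f}B per GW")        # ~$47B
print(f"Required cost drop: ~{current_usd_per_gw / target_usd_per_gw:.1f}x")  # ~2.3x
```

In other words, the five-year plan assumes costs falling to well under half of what the current commitments imply.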
Altman said:
If we can really provide all these at a low enough cost, we can support the entire society's demand for using AI.
To achieve this goal, OpenAI has also put forward a bold idea: to let robots participate in data center construction.
"We need to rethink what robots should do. They should not just appear in demonstration videos; they should genuinely take part in building data centers."
This gives humanoid robots a clear industrial application scenario: not chatting or delivering packages, but serving as a real workforce in the construction of AI factories. It has even been written into their five-year cost plan.
But all this is not just for the service of OpenAI itself.
What OpenAI hopes to build is not its own empire but a platform where everyone can build, use, and create AI applications.
As Altman repeatedly emphasized, "We hope that more people in the world can create things more valuable than ours using our platform."
This is the ultimate vision of the platform strategy.
Section 5 | Organizational Restructuring: A Two-Layer Structure of Non-Profit Foundation + Public-Benefit Company
Sam Altman revealed an important signal during the live broadcast:
OpenAI's structure is no longer a company in the traditional sense.
The new architecture diagram he showed is much simpler than before, but its intention is clearer.
The structure has been re - divided into two layers:
✅ Layer 1 | OpenAI Foundation: Control Belongs to the Non-Profit
This is the control center of the entire organization.
Altman said, "We hope that the OpenAI Foundation will become the largest non - profit organization in history."
It is responsible for controlling the board of directors, formulating research directions, and ensuring that the mission does not deviate.
The foundation initially holds about 26% of the equity of the PBC (Public-Benefit Company), a share that may increase further depending on the company's performance.
This structure ensures that the long - term mission will not be hijacked by commercial interests.
✅ Layer 2 | OpenAI Group PBC: Executing Commercial and Infrastructure Tasks
This is the familiar OpenAI "company", but it is not a purely for-profit enterprise; it is a PBC (Public-Benefit Company).
Altman positioned it this way: it operates like a normal company, able to raise funds, develop products, and build infrastructure, but it must abide by the mission and safety constraints set by the foundation.
It is responsible for promoting:
Model development and product release (ChatGPT, the GPT API, Sora, etc.)
Raising funds and building computing infrastructure
Long-term strategic cooperation with Microsoft, Broadcom, and others
Heavy investment in AI-related healthcare, scientific research, and resilience plans
In this structure, Microsoft holds about 27% of the PBC, a stake valued at roughly $135 billion, and the partnership is locked in through 2032. This gives OpenAI stable cloud services, funding, and technical support.
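A back-of-envelope calculation (ours, based only on the figures above) shows what those percentages imply about the PBC's overall valuation:

```python
microsoft_stake_usd = 135e9  # Microsoft's stake, valued at ~$135 billion
microsoft_share = 0.27       # ~27% of the PBC
foundation_share = 0.26      # the foundation's initial ~26%

implied_pbc_valuation = microsoft_stake_usd / microsoft_share
print(f"Implied PBC valuation: ~${implied_pbc_valuation / 1e9:.0f}B")  # ~$500B
print(f"Foundation stake value: "
      f"~${foundation_share * implied_pbc_valuation / 1e9:.0f}B")      # ~$130B
```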
✅ The Foundation's First Task: Allocating $25 Billion to Launch "AI in Healthcare" and "AI Resilience"
Altman said:
"We don't want the foundation to just talk about directions. Instead, we want it to support major tasks with real money."
Therefore, in the first phase, the foundation is directly allocating $25 billion to two things:
1. AI in Healthcare: Using AI to Discover Disease Therapies
This is not about generating medical papers; it is about driving real-world progress:
AI - assisted generation of training data
Automated research systems to accelerate pre - clinical research
Quickly building drug screening and optimization models
Jakub mentioned, "AI is expected to find treatment paths faster than humans. We even believe that this may be one of the most important long - term impacts of AI."
2. AI Resilience: Building a Society-Level "Defense Layer"
Altman's co