Altman admitted that he "messed up": GPT-5.2 sacrificed writing ability for top-notch programming skill. The cost of intelligence will fall 100-fold next year, and agents, he confirmed, can already run almost permanently on some tasks.
In AI circles, every statement Sam Altman makes is treated as an update to the "weather forecast" for the future.
Last night, Altman posted on X that he would host an online seminar, hoping to gather public feedback before starting to build the next-generation tools.
At 8 a.m. Beijing time this morning, the seminar hosted by OpenAI CEO Sam Altman took place as scheduled. Entrepreneurs, CTOs, scientists from a range of industries, and representatives of the developer community put their most pointed and practical questions to Altman about the future shape of AI, model evolution, agents, the automation of scientific research, and safety.
During the seminar, OpenAI's helmsman not only sketched the evolution roadmap for GPT-5 and its successors but also laid out a reality every developer and entrepreneur has to face: we are entering a period of drastic change in which the cost of intelligence is extremely low and software is shifting from "static" to "instant generation".
The first focus of the conversation fell on the "asymmetry" in GPT-5's performance. Some developers noticed that, compared with GPT-4.5, the new version is extremely strong at logical reasoning and programming but seems slightly weaker in literary flair. Altman was remarkably candid in response.
He admitted that OpenAI did "mess up" the prioritization of writing ability while developing GPT-5.2, because the team tilted its limited compute toward hard intellectual metrics such as reasoning, coding, and engineering capability.
In Altman's view, intelligence is a "malleable resource": once the model has a top-tier reasoning engine, the return of writing ability is only a matter of time. This "unbalanced development" reflects a strategic focus at OpenAI: first conquer the highest ground of human intelligence through the Scaling Law, then go back and fill in the details of aesthetics and expression. It also means that future competition among models will no longer be a single-dimensional comparison but will hinge on who achieves "intellectual parity" across all dimensions first.
If intelligence determines the ceiling, then cost and speed determine AI's penetration rate. Altman made a startling commitment at the meeting: by the end of 2027, the cost of GPT-5.2-level intelligence will be reduced by at least 100 times.
However, this future in which intelligence is "too cheap to meter" is not the end of the story.
Altman pointed out a subtle shift in the market: developers' appetite for "speed" is starting to outweigh their concern about "cost". As agents begin to handle long-horizon tasks of dozens of steps, complex autonomous decision-making becomes practically useless unless output speed rises by more than a hundredfold. Given this trade-off, OpenAI may offer two paths: extremely cheap "intelligence on tap", and an extremely fast-feedback "intelligence booster". The emphasis on speed signals that AI applications will move past simple Q&A into a high-frequency, real-time "autopilot" stage.
Against the backdrop of plummeting intelligence costs and soaring speed, the very concept of traditional software is breaking down. Altman put forward a subversive vision: future software should not be static.
In the past, we were used to downloading a general-purpose Word or Excel; in the future, when you hit a specific problem, the computer should simply write a piece of code for you on the spot, generating an "instant application" to solve it. This "generate on demand, discard after use" model will thoroughly reshape the operating system. We may keep some familiar buttons out of habit, but the underlying logic will be highly personalized. Everyone's tools will evolve with the accumulation of their workflow, eventually forming a personal, dynamically evolving productivity system. This is not just the customization of software; it is a reorganization of the relations of production.
InfoQ has translated and compiled the key points of this seminar for readers:
Question: How do you view the impact of AI on future society and the economy?
Sam Altman: To be honest, it is very difficult to fully digest an economic change of this scale within a year. But I think it will greatly empower everyone: it will bring large-scale abundance, lower barriers, and extremely low costs for creating new things, founding new companies, and exploring new science.
As long as we don't make major mistakes in policy, AI should become a "balancing force" in society, giving real opportunities to those who have long been treated unfairly. However, I am genuinely worried that AI may also lead to a heavy concentration of power and wealth; this must be a core concern of policymaking, and we must resolutely avoid that outcome.
Question: I noticed that GPT-4.5 was once the peak in writing ability, but recently GPT-5's writing in ChatGPT seems a bit clumsy and hard to read. GPT-5 is obviously stronger at agents, tool invocation, and reasoning, but it seems to have become more "unbalanced" (for example, extremely good at programming but average at writing). How does OpenAI view this imbalance of capabilities?
Sam Altman: To be honest, we really messed up on the writing side. We hope future GPT-5.x versions will far exceed 4.5 in writing.
At the time, we decided to focus most of our energy on GPT-5.2's intelligence, reasoning, programming, and engineering capabilities, because resources and bandwidth are limited; sometimes focusing on one thing means neglecting another. But I firmly believe the future belongs to general, high-quality models. Even if you only want it to write code, it should communicate well and be able to talk with you clearly and sharply. We think "intelligence" is interconnected at the bottom, and we can reach excellence across all of these dimensions in one model. Right now we are really focused on "programming intelligence", but we will catch up in the other areas soon.
Intelligence will be too cheap to meter
Question: For developers running tens of millions of agents, cost is the biggest bottleneck. How do you view small models and future cost reductions?
Sam Altman: Our goal is to cut the cost of GPT-5.2-level intelligence by at least 100 times by the end of 2027.
But there is a new trend: as model output grows more complex, users' demand for "speed" can even exceed their demand for low "cost". OpenAI is very good at pushing the cost curve down, but in the past we didn't pay enough attention to extremely fast output. In some scenarios, users may be willing to pay a high price if speed can be increased 100-fold. We need to find a balance between "extremely cheap" and "extremely fast". If the market wants lower cost more, we will go very far down that curve.
Question: The current interaction interface is not designed for agents. Will the popularization of agents accelerate the emergence of "micro-apps"?
Sam Altman: I no longer regard software as a "static" thing. Now, if I encounter a small problem, I expect the computer to immediately write a piece of code to help me solve it. I think the way we use computers and operating systems will change fundamentally.
Although you may use the same word processor every day (because you need the buttons to stay in familiar positions), the software will be extremely customized according to your habits. Your tools will continuously evolve and converge to your personal needs. Inside OpenAI, everyone is already used to using the programming model (Codex) to customize their own workflows, and everyone's tools are completely different. Software that is "born for me and because of me" is almost an inevitable trend.
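Altman's "generate on demand, discard after use" picture is a product vision rather than a published API, but developers can already approximate it today. Below is a minimal sketch using the official OpenAI Python SDK; the model name, prompt, and the `instant_app` helper are illustrative assumptions, not anything Altman or OpenAI described.

```python
# Hypothetical sketch of an "instant application": ask a model to write a
# throwaway script for one specific problem, run it once, then discard it.
# The model name and prompt are placeholders, not a documented OpenAI product.
import subprocess
import tempfile

from openai import OpenAI  # assumes the official openai Python SDK (>= 1.0)

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def instant_app(problem: str) -> str:
    """Generate a single-purpose Python script for `problem` and run it once."""
    resp = client.chat.completions.create(
        model="gpt-4.1",  # placeholder; substitute whatever model you use
        messages=[
            {"role": "system",
             "content": "Write a complete, self-contained Python script that "
                        "solves the user's problem. Output only raw code, "
                        "no markdown fences."},
            {"role": "user", "content": problem},
        ],
    )
    code = resp.choices[0].message.content
    # Write the generated script to a temp file and execute it once.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(["python", path], capture_output=True, text=True)
    return result.stdout  # the "app" is discarded after use


print(instant_app("Read sales.csv and print total revenue per month."))
```

A real version would sandbox the generated script and validate its output before trusting it; the point here is only the shape of the loop: describe the problem, receive a throwaway program, run it once, throw it away.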
Advice for entrepreneurs: Don't make "small patches for models"
Question: When model updates keep swallowing the functionality of startups, how should entrepreneurs build moats? What does OpenAI promise not to touch?
Sam Altman: Many people think the physical laws of business have changed, but they haven't. What has changed is simply "faster work" and "faster software development". The rules for building a successful startup remain the same: you still have to solve customer acquisition, establish a GTM (Go-to-Market) strategy, create stickiness, and build network effects or competitive advantages.
My advice to entrepreneurs: when the amazing GPT-6 update lands, will your company be happy or sad? You should build things where "the stronger the model, the stronger your product". If you are just a small patch on the edge of the model, it will be very difficult.
Question: Currently, when agents execute long-running tasks, they often break down after 5 to 10 steps. When will genuinely long-term autonomous operation be achieved?
Sam Altman: It depends on the complexity of the task. Inside OpenAI, some specific tasks executed through the SDK can already run almost permanently.
This is no longer a question of "when it will be achieved" but a question of "scope of application". If you have a specific, well-understood task, you can try to automate it today. But if you want to tell the model "go and start a startup for me", that is still very difficult at present, because the feedback loop is too long and hard to verify. My advice to developers: first break the task down so the agent can self-verify each intermediate step, then gradually expand its scope of responsibility.
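The pattern Altman recommends (decompose, let the agent verify each step, only then widen its scope) maps onto a simple control loop. The sketch below is an illustration of that pattern under stated assumptions, not an OpenAI API: `run_step` and `verify_step` are placeholder stubs you would replace with calls to your own model or agent SDK.

```python
# Illustrative sketch of "decompose, self-verify each step, then expand scope".
# run_step / verify_step are stand-ins for real agent calls, not OpenAI APIs.
from dataclasses import dataclass


@dataclass
class Step:
    instruction: str  # what the agent should do
    check: str        # how to verify the result (tests, schema, expected diff, ...)


def run_step(instruction: str) -> str:
    # Placeholder: call your model or agent here.
    return f"[agent output for: {instruction}]"


def verify_step(output: str, check: str) -> tuple[bool, str]:
    # Placeholder: run tests, validate a schema, or ask a model to critique.
    return True, ""


def run_pipeline(steps: list[Step], max_retries: int = 2) -> list[str]:
    """Run steps one by one, verifying each before moving on."""
    results = []
    for step in steps:
        for _ in range(max_retries + 1):
            output = run_step(step.instruction)
            ok, feedback = verify_step(output, step.check)
            if ok:
                results.append(output)
                break
            # Feed the failure back and retry this step, so errors
            # don't compound silently across dozens of later steps.
            step.instruction += f"\nPrevious attempt failed: {feedback}"
        else:
            raise RuntimeError(f"Step could not be verified: {step.instruction!r}")
    return results


run_pipeline([
    Step("Extract the invoice totals from invoices.csv", "column sums match row count"),
    Step("Draft a summary email of the totals", "email mentions every month present"),
])
```

The key design choice is that a failed verification stops the pipeline at that step instead of letting the agent keep going; widening the agent's responsibility then just means adding steps whose checks you trust.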
Can AI help humans generate good ideas?
Question: Many people now complain that the content generated by AI is "garbage". How should we use AI to improve the quality of human creativity?
Sam Altman: Although people call the output of AI garbage, humans also produce a lot of nonsense. Generating truly new ideas is very difficult. I am increasingly convinced that the boundaries of human thinking depend on the boundaries of tools.
I hope to develop tools that can help people generate good ideas. When the cost of creation collapses, we can test and correct quickly through tight feedback loops and find good ideas sooner.
Imagine a "Paul Graham robot" (Paul Graham is a co-founder of YC) that knew your entire past, your code, and your work, and could brainstorm with you constantly. Even if 95 of the 100 ideas it gave were wrong, as long as it sparked those 5 genius ideas in you, its contribution to the world would be huge. GPT-5.2 has already let our internal scientists feel real, non-trivial scientific progress, and a model that can generate scientific insights should also be able to generate excellent product insights.
Question: I'm worried that models will trap us in old technologies. Right now it is very difficult for a model to learn new technologies that have emerged in the past two years. Can we guide models to learn the latest emerging technologies in the future?
Sam Altman: Absolutely, no problem. In essence, the model is a "general reasoning engine". Although models currently have a huge amount of world knowledge built in, the milestone for the next few years is this: when you give the model a brand-new environment, tool, or technology, explaining it once (or letting it explore it autonomously once) will be enough for it to learn to use it extremely reliably. That is not far off.
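The "explain it once" milestone is not a shipped product, but developers already approximate it by handing a model a one-shot description of an unfamiliar tool through function calling. Here is a minimal sketch with the official OpenAI Python SDK; the tool itself (`query_flux_db`) and the model name are invented purely for illustration.

```python
# Sketch: "explaining" a brand-new tool to a model by describing it once,
# via function calling in the openai Python SDK (>= 1.0). The tool here
# ("query_flux_db") is fictional and exists only for this example.
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "query_flux_db",
        "description": "Run a read-only query against the (fictional) FluxDB "
                       "time-series store and return matching rows as JSON.",
        "parameters": {
            "type": "object",
            "properties": {
                "metric": {"type": "string", "description": "Metric name, e.g. cpu_load"},
                "last_minutes": {"type": "integer", "description": "Lookback window in minutes"},
            },
            "required": ["metric"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4.1",  # placeholder model name
    messages=[{"role": "user", "content": "What was CPU load over the last 15 minutes?"}],
    tools=tools,
)

# If the model has "learned" the tool from the single description above,
# it responds with a structured call instead of guessing an answer.
print(resp.choices[0].message.tool_calls)
```

What Altman describes goes further than this (reliable use of whole environments after one explanation or one round of exploration), but the one-shot tool description is the nearest thing developers can do today.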
Question: As a scientist, I find that research inspiration is growing exponentially, but human energy is limited. Will the model take over the entire scientific research process?
Sam Altman: There is still a long way to go before fully closed-loop autonomous research. Even though mathematical research may not require a laboratory, top mathematicians still need to be deeply involved today to correct the model's faulty intuitions.
This is very similar to the history of chess: after Deep Blue defeated Kasparov, there was a period when "human-machine collaboration" (the centaur) was stronger than pure AI, but before long pure AI took the upper hand again.
Currently, AI is like an "unlimited supply of postdocs" for scientists. It can help you explore 20 new problems at once and run a very broad search. As for physical experiments, we are also debating whether OpenAI should build an automated laboratory itself or let the global research community contribute experimental data. For now, the research community's embrace of GPT-5.2 makes us lean toward the latter, which would be a more distributed, smarter, and more efficient research ecosystem.
Question: I'm most concerned about safety, and I would prefer stronger safety. In 2026 there are many ways AI can go wrong, and one direction we are very nervous about is biosecurity; these models are already quite strong in the biological domain. At present, both OpenAI's strategy and the world's overall strategy mostly try to restrict who can access these models and use various classifiers to prevent the models from helping people create new pathogens. But I don't think this approach can last long. What's your view?
Sam Altman: I think the world needs to make a fundamental shift in AI security, especially AI biosecurity: from "blocking" to "resilience".
A co-founder of mine once used an analogy that I really like: fire safety. Fire initially brought great benefits to human society, but then it started burning down entire cities. Humanity's initial reaction was to limit fire as much as possible. I only recently learned that the word "curfew" was originally related to "not allowing fires at night", because cities would burn down.
Later, we changed our thinking. Instead of just trying to ban fire, we improved our resilience to it: we formulated fire safety regulations, invented fire-retardant materials, and established a whole set of systems. Now, as a society, we are doing quite well in dealing with fires.
I think AI must follow the same path. AI will become a real problem in terms of bioterrorism; it will also become a real problem in cybersecurity; but at the same time, AI is also an important solution to these problems.
Therefore, I think this requires effort at the level of the whole society: not relying on a few "labs we trust" to block risks correctly forever, but building resilient infrastructure, because there will inevitably be a large number of excellent models in the world. We have already discussed with many biological researchers and companies how to deal with the problem of "new pathogens". Many people are involved, and a lot of the feedback says AI is genuinely helpful here, but this will not be a purely technical problem, nor one that technology alone can solve. The whole world needs to think about it differently than in the past. Frankly, I'm very nervous about the current situation. But I also don't see any realistic option other than the "resilience-centered" path. And on the positive side, AI can indeed help us build that resilience faster.
However, if there is an "obvious and serious" AI failure this year, I think biosecurity is a quite plausible place for that risk to erupt. A year or two further out, you can imagine many other things going seriously wrong as well.
Once AI makes learning more efficient, is human-to-human collaboration still important?
Question: My question is about "human collaboration". As AI models get stronger, they are very efficient for individual learning, such as quickly mastering a new subject; we have seen this, and value it, in ChatGPT and in education experiments. But I keep coming back to one question: when you can get answers at any time, why spend the time, or even put up with the friction, of asking another person? You also mentioned earlier that AI programming tools can do, at extreme speed, work that used to require a human team. So when we talk about "collaboration, cooperation, and collective intelligence", the human-plus-AI combination is very strong. What happens to human-to-human collaboration?
Sam Altman: There are many layers to that question. I'm a bit older than most of you here, but even so, when Google appeared I was still in middle school. At the time, teachers tried to make students promise "not to use Google", because people thought: if you can look everything up on a whim, why take history class? Why memorize anything?
In my view, that kind of thinking is completely unreasonable. At the time, I felt it would make me smarter, let me learn more, and let me do more things.