Just being able to write code won't cut it anymore. Altman's latest interview questions have surfaced: one person now has to do the work of an entire team.
A slower-paced OpenAI had no new products to show, just a Q&A session, where Altman admitted on the spot, "We messed up!"
OpenAI's first Q&A session of the year has gone viral across the internet!
On January 27th, Altman himself hosted a nearly one-hour live stream, answering the audience's most pressing questions.
He candidly admitted, "In pursuit of ChatGPT's coding ability, the team sacrificed its creative writing ability."
Altman was contrite: "We messed up. With limited bandwidth, we had to prioritize, so we focused on AI coding."
Rest assured, though: OpenAI is about to take action.
Future GPT-5.x versions will definitely be better at writing than GPT-4.5.
The future belongs to truly excellent "general models," and Altman promised to push them to the extreme on both fronts, coding and writing.
Even more striking, when Altman described his experience with Codex, he said that after just two hours of use he couldn't go back, and he handed all permissions straight over to the AI.
During the interview, an extremely awkward moment occurred.
Altman asked, "What do you want OpenAI to build?" Suddenly the whole room fell silent; no one said a word.
As yesterday's live-stream teaser made clear, this Q&A session was a prelude to "building the next generation of tools": OpenAI hopes to iterate on its models based on industry feedback.
After all, Anthropic and its Claude family are making waves across Silicon Valley. If OpenAI doesn't respond, it really will be in trouble.
Below are the highlights from the Q&A.
The first question out of the gate:
What's the endgame for software engineers, unemployment or immortality?
Right at the start, Altman put up a pointed question from a user on X, one that weighs on many programmers.
What's your stance on the "Jevons paradox" in software engineering?
If AI dramatically improves the speed and cost of code generation, will demand for SWEs shrink, or will it skyrocket because custom software becomes extremely cheap?
A few days ago at the Davos Forum, Dario Amodei warned that within the next 6-12 months, end-to-end AI could completely replace software engineers.
The shadow of this statement has been looming over the whole world.
Although he gave no specific timeline, Altman offered his view: "I think the definition of 'software engineer' is about to change dramatically."
In the future, more people will be able to command computers, creating and capturing value.
However, the form of this job — the time you spend on writing and debugging code — will change fundamentally.
Every engineering revolution in history has let more people participate, and the world has ended up with more software as a result. The world's demand for software shows no sign of slowing down.
His prediction is: We will enter an era of "personally customized software".
A lot of software will be written for one person or a very small number of people, and people will constantly customize their own software.
If this is also considered software engineering, the demand will skyrocket, and a larger proportion of the world's GDP will be created in this way.
Everyone has an AI, but no one's paying attention
Attention is the scarcest resource
At the session, a heavy ChatGPT user said his bottleneck is no longer writing code but finding users.
In short, most developers can now build products, but GTM (go-to-market) has become a "new hell".
Altman agreed wholeheartedly. Back at Y Combinator, founders always assumed that building the product was the hardest part.
Now building the product is easier, but they painfully discover that getting anyone to care about it is the hardest part.
Although AI has automated coding and has even started to automate sales and marketing, it is still very hard: in a world with an abundance of goods and software, human attention remains an extremely limited and scarce resource.
I can imagine a future of complete abundance, where human attention is the only remaining competitive commodity. You have to be more creative, but this is destined to be a tough battle.
Will we be "killed" by OpenAI?
Independent developer George said he is building multi-agent orchestration on top of the Codex SDK. His biggest worry is whether OpenAI will eventually take over everything he is doing.
Admittedly, this is the fear of every "wrapper" developer.
Will OpenAI launch its own Agent Builder and monopolize this market?
Altman said he hasn't figured out what the right interface (UI) is. Some people want to monitor everything across 30 screens, like in "The Matrix", while others just want to say one sentence to the computer every hour.
However, OpenAI won't do it all alone. The market for tools that help people get real use out of models is still wide open.
There is a huge gap between what the models are capable of and the value most people manage to extract from them.
If you have good ideas, go ahead and build. We'll also give it a try, but this space is big enough for everyone.
A GTM consultant said bluntly that not only is attention scarce, many AI product ideas are simply "garbage".
Altman replied that it's now fashionable to call AI-generated content "slop", but humans produce plenty of garbage too.
I'm increasingly convinced that humans think at the limit of their tools.
As the cost of creation plummets and the trial-and-error feedback loop speeds up, we should be able to find good ideas faster. OpenAI is trying to build a "brainstorming" tool.
Imagine if there were a Paul Graham robot (the godfather of Silicon Valley startups). Even if you reject 95 of the 100 ideas it proposes, the remaining 5 are enough to change the world.
Our internal models have already produced scientific insights, and product insights are on the horizon.
Altman admits "messing up"
The cost of AI drops by 100 times
The CTO of the startup Raindrop asked, "Looking to the future, how do you see the models evolving: toward specialization or generalization?"
For example, GPT-4.5 is good at writing, but GPT-5, while strong at coding, has become clumsy at writing.
It seems that the model's capabilities have become a bit "uneven" (spiky) — the programming ability has advanced by leaps and bounds, but writing doesn't seem to have kept up. How does OpenAI view this?
Unexpectedly, Altman directly admitted, "We messed up."
For version 5.2, the team decided to invest all of its limited bandwidth in intelligence, reasoning, and coding, which forced a trade-off.
But Altman believes that the future belongs to very excellent general models. Intelligence is generalizable, and OpenAI will make future models catch up quickly in writing and charisma.
Another audience member asked: "Our team spends a lot of time thinking about always-on AI. You once said something that resonated with me: intelligence will become too cheap to meter."
"But we need to run millions, even tens of millions, of Agents for customers, and cost is the bottleneck. How do you see small-model costs coming down over the next few months and years?"
Altman promised that by the end of 2027, OpenAI should be able to offer advanced intelligence at the level of GPT-5.2 at a cost at least 100 times lower.
But there is a new tension: it has to be not just cheap but also fast. OpenAI hasn't figured out how to increase output speed 100-fold while maintaining quality, and that is a difficult balance.
Another key question: when will Agents truly be able to run autonomously without crashing?
An OpenAI researcher said that it depends on the task. If it's an open - ended task like "starting a startup", it's still very difficult.
But for well-defined tasks, someone inside OpenAI has already gotten an Agent to run indefinitely using a special prompt framework.
The key is breaking large tasks down into small closed loops that the Agent can "self-verify", as the sketch below illustrates.
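To make that idea concrete, here is a minimal sketch, not OpenAI's actual framework: every name in it is a hypothetical placeholder. A big task is split into sub-tasks, and each sub-task must pass its own verification check before the loop moves on, which is what lets the agent keep running without drifting.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Subtask:
    description: str
    run: Callable[[], str]          # produces a candidate result (stand-in for a model call)
    verify: Callable[[str], bool]   # self-check: is the result acceptable?

def run_agent_loop(subtasks: list[Subtask], max_retries: int = 3) -> list[str]:
    """Execute sub-tasks one by one, retrying each until its own check passes."""
    results: list[str] = []
    for task in subtasks:
        for _ in range(max_retries):
            candidate = task.run()
            if task.verify(candidate):   # the closed loop: the agent checks itself
                results.append(candidate)
                break
        else:
            raise RuntimeError(f"could not verify sub-task: {task.description}")
    return results

# Toy usage: run/verify here stand in for model calls and real checks such as unit tests.
if __name__ == "__main__":
    tasks = [
        Subtask("add numbers", run=lambda: str(2 + 2), verify=lambda r: r == "4"),
        Subtask("uppercase a word", run=lambda: "agent".upper(), verify=lambda r: r.isupper()),
    ]
    print(run_agent_loop(tasks))  # ['4', 'AGENT']
```

In a real system the run and verify callables would wrap model calls and concrete checks (unit tests, schema validation, and so on); the point is simply that the verification step, not the model, decides when each sub-task is done.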
Take over scientific research?
AI is still an "infinite postdoc"
At the session, a veteran scientist asked the question he cared about most: will AI completely take over scientific research?
Nowadays, scientific ideas are growing exponentially, while scientists' time is decreasing. Do we need a new world model in the future?
Altman said that fully closed - loop autonomous research still has a long way to go.
This is very similar to the period after "Deep Blue" defeated Garry Kasparov in chess: Human + AI > Pure AI.
But in the future, as complexity increases, AI's understanding of multi-step processes will eventually surpass that of humans.