59 minutes, 8 key questions. Altman answers everything (Full transcript attached)
On the morning of January 27th, Sam Altman, CEO of OpenAI, held a nearly one-hour conversation with a group of developers in San Francisco. There were no slides and no new product launch, just a Q&A session.
The questions were very practical: Will programmers be replaced? How can startups survive? Will AI make society more unequal? When will model costs come down? Where is AI's safety bottom line?
In the past, when people discussed AI, they either thought it could solve all problems or worried that it would destroy the world.
But Altman talked about things happening in reality: some people have already used AI to build software in days that used to take months, while some startups have found they can build a product but can't find users.
On work, he said programmers won't be replaced, but the time spent writing code will shrink; on startups, he said the hardest part isn't building the product but getting people to use it; on safety, he said the most worrying near-term danger may come from biology.
He also repeatedly emphasized that no matter how technology develops, the role of humans cannot be replaced.
Key points of the Q&A session:
- Software engineers won't be replaced, but how they work will change, and more people will do the work of "making computers act according to their intentions".
- Building products has become easier, but distribution remains the biggest challenge, because "human attention is always scarce".
- AI will exert huge deflationary pressure and lower the barrier to creation; it could become an equalizing force in society, but wealth concentration must be guarded against.
- General-purpose models are the future; weaknesses such as writing will be fixed. Costs are expected to fall sharply, but "speed" may become the new bottleneck.
- Software will evolve toward personalization, with everyone able to have tools tailored to them.
- Biosecurity is the area most deserving of vigilance in 2026; AI governance needs to shift from "blocking" to "building resilience".
- AI and electronic devices should be used less in kindergarten; what matters most is cultivating soft skills such as "high initiative, adaptability, and creativity".
Full transcript of Altman's Q&A session:
01 Future of the profession: Programmers' work will change, but the demand won't decrease
Altman: As we begin planning the next generation of developer tools and think about how to apply the powerful models that are about to launch, we want to hear your thoughts. What are you concerned about, and what questions do you have?
Question: I'd like to start with a question from Twitter about the "Jevons paradox" of AI-accelerated programming: do you think it will reduce demand for software engineers, or will cheap software create so much new demand that the profession stays safe?
Altman: I believe demand won't decrease; if anything, it may increase. The role of engineers will change fundamentally, and more people will be engaged in the work of "commanding computers". The time spent coding and debugging will drop significantly, but the overall demand for software won't. The engineering field has gone through similar shifts many times, and each time more people joined in, productivity rose, and the world ended up with more software.
My guess is that we'll see more software customized for individuals or very small groups, continuously adjusted to personal needs. So the number of people making computers serve specific needs will grow, even if the way they do it looks very different from today. If that still counts as software engineering, I believe the field will expand, and its contribution to global GDP will rise significantly.
02 The startup dilemma: Building products is easy, finding users is hard
Question: With ChatGPT and various AI tools, building products has become easier. But the new bottleneck seems to be distribution. How do we find real users and create value?
Altman: Yes, this is exactly how many people feel now. When I was at YC, I often heard founders say, "I thought the hardest part was building the product, but it turned out the hardest part was getting people to care about it and use it."
Distribution has always been a challenge; it's just that building products is now so easy that the contrast is more obvious. The core rules of startups, such as creating differentiation and acquiring users, haven't gotten any easier. AI lowers development costs, but the other parts are still hard.
That said, we're also seeing AI applied to marketing automation, with some effect. But the core issue is that human attention is always scarce. Even in a world of extreme abundance, you still compete with everyone else for attention and channels. That's a permanent problem of business.
Question: I'm building a multi-agent coordination tool on your SDK. Your tools currently focus on workflows and prompt chaining. As a developer in the ecosystem, am I safe? Will you enter this space yourselves, or leave room for us?
Altman: We don't have a "correct" answer yet. No one knows what the optimal interaction mode for multi-agent systems is. Some people like complex, multi-screen interfaces; others prefer to simply talk to the system by voice. Different people have different needs.
We believe the world will eventually converge on a few mainstream interaction modes, but we can't do everything, nor should we monopolize it.
The reality today is that there's a huge gap between what the models can do and how much value most people can extract from them. Closing that gap with genuinely useful productivity tools is an excellent opportunity, and no one has nailed it yet.
We'll do our own exploration, but there's a lot of room here, and we need everyone experimenting together. If you want us to support certain capabilities, just tell us.
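For readers unfamiliar with the "prompt chaining" pattern the question refers to, here is a minimal sketch using the OpenAI Python client. The planner/worker/reviewer split, the model name, and the prompts are illustrative assumptions, not anything OpenAI prescribes:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    """One model call per chain step; the model name is an arbitrary choice."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Step 1: a "planner" turns a vague request into concrete subtasks.
plan = ask("List three concrete subtasks for building a CLI todo app.")

# Step 2: a "worker" executes against the planner's output.
draft = ask(f"Write Python code for the first subtask in this plan:\n{plan}")

# Step 3: a "reviewer" critiques the worker's output before anything ships.
print(ask(f"Review this code for bugs and suggest fixes:\n{draft}"))
```

Multi-agent tools like the questioner's essentially generalize this hand-off: each step can use a different model, prompt, or tool, with the chain deciding what flows forward.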
03 Economy and cost: Can AI make the world fairer?
Question: I'm building a product on OpenAI's platform and care about social issues such as the gender pay gap. How do you think AI can be used to address long-standing economic inequality?
Altman: One of the main positive impacts of AI is that it will bring huge deflationary pressure. This means that the prices of many goods and services will drop, including the cost of creating and acquiring knowledge.
For example, within this year, an individual may only need to spend a few hundred or a few thousand dollars to build software that used to take a team a year. That will greatly empower individuals regardless of background. It has a chance to become an equalizing force, opening opportunities to groups that have been overlooked in the past.
But I do worry that AI may exacerbate wealth concentration. Making sure AI serves the common good rather than deepening inequality has to be at the core of policymaking.
Question: As the CTO of Raindrop, I want to ask: how do you see the trade-off between specialized and general models? For example, GPT-4o writes worse than GPT-4 but is stronger at coding and reasoning. Its capabilities seem unbalanced.
Altman: We admit we haven't done well on writing and are working to improve it. We concentrated our resources on GPT-4o's intelligence, reasoning, and coding, and sometimes we have to make trade-offs.
But I firmly believe the future belongs to general models. Even a specialized coding model still needs to communicate clearly. Good writing isn't piling up words; it's clear thinking. We will push models forward on every dimension.
Question: I'm the CTO of Unifi. We do marketing automation and run a large number of agents for clients, and cost is our main bottleneck. You once said intelligence would become "so cheap it can be ignored". How do you see model costs falling over the next few years?
Altman: I think costs will drop significantly by the end of 2027. (Audience interjection: at least 100x)
But speed is another important dimension. As applications grow more complex, more and more people want faster output and are even willing to pay more for it. Cost and speed are two different technical problems that have to be balanced. If the market cares more about cost, we'll push in that direction; there's certainly a lot of room to bring it down.
04 The era of personal AI: Software will be "tailored" to you
Altman: Now I'll answer some questions from Twitter.
Question: Many of today's interfaces weren't designed for AI agents, yet applications "born for me" are emerging. How will innovation in customized interfaces accelerate the trend toward micro-applications?
Altman: My own experience is that I increasingly stop seeing software as something fixed. When I hit a small problem, I want the computer to just write a piece of code to solve it for me. This trend will only get stronger and may completely change how we use computers.
Of course, not everything needs to change all the time; we're used to buttons staying in fixed positions, for example. But a lot of software will become more and more aware of my habits and end up customized for me. Inside OpenAI, people already use Codex to customize their workflows in all kinds of ways. The direction is clear.
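As a concrete illustration of the "write me a throwaway tool on demand" pattern Altman describes, here is a minimal sketch using the OpenAI Python client; the task description and the instruction to output only code are illustrative assumptions:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Describe a one-off problem; the model drafts a disposable script for it.
problem = "Rename every .jpeg file in the current directory to .jpg"
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "Write a short, self-contained Python script that does this: "
            f"{problem}. Output only the code."
        ),
    }],
)

# Print for human review; generated code should be read before it is run.
print(resp.choices[0].message.content)
```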
Question: If a startup's features may soon be subsumed by the next model update, how should developers build moats? Which layers of the stack does OpenAI promise not to touch?
Altman: The basic rules of startups haven't changed. You can build products faster now, but acquiring users and building a brand hasn't gotten any easier.
The good news is that the same applies to us. Plenty of companies have done things we arguably should have done better, but they moved first and built an advantage. I often ask founders: if GPT-6 turns out to be extremely powerful, will your company be thrilled or in despair?
Build products that become more successful as AI gets stronger. Products that merely patch over current model weaknesses may survive if they build an advantage before the next upgrade, but it will be harder.
Question: Many long chains of agent tasks still break down after a few steps. When will agents be able to complete long-horizon processes on their own?
OpenAI employee: It depends on the type of task. Tasks with a clear goal can be done now. But for a complex goal like "help me start a company", the task has to be broken down so the AI can verify its own progress. This is less a question of time than of scope: start with small, controllable tasks.
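A minimal sketch of the decompose-and-self-verify loop this answer describes, again using the OpenAI Python client; the goal, the PASS/FAIL verdict convention, and the single-retry policy are all illustrative assumptions:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

goal = "Summarize a CSV of sales data and flag anomalies"

# Break the goal into small steps that can each be checked on its own.
steps = ask(
    f"Break this goal into three short, verifiable steps, one per line: {goal}"
).splitlines()

results = []
for step in (s for s in steps if s.strip()):
    attempt = ask(f"Carry out this step and show your work: {step}")
    # Self-verification: a second call judges whether the step succeeded.
    verdict = ask(
        f"Step: {step}\nOutput: {attempt}\nAnswer PASS or FAIL, with one reason."
    )
    if verdict.strip().upper().startswith("PASS"):
        results.append(attempt)
    else:
        # One retry with the critique folded in, then move on regardless.
        results.append(ask("Redo this step, fixing the issue noted.\n"
                           f"Critique: {verdict}\nStep: {step}"))
```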
Question: Back to creativity and the market. The problem for consumers is limited attention, while the bottleneck for producers is the quality of their ideas. I do marketing for AI companies and often find the products themselves aren't compelling. Are there tools to help people come up with better ideas?
Altman: Coming up with a good idea is always hard, and we should build tools that help. As the cost of building falls, validating ideas will get faster.
Many people have powerful AI assistants but don't know what to ask of them. We could build a "brainstorming partner" that analyzes your past work and suggests new directions. There are a few people who, every time I talk to them, leave me with a pile of new ideas. If AI can do that, it will be hugely valuable even if 95% of its suggestions get rejected.
We have an internal version, and our scientists say the scientific progress it has driven is "already significant". A model that can propose new scientific insights can certainly inspire people in product development too.
05 The most dangerous frontiers: Where AI could cause problems
Question: As a developer, I worry that AI may get "locked in" to the existing technology stack, the way the US power grid is hard to upgrade. Do you think AI models can keep up with technological change?
Altman: I think we'll eventually get models to make good use of new things. At heart, models are general reasoning engines. My hope is that within a few years, models will learn new skills faster than humans do. An important milestone will be a model that, facing a completely new environment or tool, can master it from a single explanation. That goal isn't far off.
Question: I'm a scientist. Research often leads to new directions, but human time is limited. Can AI eventually take over the entire research process?
Altman: In most fields, AI still has a long way to go before it can do research independently. In mathematics, for example, even though mathematicians now collaborate with models all day and are making rapid progress, the key judgments about direction, and the intuition, still come from humans.
It's a bit like chess. For a while, "human + AI" was stronger than AI alone, but before long AI surpassed humans. Research may go the same way: the problems will get so complex that AI understands the multi-step reasoning better than humans do.
Some scientists are already using AI for "breadth-first search", exploring dozens of new directions at once and treating AI as "an infinite supply of graduate students". Reportedly it has since been upgraded to "an infinite supply of post-docs".
For experimental science, we've discussed building automated laboratories. But a more feasible path may be distributed: having the global scientific community contribute experimental data.
Question: I'm building a biosecurity startup. As experiments become automated, how do we prevent AI from being used for malicious biological design? Where does safety sit in OpenAI's planning?
Altman: Going into 2026, biosecurity is one of the areas we're most concerned about. Current models are already very capable in biology. For now we prevent abuse by restricting access, but in the long run that won't hold.
A colleague of mine has a good analogy: we should treat AI like fire. People first tried to restrict the use of fire, then shifted to building resilience, writing fire codes and using fire-retardant materials. AI safety needs the same shift.
AI will indeed amplify biological threats, but it's also a tool for solving them. Society as a whole needs to build safety infrastructure together rather than fantasize about total control.
If AI is involved in a major incident this year, it will most likely be related to biology. Over time, problems may emerge in other fields too.
06 Education in the AI era: What should we learn?
Question: Since AI can quickly provide answers, do we still need interpersonal collaboration? Is the combination of "human + AI" still necessary?
Altman: AI tools are like Google and the calculator before them: their existence makes you more capable, not weaker. Of course, education has to change accordingly. We should teach students how to think, not just memorize facts.
I actually think that in a world full of AI, real human-to-human connection will become more valuable. We're exploring a "multiple people + AI" collaborative interface: imagine an AI assistant joining a meeting discussion to help the team work more effectively. That pattern will become increasingly common.
Question: As AI systems are deployed on a large scale, what's the most underestimated risk? Safety, cost, or