OpenAI CEO warns: The US has seriously underestimated China's AI capabilities, and export controls won't work.
OpenAI CEO Sam Altman recently warned that the United States may have underestimated the complexity and severity of China's progress in artificial intelligence (AI), and stated that export controls alone may not be a reliable solution.
In a rare public interview with several media outlets, Altman said he was worried about China's progress in AI: "I'm quite concerned about China's capabilities."
He warned that the AI competition between the United States and China is deeply intertwined, and its impact is far greater than simple rankings of "who is leading."
"In terms of inference capacity, China may be building up faster. There is also research capability and the level of productization. There are many dimensions to this field; I don't think the question is as simple as 'is the United States or China leading,'" he added.
Meanwhile, even as the United States keeps escalating its export controls on semiconductors, Altman doubts whether these policies can keep pace with technological development.
When asked whether further reducing the supply of GPUs to China through controls would work, Altman expressed doubts and said, "My gut feeling is that it won't work."
He said, "You can implement export controls on one link in the chain, but it may not be precise... Maybe people will build wafer fabs or find other workarounds." Wafer fabs are specialized facilities for manufacturing semiconductor chips.
"I want a simple solution. But my gut feeling is that it's difficult," he added.
The Impact of China's Open-Source Systems
Altman said that China's progress in AI has also influenced OpenAI's thinking about its own model release strategy.
Although OpenAI has long refused to fully open-source its technology, Altman said that competition from Chinese large models, especially open-source systems like DeepSeek, was a factor in OpenAI's recent decision to release its own open-weight models.
"Obviously, if we don't do this, most of the world's future systems may be built on Chinese open-source models. This is certainly a factor in our decision. Although it's not the only one, this issue seems quite significant," he said.
Earlier this month, OpenAI released two open-weight language models, the first open-weight models the company has released since GPT-2 in 2019, marking a major strategic shift. Previously, OpenAI had long kept its technology behind application programming interfaces (APIs).
The new text-only models, called gpt-oss-120b and gpt-oss-20b, are designed as low-cost options that developers, researchers, and companies can download, run locally, and customize.
A model is considered open-weight if its parameters (the values learned during training that determine how the model generates responses) are publicly available. Although this provides transparency and control, it is not the same as open source: OpenAI has not released its training data or complete source code.
With this release, OpenAI has joined the trend among major US foundation model companies toward a more open approach.
However, reactions to the release of these open-weight models have been mixed. Some developers say the models' performance is mediocre, pointing out that the most powerful features of OpenAI's commercial products were stripped from the open-weight versions.
Altman did not dispute this. He said the team deliberately optimized for one core use case: local coding agents.
"If the world's needs change, you can also repurpose the model for other uses," he said.
This article is from the WeChat official account "Science and Technology Innovation Board Daily." Author: Huang Junzhi. Republished by 36Kr with permission.