GPT-5.4 was accidentally leaked. OpenAI's latest model aims to break through with these two major capabilities.
Is GPT-5.4 Leaked?
Overnight, this picture went viral on X:
In a pull request for OpenAI's coding assistant Codex, the name "GPT-5.4" appeared directly, alongside a /Fast command for fast mode.
Moreover, this is not the first time that people have found traces of GPT-5.4.
A few days ago, an OpenAI developer submitted a pull request on GitHub whose description of a version check accidentally revealed that:
Behind the still-in-development view_image_original_resolution feature flag, support for original resolution was added to the view_image interface.
When this feature switch is enabled and the target model is gpt-5.4 or a later version...
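The leaked description amounts to a version-gated feature flag. Here is a minimal sketch of what such a gate might look like; only the flag name view_image_original_resolution and the model names come from the leak, while every function and parameter below is an assumption for illustration:

```python
# Hypothetical sketch of a version-gated feature flag, based on the leaked
# PR description. Only the flag name and model names appear in the leak;
# the helper functions here are assumptions for illustration.

def model_supports(model: str, minimum: tuple[int, int]) -> bool:
    """Parse a 'gpt-X.Y...' model name and compare it against a minimum version."""
    try:
        version = model.split("-")[1]                      # "gpt-5.3-codex" -> "5.3"
        major, minor = (int(p) for p in version.split(".")[:2])
    except (IndexError, ValueError):
        return False
    return (major, minor) >= minimum

def use_original_resolution(model: str, flags: dict[str, bool]) -> bool:
    """Serve full-resolution images only when the flag is on AND model >= gpt-5.4."""
    return flags.get("view_image_original_resolution", False) and \
           model_supports(model, (5, 4))

print(use_original_resolution("gpt-5.4", {"view_image_original_resolution": True}))        # True
print(use_original_resolution("gpt-5.3-codex", {"view_image_original_resolution": True}))  # False
```

With both conditions required, flipping the flag off (or targeting an older model) silently falls back to the existing compressed-image path.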
After that, gpt-5.4 was quickly changed to gpt-5.3-codex.
In addition, the GPT-5.4 model also appeared in the drop-down options of the Codex model:
All the signs seem to indicate that GPT-5.4 is not far away.
A 2-Million-Token Context Window?
There are also rumors that GPT-5.4 will ship with a 2-million-token context window, enabling long-term memory of ultra-long content.
Netizens pointed out that to "remember ultra-long content without forgetting it over time", the amount of data the model must cache during inference grows sharply, which is itself a serious technical challenge.
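A back-of-the-envelope estimate shows why the cache grows so sharply. The model dimensions below are illustrative assumptions (roughly a large grouped-query-attention transformer), not known GPT-5.4 specs:

```python
# Back-of-the-envelope KV-cache size for a 2M-token context.
# All model dimensions below are illustrative assumptions, not GPT-5.4 specs.

def kv_cache_bytes(tokens: int, layers: int, kv_heads: int,
                   head_dim: int, bytes_per_value: int = 2) -> int:
    # Each layer caches two tensors per token: one for K and one for V.
    return tokens * layers * kv_heads * head_dim * bytes_per_value * 2

# Example: 80 layers, 8 KV heads (grouped-query attention),
# head dimension 128, fp16 values (2 bytes each).
size = kv_cache_bytes(tokens=2_000_000, layers=80, kv_heads=8, head_dim=128)
print(f"{size / 1e9:.0f} GB")  # ~655 GB for a single 2M-token sequence
```

Even with aggressive grouped-query attention, a single 2M-token sequence under these assumptions needs hundreds of gigabytes of cache, which is why serving such a window is a systems problem as much as a modeling one.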
The leaked pull request also mentions that a new feature flag was added for "GPT-5.4 or later versions" that bypasses the traditional image-compression pipeline and retains the original image bytes at full resolution.
This suggests GPT-5.4 may have pixel-level visual analysis capabilities.
Front-end developers, designers, and engineers could finally upload high-precision UI prototypes or complex engineering schematics and have the model capture every detail, saying goodbye to the visual hallucinations caused by image compression and blurring.
What's even more interesting: when a user asked ChatGPT 5.2 about its model version, it earnestly claimed to be GPT-5.4...
Of course, given what netizens know about Altman, this could well be hype.
Some netizens believe that:
What really matters is the model's accuracy (recall) across the entire context window. If retrieval isn't accurate, a 2-million-token context window, however large, is meaningless.
If accuracy in the eight-needle test can exceed 90%, that would be a real breakthrough.
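The "eight-needle" test the comment refers to is a multi-needle variant of the needle-in-a-haystack benchmark: plant several facts at different depths of a long context, ask the model to retrieve each one, and score recall. The scoring sketch below assumes the model's answers have already been collected elsewhere; all names are illustrative:

```python
# Minimal scoring sketch for a multi-needle ("needle in a haystack") test.
# The model call is assumed to happen elsewhere; here we only score answers.

def recall(expected: list[str], answers: list[str]) -> float:
    """Fraction of planted needles the model reproduced in its answers."""
    hits = sum(1 for needle, ans in zip(expected, answers) if needle in ans)
    return hits / len(expected)

# Eight needles, seven retrieved correctly -> recall 0.875, below the 90% bar.
needles = [f"secret-{i}" for i in range(8)]
answers = [f"The value is secret-{i}." for i in range(7)] + ["I don't know."]
r = recall(needles, answers)
print(f"recall = {r:.3f}, passes 90% bar: {r >= 0.9}")
```

Real evaluations average this score over many needle positions and context depths, since models often degrade on content placed in the middle of very long windows.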
Meanwhile, everyone is also eagerly awaiting the release of DeepSeek V4.
Reference Links:
[1]https://x.com/i/trending/2028300584164700282
[2]https://x.com/nicdunz/status/2028305161324507194
[3]https://x.com/kimmonismus/status/2028123002156531714
This article is from the WeChat official account "QbitAI". Author: Xifeng. Republished by 36Kr with authorization.