Google and OpenAI both reached gold-medal level at ICPC 2025: GPT-5 achieved a perfect score, and Gemini solved a difficult problem that no human team could crack.
For the past few decades, the International Collegiate Programming Contest (ICPC) has been regarded as the "Olympics" of competitive programming. This year, however, the spotlight was stolen by two non-human contestants: GPT-5 from OpenAI and Gemini 2.5 Deep Think from Google DeepMind.
As participating models, GPT-5 and Gemini 2.5 Deep Think competed under official ICPC rules and supervision, in the same problem-solving session as the human contestants. Although they did not compete directly against the student teams, they delivered remarkable results:
● GPT-5 achieved a perfect score, solving all 12 problems, a gold-medal-level result.
● Gemini 2.5 Deep Think solved 10 of the 12 problems in 677 minutes, also reaching gold-medal level. According to Google, this result would rank second overall among the human teams.
Note that this year's human gold-medal teams came from Saint Petersburg State University, the University of Tokyo, Beijing Jiaotong University, and Tsinghua University. Even these top teams did not achieve a perfect score (the best result was 11/12). In other words, this is the first time AI has overtaken humans in this kind of algorithm competition.
ICPC: The "Olympics" for Programmers
ICPC is the world's top-tier collegiate programming competition. Since the 1970s, it has gathered the best algorithmic minds from universities around the world. This year, 139 university teams from 103 countries competed in the ICPC World Finals. The rules of the competition seem simple:
● Each team consists of three college students;
● Solve 12 algorithmic problems within 5 hours;
● The ranking depends on the number of problems solved and the time taken.
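As an illustration of that ranking rule, here is a small Python sketch. It assumes the standard ICPC convention of a 20-minute penalty for each rejected submission on a problem that is eventually solved; the team names and scores are invented for the example:

```python
def penalty(problems):
    # problems: list of (accepted_minute, wrong_tries) for solved problems.
    # ICPC penalty time: the minute of the accepted submission, plus
    # 20 minutes per rejected submission on that same problem.
    return sum(minute + 20 * wrong for minute, wrong in problems)

# Hypothetical scoreboard: (team name, solved problems).
teams = [
    ("A", [(30, 0), (75, 1)]),            # 2 solved, penalty 125
    ("B", [(20, 0), (60, 0), (200, 2)]),  # 3 solved, penalty 320
    ("C", [(25, 1), (90, 0)]),            # 2 solved, penalty 135
]

# Rank by problems solved (descending), then total penalty time (ascending).
ranking = sorted(teams, key=lambda t: (-len(t[1]), penalty(t[1])))
print([name for name, _ in ranking])  # ['B', 'A', 'C']
```

Team B ranks first on problem count alone; A edges out C because, with the same count, the lower penalty time wins.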
However, the difficulty far exceeds that of ordinary programming contests. ICPC problems often involve cutting-edge algorithmic topics such as graph theory, number theory, dynamic programming, combinatorial optimization, and network flow, testing coding speed, mathematical foundations, and teamwork alike. Over the years, ICPC gold medalists have almost all gone on to become core technical talent at global technology companies.
Precisely because of ICPC's authority and difficulty, AI's participation this year is particularly symbolic: it puts AI directly onto the most rigorous algorithmic arena.
GPT-5 Delivers a Perfect Answer Sheet, and Gemini 2.5 Solves Problem C That No Human Team Could Solve
According to OpenAI's official disclosure, GPT-5 received no special training for ICPC and used no "cheating" tools during the contest. Like the human teams, it received the same PDF problem set, submitted answers through the official judging system, and completed all of its solutions within the 5-hour window.
The result is astonishing: 11 problems were solved on the first attempt, and the one remaining hard problem fell on the 9th submission, for a perfect 12/12. The best human result this year was 11/12; a perfect score of this kind is extremely rare in ICPC history.
OpenAI shared GPT-5's result on X:
"We officially participated in the AI track of ICPC. Like the human teams, we had to solve 12 problems in 5 hours, with answers judged in real time by the ICPC evaluation system. 11 of the 12 problems passed on the first submission, and the hardest one was solved on the 9th submission. In the end, GPT-5 completed all 12 problems, while the best human team solved 11."
Meanwhile, Google released details of Gemini 2.5 Deep Think's run: it solved 8 problems within 45 minutes and 10 within 3 hours. More striking still, Gemini solved Problem C within the first half hour of the contest, a problem no university team managed to crack.
This problem asks for a configuration of pipe switches, in a complex network of reservoirs and pipes, that fills all reservoirs in the shortest time. Each pipe can be open, closed, or partially open, yielding a practically infinite space of combinations and making the optimum extremely hard to find.
Gemini 2.5 Deep Think's approach to this problem was ingenious:
1. First, assign each reservoir a "priority value" indicating how much flow it should be allocated relative to the other reservoirs;
2. Given a set of priority values, find the optimal pipe configuration via dynamic programming;
3. Then apply the minimax theorem to recast the problem as finding the "most constrained" combination of priority values;
4. Finally, in the resulting convex optimization space, use nested ternary search to converge quickly to the optimal solution.
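Step 4 relies on ternary search, which homes in on the minimum of a convex (unimodal) function by repeatedly discarding a third of the search interval; nesting one search inside another extends the idea to two variables. The Python sketch below is a minimal, self-contained illustration on a toy convex function, not a reconstruction of Gemini's actual solution:

```python
def ternary_search(f, lo, hi, iters=100):
    # Minimize a unimodal (e.g. convex) 1-D function f on [lo, hi]:
    # each iteration compares two interior points and discards the
    # third of the interval that cannot contain the minimum.
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

def minimize_2d(f, lo, hi):
    # Nested ternary search: the inner search finds the best y for a
    # fixed x; the outer search minimizes over x. This is valid when
    # f is convex, because g(x) = min_y f(x, y) is then convex in x.
    def g(x):
        y_best = ternary_search(lambda y: f(x, y), lo, hi)
        return f(x, y_best)
    x_best = ternary_search(g, lo, hi)
    y_best = ternary_search(lambda y: f(x_best, y), lo, hi)
    return x_best, y_best

# Toy convex example with its minimum at (1, -2).
x, y = minimize_2d(lambda x, y: (x - 1) ** 2 + (y + 2) ** 2, -10.0, 10.0)
print(round(x, 3), round(y, 3))  # close to 1 and -2
```

In the contest setting, the function being searched would be the completion time induced by a choice of priority values, with the dynamic program of step 2 evaluating each candidate.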
This is not the "standard approach" of the official solution but a path the model deduced on its own. In other words, Gemini demonstrated original algorithmic thinking, beyond memorization, on the contest floor. For this reason, Google emphasized in its blog that this was not merely a correct answer but a "creative breakthrough".
Why Is This So Significant?
In fact, high-scoring performances by large models on exams and benchmarks are no longer news:
● LLMs such as ChatGPT and Gemini have repeatedly scored highly on human exams such as the SAT, bar exams, and the TOEFL;
● In July this year, Gemini won a gold medal at the International Mathematical Olympiad (IMO);
● LLMs have also topped the charts on various NLP and logical-reasoning benchmarks.
However, such results are often dismissed as products of "memorized training data" or "brute-force search backed by massive compute". A live algorithm contest like ICPC is different: first, the problems are novel and almost certainly absent from any training corpus; second, they demand mathematical modeling, reasoning, and code implementation in combination; and most importantly, a solution must be found within a strict time limit rather than through leisurely offline deliberation.
The performances of GPT-5 and Gemini 2.5 Deep Think at ICPC show that they are already capable of on-the-spot reasoning, abstract modeling, and creative problem-solving, which says more than high scores on standardized exams. For this reason, many AI engineers sighed on social media: "We used to worry that AI could only memorize question banks; now it has beaten human champions in a live contest. It feels like we are witnessing a moment of 'intellectual parity between humans and machines'."
This is not the end but the beginning. Whether AI can extend this ability to messier real-world problems remains to be tested, but one thing is certain: AI is no longer just a code-writing assistant; it now has the strength to confront human intelligence head-on.
This article is from the WeChat official account "CSDN". It was compiled by Zheng Liyuan and published by 36Kr with authorization.