Academia is in an uproar: ICLR reviewers' identities have been exposed, and it turns out some of the low scores came from friends.
A truly open review: "All-Father, grant me sight!"
Last night, who knows how many people stayed up until dawn.
On the evening of November 27th, Beijing time, the Chinese AI community erupted. On OpenReview, the platform most commonly used for academic paper review, a front-end bug exposed database contents, turning what was supposed to be double-blind review into an open-hand game.
Exploiting the leak was extremely simple: enter a particular URL in the browser, swap in the paper ID and reviewer number you want to look up, and the identity of the corresponding reviewer appears. You could see who reviewed your paper and what score they gave you.
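The flaw as described is a textbook insecure direct object reference (IDOR): records are keyed by predictable identifiers, and the server never checks whether the requester is allowed to see them. A minimal sketch of the pattern, using hypothetical names and URL layout (not OpenReview's actual endpoint):

```python
# Hypothetical illustration of an IDOR flaw; names and URL shape are invented,
# not OpenReview's real API.

def build_review_url(base, paper_id, reviewer_slot):
    """Predictable identifiers: anyone can construct the URL for any
    paper/reviewer pair simply by changing the numbers."""
    return f"{base}/reviews?paper={paper_id}&reviewer={reviewer_slot}"

def fetch_reviewer_identity(records, paper_id, reviewer_slot, requester):
    """Vulnerable lookup: the record is returned without ever checking
    whether `requester` is authorized to see it."""
    return records.get((paper_id, reviewer_slot))
```

The standard fix is to authorize every lookup against the requester's role (author, reviewer, chair) on the server side, and ideally to use non-guessable identifiers so records cannot be enumerated.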
Since exploiting it required no skill at all, everyone switched into investigation mode the moment the news spread. After all, who doesn't have some friction with a reviewer these days? At last, people could settle old scores.
The incident produced countless surprises, scares, bursts of anger, and wails. In WeChat groups and on Xiaohongshu, victims everywhere are telling their stories: some exposing, some exposed. You can never guess who gave your paper a low score.
Reviewers give low scores for all sorts of reasons. Some fail to understand the author's intent; some hold personal grudges (group members giving each other low scores, for example). Worse still is scoring a paper down to "clear the way" for one's own work in the same area. One author used the leak to confirm that a reviewer who once gave their paper a 1 submitted a closely related paper five months later, while declining to cite the author's submission.
Soon, more revelations surfaced on social media. Some reviewers suspected of malicious low scoring sharply raised their scores after being exposed.
Onlookers said the exposure has finally pushed the mounting tensions in top AI conference peer review to a new climax, taking the drama from the "Dark Forest" into the "Broadcast Era."
Never think that you can really be anonymous on the Internet.
Soon, people discovered that the OpenReview vulnerability was system-level. By substituting another segment of the URL, you could also view ICLR papers from other years, as well as papers from other top AI conferences such as NeurIPS, ICML, and ACL.
As is well known, with the field's popularity and the surge in submissions, every major AI conference faces a reviewer shortage, and complaints about declining review quality are common. For ICLR 2026, Pangram Labs ran a data analysis and found that about 21% of peer reviews were entirely AI-generated, and more than half showed traces of AI use.
On the submission side, 199 papers were found to be entirely AI-generated, and 9% of papers had more than half of their text written by AI.
As one of the three top conferences in the AI field, ICLR has drawn growing attention from academia and industry in recent years. The 2026 edition will be held in Rio de Janeiro, Brazil, next April; this year it received 19,490 research paper submissions and 75,800 peer-review comments.
Around midnight on Friday, the bug was urgently patched, and ICLR finally issued an official statement.
ICLR stated that anyone who uses, exposes, or shares the leaked information will have their submissions rejected and be banned from ICLR for multiple years. The organizers also plan to take further action.
Subsequently, OpenReview also issued an official announcement.
This doesn't seem to have dampened some people's appetite for the drama, however. Someone apparently crawled the full list and ran data analysis on it, and others compiled lists of reviewers who gave unusually low scores.
Based on the review results of the first 10,000 ICLR 2026 submissions and reviewers' nationalities (inferred from primary language), someone tallied average scoring habits. Chinese reviewers appear to be generally more generous, while Korean reviewers are relatively stricter.
At this rate, it won't be long before we learn who wrote the infamous "Who's Adam?" review comment at NeurIPS this August.
Leading figures in industry and academia have also weighed in on the incident.
Yisong Yue, professor of computing and mathematical sciences at Caltech, ICLR board member, and chair of ICLR 2025, said: "Let's have a meeting now. I'm numb."
Overall, the ICLR leak seriously damaged academic fairness. Losing reviewer anonymity discourages critical feedback, exposes reviewers to retaliation from authors, and thereby destroys the balance the system depends on, undermining the credibility of accepted papers. On the other hand, fully anonymous review sometimes shelters malicious and irresponsible comments, and the heat this leak has generated is itself worth reflecting on.
After this incident, will the anonymous review system change?
This article is from the WeChat official account "Almost Human" (ID: almosthuman2014). Author: A Melon-Eater. Republished by 36Kr with permission.