A Turing Award giant defects, ICML's new rules sweep through academia, and academic "retail investors" are left to "run naked."
ICML has introduced a crazy review mechanism, which has unexpectedly become a lifesaver for the academic community. Even AI godfather Bengio has come out in support: In the era of information overload, we must learn to use "bias" to reduce noise.
In 2025, the flood of submissions to NeurIPS broke the 30,000 mark, nearly double the previous year's total.
The cognitive overload caused by these 30,000 papers is enough to crash any existing human review system on the spot.
The "peer review" system in the academic community is practically dead.
Facing this review crisis, ICML 2026 finally couldn't sit still. It launched a highly disruptive new policy with a touch of game theory: self-rating by authors.
We can't handle the reviews anymore. You sort them out yourself: Among these papers in your hand, which ones are just fillers, and which ones are real winners?
This sounds like "letting the suspect judge their own case." Why could such a crazy policy get through? Because a real god stands behind it: Yoshua Bengio.
Mechanism Mutation: From "Cat-and-Mouse Game" to "Surrendering with a Gun"
On the surface, ICML 2026's move looks like ceding power to the authors; in fact, it hands them a "get-out-of-jail-free card."
In the past, peer review was a "cat-and-mouse game."
The authors were the "suspects," doing their best to package trash as gold; the reviewers were the "detectives," hunting for flaws under a microscope. Neither side trusted the other, and both got hurt.
But now, ICML has laid its cards on the table: "There aren't enough detectives. Please, suspects, assist in solving the case."
You might ask:
Let the authors self-rate? Won't everyone give themselves full marks? Who would admit that what they wrote is trash?
This is exactly the most brilliant piece of game theory in the design: isotonic regression. The system never asks you for scores; it only cares about the order in which you rank your papers.
You don't need to tell the system whether a paper is a 9 or a 3. You just need to tell it: among the 3 papers I submitted, A > B > C.
Previously, you could hype up 3 mediocre 5-point papers as 9-point masterpieces, gambling that the reviewers would be blind and let them pass.
Now, if you try to save a 3-point lousy paper by ranking it ahead of a 9-point masterpiece, not only will you fail to save the lousy paper, but the score of the masterpiece will also be forcibly lowered.
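Mechanically, the calibration described above is just a least-squares projection: take the reviewers' raw scores, line them up in the author's claimed order, and force the sequence to be non-increasing. Below is a minimal sketch of that idea in Python using scikit-learn's IsotonicRegression; the scores, the rankings, and the `calibrate` helper are all invented for illustration, not ICML's actual pipeline.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def calibrate(review_scores, author_ranking):
    """Least-squares projection of reviewer scores onto the author's claimed order.

    review_scores  : one raw reviewer score per paper
    author_ranking : paper indices from claimed-best to claimed-worst
    """
    ordered = np.asarray(review_scores, dtype=float)[author_ranking]
    positions = np.arange(len(ordered))  # 0 = claimed best
    # Calibrated scores must be non-increasing in claimed rank; isotonic
    # regression (pool-adjacent-violators) finds the closest such vector.
    calibrated = IsotonicRegression(increasing=False).fit_transform(
        positions, ordered)
    out = np.empty_like(calibrated)
    out[np.asarray(author_ranking)] = calibrated  # back to original paper order
    return out

# Honest ranking that agrees with the reviewer scores: nothing changes.
print(calibrate([9.0, 5.0, 3.0], author_ranking=[0, 1, 2]))  # [9. 5. 3.]

# Gaming: rank a 3-point paper ahead of a 9-point masterpiece.
# The violating pair gets pooled: both papers land on 6.0.
print(calibrate([3.0, 9.0], author_ranking=[0, 1]))          # [6. 6.]
```

The second call replays exactly the scenario above: smuggle a 3-point paper ahead of a 9-point masterpiece, and pool-adjacent-violators averages the clashing pair, dragging the masterpiece down to 6.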
Why does ICML dare to do this? Because they have data.
The results of a secret experiment at ICML 2023 showed that the order in which authors rank their own papers predicts those papers' future fates more accurately than the reviewers' scores do.
Analysis of the ICML 2023 experimental data: across different numbers of submissions, the error of the blue bars (scores calibrated by author self-ranking) is significantly lower than that of the red bars (raw reviewer scores).
The data shows that 16 months on, the paper an author ranked first has double the citation count of the paper ranked last.
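Why should projecting onto the author's ranking beat the raw scores? A back-of-the-envelope simulation (toy numbers of my own choosing, not the ICML 2023 data) shows the intuition: when the claimed ranking matches true quality, the true score vector already satisfies the ordering constraint, so projecting noisy reviewer scores onto it can only move them closer to the truth.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
n_authors, papers_each, noise_sd = 5000, 3, 2.0  # assumed toy parameters

raw_mse, cal_mse = [], []
for _ in range(n_authors):
    # True qualities, sorted best-first: the honest author's ranking.
    quality = np.sort(rng.uniform(1.0, 10.0, papers_each))[::-1]
    noisy = quality + rng.normal(0.0, noise_sd, papers_each)  # reviewer scores
    # Project onto the author's (truthful) non-increasing ordering.
    calibrated = IsotonicRegression(increasing=False).fit_transform(
        np.arange(papers_each), noisy)
    raw_mse.append(np.mean((noisy - quality) ** 2))
    cal_mse.append(np.mean((calibrated - quality) ** 2))

print(f"raw reviewer MSE : {np.mean(raw_mse):.3f}")
print(f"calibrated MSE   : {np.mean(cal_mse):.3f}")  # strictly lower on average
```

Because projection onto a convex set never increases the distance to any point inside that set, the calibrated error is lower for every single author here, not just on average.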
Many times, reviewers kill real masterpieces because they don't understand them, while the authors know exactly which is which: "This one is just filler, and that one will change the world."
This is not just a rule adjustment; it's a disenchantment of the "sacredness of peer review."
The organizers finally admit that in this era of information overload, rather than trusting a stranger who has spent 20 minutes on your paper, it's better to harness the authors' ambition to "win" and steer them into telling the truth.
The "Defection" of the Turing Award Winner: Objectivity is Dead, Long Live Bias
This move could be made because a real "god" sits across the table: Yoshua Bengio.
In an official discussion paper in JASA, Bengio didn't treat this as a mere patch to the rules. He defined it as:
A powerful synergy between machine learning and statistics.
Error reduction under different game strategies. With the "isotonic mechanism," the reduction in mean squared error reaches up to 23.48%.
Why has even the AI godfather "defected"?
In the traditional scientific utopia, we are superstitious about "objectivity." We assume reviewers are fair judges and authors are cunning defendants.
But in Bengio's view, this kind of purism is not only naive but also inefficient in the face of the noise of 30,000 submissions.
Bengio and his collaborator Dinghuai Zhang pointed out an extremely profound philosophical shift in their comments:
Acknowledge the reign of "noise": when the review system has degenerated into a "random number generator" under overload, blindly pursuing absolute objectivity is essentially inefficiency born of arrogance.
Embrace "subjective" signals: since the authors know their papers best, why block the information source with the highest "signal-to-noise ratio"?
Therefore, so-called "bias," as long as it is statistically corrected, is the most precious "feature." This is not just a game of scores but also a return to "slow science."
Do you think Bengio only supports a scoring algorithm? No.
As a leading figure who has long called for opposing "Publish or Perish," Bengio values the "self-reflection" function behind this mechanism.
He even floated a more radical, multi-dimensional version of the idea in his comments:
In the future, authors should not only rank their papers as "good" or "bad" but should be required to confess in multiple dimensions:
Is this paper novel but rough?
Or is it rigorous but old-fashioned?
This is a technical correction of the impetuous academic atmosphere.
When an author is forced to rank their three papers, they must ask themselves in the middle of the night: "Am I really just filling up space?"
This "mandatory self - reflection" might be the more important value in Bengio's eyes than just screening papers.
Algorithm Folding: The Rich Get Richer, the Poor Barely Survive
If you think at this point: "Great, finally someone is going to regulate the reviewers who give random scores!"
Don't celebrate just yet. Take a look at your chips first.
This seemingly perfect piece of "game-theoretic craft" actually hides a tacitly accepted "wealth threshold."
The premise for the "self-rating mechanism" to work is that you must submit at least two papers.
You need to have Paper A and Paper B first, and then the system can calibrate the scores based on the logic of A > B.
If you only submit one paper? Sorry, the system can't help you.
You still have to "run naked" in the old system full of AI junk reviews and random scores.
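The degenerate case is easy to see in code: with a single paper there is no ordering to enforce, so the projection is the identity and the raw reviewer score passes straight through (again a toy illustration, not ICML's pipeline).

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

score = np.array([3.0])  # one paper, one raw reviewer score
calibrated = IsotonicRegression(increasing=False).fit_transform(
    np.array([0.0]), score)
print(calibrated)  # [3.] -- no ranking constraint, nothing to calibrate against
```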
This is a precise dimensionality-reduction strike against "academic retail investors."
According to the official statistics of ICML 2023, 75.5% of the authors submitted only one paper.
This means the remaining ~25% of "big players" holding multiple papers no longer have mere research results; they hold "arbitrage chips" that can hedge against one another and calibrate scores.
With sophisticated ranking, they can wrap their masterpieces in a statistical "bulletproof vest" that blocks the bullets from blind reviewers.
The confidence interval of MSE reduction. At a 99% confidence level, the error reduction remains in a very high range.
Meanwhile, the 75% of "ordinary players" are still playing Russian roulette.
If you encounter a reviewer in a bad mood who gives you 3 points, you have no mathematical tools to fight back.
The more chips you have, the thicker your armor. Comparing the red and blue bars shows that the more papers an author submits (x-axis), the more stable the error reduction delivered by the self-rating mechanism (the drop in the blue bars).
Weichen Wang from the University of Hong Kong and Chengchun Shi from LSE believe that this mechanism actually rewards "padding."
To earn the "calibration" qualification and its algorithmic protective shell, labs will be pushed to split one result into three papers (salami slicing) just to clear the entry bar for "self-rating."
In the past, the strong won on quality; in the future, the strong may harvest through "quantity stacking" and ranking strategy.
ICML 2026's new policy may indeed solve the problem of "review accuracy," but it solves it by protecting those with the most resources first.
This is a blatant "algorithm folding."
ICML 2026 has smashed not only the review rules but also our last bit of fantasy about the "academic utopia."
When human reviewers are completely suffocated in the flood of 30,000 papers, it is an inevitable historical trend for machines and algorithms to take over the referee's power.
This is no longer an era of competing for who is more "objective"; it is an era of competing for who understands "game theory" better.
ICML has used algorithms to tell us that in today's era of exploding compute, "honesty" is no longer a noble moral self-discipline but a cold "survival strategy" enforced by game theory.
Wake up, the old world has collapsed.
This article is from the WeChat official account "New Intelligence Yuan." Author: New Intelligence Yuan. Republished by 36Kr with permission.