Nearly 30,000 submissions have flooded NeurIPS, and roughly 400 already-accepted papers have been rejected outright. PhD students are locked in fierce competition, and a nightmare has befallen this top AI conference.
A "capacity overflow crisis" is unfolding at top AI conferences.
Due to venue limitations, NeurIPS 2025 has been instructing Senior Area Chairs (SACs) to reject papers that had already been accepted.
One SAC revealed that approximately 400 papers were rejected outright, even though they had passed review by three reviewers and the Area Chair (AC).
She complained that this was extremely unfair: if capacity was insufficient, the conference should be split or expanded rather than arbitrarily rejecting so many papers.
Previously, ICLR also adopted a similar "tactic".
Even more absurdly, some papers with review scores of 5/4/4/4 became victims.
This "last-minute change" ultimately comes down to the explosive growth in total submissions.
Months ago, someone noticed that the submission IDs for NeurIPS had already reached 23,000. It's very likely that the final number will exceed 30,000, which is simply crazy.
Professor Subbarao Kambhampati of ASU sharply criticized the move, saying that rejecting AI papers over "resource limitations" is like shooting oneself in the foot.
Some people also protested, saying, "This simply doesn't make sense!"
Nearly 400 Papers Rejected
NeurIPS Overflow
This year marks the 39th annual meeting of NeurIPS.
Different from previous years, to address the challenge of the rapidly expanding conference scale, NeurIPS 2025 decided to set up a satellite venue in Mexico City for the first time.
In other words, NeurIPS 2025 is the first dual-city conference, taking place:
· From December 2nd to 7th at the San Diego Convention Center
· From November 30th to December 5th in Mexico City
Apparently, even after setting up two venues, there is still not enough space to showcase all the accepted papers.
Suddenly, rejecting excellent papers has become their "best strategy".
The question raised by a Reddit user four months ago has now hit the nail on the head.
This academic farce has broken the hearts of countless researchers and sparked widespread dissatisfaction in the academic community.
For a time, furious authors came forward whose papers had been rejected despite high scores, or over issues as minor as uncited references.
Experts Propose Two Tracks
Some suggest splitting NeurIPS by research direction. Nowadays, almost everyone in machine learning, including researchers in core ML, natural language processing, computer vision, and beyond, crowds into the same top-tier conference, which is absurd.
Similarly, some other netizens also suggest setting up a Findings track to solve the problem.
Coincidentally, Professor Kambhampati pointed out that if physical venue space is truly insufficient, NeurIPS could borrow ACL's model and set up two tracks:
one "main conference" track and one "Findings" track.
Since Spotlight and Oral are already distinguished from Poster, why not add a Findings track below them to accept papers that received good reviews but were rejected simply for lack of venue space?
Nowadays, papers are routinely posted on arXiv before submission anyway. Acceptance as a Findings paper could serve as a "badge" of community recognition; even if it falls short of an Oral, it beats outright rejection.
This view has resonated with many people and received widespread praise in the comment section.
Some Reddit users have pointed out that for most doctoral students and scholars, publishing at top-tier conferences like NeurIPS and CVPR is about earning an "entry ticket".
Today, top-tier conference publications are "hard currency" for graduating, landing faculty positions, and winning research funding.
Some first-class universities require doctoral students to have a first-author paper at one of the top three ML conferences, and the competition is extremely fierce.
AAAI 2026 Sets a Record with 29,000 Submissions
Nearly 70% from China
Coincidentally, AAAI 2026 has received a record-breaking nearly 29,000 submissions from over 75,000 authors in total.
After removing papers that violate the submission policy (such as missing PDF files, non-anonymized manuscripts, exceeding the page limit, or authors exceeding the submission cap), approximately 23,000 papers entered the review process.
This number is almost twice that of AAAI 2025!
It's worth mentioning that among the approximately 29,000 total submissions, nearly 20,000 are from China.
If we consider Chinese surnames in papers from other countries and regions, the proportion of Chinese authors should be even higher.
In terms of research areas, the top three keywords by submission volume are:
· Computer vision (nearly 10,000 papers)
· Machine learning (nearly 8,000 papers)
· Natural language processing (over 4,000 papers)
To review such a large number of submissions, the conference recruited over 28,000 members from the Program Committee (PC), Senior Program Committee (SPC), and Area Chairs (AC).
Among them, the scale of the Program Committee for AAAI 2026 is nearly three times that of AAAI 2025.
In addition to adding manpower, AAAI will also use a series of AI tools to assist the review process, for example to detect and counter collusion among reviewers.
Meanwhile, AI-generated review comments will also serve as an important reference.
Sharp-eyed netizens noticed that the official "Reviewer's Guide" initially spoke of "human-written" and "AI-generated" reviews, but later switched to "human-generated".
Whether the officials themselves used AI when drafting the guide, no one knows.