
The results of AAAI 2026 have been announced. One paper achieved a high score of 88887, and across 23,680 submissions the acceptance rate was only 17.6%.

新智元, 2025-11-10 17:49
With more than 20,000 papers in fierce competition, are there also "favorites with connections"?

The acceptance results of AAAI 2026 have officially been announced! This year, the number of submissions soared to 23,680, and the acceptance rate was only 17.6%, making the competition far more intense than in previous years. Researchers whose papers made it in have been showing off their score sheets, and one even achieved a high score of 88887.

The acceptance results of AAAI 2026 are out!

In recent days, the AAAI organizing committee has been sending out emails revealing the acceptance results of this top-tier annual AI conference.

The emails indicate that AAAI 2026 received a total of 23,680 paper submissions, setting a new historical record. In contrast, AAAI 2025 received 12,957 valid submissions.

Meanwhile, 4,167 papers were accepted, for an acceptance rate of only 17.6%. By comparison, AAAI 2025 accepted 3,032 papers, an acceptance rate of 23.4%.

Statistics: https://papercopilot.com/statistics/aaai-statistics/aaai-2026-statistics/

As one of the recognized top conferences in the field of AI, AAAI was founded in 1980 and is held annually.

This year marks the 40th annual conference of AAAI, which will be held at the Singapore Expo from January 20th to January 27th, 2026.

Researchers who received acceptance emails have been showing off their score sheets one after another.

Netizens Show Off Their Score Sheets

In the research group led by Professor Zhang Ming at Peking University, fifth-year Ph.D. student Gu Yiyang had his first-author paper, "CogniTrust: A Robust Hashing Method with Verifiable Supervision Based on Cognitive Memory," accepted by AAAI 2026.

This year he already has four first-author papers at CCF-A venues; the first three were accepted by Artificial Intelligence, NeurIPS, and ACM MM, respectively.

Many real-world data labels are corrupted, incomplete, or ambiguous, and such noisy labels can seriously undermine the reliability of AI model training.

Inspired by how human memory works, the team proposed CogniTrust, a new framework that combines verifiable supervision with a three-part memory model: episodic memory, semantic memory, and reconstructive memory.

These components together form a closed-loop mechanism that verifies, calibrates, and integrates supervision from both spatial and semantic perspectives.

Experiments show that CogniTrust can structurally verify supervision signals and provide an interpretable basis for label decisions.
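
The paper's full pipeline is not spelled out here, so the following is only a minimal sketch of the closed-loop idea: an "episodic" nearest-neighbor memory and a "semantic" prototype memory must both agree with a label before it is trusted. All names and thresholds are hypothetical, not taken from CogniTrust.

```python
import numpy as np

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16))      # embeddings of 100 samples
labels = rng.integers(0, 5, size=100)   # possibly noisy labels, 5 classes

# Semantic memory: one prototype (mean embedding) per class.
protos = np.stack([feats[labels == c].mean(axis=0) for c in range(5)])

def label_survives(i, k=5):
    """True if sample i's label passes both memory checks."""
    # Episodic check: majority label among the k nearest stored samples.
    d = np.linalg.norm(feats - feats[i], axis=1)
    nn = np.argsort(d)[1:k + 1]         # skip the sample itself
    episodic_vote = np.bincount(labels[nn], minlength=5).argmax()
    # Semantic check: nearest class prototype.
    semantic_vote = np.linalg.norm(protos - feats[i], axis=1).argmin()
    return episodic_vote == labels[i] and semantic_vote == labels[i]

suspect = [i for i in range(100) if not label_survives(i)]
print(f"{len(suspect)} labels flagged for recalibration")
```

A real system would then recalibrate the flagged labels rather than simply discard them, which is presumably where the reconstructive component comes in.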

Jia Xiaojun from Nanyang Technological University shared that he and his team had five papers accepted at AAAI 2026: two orals and three posters.

The papers cover privacy protection for large models, safety alignment, multimodal safety, autonomous-driving robustness, and secure multi-agent communication.

Two Oral Papers:

MPAS: a parallel multi-agent system based on graph message passing. It removes the sequential-communication bottleneck, cutting communication time from 84.6s to 14.2s, and significantly improves robustness against backdoor attacks.

SECURE: a fine-tuning safety-constraint method that penalizes orthogonal updates to keep the model within a "narrow safety basin," reducing harmful behaviors by 7.6% while improving performance by 3.4% (see the sketch below).
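
SECURE's exact formulation is not given here; as a rough, hypothetical illustration of penalizing updates that leave a safe region, the sketch below assumes a low-rank orthonormal basis U spanning "safe" directions and adds a penalty on the component of the cumulative weight update outside that span. Every name and number is illustrative.

```python
import torch

torch.manual_seed(0)
dim, rank, lam = 64, 8, 10.0
U, _ = torch.linalg.qr(torch.randn(dim, rank))  # orthonormal "safe" basis
theta0 = torch.randn(dim)                       # weights before fine-tuning
theta = theta0.clone().requires_grad_(True)
opt = torch.optim.SGD([theta], lr=0.1)

def task_loss(t):
    # Toy fine-tuning objective whose optimum lies inside the safe subspace.
    target = theta0 + U @ torch.ones(rank)
    return ((t - target) ** 2).sum()

for step in range(200):
    delta = theta - theta0                      # cumulative update
    ortho = delta - U @ (U.T @ delta)           # part leaving the "basin"
    loss = task_loss(theta) + lam * (ortho ** 2).sum()
    opt.zero_grad(); loss.backward(); opt.step()

delta = (theta - theta0).detach()
print("orthogonal residual:", (delta - U @ (U.T @ delta)).norm().item())
```

The penalty drives the orthogonal residual toward zero, so fine-tuning stays approximately inside the assumed safe subspace.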

Three Poster Papers:

GeoShield: the first adversarial framework for protecting geolocation privacy against VLMs. Through feature decoupling, exposure identification, and scale-adaptive enhancement, it effectively prevents models from inferring geographic locations and significantly outperforms existing methods.

EmoAgent: the first emotion-based adversarial framework for multimodal reasoning models, revealing a "safety-reasoning paradox": by hijacking the reasoning path with exaggerated emotional prompts, it exposes deep-seated safety misalignment.

PhysPatch: a physically realizable adversarial patch framework for autonomous driving. It jointly optimizes the patch's content and its semantic placement, and shows strong transferability and practical deployment value across a range of MLLMs (a generic patch-optimization sketch follows below).
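
PhysPatch's joint semantic-placement optimization is not reproduced here; the sketch below shows only the generic adversarial-patch recipe it builds on: optimize the patch pixels by gradient descent so that a model misclassifies the patched image. The tiny model, placement, and target class are all placeholders.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Sequential(            # stand-in for a real perception model
    torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(8, 10))

image = torch.rand(1, 3, 64, 64)        # clean scene
patch = torch.rand(1, 3, 16, 16, requires_grad=True)
mask = torch.zeros(1, 1, 64, 64)
mask[:, :, 20:36, 20:36] = 1.0          # fixed placement for this sketch
opt = torch.optim.Adam([patch], lr=0.05)
target = torch.tensor([3])              # class the attacker wants predicted

for step in range(100):
    padded = F.pad(patch.clamp(0, 1), (20, 28, 20, 28))  # paste at (20, 20)
    x = image * (1 - mask) + padded * mask
    loss = F.cross_entropy(model(x), target)             # pull toward target
    opt.zero_grad(); loss.backward(); opt.step()

print("target confidence:", model(x).softmax(-1)[0, target].item())
```

A physically realizable version would additionally optimize over printability, viewpoint, and lighting transformations, which this sketch omits.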

Song Wenxuan, a Ph.D. student at the Hong Kong University of Science and Technology (Guangzhou), also had two oral papers accepted, both on VLA (Vision-Language-Action) models. One of them, ReconVLA, received a high score of 88887.

ReconVLA proposes a new approach to visual representation learning in VLA models: it introduces "visual tokens" to guide an auxiliary task of reconstructing the "gaze region," implicitly strengthening the model's visual grounding.
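
As a hedged illustration of the generic auxiliary-reconstruction idea (not ReconVLA's actual architecture), the sketch below trains a toy policy whose shared encoder must support both an action head and a small decoder that reconstructs an assumed gaze crop, so gradients from the reconstruction loss shape the visual representation.

```python
import torch
import torch.nn as nn

class ToyVLA(nn.Module):
    def __init__(self, d=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 64 * 64, d), nn.ReLU())
        self.action_head = nn.Linear(d, 7)           # e.g. a 7-DoF action
        self.recon_head = nn.Linear(d, 3 * 16 * 16)  # gaze-region decoder

    def forward(self, img):
        z = self.encoder(img)
        return self.action_head(z), self.recon_head(z).view(-1, 3, 16, 16)

model = ToyVLA()
img = torch.rand(4, 3, 64, 64)
gaze = img[:, :, 24:40, 24:40]            # assumed 16x16 gaze crop
action_target = torch.randn(4, 7)         # toy action supervision

action, recon = model(img)
loss = nn.functional.mse_loss(action, action_target) \
     + 0.5 * nn.functional.mse_loss(recon, gaze)  # auxiliary reconstruction
loss.backward()
print("combined loss:", loss.item())
```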

The other paper, VLA-Adapter, is a lightweight VLA base model that has attracted wide attention, earning 1.6k stars on GitHub. It achieves SOTA performance on mainstream benchmarks (CALVIN, LIBERO) with a model of only 0.5B parameters. Both works are fully open-sourced.

Li Kai from Tsinghua University and his team have 1 oral and 2 poster papers.

The oral paper received a high score of 689 (with a confidence of 5 attached to the 9). One reviewer commented:

I find it very curious that this idea has never been explored in our field.

The team's DegVoC borrows the idea of compressed sensing: it casts vocoding as an inverse, anti-degradation problem and, following the iterative-optimization view of such problems, solves it with a learned initialization plus deep-prior regularization.

Empirical results show that DegVoC matches the SOTA performance of current GAN/DDPM/FM methods at a significantly lower cost of 3.89M parameters and 45.62 GMACs per 5 seconds of audio.
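
The paper itself is not reproduced here, but the generic recipe it reportedly builds on, iterative optimization with a deep prior, is easy to sketch: recover a signal x from a degraded observation y = Ax by alternating a data-fidelity gradient step with a denoiser that acts as the prior. The operator, denoiser, and signal below are toy placeholders.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n = 256
x_true = torch.sin(torch.linspace(0, 12.0, n))   # stand-in smooth waveform
A = torch.randn(64, n) / n ** 0.5                # toy degradation operator
y = A @ x_true                                   # degraded observation

def denoiser(v):
    # Placeholder prior: light smoothing (a learned network in practice).
    return F.avg_pool1d(v.view(1, 1, -1), 5, stride=1, padding=2).view(-1)

x = torch.zeros(n)          # (DegVoC reportedly uses a learned init instead)
step = 0.5
for k in range(200):
    grad = A.T @ (A @ x - y)        # gradient of the data-fidelity term
    x = denoiser(x - step * grad)   # prior applied as a projection

print("relative error:", ((x - x_true).norm() / x_true.norm()).item())
```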

The other two poster papers are:

One paper proposes SepPrune, a structured pruning framework for deep speech-separation models. It introduces a differentiable mask strategy that lets the model learn, through gradients, which redundant channels to eliminate.

The pruned model converges 36 times faster than training from scratch, and with only one epoch of fine-tuning it recovers up to 85% of the pre-trained model's performance.
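
SepPrune's exact mask design is not public here; the sketch below shows the generic differentiable-channel-mask mechanism such methods use: each channel gets a learnable sigmoid gate, an L1 penalty pushes redundant gates toward zero, and low-gate channels are pruned after training. All shapes and coefficients are illustrative.

```python
import torch
import torch.nn as nn

class GatedConv(nn.Module):
    def __init__(self, cin, cout):
        super().__init__()
        self.conv = nn.Conv1d(cin, cout, kernel_size=3, padding=1)
        self.gate_logits = nn.Parameter(torch.zeros(cout))  # one gate/channel

    def forward(self, x):
        g = torch.sigmoid(self.gate_logits).view(1, -1, 1)  # soft 0..1 mask
        return self.conv(x) * g

layer = GatedConv(8, 32)
x = torch.randn(4, 8, 100)                              # (batch, ch, time)
task_loss = layer(x).pow(2).mean()                      # toy objective
sparsity = torch.sigmoid(layer.gate_logits).sum()       # L1 on soft gates
(task_loss + 1e-3 * sparsity).backward()                # joint gradients

keep = torch.sigmoid(layer.gate_logits) >= 0.5          # channels to keep
print(f"channels kept after thresholding: {int(keep.sum())}/32")
```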

The other paper proposes FGNet, a framework that efficiently transfers the powerful priors the vision foundation model Segment Anything 2 (SAM2) learned from massive natural-image data to electron-microscopy (EM) neuron segmentation.

Even with SAM2's weights completely frozen, the new method matches the SOTA; after fine-tuning, it significantly outperforms all existing solutions.
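
FGNet's architecture is not reproduced here, but the frozen-backbone result suggests the familiar transfer recipe sketched below: freeze a pretrained encoder and train only a small task head on the new domain. The backbone here is a stand-in, not SAM2.

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(                        # pretend this is a pretrained
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),   # foundation-model encoder
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
for p in backbone.parameters():
    p.requires_grad = False                      # freeze all backbone weights

head = nn.Conv2d(16, 2, kernel_size=1)           # small trainable seg head
opt = torch.optim.Adam(head.parameters(), lr=1e-3)

em_image = torch.rand(2, 1, 64, 64)              # toy EM slices
mask = torch.randint(0, 2, (2, 64, 64))          # toy binary neuron masks

for step in range(50):
    with torch.no_grad():
        feats = backbone(em_image)               # frozen features
    loss = nn.functional.cross_entropy(head(feats), mask)
    opt.zero_grad(); loss.backward(); opt.step()

print("final loss:", loss.item())
```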

Hamid Rezatofighi, an associate professor from Monash University, said that his team also had three papers (1 oral) accepted.

More scholars have shared the results of their accepted papers.

Competition Among 20,000+ Papers: Are There "Insiders"?

On Reddit, the discussion about this year's AAAI is in full swing.

For the first time, total submissions to AAAI 2026 exceeded 20,000, breaking all previous records.

Previously, some netizens reported that AAAI 2026 received nearly 30,000 submissions.

According to openaccept statistics, the acceptance rate of AAAI 2026 is the lowest in the past three years. This is not good news for submitters.