A one-month shift leaves a Linux kernel guru baffled: last month it was "AI slop," but this month AI bug reports suddenly seem reliable?
Recently, developers maintaining open-source projects may have noticed something odd: accurate bug reports seem to be arriving in growing numbers. More precisely, AI-generated bug reports have suddenly become "reliable".
This is not a fluke affecting a few projects, but a change that has occurred almost simultaneously across the entire open-source world. At the recent KubeCon Europe, Greg Kroah-Hartman, a core maintainer of the Linux kernel, shared a somewhat unsettling observation:
"About a month ago, something seemed to change. The AI-generated reports we receive now are all genuinely valuable bug reports."
The problem is: no one knows what happened.
From 'AI slop' to real reports in just one month
Greg recalled that just a few months ago, the Linux kernel team was being flooded with this stuff: "We used to call it AI slop."
Most of these AI-generated security reports had obvious problems: illogical reasoning, non-existent vulnerabilities, confusing descriptions, even inconsistent code paths. For maintainers, they were more distraction than help.
Fortunately, the Linux kernel maintainer team is large enough to absorb such distractions. Smaller projects have not been so lucky: the cURL project, led by Daniel Stenberg, at one point suspended its bug bounty program outright because the flood of AI slop reports made it impossible to tell genuine reports from fake ones.
But then came a turning point. Greg described it bluntly: "After a certain point in time, the situation suddenly changed."
The situation now looks like this:
● Most AI-submitted bug reports describe verifiable, real problems;
● Reports are better structured, with more coherent analysis;
● It is no longer 'random guessing' but security analysis approaching the level of a human developer.
More importantly, this is not a phenomenon unique to Linux.
"All open-source projects have started receiving high-quality, genuinely useful AI-generated reports, no longer the junk content of the past." Greg said the security teams of the major open-source projects talk privately, and everyone has observed the same shift: "Every open-source security team is experiencing this right now."
When asked what exactly changed, his answer was blunt: "I don't know. Really, no one knows."
Greg speculated that either a wave of AI tools suddenly became far more capable, or many people began working seriously in this area; many different teams and companies appear to be pushing on it at once.
But whatever the reason, one thing is certain: the entire open-source security ecosystem is going through this "AI leap" in sync.
AI is not only finding bugs but starting to 'fix bugs' too
The changes don't stop there. In the Linux kernel today, AI's main role is still concentrated in the code-review stage, with a small amount used for generating patches and rarely for writing core code directly. But Greg said: "For some simple problems (such as error-handling logic), AI can already generate dozens of usable patches."
Greg gave a concrete example: he once used a very simple, even 'casual' prompt to ask an AI to analyze code and propose fixes. The AI came back with 60 problems and corresponding patches in one go. About a third of them were wrong, but even the wrong ones pointed at real risks. The remaining two-thirds were fixes that could work as-is.
Of course, these patches cannot be merged directly. They still need to be triaged by hand, given commit messages, and integrated into the tree. But the key point is that this proves they are no longer 'useless AI junk' but 'usable semi-finished work'.
As Greg put it: "These tools work very well. We can't ignore them. They are developing rapidly and getting stronger and stronger."
Linux fights back with AI of its own to keep up
With the surge in AI-generated content, a new problem has emerged: human maintainers are starting to fall behind on review.
To address this, the Linux community has started deploying AI of its own. A key tool is Sashiko, developed by Google and later donated to the Linux Foundation. Its goal is clear: run an AI pre-review pass before patches reach the human review stage.
Meanwhile, each subsystem is also accumulating its own 'AI review experience'. "Different subsystems tune their capabilities and prompts accordingly: what points the storage module should focus on, what points the graphics module should focus on. Everyone contributes these refinements in the open community. That is the right way to do it, and it works very well."
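The article does not describe how these subsystem-tuned prompts actually look, so the following is purely a hypothetical sketch of the idea: a shared table of per-subsystem focus points and a helper that assembles a review prompt for a given patch. All names here (`SUBSYSTEM_FOCUS`, `build_review_prompt`, the focus points themselves) are illustrative assumptions, not real kernel tooling.

```python
# Hypothetical sketch: subsystem-specific AI review prompts.
# Nothing below is real kernel infrastructure; names and focus
# points are invented for illustration.

# The kind of per-subsystem knowledge the article says communities
# are accumulating and sharing in the open.
SUBSYSTEM_FOCUS = {
    "storage":  ["I/O error paths", "reference counting", "locking around bio completion"],
    "graphics": ["fence ordering", "GPU memory lifetimes", "ioctl input validation"],
    "net":      ["skb ownership", "RCU usage", "integer overflow in length fields"],
}

# Generic fallback so the same harness can run across the whole tree.
DEFAULT_FOCUS = ["error handling", "memory safety", "locking"]

def build_review_prompt(subsystem: str, patch_text: str) -> str:
    """Assemble a review prompt tuned to the patch's subsystem."""
    focus = SUBSYSTEM_FOCUS.get(subsystem, DEFAULT_FOCUS)
    bullet_list = "\n".join(f"- {point}" for point in focus)
    return (
        f"You are reviewing a Linux kernel patch for the {subsystem} subsystem.\n"
        f"Pay particular attention to:\n{bullet_list}\n\n"
        f"Patch:\n{patch_text}\n"
        "List concrete problems only; do not speculate."
    )
```

An unknown subsystem falls back to the generic focus points, which mirrors the pattern described above: a common harness, with each subsystem contributing its own prompt refinements.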
Greg also mentioned that Chris Mason, a senior kernel developer currently at Meta, pioneered an AI-based review workflow that has long been running on the eBPF and networking subsystems; the systemd project uses similar tooling on its pure-C codebase.
However, he also stressed that AI review supplements rather than replaces human review: "AI can offer many high-quality review comments, but it cannot cover every case, and some of its conclusions are still wrong. It does, however, catch many obvious problems."
Ultimately, the real value of AI review lies not so much in whether it is always correct as in the fact that it is fast.
In the traditional process, a patch might wait days or even longer before a maintainer sees it. AI can give preliminary feedback within minutes. That sets off a chain reaction: developers can fix problems and resubmit faster; patches with obvious defects get filtered out early; and maintainers can spend their energy on harder decisions.
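The filtering step in that chain reaction can be sketched as a tiny triage rule, assuming (hypothetically) that the AI pre-review emits a list of findings with severities. The `Finding` type, the severity labels, and `triage` are all invented for illustration; real community tooling is not public in this detail.

```python
# Hypothetical sketch of the "instant feedback" gate described above:
# an AI pre-review produces findings, and a small triage rule decides
# whether the patch bounces back to the author or moves on to humans.
from dataclasses import dataclass

@dataclass
class Finding:
    message: str
    severity: str   # assumed labels: "blocker", "warning", or "nit"

def triage(findings: list[Finding]) -> str:
    """Route a patch based on AI pre-review findings.

    Returns "bounce" (feedback goes to the author within minutes)
    or "human-review" (nothing obviously wrong; queue for maintainers).
    """
    if any(f.severity == "blocker" for f in findings):
        return "bounce"
    return "human-review"

# Example: an obvious defect never reaches the maintainer queue.
result = triage([Finding("use-after-free in error path", "blocker")])
```

The point of the sketch is the division of labor: the machine handles the minutes-scale "is this obviously broken?" question, leaving the days-scale human queue for patches worth a maintainer's time.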
In a sense, AI has turned code review from 'waiting in a queue' into 'instant feedback'.
But the cost is real too: the workload keeps growing
It sounds like everything is getting better, but Greg's summary is notably reserved: "The amount of things we need to review has increased."
AI has lowered the barrier to participation and made submissions look more plausible, which has directly driven a surge in incoming material. For a project the size of Linux, this is still tolerable. For small and mid-sized open-source projects, the increase may be overwhelming.
That is why security initiatives such as OpenSSF and Alpha-Omega are working to give maintainers more tools to cope with this flood of AI input.
For all open-source maintainers, then, the real challenge is no longer whether to use AI but how to turn it into productivity without being buried by it. And judging from the current trend, this AI 'infrastructure race' has only just begun.
Reference link: https://www.theregister.com/2026/03/26/greg_kroahhartman_ai_kernel/
This article is from the WeChat official account "CSDN", compiled by Zheng Liyuan, and published by 36Kr with authorization.