Linux Sets Rules for AI-Generated Code: Copilot Can Be Used, but Humans Bear the Blame
Has the world changed, and has Linus compromised? AI-generated code can now be included in the Linux kernel, but humans will be held accountable if something goes wrong.
After months of debate, Linus Torvalds and the Linux kernel maintainers have established the first set of rules for AI-assisted code in the kernel. The new rules reflect Torvalds' consistently pragmatic style: AI tools may be used, but the kernel's high bar for code quality will not be lowered in the slightest.
The new guidelines establish three main points.
First, AI agents cannot add the Signed-off-by tag
Only humans can legally sign the Linux kernel's Developer Certificate of Origin (DCO), the legal mechanism that ensures code license compliance. In other words, even if the patch you submit was written entirely by AI, the responsibility lies solely with you, not with the AI or its provider.
Second, AI assistance must be disclosed with an Assisted-by tag
If an AI tool is used during kernel development, it should be clearly identified so that everyone knows how much the AI contributed to the submission. Such contributions need to carry an Assisted-by tag in the following format:
Assisted-by: AGENT_NAME:MODEL_VERSION [TOOL1] [TOOL2]
Here you state which agent, which model, and which auxiliary tools were used. AGENT_NAME is the name of the AI tool or framework; MODEL_VERSION is the specific model version; [TOOL1] [TOOL2] are optional, listing any specialized analysis tools that were also used, such as coccinelle, sparse, smatch, or clang-tidy. Everyday tools like git, gcc, make, and editors do not need to be listed.
For example, if you used the Claude agent with the claude-3-opus model and also ran coccinelle and sparse for analysis, you could write:
Assisted-by: Claude:claude-3-opus coccinelle sparse
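Putting the two tags together, a complete commit message might look like the following. This is a hypothetical sketch, not a real patch: the subject line, description, and names are invented. The trailer placement follows standard kernel convention, with Assisted-by sitting alongside the other trailers and a human's Signed-off-by closing the message.

```
foo: convert bucket lookup to power-of-two hash API

Switch the driver's hash helpers to the mask-based API.
(Hypothetical example for illustration only.)

Assisted-by: Claude:claude-3-opus coccinelle sparse
Signed-off-by: Jane Developer <jane@example.org>
```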
Third, humans bear full responsibility
Putting the previous points together, the message is clear: AI can help write code, but responsibility cannot be shifted onto it. Whether AI-generated code has been fully reviewed, whether its license is compliant, and whether bugs or security vulnerabilities surface later, it is ultimately the person who submitted the code who answers for it.
Don't try to sneak problematic code into the kernel; if you do, you can say goodbye to your standing as a Linux kernel developer and to any chance of participating in serious open-source projects. The 2021 University of Minnesota incident is a typical cautionary tale.
In that incident, a group of researchers from the University of Minnesota, in the name of security research, deliberately submitted patches with defects and even potential vulnerabilities to the Linux kernel to test whether the community could identify and block these "problematic fixes." The problem was not only that the patches themselves were problematic, but also that they did not inform the community in advance and directly used the maintainers and developers as experimental subjects.
After the incident was exposed, Greg Kroah-Hartman publicly criticized this practice for wasting the community's time and destroying collaborative trust. The relevant submissions were intensively reviewed and rolled back, and the University of Minnesota's subsequent contributions were also banned by the Linux community, becoming a negative example still frequently cited in the kernel community to this day.
1 Claude is already powerful, but it's just a tool in the kernel
The Assisted-by tag is both a transparency mechanism and a "reminder mark." It allows maintainers to conduct more rigorous reviews of AI-assisted patches without stigmatizing the use of AI.
The emergence of this tag actually stems from a significant controversy.
The controversy started with Sasha Levin, an Nvidia engineer and well-known Linux kernel developer. At the 2025 Open Source Summit North America, he shared some of his practices for using LLMs to improve the kernel.
He believes that an LLM is essentially a pattern-matching engine with a huge number of parameters and can be regarded as a "super-large state machine." The difference is that the state machines commonly found in the kernel are deterministic, while the state transitions of an LLM are probabilistic. Given a piece of context, it will predict "the next most likely word." For example, if you input "the Linux kernel is written in...", it will almost certainly output "C", but there is also a small probability of outputting "Rust" or "Python."
At the same time, an LLM works based on a "context window," which is the input text it can "remember" when answering. For a system like Claude, the context window is about 200,000 tokens, which is enough to cover a complete kernel subsystem.
Levin does not think that LLMs will replace humans in kernel development. He prefers to regard them as the "next-generation compiler." In the past, developers wrote assembly code, and then high-level languages emerged; some people disapproved at the time, saying that "real developers should allocate registers themselves." But in the end everyone accepted higher-level tools, and productivity rose accordingly. The evolution of LLMs is similar: not perfect, but already sufficient to bring real gains in efficiency.
He gave an example: A patch merged into Linux 6.15 was signed by him, but the code was actually completely generated by an LLM, including the changelog. Levin reviewed and tested the code but did not write it himself. He believes that such "small and clear" tasks are exactly the strengths of LLMs, but it is not yet possible for them to independently write a brand-new device driver.
LLMs are also very helpful in writing commit messages, which is often more difficult than writing the code itself, especially for developers whose native language is not English.
He also showed part of the change from that patch.
Here, when switching from one hash API to another, the "size" needed to be changed to "a power-of-two representation." The LLM correctly understood this and made the corresponding modification. It also realized that a mask operation was actually redundant after the patch and directly deleted it. Levin said that this code was both correct and efficient.
Regarding this matter, the community also discussed more issues. For example, someone asked: Will errors be introduced due to excessive trust in the LLM's output? Levin's answer was: LLMs can make mistakes, and humans can too, and often do. Someone else asked about the code's license issue, and he said that he hadn't thought deeply about it and intuitively believed that the code generated by the model could be used.
Finally, someone asked if this method could be used for automatic review before code merging. Levin said that it was technically feasible, but the scale was too large and the cost was too high, so it was not realistic at present, but it might be possible in the future.
A direct result of this storm was that Levin himself began to support the establishment of transparent rules for AI use. He submitted the first version of the proposal, which was the prototype of the later kernel AI policy. Initially, he suggested using Co-developed-by to mark AI participation.
Subsequently, both offline discussions and exchanges on the Linux Kernel Mailing List (LKML) debated whether to introduce a new Generated-by tag or reuse the existing Co-developed-by. In the end, the maintainers chose Assisted-by, which more accurately reflects the role of AI as a "tool" rather than a "co-author."
The final choice of Assisted-by instead of Generated-by is mainly based on three considerations:
First, it is more accurate. In kernel development, AI is more often used for assistance (code completion, refactoring suggestions, test generation) rather than complete code generation;
Second, the format is consistent. It is consistent with existing tags such as Reviewed-by, Tested-by, and Co-developed-by;
Third, it is a neutral expression. It indicates the participation of the tool without implying that the code is "untrustworthy" or "inferior."
This pragmatic approach is actually consistent with Torvalds' attitude. He said, "I don't want the kernel development documentation to become some kind of AI position statement. There are already enough voices saying 'the world is coming to an end' and 'AI will completely change software engineering.' I don't want the documentation to take sides. For me, it should be: AI is just a tool."
2 The world has changed overnight, and the rules have to change too
Behind this decision, an important real-world change is that AI programming assistants have suddenly become "really useful" in kernel development.
Last month, Greg Kroah-Hartman, the maintainer of the Linux stable kernel, mentioned that a few months earlier the kernel community was mainly dealing with so-called "AI slop," the obviously incorrect, low-quality AI-generated security reports. "It was even a bit funny at the time and not very worrying." Of course, the Linux kernel has many maintainers, and for them the pressure from such junk reports is far lighter than what cURL faces. Daniel Stenberg, the founder and lead developer of cURL, once suspended its bug bounty because of the flood of AI junk reports.
But the situation has changed. Kroah-Hartman said that about a month ago, around February, a turning point occurred, and "the whole world switched. Now what we receive are 'real' reports."
And it's not just Linux. He mentioned that many open-source projects are now receiving AI-generated reports, and these reports are good and real. There has always been informal communication among the security teams of major open-source projects, and everyone has observed the same change. In other words, this is no longer just a problem for Linux but a new situation that the entire open-source security community is facing simultaneously.
As for what exactly happened, no one can say for sure. When asked about the reason for the change, he said, "We don't know. No one knows why. Maybe many tools suddenly got better, or maybe people started using these tools seriously. It seems that many different teams and different companies are doing this simultaneously."
"These tools are really useful, and we can't ignore them: they're here, and they're getting stronger."
On Monday, Beijing time, Linus Torvalds released Linux kernel 7.0. For him, the jump to 7.0 is not a "major version turning point"; it is simply that once the minor version reached x.19, he rolled the number over to the next x.0 to keep the versioning from looking messy.
However, there is still a sentence in the release notes that is particularly worth noting. Torvalds wrote, "I suspect that in the next period, as people use AI tools extensively, they will continue to uncover various minor issues, so this may become the 'new normal' for some time. As for how long it will last, we'll have to wait and see."
In addition, although the Linux kernel has introduced an AI disclosure policy, maintainers will not rely on so-called AI detection tools to identify undisclosed AI code. They still have to rely on the old methods: in-depth technical experience, pattern recognition ability, and the most traditional code review. As Torvalds said in 2023, "Judging others' code requires a certain taste."
That's the problem. As he said, "It's meaningless to discuss AI junk code because those who write it won't label it voluntarily." The difficult part has never been the obviously problematic junk code, but those patches that seem completely normal on the surface: they meet the current requirements, have a consistent style, and can be compiled smoothly, but there are subtle bugs or hidden long-term maintenance costs inside.
Therefore, the implementation of this new policy does not rely on catching every violator but on constraining behavior by increasing the cost of violation. Just ask those who have been "educated" by Torvalds in person for submitting poor-quality patches. Although he is much more gentle than before, you still don't want to be on his bad side.
Reference materials:
https://docs.kernel.org/process/coding-assistants.html
https://www.youtube.com/watch?v=ec7gDUFm2-Q
https://lwn.net/Articles/1026558/
https://lore.kernel.org/lkml/CAHk-=wg0sdh_OF8zgFD-f6o9yFRK=tDOXhB1JAxfs11W9bX--Q@mail.gmail.com/
https://www.theregister.com/2026/04/13/linux_kernel_7_releaseed/