
After months of debate, Linus has finally made a decision. Linux has officially "legislated" for AI code: using AI is allowed, but humans must take the blame.

CSDN · 2026-04-14 21:40
It's not just Linux; the entire open-source community is in "turmoil."

If, over the past two years, the open-source community's attitude toward AI was still stuck on the question of "whether to use it," that question has now been rewritten by reality into: how do we minimize the risks, given that using AI is inevitable?

Recently, the long-running controversy over AI-generated code finally reached a conclusion in the Linux kernel community: Linus Torvalds and the kernel maintainers have officially formulated a set of project-level rules for using AI-generated code.

Put simply, it comes down to one sentence: you may use AI, but you must take responsibility for everything it does.

Linus's attitude: AI is just a tool, and "banning AI" is meaningless

For the past few months, the Linux kernel community has been locked in a delicate tug-of-war:

● On one side, increasingly common AI programming tools, such as GitHub's Copilot and large models like Claude;

● On the other, the maintainers' deep-seated anxieties about code quality, legal risk, and community culture.

The debate boiled over early this year, when kernel developers from Intel and Oracle publicly disagreed over whether AI-generated code should be strictly restricted, and the discussion escalated into a fierce community-level confrontation. Some advocated strong oversight or even an outright ban, while others saw this as simply a normal stage of technological evolution.

Finally, Linus Torvalds stepped in personally and ended the debate with one line: "Discussing AI junk code is actually meaningless. It's just plain stupid."

In his view, AI is essentially no different from editors and compilers: it is just a tool. What really needs oversight is people, not the tools they use. So rather than trying to "ban AI," it is better to bind responsibility firmly to the person who submits the code.

Whether AI is used or not, the person who submits the code is responsible

The core change in the new policy looks like a mere labeling adjustment: AI-generated code may not carry the Signed-off-by tag; instead, it must carry an Assisted-by tag that clearly marks AI involvement.

A brief aside: in the Linux development process, Signed-off-by has always been a tag with real legal weight. It means the submitter certifies that the code's origin is legal and that they have the right to submit it. That tag is now explicitly forbidden for AI-generated content and replaced by the new Assisted-by tag.

The purpose of this adjustment is actually very clear:

● AI participation must be clearly marked (transparency)

● Ultimate responsibility still rests entirely with human developers (accountability)

In other words, whether the code was written by you or generated by AI, once you submit it, any bugs, performance issues, or even security vulnerabilities are yours to answer for. The Linux community did not try to define the "credibility" of AI; it sidestepped that hard problem entirely and fell back on the most traditional engineering principle: whoever submits is responsible.
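Because the policy reduces to mechanical rules about commit-message trailers, it can in principle be checked automatically. Below is a minimal Python sketch of such a check: the trailer names (Signed-off-by, Assisted-by) come from the article, while the function names and the crude AI-name heuristic are purely hypothetical illustrations, not anything the kernel project actually runs.

```python
# Hypothetical sketch: enforce "AI goes under Assisted-by, humans under
# Signed-off-by" on a commit message. Trailer names follow the article;
# everything else here is an assumption for illustration.

def parse_trailers(message: str) -> dict[str, list[str]]:
    """Collect "Key: value" trailer lines from the last paragraph of a commit message."""
    trailers: dict[str, list[str]] = {}
    last_block = message.strip().split("\n\n")[-1]
    for line in last_block.splitlines():
        if ": " in line:
            key, value = line.split(": ", 1)
            trailers.setdefault(key.strip(), []).append(value.strip())
    return trailers


def check_commit(message: str) -> list[str]:
    """Flag violations of the rule described above: a human Signed-off-by is
    required, and AI involvement belongs under Assisted-by, never Signed-off-by."""
    trailers = parse_trailers(message)
    problems = []
    if not trailers.get("Signed-off-by"):
        problems.append("missing human Signed-off-by")
    for signer in trailers.get("Signed-off-by", []):
        # Crude illustrative heuristic for spotting an AI "signer".
        if any(name in signer for name in ("AI", "Copilot", "Claude", "ChatGPT")):
            problems.append(f"AI listed under Signed-off-by: {signer}")
    return problems
```

A commit that follows the new convention would pass cleanly, for example:

```python
msg = (
    "mm: fix off-by-one in page accounting\n"
    "\n"
    "Signed-off-by: Jane Developer <jane@example.com>\n"
    "Assisted-by: Claude (Anthropic)\n"
)
check_commit(msg)  # → []
```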

Why is the open-source community so anxious about AI?

Looking only at the surface, it is easy to read this debate as "old-school engineers vs. modern new tools." In fact, the controversy is not a technical problem but a legal one.

The open-source world has long relied on a key mechanism, the DCO (Developer Certificate of Origin): when developers submit code, they declare that they have the right to submit it. The problem is that AIs like Copilot and ChatGPT are trained on vast amounts of open-source code, including code under strongly restrictive licenses such as the GPL (GNU General Public License) and all sorts of data with unclear copyright status.

This creates an awkward situation: developers cannot actually prove the "legal provenance" of AI-generated code. Red Hat warned in an earlier analysis that using AI-generated code may inadvertently violate open-source licenses and could even undermine the DCO system at its foundation.

Beyond the legal risk, there is a more practical problem: there is too much AI code, and its quality is wildly uneven. The open-source community even coined a name for it: "AI slop."

Such code often looks structurally complete and syntactically correct, but is riddled with logical holes and outright "hallucinations." There have already been plenty of real-world cases:

● The maintainer of cURL was flooded with AI-generated bug reports and had to shut down the project's vulnerability reward program;

● The whiteboard tool tldraw began automatically closing external PRs to cut down on invalid submissions;

● Projects such as Node.js and OCaml have even seen AI-generated patches tens of thousands of lines long, sparking disputes among maintainers.

Ultimately, what the community truly cannot accept is not AI itself, but the practice of hiding its use.

There was a telling incident inside the Linux kernel: Sasha Levin, an NVIDIA engineer and kernel maintainer, once submitted a patch generated entirely by a large model, without any AI label. The code ran, but it introduced a performance regression and misled other maintainers during review.

Afterwards, even Linus Torvalds admitted that, because there was no AI label, the code had not been fully reviewed. So the essence of the problem is clear: the open-source community is not afraid of you using AI; it strongly objects to you pretending you wrote the code yourself.

It's not just Linux; the entire open - source circle is in "turmoil"

Similar conflicts are not limited to Linux. In the mod community of the classic game Doom, the GZDoom project split over the use of AI.

The project lead, Christoph Oelckers, was found to have used AI to generate code without disclosing it. Facing community criticism, he simply shrugged it off: "If you're not satisfied, you can fork." The community did exactly that: a new branch called UZDoom quickly emerged, many core developers migrated to it, and the original project was badly weakened.

Events have proved that in the open-source world, once transparency breaks down, a split is almost inevitable.

By contrast, the Linux kernel community's final answer is very "engineer-minded": whether the code is good matters more than whether AI wrote it. You may use AI to generate code, but if that code has problems, if it is "AI slop," if it crashes the kernel, then the person who clicked "submit" answers to Linus personally.

In the open - source world of Linux, this is probably the strongest constraint mechanism.

Reference link: https://www.tomshardware.com/software/linux/linux-lays-down-the-law-on-ai-generated-code-yes-to-copilot-no-to-ai-slop-and-humans-take-the-fall-for-mistakes-after-months-of-fierce-debate-torvalds-and-maintainers-come-to-an-agreement

This article is from the WeChat official account "CSDN," compiled by Zheng Liyuan, and published by 36Kr with authorization.