
19,000 lines of Claude Code output triggered a joint "blockade" by hundreds of developers. Core Node.js members petitioned: AI-assisted development should be banned from the project.

CSDN · 2026-03-30 19:48
The controversy over AI-written code has never really died down. But this time, the fire has spread to one of the most critical pieces of infrastructure: Node.js.

Recently, a petition addressed to the Node.js Technical Steering Committee (TSC) has attracted wide attention. Within days, more than a hundred open-source developers, front-end engineers, and programmers signed it, calling on the Node.js community to ban AI-generated code from the core repository and bringing the community's internal divisions into the open.

Some see the controversy as a defense of code quality; others see it as a panicked reaction to technological progress.

A 19,000-line PR generated with Claude Code

Looking back at the whole event, it all started with a code submission in January 2026.

Matteo Collina, a member of the Node.js Technical Steering Committee (TSC) and maintainer of the Fastify framework, wanted to add the long-awaited virtual file system (VFS) feature to Node.js. To that end, he submitted a PR of roughly 19,000 lines of code spanning about 80 files.

To make his case, he also published a long article titled "Why Node.js Needs VFS," detailing the pain points of a Node.js without a virtual file system. For example: packaging an application into a single executable requires bundling large amounts of extra resources; tests cannot get an in-memory file system that is consistent with the module system; in multi-tenant environments, isolation can only be achieved through error-prone path validation; and dynamically loading generated code at runtime has to fall back on temporary files.

Existing solutions such as memfs and unionfs can only simulate fs by monkey-patching it; they cannot hook into module resolution, so calls like import() bypass the VFS entirely.

In Matteo Collina's view, the community had long been explicit about this requirement, but a real implementation had never materialized.

Against this background, he started developing the VFS implementation during the Christmas holidays last year and submitted this PR containing 19,000 lines of code in January this year.

Normally this would have been cause for celebration, but a disclaimer he added to the PR description lit the fuse of the controversy:

I used a large number of Claude Code tokens to create this PR. All changes have been reviewed by myself.

In addition, Matteo Collina also wrote in his blog:

What was originally just a holiday experiment ultimately became PR #61478: a node:vfs module for Node.js, covering 66 files with nearly 14,000 lines of code.

To be honest, a PR this large usually takes several months of full-time work. The reason it could be finished over the holidays is that I used Claude Code. I let the AI handle the boring, repetitive parts, the tasks that made a 14,000-line PR possible but that nobody wants to write by hand: implementing the different variants of each fs method (synchronous, callback, Promise), filling in test coverage, and generating documentation. Meanwhile, I focused on the architecture, the API design, and reviewing every line of code.

Without AI, this holiday side project would never have been completed - it simply wouldn't have happened.

Claude Code is an AI programming tool from Anthropic with a 200K-token context window, capable of generating complex cross-file logic and even carrying out refactors. But in the Node.js core, a project that runs on millions of servers worldwide, AI-generated code clearly crossed a line for many veteran developers.

In the view of many developers, the problem was not just that Matteo Collina "used AI": the PR was enormous, it modified core modules, the way it was generated was opaque, and the copyright ownership of the code was in question.

Over the next two months, the PR went through 128 review rounds and drew 108 comments; its sheer size nearly ground the regular peer-review process to a halt.

As of March 26, 2026, this PR has not been merged into the main branch.

From a PR to a discussion on "whether to ban AI"

The controversy escalated when Fedor Indutny, principal author of the Node.js TLS module and a former TSC member, launched an open petition on GitHub: "Petition to the Node.js TSC: Ban the Use of AI in Core Code".

In short order, more than 100 developers signed in support, including Kyle Simpson, author of the You Don't Know JS (YDKJS) series, and Andrew Kelley, chairman of the Zig Software Foundation.

This petition states:

Node.js is critical infrastructure running on millions of servers around the world, and it also underpins the command-line tools developers use every day. Diluting core code that has been carefully polished over the years with AI-generated code runs counter to the project's mission and values, and risks eroding the reputation built on public contributions, which is what gives Node.js its current standing and social value.

The petition lists three core reasons for opposing AI-generated code:

Ethics: mainstream large language model companies have used unethical data sources in training, including copyrighted works and open-source code under various licenses, without proper attribution.

Education: there is evidence that using large language models can hinder learning. Since open-source projects often attract new contributors, lowering the bar for contributed code may weaken people's understanding of the Node.js core and undermine the project's long-term sustainability.

Moreover, code review is not only about finding bugs, fixing security issues, and keeping code consistent with the project's style and architecture; it is also an important way for submitters to learn and grow. Large language models themselves do not learn, so the time reviewers invest never translates into improved contributor skills and is thus repeatedly "wasted".

Privilege: using large language models usually requires a paid subscription, or substantial hardware to run them locally (often with lower output quality). Submitted generated code should therefore be reproducible by reviewers without forcing them to clear hurdles such as paid subscriptions to verify the results.

Opposing views and different stances

As soon as the petition was published, many commenters voiced support: "The pressure of manually reviewing 19,000 lines of code is enormous. AI assistance may inflate the number and size of PRs, ultimately overwhelming a review system that relies on volunteers."

Plenty of developers disagreed. Among them, TSC member James Snell argued that the only criterion for judging code should be its quality, not the tool used to produce it.

As the person at the center of the storm, Matteo Collina responded at length and forcefully on his blog, rebutting the criticism with a "pasta maker" analogy: My grandmother made pasta with a machine called "Nonna Papera"; almost every Italian family has one. But no one would say it wasn't her pasta. She chose the flour and the eggs and decided on the thickness and shape; the tool just helped her finish the job. Likewise, I decided on the architecture, designed the API, and reviewed every line of code. This code belongs to me.

He further stressed that the DCO (Developer Certificate of Origin) has never cared about how code is written, only about whether the contributor has the right to submit it and takes responsibility for it. In that respect, it is no different from accepting any other open-source contribution.

As for next steps, Matteo Collina said: "The Node.js TSC is about to vote on disclosure and sign-off rules for AI-assisted contributions. The community consensus is that responsibly accepting AI-assisted contributions can accelerate open-source development, while simply banning AI tools will only narrow the pool of contributors. The most important role in software development has never changed: it is not the person or tool writing the code, but the person who understands, reviews, and takes responsibility for it."

A sharp contrast: in the Linux kernel, AI reports turned from "garbage" into "gold" overnight

It's worth mentioning that while Node.js is in an uproar over AI code, the Linux kernel community is experiencing an "AI is amazing" moment.

According to The Register, Linux kernel maintainer Greg Kroah-Hartman revealed at KubeCon Europe that AI's contributions to the kernel have flipped from "garbage" to "gold" virtually overnight.

Before February 2026, the Linux kernel community was still plagued by "AI slop": AI-generated security reports were riddled with obvious errors and had no reference value. In Greg's words, "it's a bit of a joke, and there's no need to worry at all."

That was the norm across the open-source world at the time: cURL founder Daniel Stenberg complained that AI slop had gutted the effectiveness of the project's bug-bounty program, and in January 2026 he finally shut the six-year-old program down; the Ghostty project introduced a zero-tolerance policy the same month, permanently banning anyone who submitted AI garbage code.

But February 2026 became the turning point. Greg admitted he "didn't know what happened; the world suddenly changed": the once-worthless AI reports became high-quality, actionable ones overnight. And not just for the Linux kernel: open-source projects everywhere began receiving "real bugs and real suggestions" generated by AI.

More surprising still, the change was not an isolated case. After comparing notes privately, the security teams of major open-source projects found that everyone was seeing the same AI upgrade, yet no one could explain why: had the AI tools suddenly evolved, or had developers learned to use them more effectively?

Greg's own experiment underscored the point: with a simple prompt he had an AI analyze kernel code, and it immediately produced 60 issues along with proposed fixes. Two-thirds of the patches were directly usable; the remaining third contained errors but still pointed at real problems. With light manual polishing, the patches could be folded into the kernel development process, an obvious efficiency gain.

In the Linux kernel community, the question is no longer "whether to use AI" but "how to use it better." For now, AI mainly acts as a code-review assistant rather than the primary author of core code, but that boundary is blurring: the community has introduced a "co-develop" label to mark patches produced with AI assistance, and in simple scenarios such as error detection and condition checks, AI can already generate dozens of usable patches, enough to handle basic development work.

Conclusion

It is still hard to say whether Node.js core developers will one day be as surprised by AI output as Greg Kroah-Hartman was.

What is certain is that almost every mature open-source project will sooner or later face the same challenge: AI is not going away, and code output will only get faster. An outright ban is impractical; the key is controlling code quality and ensuring that every line of code receives reliable human review.

Finally: how much of the code you write today is really "written by you"? Share your views on AI-generated code in the comments.

References:

https://www.theregister.com/2026/03/26/greg_kroahhartman_ai_kernel/?td=rt-4a

https://www.reddit.com/r/javascript/comments/1rz2pc6/petition_no_ai_code_in_nodejs_core/

https://github.com/indutny/no-ai-in-nodejs-core

https://github.com/nodejs/node/pull/61478

https://adventures.nodeland.dev/archive/who-is-responsible-for-ai-generated-code/

This article is from the WeChat official account "CSDN". Compiled by Tu Min. Republished by 36Kr with authorization.