19,000 lines of Claude Code "AI junk" invades Node.js: The world's top open-source project is on the verge of collapse.
Node.js, one of the most influential projects in the open-source world, is facing an unprecedented choice: a controversy over whether to allow AI-generated code into its core codebase has triggered a fierce debate across the tech community.
Here's what happened.
1 19,000 lines of Claude Code in Node.js spark controversy
In January 2026, a remarkable Pull Request (PR) was submitted to the Node.js core codebase. The PR touched nearly 19,000 lines (roughly 14,000 lines of code spread across 66 files) and aimed to introduce a brand-new virtual file system (VFS) feature to Node.js.
The submitter was Matteo Collina, a member of the Node.js Technical Steering Committee (TSC), co-founder and CTO of Platformatic, and maintainer of the Fastify framework. He stated plainly in the PR description: "I used a large number of Claude Code tokens to create this PR. But all the code has been reviewed by myself."
The PR could have been celebrated as a technological victory: a large-scale feature that would normally take months of full-time work was completed within a single Christmas holiday.
Collina was open about the division of labor: AI handled the repetitive work, such as implementing all the fs methods, writing test coverage, and generating documentation, while he focused on architecture and API design and checked the code line by line.

He wrote in his blog post "Why Node.js Needs a Virtual File System": "To be honest, a PR of this size usually takes several months of full-time work. The reason this succeeded is that I used Claude Code. I let AI handle the tedious parts that made a 14,000-line PR possible but that no one wants to write by hand: implementing each fs method variant (synchronous, callback, Promise), configuring test coverage, and generating documentation. I focused on architecture, API design, and reviewing the code line by line. Without AI, this could never have happened as a holiday side project."
The PR had been open for some time when, a few days ago, Fedor Indutny, a long-time Node.js core contributor, publicly challenged Collina's submission of code generated with Claude Code.
Indutny's concern is not code quality, but whether AI-assisted code complies with the Developer Certificate of Origin (DCO) 1.1, the legal certification every Node.js contributor must sign when submitting a PR. He went further and launched a petition asking the Node.js Technical Steering Committee (TSC) to vote on banning AI-assisted development in the core project.
The core arguments of the petition include:
The importance of infrastructure: Node.js is critical infrastructure running on millions of servers and underpinning the command-line tools engineers use daily. Diluting core code that was painstakingly written over many years goes against the project's mission and values.
DCO compliance controversy: Although the OpenJS Foundation's legal opinion holds that LLM-assisted changes do not violate the DCO, the petitioners see that as only the tip of the iceberg.
Ethical considerations: Some large-model companies have trained on improperly sourced material, including copyrighted works and open-source code used without authorization.
Educational impact: There is evidence that relying on large models can hinder students' learning. Lowering the bar for code quality may erode understanding of the Node.js core and endanger the project's long-term development. Code review exists not only to find bugs and security issues but also to help submitters learn and grow; an LLM, however, does not learn from feedback, so review time is spent without improving any contributor's skills.
Privilege issue: Using large models requires a paid subscription or a heavy hardware investment to run them locally. Generated changes should be reproducible by reviewers who do not have access to a paid LLM tool.
In short, the petitioners' strongest arguments center on Node.js's role as infrastructure and on the auditability of its code.
As a critical runtime running on millions of servers worldwide, the Node.js core has long been maintained by hand with great care. In the petitioners' view, this "traceable and understandable" way of producing code is an essential part of the project's credibility. Introducing AI-generated code, especially at this scale, could weaken that engineering tradition and even shake the foundation of Node.js's reputation in the developer ecosystem.
The other core of the conflict is auditability. In a traditional workflow, code is not only executable logic but also a record of design decisions: reviewers can reconstruct the author's trade-offs in performance, compatibility, and architecture by reading it. AI-generated code often lacks that design context, so review degenerates from "understanding the design" into "checking the implementation". Combine this with a 19,000-line change, and the complexity of code review is amplified many times over.
2 Code submitter responds to the controversy: If there's a bug, it's my responsibility
The petition sparked heated discussion in the community, but support for it is far from unanimous. The "AI empowerment" camp, represented by Matteo Collina, pushed back forcefully.
Collina laid out his views in the blog post "The DCO Debate: Who Is Responsible for AI-Generated Code?". He compared AI to "grandma's pasta machine": the tool helps with the production, but the final product is still grandma's responsibility.
"I chose the architecture. I shaped the API based on the feedback from all reviewers. I made design decisions, caught and fixed the problems introduced by AI, and I understand the function and reason of every part of the code. I signed the DCO. My name is on it. If there's a bug, it's my responsibility. If there's a licensing issue, I certified the compliance."
Collina also put forward an important point: Reviewers should also be regarded as co-authors. "Maintainers who review PRs, suggest changes, catch edge cases, and help shape the final implementation - aren't they co-authors of this work? This has always been the case for every PR in Node.js history."
Collina also hopes the community can agree on what "human review" of AI-assisted contributions actually means. Simply saying "I've reviewed it" is not enough, he argues; contributors need to be able to answer questions like: Do you understand what this code does? Can you explain the design choices? Can you respond to feedback without asking the AI again? Can you maintain this code a year from now? These are the questions the project has always asked of contributors. The tools may change, but the expectations for people remain the same.
Collina also pointed out in the article that the broader open-source ecosystem has already formed a preliminary consensus on AI-assisted contributions.
Linux kernel community: As the originator of the DCO, the kernel community has a clear policy document on AI-assisted contributions, coding-assistants.rst, which requires a strict human-in-the-loop process: AI agents may not add the Signed-off-by tag, since only a human can legally certify the DCO; the person submitting the code must review all AI-generated code, check license compliance, and add their own sign-off; and AI assistance must be disclosed via an Assisted-by tag.
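Under a policy like the one described above, a commit message might carry trailers such as the following (a hypothetical sketch; the subject line, change description, and author are invented for illustration):

```text
fs: add fast path for small reads

(Description of the change, written and verified by the human
submitter, who reviewed all generated code and its licensing.)

Assisted-by: Claude Code
Signed-off-by: Jane Developer <jane@example.com>
```

The key point is the asymmetry: the tool is disclosed, but only the human's name appears on the legally meaningful Signed-off-by line.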
Red Hat's legal team: CTO Chris Wright and legal counsel Richard Fontana published a detailed analysis addressing the DCO question head-on. They explained that the DCO has never been read as requiring every line of a contribution to be the contributor's individual creative expression; many contributions contain routine, non-copyrightable material, and developers sign them anyway. The real point of the DCO is responsibility, and with disclosure and human supervision, AI-assisted contributions are fully compatible with its spirit.
OpenJS Foundation: Node.js' own legal home weighed in directly on the PR. Executive Director Robin Ginn confirmed that the foundation had consulted legal counsel, was satisfied that AI-assisted contributions are DCO-compatible, and promised to record this position formally.
These three independent organizations - the creator of the DCO, one of the world's largest open-source legal teams, and Node.js' own foundation - all reached the same answer: AI assistance does not violate the DCO; what matters is accountability.
3 The community is in an uproar
Meanwhile, users on Hacker News, Reddit, and other communities have been equally stirred up by the affair.
On Reddit, some developers directly pointed the finger at AI and opposed its entry into the core code.
One of them wrote: "Frankly, although I support large-scale refactoring and automated generation, driving changes like this directly with a large model is not the best approach. I would rather see the author implement the refactoring through AST (Abstract Syntax Tree) transformation scripts or other programmatic scripts. That approach has clear logic and lets me understand the essence of, and reason for, the changes more directly. By contrast, changes generated by large models are non-reproducible and depend on paid subscription tools, raising the barrier to collaboration. And if the refactoring is too complex, I'd suggest splitting it into multiple small incremental PRs to reduce the overall complexity step by step."
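The AST-transform suggestion is about reproducibility: a transformation script is a deterministic artifact that any reviewer can rerun and diff against the submitted changes. A real codemod would use an AST tool such as jscodeshift or Babel; the toy sketch below uses a plain string replacement instead (and the fs.exists-to-fs.existsSync rename is a made-up migration, chosen only for illustration) to show the property the commenter is after:

```javascript
// A reproducible transform script: anyone can rerun it and get an
// identical diff, unlike a one-off LLM generation. Real codemods
// parse the code into an AST; this regex version is only a sketch.
function transform(source) {
  // Hypothetical migration: rename `fs.exists(` calls to `fs.existsSync(`.
  return source.replace(/\bfs\.exists\(/g, 'fs.existsSync(');
}

const before = 'if (fs.exists(p)) { fs.readFile(p, cb); }';
const after = transform(before);
console.log(after); // → if (fs.existsSync(p)) { fs.readFile(p, cb); }
```

Because the script itself is reviewed rather than its output, a reviewer can audit a few lines of transform logic instead of thousands of lines of generated diff.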
Another user complained that since the submitter himself said the code was written by AI, it is unreasonable to ask reviewers to hunt for the bugs by hand. He wrote:
"To put it bluntly, this PR is too large for anyone to guarantee its quality. The submitter himself said 'I couldn't have written this much without AI', which means even the author may not fully command these 20,000 lines of code. And since it can be 'generated with one click', why should reviewers be expected to sit for days and nights finding the bugs by hand?"
The same user also flagged copyright as a major pitfall. "Everyone knows AI will regurgitate code from its training set. If it casually hands you a piece of someone else's closed-source, patented code, you simply can't tell. That's fine for minor fixes, but this kind of large-scale 'copying' carries too much legal risk - who can guarantee this code's provenance is clean? And if a lawsuit comes later, who takes responsibility?"
Another user did some calculations and said:
"I did the math: it's 19,000 lines of code. At 2 minutes per line, that's (19,000 × 2) ÷ 60 ÷ 7 ≈ 90 working days (at 7 hours a day).
Are you sure this code has really been read line by line? I mean, if the author was too lazy to write it, will they really have the patience to read all of it?
If this were business code at a private site or a small company, the risk might be acceptable; but for an open-source project that countless people rely on as infrastructure, seeing such a pile of 'seemingly okay' AI-generated code is genuinely creepy."
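The commenter's back-of-the-envelope arithmetic holds up, as a quick check shows:

```javascript
// Verifying the review-time estimate: 19,000 lines at 2 minutes
// per line, converted to 7-hour working days.
const lines = 19000;
const minutesPerLine = 2;
const hours = (lines * minutesPerLine) / 60; // ≈ 633 hours
const workingDays = hours / 7;               // ≈ 90.5 days
console.log(Math.round(workingDays)); // → 90
```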
But some users argued that since the code came from a Node.js core maintainer who reviewed it by hand, it should be trusted, and that questioning it merely because AI was involved is unfair.
Other users take a middle position. One wrote: "I firmly oppose a one-size-fits-all ban on large models, but I equally oppose dumping enormous amounts of code on the community without restraint just because AI makes it fast. Being 100 times faster than before doesn't mean we get to stuff 100-times-larger PRs into the project."
4 Node.js founder: Future software doesn't need humans to write code by hand
In fact, Collina's Claude Code PR is consistent with a view Node.js creator Ryan Dahl expressed two months earlier.
In January this year, Ryan Dahl posted on X, saying, "The era of humans writing code is over. Machines can now complete in seconds what used to take months."
Contrary to the idea that AI will make developers redundant, Dahl stressed that human developers remain essential and that their skills are more valuable than ever. Developers no longer need to perform low-level programming chores, because AI now handles those; their value lies increasingly in creativity and problem-solving.
"The human job is no longer to write every line of code, but to coordinate AI tools to build systems at an unprecedented speed and quality."
Entry-level positions are changing too. Roles focused solely on writing CRUD applications or basic features are disappearing, while new ones are emerging: AI prompt engineers, AI quality-assurance specialists, and AI integration architects.
In this context, domain expertise matters more than ever. Dahl has said that understanding healthcare, finance, logistics, or any specific industry is far more important than mastering React syntax: AI can write code, but it cannot replace deep domain knowledge.
Reference links:
https://yakhil25.medium.com/the-era-of-human-written-code-is-over-ryan-dahls-wake-up-call-to-software-engineers-dc6a4907b8ac
https://adventures.nodeland.dev/archive/who-is-responsible-for-ai-generated-code/
https://blog.platformatic.dev/why-nodejs-ne