
Even Karpathy is scared: an AI package at the 90-million-download scale was poisoned, and the only thing that saved everyone was a bug the hacker wrote.

新智元 (New Intelligence Yuan), 2026-03-26 15:02
A poisoning incident that lasted less than an hour tore open a fatal crack in the "trust chain" of AI infrastructure. Even more incredibly, the industry escaped disaster only because the attacker accidentally introduced a bug of their own.

The tech community has just lived through a nerve-wracking "supply chain poisoning" crisis.

On the morning of March 24th, an ordinary version update, LiteLLM 1.82.8, appeared on PyPI.

Every day, the terminals of millions of developers around the world automatically pull updates like this. No one noticed that a carefully designed piece of malicious code was hidden in this version:

As long as you execute the command "pip install litellm", the SSH keys, cloud service credentials, database passwords, cryptocurrency wallets on your machine... will be encrypted and packaged within seconds and sent to a server masquerading as an official one.

Worse, if your machine is connected to a Kubernetes cluster, the malicious code will automatically move laterally and implant backdoors on every node.

LiteLLM is one of the most critical pieces of infrastructure in AI application development today. Its open-source package sees 97 million downloads a month worldwide.

But this time, that supply line was almost penetrated from the inside, and disaster was averted purely by accident.

There was a bug in the attacker's own code that crashed the target machines outright. Without that bug, no one knows how long this "poisoning" would have kept spreading.

Witnessing this, Andrej Karpathy posted a message late at night, calling it a "software scare".

The "theft list" in Karpathy's post reads like a disaster scene:

AWS/GCP/Azure credentials, Kubernetes configurations, CI/CD secrets, database passwords, SSH keys...

After the "scare", Karpathy said something that might change the entire industry's development paradigm:

"I'm becoming more and more resistant to dependencies."

pip install, and then you lose everything

This is not an ordinary vulnerability alert. Type the seemingly harmless "pip install litellm" in a terminal, and your entire machine is instantly thrown wide open to the attacker.

According to security analysis, this poisoning is not simply about stealing a few passwords; it is a comprehensive looting of the whole machine, and of the whole cluster.

The attacker's theft list covers almost everything a developer holds most critical:

  • SSH private keys and configurations
  • AWS/GCP/Azure credentials
  • Kubernetes configurations and tokens
  • API keys in .env files
  • Git credentials
  • Database passwords
  • Shell history
  • SSL private keys
  • CI/CD secrets
  • Cryptocurrency wallet files
  • Sensitive information in environment variables

What's even scarier is that LiteLLM is by no means an obscure tool.

As a key middleware connecting major language model providers, it is truly the "water, electricity, and gas" of the AI application layer, with a monthly download volume of up to 97 million!

Many projects use it to connect to multiple model providers such as OpenAI, Anthropic, Google, and Azure in a unified way.

Many Agent frameworks, MCP servers, and LLM orchestration tools also pull it in as a low-level dependency.

This escalates the impact of the poisoning into a breach torn through the entire AI dependency chain.

Even if you have no idea what LiteLLM is, simply running "pip install dspy" (which depends on litellm >= 1.64.0), or installing any other large AI project that depends on this package, makes you a victim through the transitive dependency.
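One way to see whether your own environment pulls in litellm transitively is to scan the declared requirements of every installed distribution. `dependents_of` below is a hypothetical helper for illustration, not part of any official tooling; it ignores version specifiers and environment markers.

```python
from importlib import metadata

def dependents_of(target: str) -> list[str]:
    """Return installed distributions that list `target` as a direct
    requirement. Minimal sketch: only the bare package name is compared;
    extras and environment markers are ignored."""
    hits = set()
    for dist in metadata.distributions():
        for req in dist.requires or []:
            # A requirement looks like "litellm>=1.64.0" or
            # "litellm ; extra == 'proxy'"; strip it down to the name.
            name = req.split(";")[0].strip()
            for sep in (">=", "==", "<=", "!=", "~=", ">", "<", "[", "(", " "):
                name = name.split(sep)[0]
            if name.lower().replace("_", "-") == target.lower():
                dist_name = dist.metadata["Name"]
                if dist_name:
                    hits.add(dist_name)
    return sorted(hits)

print(dependents_of("litellm"))  # e.g. ['dspy'] on a machine with dspy installed
```

Running this against other package names (e.g. `requests`) shows just how deep a typical dependency tree reaches.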

One instance of poisoning can instantly spread to countless AI projects along the intricate dependency tree.

This is why Karpathy directly defined it as a "horror story in the software world".

An epic vulnerability exposed only because the hacker "wrote bad code"

You might think that with an impact this large, security companies must have sounded the alarm immediately, right?

But the truth is a bit absurd and ironic.

The first to notice the anomaly were engineers on Callum McMahon's team.

At the time, they were using an MCP plugin in Cursor, and that plugin pulled in LiteLLM as a transitive dependency.

After the poisoned LiteLLM 1.82.8 was installed, the machine suddenly started misbehaving: memory filled up rapidly until the machine crashed outright.

Later, when they investigated further, they found that the problem lay in a .pth file.

For most Python developers, .pth files barely register.

Yet this obscure mechanism can execute code automatically the moment the Python interpreter starts, and that is exactly where the attacker stuffed a malicious launcher.
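The mechanism is easy to reproduce harmlessly. At interpreter startup, CPython's `site` module scans site directories for `.pth` files and `exec()`s any line that begins with `import`; calling `site.addsitedir()` triggers the same processing on demand, which lets this sketch demonstrate the hook with a benign stand-in rather than the attacker's payload:

```python
import os
import site
import tempfile

# site.py processes .pth files found in "site directories"; a line that
# starts with "import" is exec()'d rather than treated as a path entry.
# Registering a directory via site.addsitedir() runs the same logic.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "demo.pth"), "w") as f:
        # One line; because it starts with "import", site exec()'s it.
        f.write('import os; os.environ["PTH_DEMO"] = "executed"\n')
    site.addsitedir(d)  # the .pth line runs here, before any "real" code

print(os.environ.get("PTH_DEMO"))  # -> executed
```

In a real install, the `.pth` lands in site-packages, so the hook fires in every Python process on the machine, with no import of the package required.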

As originally designed, it would quietly start a child process and run the subsequent logic for stealing secrets and exfiltrating them.

But it was written poorly.

Because the child process itself re-triggered the same .pth file, every new process spawned new processes: exponential self-replication that ultimately became a fork bomb, rapidly exhausting the machine's resources.
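The resulting growth can be modeled with simple arithmetic. This toy function is an illustration of the doubling dynamic, not the attacker's code:

```python
# Each poisoned process re-executes the same .pth hook and spawns one new
# child, so the number of live processes doubles per "generation" --
# textbook fork-bomb growth that overwhelms a machine within seconds.
def processes_after(generations: int) -> int:
    count = 1
    for _ in range(generations):
        count *= 2  # every live process spawns one more child
    return count

print(processes_after(20))  # over a million processes after 20 generations
```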

In other words, the attack was exposed only because it self-destructed.

If the attacker hadn't made this blunder through "vibe coding", this extremely hidden backdoor might have lurked across the network for weeks or even months without being detected.

The entire industry's line of defense ultimately rested on the hacker writing a bug; the ecosystem's security detection mechanisms were, at that moment, completely ineffective.

This is also the most terrifying part of the whole thing.

A meticulously planned "supply chain poisoning"

Judging from the sophistication of the attack method, this is an organized "supply chain poisoning".

First, the attacker somehow compromised a maintainer's PyPI account, bypassed the official CI/CD release process entirely, and pushed the backdoored versions v1.82.7 and v1.82.8 straight to the PyPI repository.

In LiteLLM's GitHub source repository, there are no corresponding tags or release records at all.

Second, the attack payload is divided into three extremely professional stages:

The first stage is a precise, large-scale "collection".

The second stage uses a built-in 4096-bit RSA public key combined with AES-256-CBC for high-strength "encryption", then exfiltrates the data to a highly deceptive fake domain (models.litellm.cloud).

The third stage is the most lethal. If Kubernetes credentials are present in the environment, it "moves laterally" outright: reading all Secrets across every namespace in the cluster and creating privileged Pods on every node to implant persistent backdoors.

The malicious litellm_init.pth file in LiteLLM 1.82.8 is triggered automatically during Python interpreter startup and then executes the three-stage payload.

The most chilling detail occurred after the incident:

When the community tried to discuss this matter in a GitHub issue, the issue was directly closed by the owner with the reason "not planned", and then flooded with spam from hundreds of bot accounts.

It is obvious that the maintainer's account and development environment had been fully taken over, and that the attacker was systematically covering their tracks.

The LiteLLM team has since stepped in, suspecting the incident is linked to the larger Trivy security incident (suspected use of stolen credentials), and has urgently engaged Google's Mandiant security team for forensics.

Fortunately, users of the official Docker image were spared, because the image pins the dependency to a fixed version.
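The same protection is available to any project through pip's hash-checking mode: when each dependency is pinned to an exact version and artifact digest, a package swapped out on PyPI fails to install even under the same version number. A minimal sketch, with a placeholder digest rather than a real one:

```text
# requirements.txt  (illustrative; the sha256 value is a placeholder)
litellm==1.64.0 \
    --hash=sha256:<expected-digest-of-the-published-wheel>

# Install with hash checking enforced:
#   pip install --require-hashes -r requirements.txt
```

With `--require-hashes`, pip refuses to install anything whose digest does not match the lock file, which is exactly why a pinned image never pulled the poisoned 1.82.8.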

But this complete APT (Advanced Persistent Threat) playbook signals an escalation of the "war".

Karpathy's "anti-dependency manifesto": the AI development paradigm is about to change

This disaster is profoundly changing the underlying understanding of software engineering among top - level technical elites in Silicon Valley.

The traditional concept of software engineering has always taught us: don't reinvent the wheel. Dependencies are the solid bricks for building a grand pyramid.

But after this incident, Karpathy threw out his "anti-dependency manifesto" in a tweet:

This is also why I'm becoming more and more resistant to dependencies. When a piece of functionality is simple enough and it's actually feasible, I'm more inclined to just have an LLM "write" it directly.

From "dependencies are bricks" to "dependencies are time bombs", this is not just an emotional outburst, but a major turning point in the development paradigm of the AI era.

Every "pip install" you run introduces a fatal unknown at some deep, unfathomable link in the dependency tree.

When large models are powerful enough to generate replacements for the underlying logic of third-party dependencies directly, "using fewer dependencies" will no longer be a code-hygiene obsession; it will become a core security strategy.

There is also a bitterly ironic closed loop hidden here:

Callum was affected because he used the AI coding tool Cursor and introduced the AI middleware litellm through the MCP plugin.

The AI toolchain itself has actually become the biggest attack surface.

AI is rapidly creating new paradigms for solving problems, but at the same time, it is also creating unprecedented new vulnerabilities.

The "trust crisis" behind 97 million downloads

The LiteLLM official has taken measures to stop the damage:

  • Remove the contaminated versions
  • Rotate the maintainer's credentials
  • Establish newly authorized maintainers
  • Engage Mandiant for forensic analysis
  • Urge users to check for affected versions, look for IoCs, and rotate all credentials

https://docs.litellm.ai/blog/security-update-march-2026
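That last step can be partly scripted. `litellm_at_risk` below is a hypothetical self-check based on the version numbers named in the report, not an official tool:

```python
from importlib import metadata

# Versions named in the incident report (assumption: this list is complete).
COMPROMISED = {"1.82.7", "1.82.8"}

def litellm_at_risk() -> bool:
    """True if the installed litellm matches a known-bad version."""
    try:
        return metadata.version("litellm") in COMPROMISED
    except metadata.PackageNotFoundError:
        return False  # not installed, nothing to check

print(litellm_at_risk())
```

A True result means the machine should be treated as compromised: rotate every credential it has touched, not just the package.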

The LiteLLM incident ended as a "farce" lasting less than an hour; the contaminated versions have been withdrawn and the credentials rotated. What is truly disturbing is that it failed to become a larger silent disaster not because any security system caught it, but because the hacker made a mistake.

A monthly download volume of 97 million means the entire industry is placing 97 million "trust gambles" every month.

This "poisoning incident" has exposed just how fragile the trust model of the open-source supply chain beneath AI infrastructure can be:

You think you are trusting thousands of lines of reviewed open-source code; in reality, you are only trusting that a distant package maintainer hasn't lost his PyPI password, and that his computer hasn't been infected with a Trojan.

This systemic security risk is undoubtedly a heavy blow to an AI ecosystem that leans so heavily on open source.

If AI developers and enterprises keep relying heavily on open-source registries such as PyPI, with infrastructure security resting on the flimsy assumption that "the upstream hasn't been hacked", a repeat of this danger may only be a matter of time.

LiteLLM is just the beginning. No one can answer the most disturbing question:

Which package will be the next to be poisoned?

Reference materials:

https://x.com/karpathy/status/2036487306585268612?s=20

https://futuresearch.ai/blog/litellm-pypi-supply-chain-attack/ 

This article is from the WeChat official account "New Intelligence Yuan", author: New Intelligence Yuan, published by 36Kr with authorization.