The LiteLLM Supply Chain Attack
A single pip install was enough to steal SSH keys, cloud credentials, crypto wallets, and every secret on your machine. How a compromised security scanner led to the biggest AI supply chain attack of 2026, and what it means for anyone running LiteLLM or OpenClaw.

March 25, 2026
On March 24, 2026, two poisoned versions of LiteLLM were published to PyPI.
LiteLLM. The Python library with 40,000+ GitHub stars and 97 million downloads per month. The one that half the AI ecosystem depends on to route requests across OpenAI, Anthropic, Google, and dozens of other LLM providers through a single unified API.
Versions 1.82.7 and 1.82.8 contained a credential stealer that harvested everything. SSH keys. AWS, GCP, and Azure tokens. Kubernetes configs. Git credentials. Environment variables (all your API keys). Shell history. Crypto wallets. SSL private keys. CI/CD secrets. Database passwords.
Everything.
How It Worked
Version 1.82.7 embedded a base64-encoded payload inside litellm/proxy/proxy_server.py. It executed whenever anything imported litellm.proxy, the standard import path for LiteLLM's proxy server mode. Twelve lines of obfuscated code. That is all it took.
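You can get a rough sense of your exposure to this class of payload by scanning installed packages for the telltale combination of base64 decoding and dynamic execution. This is a heuristic sketch, not a malware scanner; legitimate code occasionally matches, so treat hits as leads for manual review.

```python
import re
from pathlib import Path

# Heuristic patterns: a file that both decodes base64 AND dynamically
# executes code is worth a manual look. Legitimate packages can match
# too, so treat hits as leads, not verdicts.
PATTERNS = [
    re.compile(r"base64\.b64decode"),
    re.compile(r"\bexec\s*\("),
]

def scan_tree(root: Path):
    """Yield paths of .py files under root that match ALL patterns."""
    for path in root.rglob("*.py"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        if all(p.search(text) for p in PATTERNS):
            yield path

# Usage sketch: scan this interpreter's site-packages directories.
#   import site
#   for d in site.getsitepackages():
#       for hit in scan_tree(Path(d)):
#           print("review:", hit)
```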
Version 1.82.8 escalated things further. It shipped a .pth file called litellm_init.pth. Python's site module processes every .pth file in site-packages at interpreter startup, and any line in one that begins with `import` is executed as code. This means the malicious code ran on every single Python process startup, even if LiteLLM was never imported, even if you were running a completely unrelated script.
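The .pth trick is easy to check for yourself. A minimal audit sketch, assuming the standard site-packages layout:

```python
from pathlib import Path

def code_running_pth_lines(directory: Path):
    """Yield (pth_file, line) for .pth lines that execute code.

    Python's site module runs any .pth line beginning with 'import'
    at interpreter startup; benign path entries never start with it.
    """
    for pth in directory.glob("*.pth"):
        for line in pth.read_text(errors="ignore").splitlines():
            if line.startswith(("import ", "import\t")):
                yield pth, line

# Usage sketch: audit every site-packages directory of this interpreter.
#   import site
#   for d in site.getsitepackages():
#       for pth, line in code_running_pth_lines(Path(d)):
#           print(f"{pth}: {line}")
```

Note that some legitimate tools (coverage hooks, editable installs) also ship import-line .pth files, so a hit means "inspect", not "infected".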
The payload ran a three-stage attack. First, a credential harvester swept the machine for SSH keys, cloud provider tokens, Kubernetes secrets, cryptocurrency wallets, and .env files. Second, if a Kubernetes service account token was present, it attempted lateral movement by deploying privileged pods to every node in the cluster. Third, it installed a persistent systemd backdoor that polled an attacker-controlled server for additional binaries.
All the harvested data was encrypted with AES-256-CBC, with the AES key wrapped by a hardcoded 4096-bit RSA public key, bundled into a tar archive, and sent via HTTPS POST to models.litellm.cloud. That domain is not part of legitimate LiteLLM infrastructure. It was registered by the attackers specifically for this operation.
The Chain Reaction That Made This Possible
This was not a random attack. It was Phase 9 of a coordinated campaign by a threat actor called TeamPCP.
Here is the timeline.
February 27: An autonomous AI agent called hackerbot-claw, which described itself as being powered by Claude Opus 4.5, exploited a misconfigured pull_request_target workflow in Aqua Security's Trivy repository. Trivy is the most widely used open source vulnerability scanner on the planet. 32,000 stars. Over 100 million annual downloads. The irony of a security scanner getting owned is not lost on anyone.
The bot stole a Personal Access Token, deleted all 178 GitHub releases, and pushed a malicious VSCode extension.
March 19: Using credentials that survived an incomplete rotation by Aqua Security, TeamPCP force-pushed 75 of 76 version tags in the trivy-action GitHub Action to malicious commits containing credential stealers.
March 23: The same infrastructure was used to compromise Checkmarx KICS, another security tool. 35 release tags hijacked in under four hours.
March 24: LiteLLM's CI/CD pipeline ran Trivy as part of its build process, pulling it without a pinned version. The compromised Trivy action exfiltrated LiteLLM's PYPI_PUBLISH token from the GitHub Actions runner environment. With that token, the attackers published the poisoned versions directly to PyPI.
A security scanner that got compromised was then used to compromise the publishing credentials of one of the most popular AI libraries in the Python ecosystem.
Credentials from one breach funded the next breach. And the next one. And the next one.
As Wiz's head of threat exposure put it: "The open source supply chain is collapsing in on itself. Trivy gets compromised. LiteLLM gets compromised. Credentials from tens of thousands of environments end up in attacker hands. And those credentials lead to the next compromise. We are stuck in a loop."
The Bug That Saved Everyone
Andrej Karpathy summarized this incident on X and made a point that should make every developer uncomfortable.
The attack was discovered because it had a bug.
Callum McMahon at FutureSearch was testing an MCP plugin inside Cursor that pulled in LiteLLM as a transitive dependency. When version 1.82.8 installed, the machine ran out of RAM and crashed. That crash is what led to the investigation.
Karpathy's take: if the attacker had not vibe coded this attack, it could have gone undetected for days or weeks.
Read that again. The only reason this was caught quickly is because the attacker's code was sloppy. If it had been cleaner, more efficient, less resource hungry, it would have silently exfiltrated credentials from every machine that installed or upgraded LiteLLM during that window. The compromised versions were on PyPI for approximately three hours before being quarantined.
Three hours. 3.4 million downloads per day.
The Bot Army Cover Up
When community members started reporting the compromise in GitHub issue #24512, something bizarre happened. The attackers used the compromised maintainer account to close the issue as "not planned." Then, within a 102 second window, 88 bot comments from 73 unique accounts flooded the thread. All variations of "Thanks, that helped!" designed to dilute the discussion and bury legitimate reports.
The accounts were not freshly created bots. They were previously compromised developer accounts. Security researcher Rami McCarthy found 76% overlap with the botnet used during the Trivy disclosure.
The community had to open a parallel tracking issue (#24518) and continued the real discussion on Hacker News, where the thread reached 324 points.
The Transitive Dependency Problem
Here is what makes supply chain attacks existentially scary.
You do not need to install LiteLLM directly to get compromised.
LiteLLM is a transitive dependency for a growing number of AI agent frameworks, MCP servers, and LLM orchestration tools. If you ran pip install dspy and your version constraint was litellm>=1.64.0, congratulations. You just pulled in the poisoned package. Same for any other project in the AI ecosystem that depends on LiteLLM without pinning to a specific safe version.
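You can see exactly how a package like litellm enters your environment by walking the installed dependency graph with the standard library's importlib.metadata. A sketch (function names are my own, not from any incident tooling):

```python
import re
from importlib.metadata import distributions

def requirement_name(req: str) -> str:
    """Extract the bare package name from a requirement string,
    e.g. "litellm[proxy]>=1.64.0; extra == 'llm'" -> "litellm"."""
    return re.split(r"[ ;\[<>=!~]", req, maxsplit=1)[0]

def dependency_parents(target: str) -> list[tuple[str, str]]:
    """Return (installed package, requirement string) pairs for every
    installed distribution declaring a direct dependency on target."""
    hits = []
    for dist in distributions():
        for req in dist.requires or []:
            if requirement_name(req).lower() == target.lower():
                hits.append((dist.metadata["Name"], req))
    return hits
```

Running `dependency_parents("litellm")` tells you which installed packages pull LiteLLM in, and whether their constraints (say, `litellm>=1.64.0`) would have accepted the poisoned releases.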
Karpathy's point about this is worth repeating. Every time you install any dependency, you could be pulling in a poisoned package from anywhere deep inside its dependency tree. The risk compounds in large projects with sprawling dependency graphs. And the credentials stolen in each attack can then be used to take over more accounts and compromise more packages.
Classical software engineering would have you believe that dependencies are good. We are building pyramids from bricks. But Karpathy argues this needs to be re-evaluated. His preference now is to use LLMs to "yoink" functionality directly into his own codebase whenever it is simple enough to do so.
NVIDIA's Jim Fan echoed this. He said past credential theft is nothing compared to what agents can do now. People rarely need all the APIs that LiteLLM supports. Instead of pulling in a massive dependency, build a custom solution for your specific needs.
And on the agent side, there is almost no middle ground in today's tooling between clicking "allow" without thinking and dangerously skipping permission checks altogether.
The OpenClaw Connection
This story has a direct line to the OpenClaw ecosystem that we work with closely at VibeAudits.
First, many OpenClaw users route their LLM requests through LiteLLM Proxy. It is one of the most popular ways to get OpenClaw connected to multiple model providers through a single gateway. The official OpenClaw docs have a dedicated LiteLLM integration page. The official LiteLLM docs have a dedicated OpenClaw tutorial. This is a deeply intertwined pairing.
If you were running an OpenClaw instance that routes through a LiteLLM Proxy and that proxy was installed or upgraded via pip during the attack window, your LiteLLM environment may have been compromised. That means the API keys, cloud credentials, and tokens accessible from that environment could have been exfiltrated.
Second, and this is a darker connection, the hackerbot-claw bot that kicked off this entire chain of events by compromising Trivy was itself described as an autonomous AI agent. Snyk's analysis specifically noted that a component called hackerbot-claw uses an AI agent for automated attack targeting. Aikido researchers documented this as one of the first cases of an AI agent used operationally in a supply chain attack.
We are now in a world where AI agents are being used to compromise the tools that other AI agents depend on.
The attack surface is not just your code anymore. It is every dependency your code touches. Every CI/CD tool in your pipeline. Every action in your GitHub workflow. Every package manager cache on every machine that runs a build.
What You Should Do Right Now
If you installed or upgraded LiteLLM on March 24, 2026, between 10:39 UTC and 16:00 UTC, you need to take action immediately.
Check your version. Run pip show litellm and verify you are not on 1.82.7 or 1.82.8.
Inspect caches. Run find ~/.cache/uv -name "litellm_init.pth" and check virtual environments in CI/CD.
Check for persistence. Look for ~/.config/sysmon/sysmon.py and ~/.config/systemd/user/sysmon.service. If running in Kubernetes, audit kube-system for pods matching node-setup-* and review cluster secrets for unauthorized access.
Rotate everything. Assume any credentials present on the affected machine are compromised. SSH keys, cloud provider credentials, Kubernetes configs, API keys in .env files, database passwords. All of it.
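The first three checks can be scripted. This sketch covers only the indicators named above; the paths are as reported for this incident, and anything beyond them is an assumption:

```python
from importlib.metadata import PackageNotFoundError, version
from pathlib import Path

POISONED = {"1.82.7", "1.82.8"}

# Persistence artifacts reported for this incident.
IOC_PATHS = [
    Path.home() / ".config/sysmon/sysmon.py",
    Path.home() / ".config/systemd/user/sysmon.service",
]

def audit() -> list[str]:
    findings = []
    # 1. Is a poisoned LiteLLM version installed in this environment?
    try:
        v = version("litellm")
        if v in POISONED:
            findings.append(f"poisoned litellm {v} installed")
    except PackageNotFoundError:
        pass
    # 2. Malicious .pth files left behind in the uv cache.
    uv_cache = Path.home() / ".cache/uv"
    if uv_cache.is_dir():
        for pth in uv_cache.rglob("litellm_init.pth"):
            findings.append(f"cached malicious .pth: {pth}")
    # 3. Backdoor persistence files.
    for p in IOC_PATHS:
        if p.exists():
            findings.append(f"persistence artifact: {p}")
    return findings
```

Remember this checks only the environment it runs in; repeat it per virtualenv, per container image, and per CI runner.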
If you run OpenClaw with LiteLLM Proxy, check the LiteLLM version on the machine where the proxy runs. If it was updated during the attack window, rotate the API keys stored in your litellm_config.yaml and your OpenClaw LITELLM_API_KEY.
The Bigger Picture
Endor Labs said something that should keep every infrastructure team awake at night: "This campaign is almost certainly not over. TeamPCP has demonstrated a consistent pattern. Each compromised environment yields credentials that unlock the next target. The pivot from CI/CD to production is a deliberate escalation."
This is the loop. Compromise a security tool. Steal credentials. Use those credentials to compromise the next tool. Steal more credentials. Repeat.
The AI ecosystem is especially vulnerable to this because it tends to have deep, sprawling dependency trees. LiteLLM alone has become a transitive dependency for MCP servers, agent frameworks, IDE plugins, and orchestration platforms. A single compromise at this level of the stack radiates outward into thousands of downstream projects.
And we are still building like this.
What This Means for How We Build
Karpathy is right that the dependency model needs to be re-evaluated.
But the reality is that most teams cannot just stop using dependencies tomorrow. What they can do is treat every dependency as a potential attack vector and act accordingly.
Pin versions. Always. In every requirements file, every lockfile, every Docker build.
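A pinned requirements file uses exact == constraints (ideally with --hash digests from pip's hash-checking mode). A tiny lint sketch to catch unpinned lines; the parsing is deliberately rough:

```python
def unpinned_lines(requirements_text: str) -> list[str]:
    """Return requirement lines not pinned to an exact version.

    Rough heuristic: a pinned line contains '=='. Comments, blank
    lines, and pip options/continuations (lines starting with '-')
    are ignored.
    """
    bad = []
    for raw in requirements_text.splitlines():
        line = raw.strip()
        if not line or line.startswith(("#", "-")):
            continue
        if "==" not in line:
            bad.append(line)
    return bad
```

A range constraint like `litellm>=1.64.0` is exactly what pulled the poisoned releases into downstream projects; this check would flag it.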
Verify published packages against source. The malicious LiteLLM versions had no corresponding tag or release on GitHub. The packages were uploaded directly to PyPI, bypassing the normal release process. If anyone had compared the PyPI release against the GitHub repo, the mismatch would have been obvious.
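That comparison can be automated with two public endpoints: PyPI's JSON API and GitHub's tags API. The comparison helper below is pure; the network part is a commented usage sketch, and the endpoints shown are the real public APIs (the GitHub one is paginated, so a production check needs to follow pagination):

```python
import json
import urllib.request

def missing_from_repo(pypi_versions: set[str], repo_tags: set[str]) -> set[str]:
    """PyPI-published versions that have no matching git tag.

    Tags are often prefixed with 'v' (e.g. v1.82.6), so both forms
    are accepted when matching.
    """
    normalized = {t.removeprefix("v") for t in repo_tags}
    return {v for v in pypi_versions if v not in normalized}

def fetch_json(url: str):
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

# Usage sketch (network required):
#   pypi = set(fetch_json("https://pypi.org/pypi/litellm/json")["releases"])
#   tags = {t["name"] for t in fetch_json(
#       "https://api.github.com/repos/BerriAI/litellm/tags")}
#   print(missing_from_repo(pypi, tags))  # versions with no source tag
```

For the poisoned releases, a check like this would have flagged 1.82.7 and 1.82.8 immediately: they existed on PyPI with no corresponding tag in the repository.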
Adopt Trusted Publishers. PyPI supports publishing via short-lived OIDC tokens issued to a verified CI workflow (such as a specific GitHub Actions workflow), instead of long-lived static API tokens that can be stolen. LiteLLM was using a PYPI_PUBLISH token stored as an environment variable. That token was the single point of failure.
Audit your CI/CD pipeline dependencies. LiteLLM ran Trivy in its pipeline without pinning the version. A compromised Trivy then stole the publishing token. Every tool in your build pipeline is part of your attack surface.
Run security scanners on your security scanners. That sounds absurd. But a vulnerability scanner was the entry point for this entire campaign. Trust no tool implicitly.
At VibeAudits, This Is Exactly What We Audit
We have been saying this since day one. The tools you use to build are part of your attack surface.
When we audit OpenClaw deployments, we do not just look at your SOUL.md and SKILLS.md. We look at how your LLM proxy is configured. We look at what dependencies are installed on your server. We look at your CI/CD pipeline. We look at your secret management.
Because as this incident proves, a single poisoned dependency anywhere in the chain can give an attacker the keys to your entire infrastructure.
We run deep security audits on vibe coded apps and OpenClaw agent deployments. We check for exactly this kind of exposure.
If you are running LiteLLM, OpenClaw, or any AI agent infrastructure and you have not had a security review, now is the time.
Book a free assessment call at vibeaudits.com
The cost of not doing a security audit just became a lot more obvious.