LiteLLM Supply Chain Attack: What Security Leaders Need to Know
On 24 March 2026, LiteLLM — one of the most widely used open-source libraries for integrating large language models into applications — became the latest victim of a supply chain attack. For organisations that have adopted AI tooling in their infrastructure, this incident is a timely and serious warning.
What Happened
LiteLLM is currently investigating a suspected supply chain attack involving unauthorised package releases on PyPI. Current evidence suggests a maintainer's PyPI account was compromised and used to distribute malicious code.
The incident is believed to be linked to the broader Trivy security compromise, in which stolen credentials were reportedly used to gain unauthorised access to the LiteLLM publishing pipeline.
Two specific versions were affected:
v1.82.7 contained a malicious payload in proxy_server.py, and v1.82.8 contained an additional file — litellm_init.pth — alongside the same malicious payload.
Initial evidence suggests the attacker bypassed official CI/CD workflows entirely, uploading the malicious packages directly to PyPI.
Both versions have since been removed from the registry.
What the Malicious Code Was Designed to Do
This wasn't opportunistic noise. The compromised versions appear to have included a credential stealer designed to harvest secrets by scanning for environment variables, SSH keys, cloud provider credentials (AWS, GCP, Azure), Kubernetes tokens, and database passwords. The stolen data was then encrypted and exfiltrated via a POST request to models.litellm.cloud, a domain not affiliated with LiteLLM or its parent company BerriAI.
In plain terms: if you were running a compromised version, an attacker may have silently obtained the keys to your cloud infrastructure.
Who Is at Risk
The exposure window was narrow but significant. You may be affected if you:
- Installed or upgraded LiteLLM via pip on 24 March 2026, between 10:39 UTC and 16:00 UTC
- Ran pip install litellm without pinning a version and received v1.82.7 or v1.82.8
- Built a Docker image during this window that included an unpinned LiteLLM install
- Had a dependency in your project that pulled in LiteLLM as a transitive, unpinned dependency, for example through AI agent frameworks, MCP servers, or LLM orchestration tools
You are not affected if you:
- Use LiteLLM Cloud or the official Docker image
- Were on v1.82.6 or earlier and did not upgrade during the affected window
- Installed LiteLLM from source via the GitHub repository, which was not compromised
How to Check and What to Do Now
If there is any doubt about whether a system was exposed, act on the assumption that it was.
Check your site-packages directory for a file named litellm_init.pth, and look for any outbound traffic to models.litellm.cloud — this domain is not affiliated with LiteLLM.
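The two local indicators above can be scripted. The sketch below is a minimal check, assuming a standard pip/virtualenv layout; the affected version numbers and indicator file name are the ones named in this advisory:

```python
import importlib.metadata
import pathlib
import site

# Versions and indicator file named in the advisory
AFFECTED_VERSIONS = {"1.82.7", "1.82.8"}
IOC_FILE = "litellm_init.pth"

def version_is_affected(version: str) -> bool:
    """True if the given litellm version is one of the compromised releases."""
    return version in AFFECTED_VERSIONS

def find_indicators() -> list[str]:
    """Scan the current environment and return any findings (empty = clean)."""
    findings = []
    try:
        installed = importlib.metadata.version("litellm")
        if version_is_affected(installed):
            findings.append(f"compromised version installed: litellm {installed}")
    except importlib.metadata.PackageNotFoundError:
        pass  # litellm is not installed in this environment
    # v1.82.8 dropped an extra .pth file into site-packages
    for sp in site.getsitepackages():
        ioc = pathlib.Path(sp) / IOC_FILE
        if ioc.exists():
            findings.append(f"indicator file present: {ioc}")
    return findings

if __name__ == "__main__":
    for finding in find_indicators():
        print("WARNING:", finding)
```

Run this inside each virtual environment or container image that could have installed LiteLLM during the window; a non-empty result means the host should be treated as compromised.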
If either indicator is present, or if you installed during the affected window:
Treat any credentials present on affected systems as compromised; this includes API keys, cloud access keys, database passwords, SSH keys, Kubernetes tokens, and any secrets stored in environment variables or configuration files.
Rotate everything. Audit your CI/CD pipelines, Docker builds, and deployment logs. Pin LiteLLM to a known safe version, v1.82.6 or earlier, until a verified later release is confirmed clean.
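Pinning can be as simple as an exact version in requirements.txt. The fragment below is illustrative; adding `--hash` entries and installing with pip's hash-checking mode additionally blocks a re-published malicious build under the same version number:

```text
# requirements.txt
# Pin to the last release published before the incident (per this advisory).
# For stronger protection, add --hash entries and install with:
#   pip install --require-hashes -r requirements.txt
litellm==1.82.6
```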
The LiteLLM team has removed the compromised packages from PyPI, rotated maintainer credentials, established new authorised maintainers, and engaged Google's Mandiant security team to assist with forensic analysis of the build and publishing chain.
The Bigger Picture: AI Tooling Is Now an Attack Surface
This incident reflects a pattern that security leaders need to get ahead of. The rapid adoption of LLM frameworks (LiteLLM, LangChain, and dozens of others) has introduced a new class of open-source dependency into enterprise environments, often with far less scrutiny than traditional software packages receive.
Supply chain attacks work precisely because trust is assumed. Developers pull packages from PyPI expecting them to be safe; CI/CD pipelines install dependencies automatically; Docker images are rebuilt without a second look. That chain of assumed trust is exactly what attackers exploit.
For organisations using AI tooling in production, whether that's an NHS trust integrating AI-assisted triage, a defence contractor running LLM-powered analysis, or a MedTech firm embedding AI into a regulated device, the question isn't whether these tools carry risk. It's whether your organisation has the controls in place to detect and respond when that risk materialises.
Key Takeaways for Security Decision-Makers
- Dependency pinning is not optional. Unpinned installs are an open door to exactly this type of attack.
- AI/ML libraries deserve the same scrutiny as any other third-party software. Treat them as part of your supply chain risk programme.
- Transitive dependencies are invisible risk. Your teams may not know LiteLLM is even in their stack if it's pulled in by another tool.
- Incident response plans need to cover AI tooling. Credential rotation, forensic logging, and compromise detection should extend to wherever LLMs are deployed.
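One way to surface the transitive-dependency problem: walk the installed distribution metadata and list every package that declares LiteLLM as a dependency. This is a standard-library sketch, not a full dependency resolver; the regex-based name extraction is an assumption and environment markers (e.g. extras) are ignored:

```python
import importlib.metadata
import re

def packages_requiring(target: str) -> list[str]:
    """Return installed distributions that declare `target` as a dependency."""
    normalised_target = target.lower().replace("-", "_")
    dependents = []
    for dist in importlib.metadata.distributions():
        for req in dist.requires or []:
            # Requirement strings look like "litellm>=1.0; extra == 'proxy'";
            # extract just the leading project name for comparison.
            match = re.match(r"[A-Za-z0-9_.-]+", req)
            if match and match.group().lower().replace("-", "_") == normalised_target:
                dependents.append(dist.metadata["Name"])
    return sorted(set(dependents))

if __name__ == "__main__":
    print("Installed packages depending on litellm:", packages_requiring("litellm"))
```

Running this in each environment tells a team whether LiteLLM is in their stack at all, even when no project file mentions it directly.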
Periculo is a CREST-accredited cybersecurity consultancy specialising in Defence and Health Tech. If you'd like to discuss your organisation's exposure to supply chain and AI security risks, get in touch.