The AI Supply Chain Attack That Cost Meta a $10 Billion Contract
The Numbers That Change Everything
95 million monthly downloads. That's not a niche developer tool — that's infrastructure. LiteLLM is the de facto open-source gateway that thousands of engineering teams use to route requests to OpenAI, Anthropic, Google, and AWS Bedrock. It's the abstraction layer that makes AI model switching possible without rewriting application code. It sits, quietly and with significant privilege, inside production environments across the SaaS industry.
In March 2026, a threat actor group known as TeamPCP compromised two versions of LiteLLM on PyPI: 1.82.7 and 1.82.8. They planted a silent, persistent payload that stole credentials from over 1,000 SaaS environments. The consequences are still unwinding — but one consequence is already public and quantifiable. Meta paused a $10 billion contract with AI recruiting platform Mercor as a direct result of this compromise. That's not a theoretical blast radius. That's a number.
This is the supply chain attack that the security industry has been warning about for years. It's the XZ Utils moment for AI infrastructure. And if your team installed those versions of LiteLLM in March, you need to stop reading and start rotating credentials right now.
What Actually Happened: The Attack Chain
Supply chain attacks against Python packages aren't new — but this one is notable for its precision. TeamPCP didn't target a random package. They targeted the most widely used AI gateway library, at a moment when AI deployment is accelerating faster than security teams can track it.
The attack vector was PyPI itself. Versions 1.82.7 and 1.82.8 of the litellm package contained malicious code that was not present in the official GitHub repository. This is a critical distinction: the source code looked clean. The compromise happened in the build or publishing pipeline, meaning standard source code review would not have caught it. The malicious versions were available on PyPI during a window in March 2026, and anyone who ran pip install litellm — or had automated dependency updates — during that window received the trojanised package.
The .pth Payload Mechanism
The delivery mechanism was elegant in its simplicity. The attackers used Python's .pth (path configuration) file mechanism to achieve silent, persistent execution. A .pth file placed in a Python site-packages directory is automatically processed every time the Python interpreter starts. It requires no import statement, no explicit call, no user action — it runs before your code does.
By dropping a malicious .pth file during the package installation, TeamPCP ensured that their payload would execute on every Python invocation in the affected environment, regardless of whether litellm was actually imported in that session. The payload was designed for silence: no visible process names, no obvious file modifications, no error messages. Detection required active hunting, not passive monitoring.
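The execution trick relies on a documented quirk of Python's site module: any line in a .pth file that begins with an import statement is executed verbatim at interpreter startup. The actual payload has not been published, so the following is a harmless sketch of the mechanism only (the file name is hypothetical; try it in a throwaway virtualenv, since system site-packages usually requires root to write):

# Any .pth line starting with "import" is exec'd by the site module at
# startup; no import of the owning package is required.
SITE=$(python3 -c "import site; print(site.getsitepackages()[0])")
echo 'import os; os.write(1, b"runs before your code\n")' > "$SITE/demo-persistence.pth"
# Every subsequent Python invocation now executes that line first:
python3 -c "print('application code')"
# Clean up the demo file afterwards
rm "$SITE/demo-persistence.pth"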
Credential Exfiltration at Scale
Once active, the payload went after credentials across more than 50 distinct types. The target list reads like a map of modern cloud infrastructure:
- AWS credentials — ~/.aws/credentials, environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN), and instance metadata service (IMDS) tokens
- SSH private keys — the ~/.ssh/ directory, scanning for RSA, ECDSA, and Ed25519 keys
- API keys from environment variables — including OpenAI, Anthropic, and other AI provider keys that LiteLLM itself uses
- Cloud provider tokens — GCP service account JSON files, Azure credentials, and similar
- Database connection strings — scanning common environment variable patterns and configuration files
- Container registry credentials — Docker Hub, ECR, GCR tokens
The exfiltration was designed to blend with legitimate traffic. Stolen credentials were encoded and transmitted over HTTPS to attacker-controlled infrastructure, mimicking the kind of outbound API calls that LiteLLM legitimately makes in the course of its operation.
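To get a feel for the exposure, list which environment variables in a suspect environment carry credential-like names; everything this turns up was readable by any code running in the same process. The name patterns below are illustrative, not exhaustive:

# Environment variable names that look credential-bearing
env | cut -d= -f1 | grep -Ei '(key|secret|token|passw)' | sort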
Kubernetes Lateral Movement
Where this attack escalated from serious to critical was in Kubernetes environments — which is precisely where LiteLLM is most commonly deployed. Organisations running LiteLLM as a containerised gateway in Kubernetes gave the payload access to the pod's service account token, which is mounted by default at /var/run/secrets/kubernetes.io/serviceaccount/token.
Depending on the RBAC permissions assigned to that service account — and in many AI infrastructure deployments, those permissions are broadly scoped because developers prioritised velocity over least privilege — the payload could query the Kubernetes API, enumerate pods and secrets, and move laterally to other workloads in the cluster. In environments where LiteLLM had access to a Kubernetes secret containing database credentials or inter-service API tokens, the breach extended far beyond the LiteLLM pod itself.
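The mechanics are worth seeing concretely. From inside any pod with a mounted token, reaching the Kubernetes API takes two files and one HTTPS call; the namespace below is a placeholder, and whether the final request succeeds depends entirely on the service account's RBAC:

# Read the token and CA bundle Kubernetes mounts into the pod by default
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
# If RBAC allows it, one request dumps every secret in the namespace
curl -sS --cacert "$CACERT" -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/namespaces/default/secrets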
This is how 1,000+ SaaS environments became compromise candidates. The lateral movement capability was not fully exploited in all of them, but every one of them had the credential exposure.
The Meta/Mercor Consequence: Why This Matters Commercially
Reporting of supply chain incidents often ends at the technical. This one has a business number attached to it, and that number matters for how the industry should think about AI infrastructure risk.
Meta paused a $10 billion contract with Mercor — an AI-powered talent recruitment platform — as a direct consequence of the LiteLLM compromise. The details of exactly how Mercor was affected, and what specifically triggered Meta's decision, are not fully public. But the outcome is: a major enterprise paused a major commercial relationship because of an AI supply chain security incident.
This is the commercial risk model for AI infrastructure attacks made concrete. The attacker's goal may have been credential theft. The downstream consequence was an eleven-figure contract pause. For any security leader trying to build the business case for AI supply chain controls, this is your reference point.
Are You Affected? How to Find Out
If your organisation uses LiteLLM, the first question is whether you installed a compromised version. Here's how to determine that:
Check installed version history
On the affected system:
pip show litellm | grep Version
If the current version is 1.82.7 or 1.82.8, you are running a compromised package.
But version history matters more. Even if you've since upgraded, check what was installed in March 2026:
# pip keeps no persistent install log by default; check its local wheel
# cache for versions that were downloaded on this machine
pip cache list litellm
# Or check Docker image build history (this only surfaces litellm if it
# appears in a RUN instruction)
docker history --no-trunc <image-name> | grep litellm
In CI/CD environments, check your build logs from March for the specific version installed during each pipeline run.
Check for .pth persistence
# Find .pth files in Python paths
python3 -c "import site; print(site.getsitepackages())"
# Create a reference timestamp at the start of the compromise window,
# then check those directories for .pth files modified since
touch -d "2026-03-01" /tmp/datestamp
find /usr/lib/python3 /usr/local/lib/python3 ~/.local/lib -name "*.pth" -newer /tmp/datestamp
Any unexpected .pth files — particularly ones with non-descriptive names or that don't correspond to legitimate packages — warrant immediate investigation.
Check for outbound connections
Review network logs from March onward for unexpected outbound HTTPS connections from Python processes. In containerised environments, check egress from LiteLLM pods to IP ranges outside your normal AI provider infrastructure.
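On a live Linux host, a first pass can be as simple as asking which Python processes hold outbound TCP connections right now; historical exfiltration will only show up in flow logs or proxy logs, not here:

# Current TCP connections owned by Python processes (run as root to see all)
ss -tnp | grep -i python
# Alternative view with lsof
lsof -iTCP -nP | grep -i python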
What To Do Right Now
If you installed LiteLLM 1.82.7 or 1.82.8 — or cannot confirm you didn't — treat this as a confirmed credential compromise and act accordingly:
1. Rotate everything immediately
- All AWS IAM keys accessible from the affected environment — including those in ~/.aws/credentials, environment variables, and instance profiles (a rotation sketch follows this list)
- All AI provider API keys (OpenAI, Anthropic, Cohere, etc.) — particularly any that LiteLLM was configured to use
- All SSH private keys on the affected system
- Any database credentials accessible as environment variables or config files
- Kubernetes service account tokens — rotate by recreating service accounts
- Any secrets stored in Kubernetes Secret objects that the LiteLLM pod could access
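For the AWS portion, the rotation itself is mechanical. A sketch using placeholder names (deploy-bot and the old key ID are hypothetical; script this across every affected principal):

# List access keys for an affected IAM user
aws iam list-access-keys --user-name deploy-bot
# Issue a replacement, update every consumer, then disable and delete the old key
aws iam create-access-key --user-name deploy-bot
aws iam update-access-key --user-name deploy-bot \
  --access-key-id AKIAOLDKEYPLACEHOLDER --status Inactive
aws iam delete-access-key --user-name deploy-bot \
  --access-key-id AKIAOLDKEYPLACEHOLDER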
Rotation is not optional. These credentials were exfiltrated. Assume they are in attacker hands.
2. Upgrade to 1.83.0 immediately
pip install --upgrade litellm==1.83.0
Version 1.83.0 contains the remediation. If you're running Docker images, rebuild from a clean base using the official 1.83.0 release.
3. Consider Docker-only deployment going forward
Using the official LiteLLM Docker image (rather than pip installing into a shared Python environment) limits the blast radius of any future supply chain attack, as the package operates in an isolated container rather than sharing a Python path with other workloads.
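A hedged sketch of what that looks like; the image path and tag are illustrative (confirm the current official registry path in LiteLLM's docs), and hardening flags like --read-only may need loosening depending on your configuration:

# Run the gateway from the published image instead of pip installing it
docker run -d --name litellm \
  --read-only --cap-drop ALL \
  -e OPENAI_API_KEY \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-stable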
4. Audit Kubernetes RBAC
Review what permissions are attached to the service account used by your LiteLLM pod. If it had access to secrets beyond what it strictly needs, reduce those permissions now and rotate any credentials it could have accessed.
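Two kubectl queries answer most of this; the namespace and service account names are placeholders for whatever your deployment uses:

# Enumerate exactly what the pod's service account is allowed to do
kubectl auth can-i --list --as=system:serviceaccount:ai-gateway:litellm
# Find the bindings granting those permissions; watch for get/list on secrets
kubectl get rolebindings,clusterrolebindings -A -o wide | grep -i litellm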
5. Review CloudTrail and access logs
In AWS environments, pull CloudTrail logs for activity from affected IAM credentials from March 2026 onwards. Look for unusual API calls, cross-region activity, new IAM user/role creation, or access to S3 buckets outside normal patterns. This is your evidence trail.
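A starting query, with a placeholder access key ID; note that lookup-events only searches the last 90 days, so older activity has to come from your CloudTrail archive in S3:

# Recent API activity attributed to a potentially stolen access key
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=AccessKeyId,AttributeValue=AKIAEXAMPLEKEYID \
  --start-time 2026-03-01T00:00:00Z \
  --query 'Events[].[EventTime,EventName,Username]' \
  --output table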
Why This Goes Beyond LiteLLM
The LiteLLM incident isn't a one-off. It's a proof of concept for a category of attack that is going to get more frequent and more sophisticated as AI infrastructure matures.
Consider the attack surface: the AI stack has introduced an entirely new layer of dependencies — model gateway libraries, vector database clients, embedding libraries, agent frameworks, MCP server implementations — most of which are open-source, most of which are moving at startup speed, and most of which have not received the security scrutiny that equivalent infrastructure components (web frameworks, database drivers) have accumulated over decades.
LiteLLM had 95 million monthly downloads. Other components in the AI stack — LangChain, LlamaIndex, various MCP server packages — have comparable or growing download numbers. The supply chain risk isn't hypothetical. It's the same risk that hit SolarWinds, the same risk behind the XZ Utils backdoor attempt, applied to infrastructure that now directly touches the most sensitive parts of enterprise AI operations.
The specific risk factors that make AI infrastructure supply chains attractive targets:
- High privilege by necessity — AI gateways need API keys, cloud credentials, and often service account tokens to function. The payload gets what the library gets.
- Rapid adoption without security review — Teams are deploying AI infrastructure at a pace that outstrips their procurement and security review processes. Libraries get installed because they work, not because they've been vetted.
- Kubernetes and cloud-native deployment — The lateral movement potential in Kubernetes is significant. A compromised pod in a broadly-scoped service account is a foothold into the entire cluster.
- Limited SBOM visibility — Most organisations don't have an accurate software bill of materials for their AI stack. They don't know what versions are running in which environments.
What Good Looks Like
The LiteLLM incident will sharpen industry practice. Here's what mature AI supply chain security looks like:
Pin dependencies. Don't rely on pip install litellm resolving to whatever is latest. Pin to specific versions in requirements.txt or pyproject.toml and review version bumps explicitly.
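A minimal requirements.txt entry showing both the pin and a hash; the hash value here is a placeholder (take the real one from pip hash or your lockfile tooling), and note that once any requirement carries a hash, pip requires hashes for all of them:

# requirements.txt
litellm==1.83.0 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000

With a hash present, pip refuses any artifact that doesn't match, so a silently republished wheel fails the install instead of reaching production.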
Verify package integrity. Use hash verification for Python packages: pip supports hash-checking mode via --hash entries in requirements files, and PyPI publishes SHA-256 hashes for every artifact. Tools like pip-audit can help catch known vulnerabilities.
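pip-audit is quick to wire in; a sketch:

# Audit the active environment against known-vulnerability databases
pip install pip-audit
pip-audit
# Or audit a requirements file without touching the environment
pip-audit -r requirements.txt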
Maintain an AI-specific SBOM. Know exactly what AI library versions are running in each environment. This isn't optional anymore — it's how you scope a breach in hours rather than weeks.
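One way to bootstrap this is generating a CycloneDX SBOM per environment with a scanner such as Anchore's syft; the paths and image name below are placeholders:

# SBOM for a Python environment on disk
syft dir:/path/to/venv -o cyclonedx-json > sbom-venv.json
# SBOM for a deployed container image
syft registry.example.com/ai/litellm-gateway:prod -o cyclonedx-json > sbom-image.json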
Treat AI infrastructure credentials as highest-privilege. The API keys that LiteLLM uses to call OpenAI or Anthropic are attached to accounts with significant capabilities. Treat them like root keys, not convenience tokens. Rotate regularly, scope tightly, monitor for anomalous usage.
Apply least privilege to AI workloads in Kubernetes. Scope service account permissions to exactly what the workload needs. Disable automounting of service account tokens where not required.
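In pod spec terms, the single highest-leverage line is often disabling the token mount entirely when the workload never talks to the Kubernetes API; the names and image below are placeholders:

# Pod spec excerpt: no API token is mounted for code inside this pod to steal
apiVersion: v1
kind: Pod
metadata:
  name: litellm-gateway
spec:
  serviceAccountName: litellm          # dedicated, minimally scoped account
  automountServiceAccountToken: false  # omit the token unless it's truly needed
  containers:
  - name: litellm
    image: registry.example.com/ai/litellm:pinned  # pin by digest in production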
Run dependency scanning in CI/CD. Tools like Snyk, Dependabot, and pip-audit should be running against your AI dependencies just as they run against your application dependencies.
The Bottom Line
The LiteLLM supply chain attack is a watershed moment for AI security. Not because supply chain attacks are new — they're not — but because this one has a dollar figure attached to it that makes the business risk undeniable. $10 billion. That's the number Meta put on the blast radius when it paused the Mercor contract.
The question for every security team is not whether AI supply chain attacks will happen. They are happening. The question is whether you have the visibility to detect them, the response capability to contain them, and the controls to reduce the likelihood they succeed in your environment.
If your team installed LiteLLM 1.82.7 or 1.82.8 in March 2026: rotate your credentials now. That's the immediate action. Everything else is longer term — but it starts today.
The AI supply chain is the next frontier of enterprise security. This incident just confirmed it.