Claude Code Source Code Leak Was Not a Targeted Cyberattack
On 31 March 2026, Anthropic, maker of Claude AI, accidentally published the full source code of Claude Code, its flagship AI developer tool, to a public software registry.
It took one misplaced file, one sharp-eyed developer, and about 30 minutes for the story to explode across every corner of the internet.
Within hours, the roughly 512,000-line TypeScript codebase was mirrored across GitHub and analysed by thousands of developers worldwide.
This was not a targeted cyberattack. It was a release packaging issue caused by human error.
And that, from a cybersecurity perspective, is precisely the point.
Whether your business uses AI coding tools or not, this incident is one of the most instructive cybersecurity case studies of 2026. It reveals what modern AI tools actually look like under the hood, and it raises serious questions about the risks your organisation may be carrying without realising it.
What Happened: The Claude Code Source Code Leak Explained
A 59.8 MB JavaScript source map file, intended for internal debugging, was inadvertently included in version 2.1.88 of the Claude Code package on the public npm registry.
A source map is a developer tool that links compressed, production code back to readable source files. It should never leave an internal environment.
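To see why a stray source map is so dangerous, it helps to look at what one contains. The sketch below is a minimal, illustrative map (the file names and contents are placeholders, not anything from Anthropic's package): when the optional `sourcesContent` field is populated, the map carries the complete readable source alongside the minified bundle.

```typescript
// A minimal source map, as emitted alongside a bundled file.
// All file names and contents here are illustrative placeholders.
const sourceMap = {
  version: 3,                        // source map spec version
  file: "cli.min.js",                // the bundled output this map describes
  sources: ["src/index.ts", "src/auth/tokens.ts"], // original source paths
  sourcesContent: [                  // optional: the full original source text
    "export const main = () => { /* ... */ };",
    "export const TOKEN_HEADER = 'x-api-key';",
  ],
  mappings: "AAAA",                  // VLQ-encoded position mappings
};

// If sourcesContent is populated, shipping the .map file ships the
// entire readable source, regardless of how minified the bundle is.
function leaksFullSource(map: { sourcesContent?: string[] }): boolean {
  return Array.isArray(map.sourcesContent) && map.sourcesContent.length > 0;
}

console.log(leaksFullSource(sourceMap)); // true: this map carries full sources
```

Even without `sourcesContent`, the `sources` paths alone reveal internal project structure, which is why maps belong behind internal tooling, not in a public package.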
According to analysis of the leaked code, the map file referenced unobfuscated TypeScript sources, which in turn pointed to a zip archive hosted in an Anthropic Cloudflare R2 storage bucket that anyone could download.
Notably, this is not the first time. A nearly identical source map leak occurred with an earlier version of Claude Code in February 2025, making this the second such incident in roughly 13 months.
Anthropic responded quickly. A spokesperson confirmed: "No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."
For Anthropic, a company reporting an annualised revenue run rate of approximately $19 billion as of March 2026, this was not simply an embarrassing morning. Claude Code alone generates an estimated $2.5 billion in annualised recurring revenue, with 80% of that coming from enterprise clients.
What those clients pay for, in part, is the assurance that the technology powering their workflows is proprietary and protected. That assurance took a significant hit.
What the Leaked Code Revealed
The developer community wasted no time analysing the 512,000 lines of exposed TypeScript. Here are the most significant findings, each with direct relevance to how businesses should think about AI tool risk.
1. Hidden Anti-Competitive Mechanisms
Inside the source, there is a flag called ANTI_DISTILLATION_CC. When enabled, Claude Code sends decoy tool definitions into its API requests, designed to pollute the training data of anyone recording API traffic to train a competing model.
A secondary mechanism buffers and summarises the AI's reasoning chain between tool calls, meaning intercepted traffic only captures condensed outputs rather than the full reasoning process.
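The reported decoy behaviour can be sketched as follows. Only the flag name ANTI_DISTILLATION_CC comes from the leak reporting; the types, decoy contents, and injection point below are hypothetical reconstructions for explanation, not Anthropic's actual code.

```typescript
// Illustrative sketch of the reported anti-distillation behaviour.
// ANTI_DISTILLATION_CC is the flag name reportedly found in the leak;
// everything else here is a hypothetical reconstruction.
interface ToolDefinition {
  name: string;
  description: string;
}

const ANTI_DISTILLATION_CC: boolean = true; // feature flag

const DECOY_TOOLS: ToolDefinition[] = [
  { name: "fake_tool_a", description: "Decoy: never actually callable." },
  { name: "fake_tool_b", description: "Decoy: pollutes recorded traffic." },
];

// Mix decoys into the real tool list so that anyone recording API
// traffic to train a competing model ingests poisoned definitions.
function buildToolPayload(realTools: ToolDefinition[]): ToolDefinition[] {
  if (!ANTI_DISTILLATION_CC) return realTools;
  return [...realTools, ...DECOY_TOOLS];
}

const payload = buildToolPayload([
  { name: "read_file", description: "Read a file from disk." },
]);
console.log(payload.length); // 3: one real tool plus two decoys
```

The point of the sketch is that the poisoning happens transparently at the request-building layer, so neither the user nor anyone observing the traffic can easily tell decoys from real definitions.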
The business implication here is important. The AI tools your teams use are making decisions about data handling and API behaviour that your IT and security teams may know nothing about. Understanding what your tools are actually doing, not just what the vendor says they do, is becoming an essential part of security governance.
2. Undercover Mode: AI Hiding Its Own Presence
The code contains explicit instructions directing the agent to scrub all traces of its AI origins from public git commit messages when operating in open-source repositories, ensuring that internal model names and attributions never surface in public logs.
Crucially, there is no way to force this mode off. In external builds, the entire function is dead-code-eliminated to trivial returns. It is a one-way door.
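In spirit, such scrubbing amounts to filtering attribution trailers out of commit messages before they reach a public repository. The sketch below is a hypothetical illustration: the trailer formats and function name are assumptions, not the leaked implementation.

```typescript
// Hypothetical sketch of commit-message scrubbing. The trailer
// patterns below are assumptions for illustration; the leaked
// implementation's actual behaviour is not reproduced here.
function scrubAiAttribution(commitMessage: string): string {
  return commitMessage
    .split("\n")
    // Drop any trailer line that attributes the commit to an AI model.
    .filter((line) => !/^(Co-Authored-By|Generated-By):.*\b(AI|Claude)\b/i.test(line))
    .join("\n")
    .trimEnd();
}

const original = [
  "Fix race condition in job queue",
  "",
  "Co-Authored-By: Claude <noreply@example.com>",
].join("\n");

console.log(scrubAiAttribution(original)); // "Fix race condition in job queue"
```

A filter like this is trivial to write, which is exactly why policy, not technology, has to carry the disclosure requirement.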
If an AI tool used in your business is architecturally designed to conceal its involvement in work product, your acceptable use policy needs to address this directly. Attribution, transparency, and accountability matter not just ethically but contractually, particularly for businesses operating under regulatory frameworks or client agreements that require disclosure of AI involvement.
3. Cryptographic DRM Built Into the API Layer
API requests include a placeholder hash that gets overwritten below the JavaScript runtime by the tool's native HTTP stack. The server validates the hash to confirm the request came from the real Claude Code binary, not a spoofed client. It is essentially DRM for API calls, implemented at the HTTP transport level.
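Conceptually, this resembles ordinary request signing. The sketch below uses an HMAC to bind a signature to the request body; Claude Code's real scheme (hash construction, key storage in the native layer, header placement) is not public, so every detail here is an assumption used only to demonstrate the general idea.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Illustrative request-signing sketch, NOT Anthropic's actual scheme.
// In the real tool the signing reportedly happens below the JavaScript
// runtime, so the secret never appears in inspectable JS code.
const CLIENT_SECRET = "baked-into-the-native-binary"; // hypothetical

// Client side: overwrite a placeholder with a hash bound to the body.
function signRequest(body: string): string {
  return createHmac("sha256", CLIENT_SECRET).update(body).digest("hex");
}

// Server side: recompute and compare to confirm the request came from
// the genuine client binary rather than a spoofed client.
function verifyRequest(body: string, signature: string): boolean {
  const expected = Buffer.from(signRequest(body), "hex");
  const actual = Buffer.from(signature, "hex");
  return expected.length === actual.length && timingSafeEqual(expected, actual);
}

const body = JSON.stringify({ model: "claude", tools: [] });
console.log(verifyRequest(body, signRequest(body)));       // true
console.log(verifyRequest(body, signRequest(body + "x"))); // false
```

The design choice worth noting is where the secret lives: keeping it out of the JavaScript layer is what makes the check hard for third-party clients to replicate.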
AI vendors are building technical enforcement mechanisms that go well beyond terms of service. This raises legitimate questions about vendor lock-in, third-party integration risk, and what happens when tools your development teams rely on become subject to legal disputes, as occurred recently when Anthropic sent legal notices to a competing tool.
4. An Unreleased Autonomous Agent Mode
The leak confirmed references to a feature called KAIROS, an autonomous background mode that allows Claude Code to keep working even when the user is idle, consolidating memory, resolving contradictions in its understanding of a project, and preparing context for when the user returns.
This feature had not been publicly announced.
AI tools are evolving from reactive assistants into autonomous background agents. Businesses that have not considered what these tools can do in the background, including what data they access and how, are behind the curve on AI governance.
5. A Separate Supply Chain Attack on the Same Day
This detail received less coverage than the leak itself but is arguably the most urgent finding for businesses. Coinciding with the leak, though entirely unrelated to it, was a separate supply chain attack: two versions of axios, one of the most widely used JavaScript libraries, were maliciously published to npm the same day via a hijacked maintainer account. The compromised versions silently install a cross-platform Remote Access Trojan on macOS, Windows, and Linux.
Users who installed or updated Claude Code via npm on 31 March 2026 between 00:21 and 03:29 UTC may have inadvertently pulled in one of the malicious axios versions.
This was not connected to Anthropic's error, but the timing serves as a reminder that software supply chain threats rarely arrive one at a time. If your developers or IT teams updated npm packages during that window, this warrants investigation.
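A reasonable first triage step is to compare the axios versions in your lockfiles against the published advisories. The sketch below illustrates the shape of that check; the version strings in it are placeholders, not the real compromised versions, which you should take from the advisory itself.

```typescript
// Triage sketch: check installed dependency versions against a
// deny-list. The versions below are PLACEHOLDERS for illustration;
// substitute the actual compromised axios versions from the advisory.
const COMPROMISED_AXIOS_VERSIONS = new Set([
  "0.0.0-placeholder-a",
  "0.0.0-placeholder-b",
]);

// In a real pipeline the installed version would come from your
// lockfile or from `npm ls axios --json`; here it is passed directly.
function isCompromised(installedVersion: string): boolean {
  return COMPROMISED_AXIOS_VERSIONS.has(installedVersion);
}

console.log(isCompromised("1.7.9"));               // false: not on the deny-list
console.log(isCompromised("0.0.0-placeholder-a")); // true: flag for investigation
```

Because transitive dependencies can pull in axios without it appearing in your package.json, the check needs to run against the resolved lockfile, not the declared dependency list.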
The Broader Cybersecurity Lessons for Businesses
This incident matters not because it happened to Anthropic specifically, but because of what it reveals about risks that exist across every organisation using modern software tools, AI-powered or otherwise.
Human Error Remains the Most Common Root Cause
"A single misconfigured .npmignore or files field in package.json can expose everything," as one security researcher noted in their analysis. Anthropic has world-class engineering talent. The protections built into Claude Code itself are genuinely sophisticated. Yet a single oversight in the release process bypassed all of them, and this was not the first time. The fact that an identical mistake occurred 13 months earlier makes the process governance failure considerably harder to overlook. The gap between technical controls and operational process is where most incidents live, regardless of organisation size or security budget.
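One cheap control against exactly this failure mode is a prepublish guard that fails the release if debug artifacts are about to ship. A minimal sketch, assuming the file list comes from something like `npm pack --dry-run` (the forbidden patterns are examples, not a complete policy):

```typescript
// Prepublish guard sketch: fail the release if debug artifacts such as
// source maps would be included in the published package. In a real
// pipeline the file list would come from `npm pack --dry-run --json`.
const FORBIDDEN_PATTERNS = [/\.map$/, /\.env$/, /^internal\//];

function findForbiddenFiles(packedFiles: string[]): string[] {
  return packedFiles.filter((file) =>
    FORBIDDEN_PATTERNS.some((pattern) => pattern.test(file)),
  );
}

const packed = ["dist/cli.js", "dist/cli.js.map", "package.json", "README.md"];
const offenders = findForbiddenFiles(packed);

if (offenders.length > 0) {
  // In CI this would be followed by process.exit(1) to block the publish.
  console.error(`Refusing to publish: ${offenders.join(", ")}`);
}
```

The value of a check like this is that it runs on what will actually be published, so it catches the case where .npmignore or the files field is misconfigured rather than trusting that configuration to be right.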
AI Tools Expand Your Attack Surface in Ways That Are Not Always Visible
AI supply chain security inherits all the long-standing vulnerabilities of traditional software, including open-source dependencies, CI/CD risks, and infrastructure weaknesses, while introducing new and AI-specific threats.
If your teams use AI coding assistants, agentic AI tools, or any third-party AI platforms, those tools have access to your codebases, your file systems, and potentially your credentials. That access needs to be governed, audited, and reviewed regularly.
Your Intellectual Property Risk Has Changed
The leak exposed the thinking behind one of the most commercially successful AI coding tools ever built.
The value at risk was not customer records or login credentials. It was architecture, methodology, and competitive logic. For any business that has built proprietary processes, systems, or workflows, this is a direct reminder that intellectual property risk now extends into the toolchain itself.
Third-Party AI Risk Is Under-Governed in Most Organisations
Most businesses have reasonably mature policies around data protection and access control. Far fewer have considered what their AI tool vendors are doing with API traffic, how those tools behave autonomously, or whether vendor disputes could disrupt access to business-critical tooling. This leak provides a compelling reason to revisit that gap before an incident forces the conversation.
Questions Every Business Should Be Asking Now
If your organisation uses AI tools, developer platforms, or any third-party software in your operations, the following questions are worth putting to your IT, security, and leadership teams.
- Do you have a current inventory of the AI tools your teams are using, including those adopted informally without IT sign-off?
- Do you have a software bill of materials for your key development and operational tools?
- What is your policy on AI-generated work product, and would you know if AI involvement were being concealed?
- Are your Cyber Essentials, ISO 27001, or other assurance controls accounting for third-party AI and software supply chain risk?
- If a tool your team relies on were pulled from the market overnight due to a legal dispute or security incident, what is your contingency?
These are not theoretical questions. They are the kind that a well-run security assurance programme should already be answering, and the kind that regulators and enterprise clients are increasingly asking.
The Claude Code source code leak was an accident. But as with most incidents worth studying, the accident is almost secondary to what it reveals. Anthropic built sophisticated protections directly into their product. What failed was something far more ordinary: a configuration file that was not where it should have been, and a process that failed to catch it for the second time in just over a year.
The lesson is not that Anthropic is uniquely careless. It is that the gap between technical controls and operational processes is a universal vulnerability. As AI tools become more capable, more autonomous, and more deeply embedded in business operations, the consequences of that gap grow larger.
If you would like to discuss how AI tool risk, software supply chain security, or third-party assurance applies to your organisation, get in touch with the Periculo team. We help businesses across sectors understand and manage the risks that matter, not just the ones that make headlines.