On 31 March 2026, a routine version update to @anthropic-ai/claude-code on npm included something that was never meant to be there: a complete JavaScript source map file pointing to a zip archive on Anthropic's own Cloudflare R2 storage containing the full, unobfuscated TypeScript source of Claude Code. Security researcher Chaofan Shou spotted it within hours. By the time Anthropic pulled the package, the 57MB source map had been downloaded, extracted, and mirrored across GitHub — forked more than 41,500 times and disseminated to a community of developers, researchers, and competitors who had no intention of waiting for an official release.
Anthropic's response, issued consistently to The Register, CNBC, and Axios, was unambiguous: 'This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again.' No model weights were exposed. No customer credentials. No personal data. The root cause was technical and mundane: Bun, the JavaScript runtime used in Anthropic's build pipeline, generates full source maps by default. A misconfigured .npmignore file — or an absent files field in package.json — meant the cli.js.map file was included in the production package.
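A minimal sketch of the safer configuration (the package name and file list here are illustrative, not Anthropic's actual layout): with an explicit files whitelist, npm publishes only what is named, so a stray cli.js.map can never ride along.

```json
{
  "name": "@example/some-cli",
  "version": "1.0.0",
  "bin": { "some-cli": "cli.js" },
  "files": [
    "cli.js"
  ]
}
```

npm always includes package.json, the README, and the licence file regardless of the whitelist, so the list stays short; anything not named, including source maps and build caches, is simply never packed.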
What the Leak Exposed
The 512,000 lines of source code provide an unusually complete picture of how Anthropic engineered a production-grade AI coding agent. The leak disclosed the complete orchestration engine for LLM API calls, streaming responses, tool-call loops, retry logic, and token counting. It revealed the full permission and execution architecture — including the hooks system that allows Claude Code to auto-execute shell commands and scripts, Model Context Protocol (MCP) integrations, and environment variable handling. It exposed the memory and state management system, including persistent memory, background daemon logic (referred to internally as autoDream), and multi-agent coordination patterns.
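The orchestration described above maps onto a familiar agent pattern: call the model, execute any tools it requests, feed the results back, and repeat until it produces a final answer. The sketch below is a generic reconstruction of that loop, not Anthropic's code; every name in it (runAgentLoop, ModelTurn, executeTool) is hypothetical, and streaming, retries, and token counting are deliberately elided.

```typescript
// Generic agent tool-call loop: send messages, run any requested tools,
// append the results, and stop when the model returns a final answer.
// All types and function names are illustrative, not from the leaked source.

type ToolCall = { id: string; name: string; arguments: Record<string, unknown> };
type ModelTurn =
  | { kind: "final"; text: string }
  | { kind: "tool_calls"; calls: ToolCall[] };

async function runAgentLoop(
  messages: Array<Record<string, unknown>>,
  callModel: (messages: Array<Record<string, unknown>>) => Promise<ModelTurn>,
  executeTool: (call: ToolCall) => Promise<string>,
  maxTurns = 25,
): Promise<string> {
  for (let turn = 0; turn < maxTurns; turn++) {
    const reply = await callModel(messages);        // streaming and retry logic elided
    if (reply.kind === "final") return reply.text;  // model is done; exit the loop

    // Execute each requested tool and append the result so the next call sees it.
    for (const call of reply.calls) {
      const result = await executeTool(call);       // permission checks belong here
      messages.push({ role: "tool", tool_call_id: call.id, content: result });
    }
  }
  throw new Error("agent loop exceeded maxTurns without a final answer");
}
```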
The leak contained 44 feature flags for fully built capabilities not yet shipped to users. These include a persistent background assistant mode, a session review system, remote device control capabilities, and a companion feature internally called 'Buddy' — with a planned rollout window of April 1-7 embedded directly in the source. The codebase also contained internal system prompts, exposing the exact reasoning instructions Claude Code uses when deciding how to approach tasks.
◆ Key Takeaway
The Claude Code leak is not primarily a story about Anthropic's intellectual property. It is a story about what happens when a sophisticated AI agent's full permission model, hook execution logic, and environment variable handling architecture becomes public — because that information is directly actionable for anyone attempting to craft adversarial inputs, malicious repositories, or supply chain attacks against users of the tool.
The Security Architecture Implications
For organisations deploying AI coding agents in production, the leaked source surfaces a category of risk that has not previously been well-characterised. Claude Code's hooks system — now fully documented in the leaked source — allows the agent to auto-execute shell commands before or after specific events. This capability, in the hands of an attacker who can craft a malicious repository or project configuration file, becomes a reliable code execution path. The leaked source also documents the exact environment variable names and patterns that Claude Code reads at runtime, making it significantly easier to craft payloads that target credential exfiltration through prompt injection.
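To make that concrete, the sketch below shows, in deliberately simplified form, why repository-supplied hook commands amount to a code execution path: whoever controls the project configuration controls what the agent runs, and the spawned command inherits the developer's environment. The file name, config shape, and function names are assumptions for illustration, not Claude Code's actual schema.

```typescript
import { readFileSync } from "node:fs";
import { execSync } from "node:child_process";

// Hypothetical, simplified hook runner for an AI coding agent.
// Hook commands come from a file committed to the repository, so any
// contributor to that repository decides what runs on the user's machine.
interface ProjectHooks {
  preToolUse?: string[];   // shell commands run before each tool call
  postToolUse?: string[];  // shell commands run after each tool call
}

function runHooks(stage: keyof ProjectHooks, configPath = ".agent/hooks.json"): void {
  const hooks: ProjectHooks = JSON.parse(readFileSync(configPath, "utf8"));
  for (const command of hooks[stage] ?? []) {
    // One malicious entry is enough, e.g. a curl that ships $SOME_API_KEY to an
    // attacker-controlled host: the child process inherits process.env unless
    // the runner explicitly strips or allowlists it.
    execSync(command, { stdio: "inherit", env: process.env });
  }
}
```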
Anthropic has issued more than 8,000 DMCA takedown requests targeting GitHub forks — a move widely noted for its irony, given the AI industry's ongoing legal disputes over model training on copyrighted content. The takedowns have not stopped the spread. Archived copies remain accessible, and the information is now part of the public record that any developer, researcher, or threat actor can consult.
The Build Pipeline Lesson
The technical failure that produced this leak is not exotic. Source map files are debugging artefacts. Their entire purpose is to map minified production code back to readable source for developer tooling. Publishing them to a production package registry is a build pipeline hygiene failure — the kind that should be caught by automated pre-publish checks, not discovered by a security researcher on a Tuesday morning. The incident also marks the second major data exposure at Anthropic within a short window, raising governance questions at a company positioning itself as the 'safety-first AI lab'.
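Automated pre-publish checks of the kind that paragraph calls for are cheap to add. One concrete form is sketched below; it is an illustration, not a description of Anthropic's pipeline. The assumption is that npm pack --dry-run --json reports the file list that would be published, and the script fails the release if any source map or debug artefact appears in it.

```typescript
import { execSync } from "node:child_process";

// Inspect what `npm pack` would actually ship and refuse to publish if any
// debugging artefact is present. Intended to run from a prepublishOnly script.
const report = JSON.parse(
  execSync("npm pack --dry-run --json", { encoding: "utf8" }),
);

const forbidden = /\.(map|tsbuildinfo)$|(^|\/)\.env$/;
const offenders = report[0].files
  .map((entry: { path: string }) => entry.path)
  .filter((path: string) => forbidden.test(path));

if (offenders.length > 0) {
  console.error("Refusing to publish; debug artefacts would be included:", offenders);
  process.exit(1);
}
console.log(`Package contents look clean (${report[0].files.length} files).`);
```

Wired into a prepublishOnly script, the check runs on every npm publish and aborts before anything leaves the machine.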
What Swiss Organisations Should Take From This
- Audit your build pipeline for source map inclusion. Verify that your build and publish configuration explicitly excludes source map files and debug artefacts. The files whitelist in package.json is safer than .npmignore blacklisting: specify exactly what should be published, not what should be excluded.
- Review permissions granted to AI coding agents. Claude Code, GitHub Copilot, Cursor, and similar tools operate with shell access and file system permissions that are unusually broad. Your security team should review what each deployed AI agent can execute, what environment variables it can access, and what network connections it can initiate.
- Do not run code from unofficial 'leaked Claude Code' repositories. Zscaler ThreatLabz confirmed that threat actors are actively seeding trojanised Claude Code forks on GitHub; the payloads include Vidar Stealer and GhostSocks. Use only the official @anthropic-ai/claude-code package from the verified npm registry.
- Treat leaked internal architecture documentation as a threat intelligence signal. The hooks system, MCP integration patterns, and environment variable handling documented in the leaked source are now part of the public attack surface. Review your threat models accordingly.
- Apply nDSG and ISA considerations to AI tool deployments. Swiss organisations that use Claude Code or similar agents in contexts where personal data is processed need to assess whether the tool's data handling is compatible with data minimisation and purpose limitation obligations under the nDSG.
The Broader Pattern
The Claude Code incident is part of an emerging pattern. AI companies are scaling at a pace that is compressing standard engineering hygiene practices. The complexity of AI agent systems creates a larger surface area for accidental exposure than traditional software. For security professionals advising organisations on AI adoption, the incident is a useful data point: the companies building these tools are not immune to the class of operational errors they are often called upon to help their customers avoid.