Anthropic Performs Unplanned Open Source Drop: 512K Lines of Claude Code Leak Via npm Update
Anthropic accidentally published 512,000 lines of Claude Code's proprietary source code through a debug file bundled into a routine npm update on March 31. The leak exposed the full architecture of the company's flagship AI coding tool, which generates an estimated $2.5 billion in annualized recurring revenue. In what may be the most expensive debugging session in history, someone at Anthropic hit "publish" instead of "delete" and handed the entire neighborhood the blueprints to its crown jewels.
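For readers wondering how a debug file sneaks into a published package at all: by default, `npm publish` ships nearly everything in the project directory unless you opt files out. The standard guardrail is the `files` allowlist in package.json, a real npm field (the package and file names below are illustrative, not Anthropic's):

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "bin": { "example-cli": "dist/cli.js" },
  "files": [
    "dist/cli.js"
  ]
}
```

With an allowlist like this, a stray `dist/cli.js.map` or debug bundle sitting next to the entry point never makes it into the tarball, and `npm pack --dry-run` will print exactly what would ship before you publish.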
Security researcher Chaofan Shou spotted the exposed source map file in Claude Code version 2.1.88 and posted a download link on X. The codebase spread across GitHub within hours, accumulating tens of thousands of forks before Anthropic's DMCA takedowns hit. Shou probably had the most viral X thread of his life while simultaneously becoming every open source maintainer's worst nightmare and dream come true.
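A source map is what makes this kind of leak total rather than partial: when a bundler inlines `sourcesContent`, the .map file carries the original, pre-bundled source files verbatim. A minimal sketch of how anyone holding such a file can reconstruct the source tree (standard library only; the webpack-style path prefix is a common convention, and all file names here are hypothetical):

```python
import json
from pathlib import Path

def extract_sources(map_path: str, out_dir: str) -> list[str]:
    """Dump every embedded original file from a source map.

    Source maps (spec revision 3) carry `sources` (original file paths)
    and, when the bundler inlines them, `sourcesContent` (their full text).
    """
    smap = json.loads(Path(map_path).read_text())
    written = []
    for src, content in zip(smap.get("sources", []),
                            smap.get("sourcesContent") or []):
        if content is None:
            continue  # bundler omitted this file's text
        # Strip "webpack:///"-style scheme prefixes, drop leading slashes,
        # and defuse any "../" so writes stay inside out_dir
        rel = src.split("://")[-1].lstrip("/").replace("..", "__")
        dest = Path(out_dir) / rel
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(content)
        written.append(str(dest))
    return written
```

Run against a map with inlined sources, this recreates the original directory layout on disk, which is why "we only shipped the minified bundle plus its .map" offers no protection at all.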
The incident landed just five days after a separate CMS misconfiguration exposed roughly 3,000 internal files, including details on the unreleased "Mythos" model. Two accidental disclosures in one week raise operational questions for a company valued at $350 billion and reportedly considering an IPO in Q4 2026. At this point, Anthropic's security team might want to look into whether someone is running a loyalty program for whistleblowers.
Korean-Canadian developer Sigrid Jin, profiled by the Wall Street Journal for consuming 25 billion Claude Code tokens last year, completed a clean-room Python rewrite before sunrise. His repository, claw-code, hit 50,000 GitHub stars within two hours of publication. Jin went from "guy who uses a lot of tokens" to "the fastest open source maintainer in the west" in a single all-nighter. Sleep is for people who don't have 50K stars to chase.
The leaked files revealed an internal feature called "Undercover Mode," built specifically to prevent Claude from leaking Anthropic's secrets. The code also exposed 44 feature flags, an unreleased background daemon called KAIROS, and internal model codenames, including "Capybara" for a Claude 4.6 variant. Nothing says "we take secrecy seriously" like naming your stealth mode "Undercover" and your secret AI "Capybara." At least they didn't call it "Incognito Mode" and the model "Hamster."
Anthropic confirmed the leak to multiple outlets, attributing it to human error during packaging. Enterprise clients, who account for 80% of Claude Code's revenue, now face a tool whose security logic and permission-bypass techniques sit on the open internet. Nothing like finding out the tool behind $2.5 billion in ARR has been reverse-engineered by a guy in a Discord server named "cat_p…"