
Anthropic's Hype Machine: The Apple of AI?

By Sadique

There is something very familiar about Anthropic. The curated mystery. The slow drip of information. The carefully staged reveals — except when they are not staged at all, and something slips out the back door before the front door is even unlocked.

That feels a lot like Apple. And I say that as someone who uses Claude every single day.

Three Things That Happened in Two Weeks

First: the CMS leak. On March 26, security researchers Roy Paz (LayerX Security) and Alexandre Pauwels (University of Cambridge) — not Anthropic — discovered that nearly 3,000 internal files were publicly accessible in a misconfigured data store. Inside: a draft blog post announcing Claude Mythos (codenamed Capybara), benchmark numbers, and details of an invite-only CEO retreat at an 18th-century English manor. Anthropic called it "human error in the CMS configuration" and downplayed the exposed material as "early drafts." The irony is hard to miss: this is a company that regularly talks about risk levels and authorised approvals.

Second: the source code leak. Five days later, security researcher Chaofan Shou spotted that version 2.1.88 of the Claude Code npm package had shipped with a 59.8 MB source map file exposing the complete, unobfuscated TypeScript codebase — 512,000 lines across 1,906 files. Within hours it had 84,000 GitHub stars and 41,500 forks before DMCA takedowns arrived. The root cause: someone forgot to add *.map to .npmignore. Boris Cherny, Anthropic's head of Claude Code, confirmed it was a "plain developer error" — and added that "100% of my contributions to Claude Code were written by Claude Code." This was at least the second such incident in 13 months. To make things worse, the leak coincided with a separate malicious Axios supply chain attack on npm the same morning — anyone who updated Claude Code between 00:21 and 03:29 UTC may have also pulled a Remote Access Trojan.
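
For readers wondering how small the fix is: a single ignore rule keeps generated source maps out of a published npm package. This is a minimal sketch, not Anthropic's actual configuration:

```
# .npmignore — keep build-time source maps out of the published tarball
*.map
```

Running `npm pack --dry-run` before publishing lists exactly which files will ship, which is the cheapest way to catch this class of mistake before it ever reaches the registry.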

Third: Project Glasswing. Two weeks after the first leak, Anthropic officially announced Claude Mythos Preview — backed by AWS, Apple, Google, Microsoft, Cisco, CrowdStrike, and more. Real benchmark jumps. Real zero-days found in every major OS and browser. A $100M credit commitment. Everything the leaked draft had already promised, now polished and officially packaged.

Every "accident" fed the official story. Every leak made the real reveal land harder.

The Pentagon Chapter

Before all of this, Anthropic walked away from a $200M Pentagon contract. The DoD wanted standard "any lawful use" language — effectively removing Anthropic's two safeguards: no mass surveillance of Americans, no fully autonomous weapons. Anthropic refused. Dario Amodei said they could not "in good conscience" comply. The DoD labelled them a "supply-chain risk to national security" and Trump directed all federal agencies to stop using Anthropic products. Hours later, OpenAI signed its own Pentagon deal.

Within 24 hours, Claude hit number one on the US App Store.

Were they right to hold the line? Probably yes — those are not trivial things to hand over without safeguards. But the public refusal created an enormous wave of goodwill at exactly the right moment. Safety, in Anthropic's hands, is also excellent marketing. Worth noticing.

My Honest Take

Anthropic is not a dishonest company. The Mythos capabilities appear real — AWS, Microsoft, and Google don't lend their names to announcements without internal validation. The principles Amodei defended are genuinely important ones.

But they are extremely good at narrative. Whether the leaks were truly accidental or not, Anthropic was clearly ready when they happened. The partner ecosystem, the benchmarks, the safety framing — all of it was in motion long before anything slipped out.

That is either a very well-run company or a very well-managed story. Probably both.

The real test comes when Mythos-class capabilities hit general availability — not a curated beta, but actual developers building actual things. Anthropic promised a 90-day public report. I'll be watching.


Source: Anthropic — Project Glasswing

Sadique Sulaiman writes about AI and developer tools at vxlabs.in