Anthropic's Secret Weapon Just Leaked — And It Changes Everything

March 28, 2026 · 4 min read

Anthropic's most powerful unreleased model — Claude Mythos — was accidentally exposed in a public database. The leaked document warns of unprecedented cybersecurity risks. Dario Amodei's safety reputation is now on the line.

What does a $61 billion AI company do when its most powerful unreleased model accidentally becomes public knowledge? Apparently, it confirms the leak and hopes nobody panics.

That is exactly what happened to Anthropic this week.

On March 27, 2026, nearly 3,000 unpublished internal assets — draft blog posts, product announcements, technical documentation — were found sitting in an unencrypted, publicly searchable database. The culprit: a misconfiguration in Anthropic's own content management system that set files to public by default, and nobody caught it. At a company that employs some of the best security researchers in the world, this is the kind of embarrassment that doesn't wash off quickly.
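The underlying infrastructure hasn't been disclosed, so the exact mechanics are unknowable from the outside. But the failure mode is a familiar one, and it is cheap to audit for. A minimal sketch, assuming an S3-style object store (the bucket name below is hypothetical, purely for illustration):

```python
# Sketch: audit a bucket for a default-public ACL.
# Assumes an S3-style object store via boto3; the bucket name is hypothetical.
import boto3

# AWS's well-known URI for the anonymous "everyone on the internet" group.
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def find_public_grants(bucket: str) -> list[str]:
    """Return the permissions granted to the anonymous AllUsers group."""
    s3 = boto3.client("s3")
    acl = s3.get_bucket_acl(Bucket=bucket)
    return [
        grant["Permission"]
        for grant in acl["Grants"]
        if grant.get("Grantee", {}).get("URI") == ALL_USERS
    ]

if __name__ == "__main__":
    perms = find_public_grants("internal-cms-assets")  # hypothetical name
    if perms:
        print(f"Bucket is world-readable: {perms}")  # e.g. ['READ']
```

A scheduled check like this, or the platform's managed guardrail (on AWS, S3 Block Public Access), is the entire difference between a private drafts folder and a public one.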

[Image: Dario Amodei, CEO of Anthropic, the man whose company just leaked its biggest secret]

Among the leaked assets was a draft announcement for something called Claude Mythos, developed under the internal codename Capybara. This is not a minor iteration. In the leaked document, Dario Amodei's team describes Mythos as a "step change" in capability — a new model tier that sits entirely above the current flagship Opus line, delivering "dramatically better performance" on coding, academic reasoning, and, crucially, cybersecurity tasks.

That last part is where it gets uncomfortable.

The leaked draft apparently includes Anthropic's own warning that Claude Mythos could pose "unprecedented cybersecurity risks." The specific concern: the model's ability to identify and exploit software vulnerabilities at a level no previous system has demonstrated. Anthropic, a company whose entire brand is built on the idea that safety and capability can coexist, had quietly built something powerful enough that it felt compelled to put a warning label on its own internal announcement.

And then they accidentally published that warning to the open internet.

[Image: Anthropic, the $61 billion AI lab that preaches safety while building its most dangerous model yet]

The irony is almost too clean. A company publicly committed to responsible AI development, whose co-founder and CEO Dario Amodei has spent years writing about the existential risks of advanced AI systems, leaked the existence of its most dangerous model through a basic database misconfiguration. Not through a sophisticated hack. Not through a disgruntled employee. Through the kind of infrastructure mistake that a junior DevOps engineer would catch on day one.

What this reveals is not that Anthropic is incompetent. It is that the gap between what these companies say publicly and what they are building privately is wider than anyone outside the industry understands. Sam Altman at OpenAI talks about cautious deployment. Amodei talks about safety-first development. Meanwhile, both organizations are in an arms race where "step change" models with "unprecedented cybersecurity risks" are being prepared for release.

The race dynamic explains everything. When you are competing with OpenAI's GPT-5, Google's Gemini Ultra 2, and whatever Elon Musk's Grok team is running at xAI, you do not slow down because your own safety team is nervous. You build Mythos, codename Capybara. You prepare the announcement. And you hope nobody finds it before you're ready.

This week, someone did.

[Image: The AI safety paradox — building tools that could become cybersecurity nightmares]

Anthropic has confirmed the model exists and is in testing. It has not confirmed a release date or addressed the specific capabilities described in the leaked document. What happens next will be a real test of whether Dario Amodei's safety commitments hold when the company is sitting on a model it believes could be a cybersecurity nightmare — and when OpenAI is presumably a few months behind with something comparable.

The leak is embarrassing. The model is real. And the AI safety conversation just got a lot more urgent.

Deep Dive

See our previous coverage: Sam Altman Just Turned ChatGPT Into an Ad Platform — $100 Million in Six Weeks

And: Ilya Sutskever Left OpenAI to Save the World. His New Company Just Raised $2B With No Product.
