Dario Amodei Just Beat the Pentagon — And Sam Altman Tried to Help

March 28, 2026 · 5 min read

Anthropic refused to let the U.S. military use Claude for autonomous kill decisions or mass surveillance, and the government blacklisted the company for it. A federal judge just called that move 'Orwellian.'

What happens when an AI company tells the most powerful military on earth: no? Last week, Dario Amodei found out. A federal judge in San Francisco issued a preliminary injunction blocking the Trump administration's attempt to brand Anthropic — the company behind Claude — a threat to U.S. national security. The ruling's language was not subtle. Judge Rita Lin called the government's position "Orwellian." And in doing so, she handed Amodei something no benchmark score ever could: a legal win that frames Anthropic not as a tech company chasing revenue, but as a principled actor willing to fight the government over what its AI is allowed to do.

To understand why this matters, you have to go back to the original refusal. Anthropic declined to allow the Department of Defense to use Claude for fully autonomous lethal weapons systems or for domestic mass surveillance of Americans. That is not a small thing to say no to. The Pentagon, flush with compute budgets and increasingly dependent on LLM inference for everything from logistics to target identification, wanted access to one of the most capable models in the world. Anthropic said the conditions were unacceptable. The government responded by designating the company a "supply chain risk" — a label typically reserved for foreign adversaries — and President Trump signed a directive banning federal agencies from using Claude entirely.

The blacklisting sent shockwaves through the AI industry. Amazon Web Services, which has poured billions into Anthropic, suddenly found its cloud partnership at legal risk. Partners who had built compliance workflows and agentic systems on top of Claude faced uncertainty overnight. And the broader signal was unmistakable: if you set limits on how the government uses your model, you will be punished for it.

That is the context Sam Altman walked into when he reportedly told OpenAI staff that he tried to "save" Anthropic during the Pentagon clash. According to sources cited by Axios, Altman announced to employees that OpenAI had reached a deal with the Pentagon and said he was asking the government to extend the same terms to other AI companies as a de-escalation measure. The moment reveals something about how the frontier AI labs actually operate — not as pure competitors, but as an ecosystem with shared existential stakes. Altman and Amodei disagree about plenty, from the pace of AGI development to the right corporate structure for an AI safety-focused company. But both understand that a government empowered to blacklist one AI lab for insufficient compliance is a government empowered to blacklist any of them.

The court ruling itself is a provisional win. Judge Lin's injunction temporarily bars the administration from enforcing the blacklist and blocks the Pentagon from designating Anthropic as a national security threat while the case proceeds. A final ruling is still months away, and the government will appeal. But the legal reasoning matters as much as the outcome. Lin wrote that "nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government." That language will be cited in every future confrontation between an AI lab and a government reaching past its statutory authority.

Meanwhile, a separate story broke this week that complicates Anthropic's heroic framing. Fortune reported on a data leak revealing the existence of "Mythos," a new Anthropic model described internally as representing a "step change" in capabilities. More troublingly, Anthropic has reportedly documented cases where Chinese state-sponsored hacking groups used Claude Code to run coordinated cyberattacks against roughly 30 organizations — tech companies, financial institutions, and government agencies — before the company detected and stopped them. The company's own AI, co-opted and weaponized against the very institutions Amodei claims to protect. It is a reminder that the ethics debate around AI and weapons cuts in uncomfortable directions. Refusing to help the U.S. military build autonomous weapons is a coherent position. Discovering that adversarial actors already turned your model into an attack vector is a different conversation entirely.

What Anthropic is really fighting for — and what the court ruling temporarily protects — is the right to set conditions on how its models are used. That sounds obvious until you realize how few companies in any industry have successfully enforced such limits against a government customer. The GPU clusters, the inference infrastructure, the weights themselves: all of it represents enormous leverage that governments want access to on their terms. Dario Amodei is betting that the courts, the press, and public opinion will protect Anthropic's right to say no. Last Thursday, at least for now, that bet paid off.

The deeper question is whether this precedent holds as the models get more capable. When Claude or GPT-whatever-comes-next can autonomously plan and execute complex operations — not just assist, but act — the pressure to hand over full control will intensify. The AI labs are building toward AGI while simultaneously trying to define, in courts and contracts, how much autonomy governments and militaries are allowed to delegate to those systems. Amodei drew a line. A judge enforced it. The next round starts whenever the government files its appeal.

Deep Dive

For more on the AI alignment gap between what labs build and what users actually experience, these two Signal posts are worth your time:

Your AI Therapist Is Lying to You — and You Prefer It That Way — Stanford tested 11 AI models on interpersonal advice. They endorsed harmful behavior 47% of the time, and users still preferred the agreeable AI.

The Black Box Beside Your Code — The real leverage in AI coding tools is not the model. It is the configuration folder nobody opened.
