In late February 2026, the United States government handed the AI industry a test it hadn't asked for. The question wasn't technical — it had nothing to do with benchmarks, latency, or model architecture. It was simpler, and far more consequential: will you let us use your AI for anything we declare lawful?
Two companies answered differently. One signed the deal. One got banned. And the reverberations from that week are still reshaping how we think about who controls artificial intelligence — and what it means when a company's stated values collide with state power.
Anthropic Was Already Inside the Pentagon
Before any of the drama, Anthropic had been the AI industry's quiet insider at the Department of Defense. Through a partnership with defence contractor Palantir, Claude was the only large commercial AI model cleared to operate on the Pentagon's classified networks, under a contract worth up to $200 million awarded in the summer of 2025. Other major players, such as OpenAI, had struck deals only on unclassified networks. Claude was the vanguard.
Then the renegotiation began. In January 2026, Defense Secretary Pete Hegseth issued an AI Strategy Memorandum directing that all Pentagon AI contracts adopt standard "any lawful use" language. For Google, OpenAI, and xAI, whose contracts carried no specific restrictions, this was a non-issue. For Anthropic, it was a direct collision with two safeguards baked into every Claude deployment since day one.
Anthropic's two red lines were specific and technical, not vague feel-good statements in a blog post. First: Claude must never power mass domestic surveillance of American citizens. Second: Claude must never be used in fully autonomous weapons systems — platforms that select and engage targets without a human authorising the use of force. These exclusions were contractual commitments Anthropic wanted written explicitly into the renewed agreement.
The Pentagon pushed back. Officials insisted the contract language must permit "all lawful purposes" regardless of any specific prohibitions. The government's position: trust us, we won't do those things. Anthropic's position: then put it in writing. Those two positions proved irreconcilable — and the countdown began.
One Week That Changed Everything
- Defense Secretary Pete Hegseth designates Anthropic a "Supply-Chain Risk to National Security," a label previously reserved for foreign adversaries like Huawei. The Pentagon cancels Anthropic's contracts and sets a 5:01 PM deadline for the company to capitulate.
- The deadline passes. Anthropic does not back down. Contracts are officially terminated. Hegseth announces that no military contractor or supplier may do business with Anthropic.
- President Trump posts on Truth Social: "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War." He orders every federal agency to immediately cease all use of Anthropic technology, with a six-month phase-out for existing deployments.
- OpenAI CEO Sam Altman posts on X: "Tonight, we reached an agreement with the Department of War to deploy our models in their classified network." Earlier that morning, in a memo to his own staff, Altman had written that OpenAI shared Anthropic's exact red lines.
- Claude reaches #1 on the U.S. Apple App Store, overtaking ChatGPT for the first time. Anthropic reports a 60% surge in free active users and quadrupled daily signups. The "QuitGPT" movement claims 1.5 million participants.
- Anthropic files two federal lawsuits contesting the supply-chain designation. More than 30 Google DeepMind and OpenAI employees, including Google chief scientist Jeff Dean, file an amicus brief in support, arguing the designation threatens the entire American AI industry.
The OpenAI Deal: Principled Compromise or Safety Theatre?
OpenAI's rapid pivot to fill the gap left by Anthropic's ouster was striking — and immediately controversial. The sequence mattered: Altman had written to his own employees that same morning affirming he shared Anthropic's red lines. By that evening, his company had signed a deal on terms Anthropic couldn't accept.
Altman's explanation was that Anthropic sought specific written prohibitions inside the contract text, whereas OpenAI was willing to rely on existing U.S. law to reflect those protections. The safeguards, he said, are real — the question is only where they live. He wrote that the Department of War "agrees with these principles, reflects them in law and policy, and we put them into our agreement."
"What OpenAI is doing is safety theater. Sam's public statements are straight-up lies."
— Dario Amodei, CEO of Anthropic
Amodei's sharpest point: if the Pentagon refused to write the red lines into Anthropic's contract, what grounds exist for confidence they'll honour them for OpenAI? The detail that most alarmed observers: Altman himself told OpenAI employees that Defense Secretary Hegseth, the same official who branded Anthropic a foreign-level security threat for trying to impose restrictions, holds ultimate authority over how OpenAI's models are used under the agreement.
Notable detail: At least one senior OpenAI employee resigned over the deal. Caitlin Kalinowski, who had led hardware and robotics at OpenAI since late 2024, said domestic surveillance without judicial oversight and lethal autonomy without human authorisation "are lines that deserved more deliberation than they got." Nearly 900 Google and OpenAI employees separately signed an open letter urging their leadership to draw the same red lines Anthropic had drawn.
Why Claude Said No: The Principles Behind the Refusal
To understand Anthropic's refusal, you have to understand what the company is, and what it isn't. Anthropic was founded in 2021 by Dario Amodei, Daniela Amodei, and colleagues who left OpenAI over concerns about the company's direction on safety. The company's founding premise is that it may be building transformative and potentially dangerous technology, and that this possibility creates an obligation to maintain constraints that cannot be traded away for a government contract.
Red Line One: Domestic Mass Surveillance
An AI system deployed to monitor the communications, movements, or associations of millions of Americans, without individualised suspicion or judicial authorisation, represents a qualitative shift in what a government can do to its own citizens. The technical capacity exists today. The only question is whether it will be constrained by law, by contract, or by nothing at all.
Anthropic's stated reasoning was unambiguous: "We believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights." Notably, the company offered to collaborate with the Pentagon on developing the reliability standards that would eventually allow more autonomous AI roles — an offer the Pentagon declined.
Red Line Two: Fully Autonomous Lethal Weapons
Weapons systems that identify and engage targets without a human authorising the use of force represent a different category of danger entirely. Anthropic's position: "We do not believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America's warfighters and civilians."
This isn't an anti-military stance. It's an engineering assessment. Current large language models hallucinate under adversarial pressure. They lack the situational reliability required for irreversible lethal decisions. Anthropic's red line is not that autonomous weapons are never permissible — it's that the technology isn't there yet, and deploying it as if it were would get people killed.
The "All Lawful Purposes" Trap
The Pentagon's insistence on "all lawful purposes" sounds reasonable until you interrogate what lawful actually means, and who decides.
What counts as lawful domestic intelligence gathering is not a fixed legal category. It shifts based on executive orders, emergency declarations, and who occupies the presidency. A contract that authorises "all lawful uses" today may authorise dramatically more tomorrow — without any new signature required, without Anthropic's knowledge. The same administration that banned Anthropic for trying to limit those uses controls the ongoing definition of what is lawful.
Legal experts called the Pentagon's position internally contradictory. The administration simultaneously threatened to invoke the Defense Production Act — arguing Claude was so critical to national security the government could compel access — while designating Anthropic a supply-chain risk equivalent to a foreign adversary. As one retired Navy rear admiral put it: "I don't know how those two things can both be true in reality."
Comparing the Two Positions
| Dimension | Anthropic (Claude) | OpenAI (GPT-4 / o-series) |
|---|---|---|
| No domestic mass surveillance | Demanded in contract text | Asserted publicly; relies on existing law |
| No fully autonomous weapons | Demanded in contract text | Asserted publicly; relies on existing law |
| Government agreed to terms | Refused — contract cancelled, company banned | Yes — classified deal signed Feb 27, 2026 |
| Enforcement authority | — | Pete Hegseth (per Altman's own statement) |
| Employee response | Leadership unified; standing firm in court | Senior resignation; nearly 900 OpenAI and Google employees sign open letter |
| Public reaction | Claude hit #1 App Store; user surge | QuitGPT movement; significant backlash |
The Unlikely Solidarity of Competitors
The most surprising development wasn't the ban, or the OpenAI deal, or even the App Store reversal. It was what happened in the days that followed. More than 30 engineers and researchers from OpenAI and Google DeepMind — companies that stood to benefit commercially from Anthropic's removal from government work — filed a legal brief in support of Anthropic's lawsuit. Among them: Jeff Dean, Google's chief scientist.
"This effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States' industrial and scientific competitiveness in the field of artificial intelligence and beyond."
— From the amicus brief filed by OpenAI & Google DeepMind employees
The brief was filed in the signatories' personal capacities. But it came on the heels of an open letter signed by nearly 900 employees at Google and OpenAI, urging their own leadership to refuse government requests for domestic surveillance or autonomous lethal targeting. The same red lines Anthropic drew, articulated independently by the employees of its rivals.
There's a lesson in that about where industry consensus actually lies — even when the CEOs diverge.
What This Means for the Future of AI
Step back from the week's headlines and what you see is a structural problem that isn't going away. The governments most likely to deploy AI at scale are also the entities with the most power to coerce AI companies into deploying it on their terms. Voluntary self-governance by private companies was always fragile. This week demonstrated exactly how fragile.
Anthropic's refusal establishes something important: that there are at least some companies willing to absorb severe commercial and political consequences — lost contracts, a national-security blacklisting, a presidential call-out — rather than remove safety guardrails. That willingness to pay a real price for a stated principle is rarer in Silicon Valley than the mission statements suggest.
But Anthropic's position is costly. The company lost significant government revenue. Its models were branded a foreign-level security threat. It faces ongoing legal battles. These are real constraints on hiring, fundraising, and long-term competitiveness against rivals who don't impose the same limits on themselves. A $380 billion valuation doesn't insulate you from the sustained pressure of a hostile federal government.
The OpenAI deal, meanwhile, raises the harder questions. Altman insists the safeguards are genuine. Amodei insists they're performative. The truth may not be verifiable from the outside — which is itself the deepest problem. A safety guarantee that cannot be externally audited is a thin reed on which to rest decisions about weapons and mass surveillance.
When the government demonstrates it will punish companies for maintaining guardrails and reward those that abandon them, the argument for voluntary self-governance collapses. What's left is the slower, harder work of building legal and institutional constraints that don't bend when political winds shift — the kind of external framework that the AI industry has resisted and now urgently needs.
Claude said no. That matters — both in what it reveals about Anthropic, and in what it exposes about the pressure every AI company will eventually face from those who want access without conditions.