After Trump told agencies to “immediately cease” using Anthropic’s tools and Hegseth branded the company a national security risk, the AI firm is fighting back, and it says the bill could run to multiple billions of dollars.

Anthropic didn’t blink.
On Monday, the AI company behind the Claude chatbot filed two federal lawsuits against the Trump administration, challenging what it calls an illegal, retaliatory campaign to destroy its business — all because it refused to let the Pentagon use its technology for autonomous weapons and mass domestic surveillance.
The suits landed in the U.S. District Court for the Northern District of California and the U.S. Court of Appeals for the D.C. Circuit in Washington. Both describe the government’s conduct the same way: “unprecedented and unlawful.”
The backstory isn’t subtle. Anthropic and the Pentagon had been in an increasingly contentious standoff over whether the company’s safety guardrails could constrain military operations. Talks dragged on for months. Then they blew up publicly.
On Feb. 27, Trump posted on Truth Social: “I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again!” Hegseth followed minutes later, announcing on X that Anthropic would be designated a “Supply-Chain Risk to National Security.”
That label is not used lightly. Experts say it has historically been reserved for foreign adversary contractors — companies positioned to sabotage U.S. interests. Slapping it on an American firm is essentially uncharted territory. And the consequences are concrete: defense vendors and contractors working with the Pentagon must now certify that they don’t use Anthropic’s Claude in any of their work.
So what’s actually at the center of this fight? Anthropic says it drew hard lines against allowing its models to be used for lethal autonomous warfare — where AI makes kill decisions without human oversight — and against deploying Claude to surveil American citizens at scale. The Pentagon rejected those limits. Officials insisted they needed “full flexibility” to use AI for, in their words, any “lawful use.” Anthropic held firm.
The cost, by the company’s own accounting, is staggering. CFO Krishna Rao put numbers to it in a court filing. “Across Anthropic’s entire business, and adjusting for how likely any given customer is to take a maximal reading, the government’s actions could reduce Anthropic’s 2026 revenue by multiple billions of dollars,” Rao wrote.
The bleeding has already started. Chief Commercial Officer Paul Smith said a partner with a multi-million-dollar annual contract had already switched from Claude to a rival model, wiping out an anticipated revenue pipeline of more than $100 million. Separately, negotiations with financial institutions over contracts worth roughly $180 million combined have fallen apart.
That’s not a paper threat. That’s money walking out the door right now.
Anthropic’s backers scrambled to contain the damage. A group of the company’s investors, joined by OpenAI, publicly expressed concern over the government’s move. It’s a remarkable moment: OpenAI, Anthropic’s chief rival, stood in the same corner on the principle at stake, even as it quietly rushed through its own Pentagon deal in the aftermath.
An internal memo by CEO Dario Amodei, published by The Information, offered a glimpse of the raw frustration inside the company. Amodei wrote that Pentagon officials disliked Anthropic in part because “we haven’t given dictator-style praise to Trump.” He later apologized for the leak — not for the sentiment.
The support Anthropic picked up in court caught some observers off guard. Dozens of scientists and researchers from OpenAI and Google DeepMind filed an amicus brief in their personal capacities, arguing the supply-chain designation could damage U.S. competitiveness and chill public debate about AI safety. Scientists from two of Anthropic’s biggest rivals, telling a federal judge that the company has a point.
Anthropic is not seeking monetary damages. Instead, it’s asking the court to immediately declare Trump’s government-wide shutdown directive unconstitutional and to vacate the supply-chain risk designation entirely.
University of Richmond law professor Carl Tobias framed what comes next bluntly. “Anthropic may very well win in federal court, but this government is not shy about appealing,” he said, predicting a “scorched earth” approach from the Trump administration.
The White House, for its part, may be preparing to escalate further. Axios reported Monday that an executive order formally directing the entire civilian government to remove Anthropic’s AI from its systems is in the works.
If that order lands before a court steps in, this fight gets a lot uglier fast. Anthropic’s lawyers are racing the clock.