US Government Terminates Ties with Anthropic, Enters Pentagon AI Deal with OpenAI
TLDR
- The US government ordered all federal agencies to halt use of Anthropic’s AI technology, deeming the company a national security supply-chain risk.
- Shortly after Anthropic was dropped, OpenAI inked a deal with the Pentagon to deploy its AI models on classified military networks.
- Anthropic’s $200 million Pentagon contract collapsed after the company refused to allow its AI to be used for autonomous weapons or domestic mass surveillance.
- OpenAI claims its deal includes the same restrictions Anthropic sought, yet critics question whether the company will adhere to them.
- Anthropic intends to challenge the national security supply-chain risk designation in court, asserting it is legally unsound.
The US government ended its relationship with AI company Anthropic on Friday and categorized it as a national security supply-chain risk. Within hours, rival OpenAI announced a new deal to deploy its AI models on the Pentagon’s classified networks.
Tonight, we reached an agreement with the Department of War to deploy our models in their classified network.
In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.
AI safety and wide distribution of…
— Sam Altman (@sama)
President Donald Trump ordered every federal agency to immediately discontinue using Anthropic’s technology. Agencies already using it have six months to transition away from the company’s Claude models.
Defense Secretary Pete Hegseth posted on X that Anthropic posed a “Supply-Chain Risk to National Security.” That label is usually reserved for companies from foreign adversaries like China.
This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon.
Our position has never wavered and will never waver: the Department of War must have full, unrestricted…
— Secretary of War Pete Hegseth (@SecWar)
This move could affect Anthropic’s broader business. Companies working with the Pentagon may now have to prove they are not using Claude at all. Anthropic’s investors and partners include Nvidia, Amazon, and Google.
Anthropic was the first AI lab to deploy models inside the Pentagon’s classified environment. That contract, signed in July, was worth up to $200 million.
Talks broke down after Anthropic refused to guarantee its AI would be available for all lawful military uses. The company drew the line at autonomous weapons and domestic mass surveillance.
The Pentagon’s position was that Anthropic simply needed to trust the military to follow the law. Anthropic CEO Dario Amodei said Thursday the company “cannot in good conscience” agree to that request.
OpenAI Steps In
OpenAI CEO Sam Altman announced the new Pentagon deal late Friday on X. He stated the agreement includes the same prohibitions on mass surveillance and autonomous weapons that Anthropic had requested.
Altman also said he asked the government to offer the same deal terms to all AI companies. Elon Musk’s xAI had already been approved for use in classified settings by the military.
OpenAI President Greg Brockman and his wife donated $25 million to a pro-Trump political action committee last year. They are also spending more to support Trump’s AI agenda in upcoming elections.
Anthropic Fights Back
Anthropic said it was “deeply saddened” by the designation and plans to challenge it in court. The company called the ruling “legally unsound” and said it sets a dangerous precedent for American tech firms negotiating with the government.
The General Services Administration said it is removing Anthropic from its product listings for government agencies.
Some observers were critical of OpenAI’s move. Democratic politician Christopher Hale posted on X that he canceled his ChatGPT subscription and switched to Claude Pro Max.
Anthropic was founded in 2021 by researchers who left OpenAI over concerns the company was deprioritizing safety. Both companies have raised tens of billions of dollars recently and are each considering initial public offerings.
The fallout also touched on a specific incident. After Claude was used in a raid in Venezuela in January, an Anthropic employee asked a Palantir partner how the technology had been used. Pentagon officials viewed the inquiry as overstepping.
Anthropic said the exchange was routine technical communication between partners.