Understanding Anthropic’s Conflict with the Pentagon and the Opportunity for OpenAI

“Get involved in politics, or politics will come after you.”

That’s a well-known quote from longtime presidential candidate Ralph Nader, spoken more than 20 years ago, and it harks back to what the Greek general Pericles said around 420 BCE. Whether you like it or not, politics will find you and force change. Which brings us to Anthropic.

Anthropic, valued at $380 billion and counting many of Silicon Valley’s best-known investors on its cap table, is in what my colleague Jeremy Kahn calls… The company has been locked in a full-scale battle with the Pentagon. The standoff: Anthropic has refused to allow its technology to be used for mass surveillance or in lethal autonomous weapons. Secretary of War Pete Hegseth rejected that position, insisting the technology should be available for “any lawful purpose.” Anthropic didn’t budge.

Then came the consequences: The Pentagon terminated its $200 million contract with Anthropic and labeled the large-language-model giant a “supply chain risk.” This is potentially a “serious blow to Anthropic’s business,” and it’s without precedent:

Legal and policy experts said the government’s unprecedented decision raises profound questions about the relationship between government and business in the U.S. It is the first time the U.S. has ever designated an American company as a supply chain risk, and the first time the designation has been used against a business for refusing contractual terms. Anthropic said in a statement on Friday that it would take legal action to reverse the Pentagon’s designation.

Effectively, the door to the trillion-dollar defense industrial complex appears to have been slammed shut on Anthropic. And, as…’s… OpenAI has stepped in to fill the gap. Sam Altman seemingly got the deal done quickly, and Sharon learned what he told employees:

Altman told employees at the all-hands meeting that the government is willing to let OpenAI build its own “safety stack” (a layered system of technical, policy, and human controls that sits between a powerful AI model and real-world use), and that if the model refuses to perform a task, the government won’t force OpenAI to make it comply.

Still, even Altman is aware of how this looks… in an “Ask Me Anything” session on… that the deal… Meanwhile, the chaos continues: Anthropic’s Claude overtook ChatGPT in the… App Store over the weekend, and it seemingly suffered an outage this morning. I suspect it’s going to be an even busier week than usual in AI.

See you tomorrow,

Allie Garfinkle
X:

Email:

Submit a deal for the Term Sheet newsletter.

Lily Mae Lazarus curated the deals section of today’s newsletter.