Top Pentagon official recounts the ‘whoa moment’ when defense leaders realized how dependent they were on Anthropic’s AI, and the risk of losing access
A top Pentagon official says the Defense Department’s realization of how deeply it relied on Anthropic’s AI came as a shock, one that ultimately sparked a dramatic rift between the two.
Emil Michael, the department’s under secretary for research and engineering and its chief technology officer, recounted the events leading to the public dispute during a podcast episode released Friday.
Following the U.S. military’s January raid in Venezuela that captured dictator Nicolas Maduro, Anthropic asked Palantir if its AI had been used in the operation. While Anthropic has framed the inquiry as routine, the Pentagon and Palantir saw it as a potential threat to their access.
“I thought, holy shit—what if this software goes down, a guardrail activates, or it refuses to work in the next such operation, leaving our people at risk?” Michael recalled. “I went to Secretary Hegseth and told him this could happen, and that was a ‘whoa’ moment for the entire Pentagon leadership—realizing we might be so dependent on a single software provider with no alternative.”
Until recently, Anthropic’s Claude was the only AI model authorized for classified settings. The San Francisco-based startup has said it will not permit its AI to be used in mass domestic surveillance or autonomous weapons.
The Pentagon maintained it would use the AI in lawful scenarios and refused to accept any company-imposed limits beyond those constraints.
After failing to reach a compromise last week, President Donald Trump ordered the federal government to halt use of Anthropic, giving the Pentagon six months to phase it out. Defense Secretary Pete Hegseth also labeled the company a supply-chain risk, barring contractors from using it for military work.
For now, the military continues to use Anthropic amid the U.S. war on Iran, as AI helps warfighters quickly identify potential targets.
During his podcast appearance, Michael expressed concern that a rogue developer could “poison the model” to render it ineffective for the military, train it to purposefully hallucinate, or instruct it to ignore commands.
He then contacted OpenAI, which eventually secured a deal similar to Anthropic’s. Elon Musk’s xAI was also brought into classified use, while the Pentagon is working to get Google’s AI approved for classified settings.
“I’m not biased,” Michael said. “I just want all of them. I want to offer them identical terms because I need redundancy.”
He noted that Anthropic had become “deeply embedded” in the department, while other AI firms had not pursued enterprise customers as aggressively, for instance by deploying on-site engineers.
The rift between the Pentagon and Anthropic underscored the cultural clash between the defense establishment and Silicon Valley, which has its origins in military innovation but has since grown uneasy about its technology being used in warfare.
In fact, a top robotics engineer at OpenAI left the company on Saturday, citing the same concerns as Anthropic.
“This wasn’t an easy decision. AI plays a critical role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human approval are lines that warranted more deliberation than they received,” the engineer wrote.