Employees from Google and OpenAI support Anthropic in a legal fight that could redefine military use of AI

In its legal battle with the Trump administration over the Pentagon’s decision to brand it a “supply-chain risk,” Anthropic is gaining support from unexpected allies: employees of competing AI companies.
Over 30 staff members from OpenAI and Google DeepMind—including Google’s chief scientist Jeff Dean—submitted an amicus brief cautioning that a Pentagon blacklist of Anthropic could harm the entire U.S. AI industry.
“This push to penalize one of the leading U.S. AI firms will unquestionably affect the nation’s industrial and scientific competitiveness in artificial intelligence and beyond,” the employees stated in the filing. As researchers from rival companies unite behind Anthropic, a dispute that started over military contracts may evolve into a broader discussion about who controls AI.
The brief was filed just hours after Anthropic launched two lawsuits challenging the government’s designation of it as a supply-chain risk—a label previously applied only to foreign companies, designed to prevent adversaries from sabotaging U.S. military systems. Relations between the Trump administration and Anthropic deteriorated sharply last week after the two sides failed to agree on a revised contract governing the military’s use of Anthropic’s AI model Claude. Anthropic had sought to establish two “redlines” barring the model’s use for domestic mass surveillance and autonomous weapons. The Pentagon, however, insisted that Anthropic consent to the U.S. military using its AI systems for “all lawful purposes.”
Anthropic refused to accept this wording. In response, the administration canceled its government contracts with the company and labeled it a national security risk.
Shortly after Anthropic’s negotiations collapsed, OpenAI quickly secured its own deal with the Pentagon, apparently agreeing to terms Anthropic had rejected. The contrast between the deals sparked a public dispute between the two companies’ CEOs. Anthropic CEO Dario Amodei called OpenAI’s approach to the deal “safety theater” and described OpenAI CEO Sam Altman’s public statements as “straight up lies.” Altman then indirectly criticized Anthropic, stating it is “bad for society” when companies abandon democratic norms because they disagree with those in power—an apparent response to Amodei accusing Altman of offering “dictator-style praise to Trump.”
While tensions between the companies’ leaders may persist, the amicus filing represents an unusual display of unity among employees of competing firms. Though the employees noted they signed in a personal capacity, the brief follows an open letter signed by nearly 900 Google and OpenAI employees urging their own leadership to reject government requests to deploy AI for domestic mass surveillance or autonomous lethal targeting—the same “redlines” Anthropic sought in its Pentagon negotiations.
OpenAI has lost at least one employee over the controversy. Caitlin Kalinowski, who had led hardware and robotics at OpenAI since November 2024, resigned over the company’s Pentagon deal, stating that domestic surveillance without judicial oversight and lethal autonomy without human approval “are boundaries that merited more careful consideration than they received.”
Anthropic’s conflict with the Pentagon is poised to have significant implications for AI governance and the relationship between business and government, but it may also galvanize broader pushback from tech workers against their employers. Google faced similar employee dissent in 2018 over its work with the U.S. military on Project Maven, which used AI to analyze aerial surveillance imagery. Employee objections led Google to decline renewing its drone surveillance analysis work, which was later taken over by Amazon and Microsoft.