Anthropic CEO Dario Amodei asserts company’s patriotism in U.S. defense but remains firm on its ‘red lines’
President Donald Trump has alleged that Anthropic poses a threat to troops and national security, but CEO Dario Amodei has countered that his firm is patriotic.
In a statement made soon after Trump directed the federal government to take action, Amodei noted that the AI startup was the first to serve the defense community within a classified environment.
“I believe we must defend our nation from autocratic rivals such as China and Russia,” he stated. “Consequently, we have been very proactive. We maintain a significant public sector team.”
Anthropic has supplied its AI to the government, but the Pentagon insisted on unrestricted use for any legal purpose. The company stood firm on its established “red lines,” which specifically prohibit use in domestic mass surveillance and autonomous weapons systems.
Negotiations broke down without a deal, prompting Trump to prohibit Anthropic from all government agencies, though the Pentagon was granted a six-month period to wind down its use.
Defense Secretary Pete Hegseth further labeled the company a “supply-chain risk,” which prevents other Pentagon contractors from utilizing Anthropic’s AI for military purposes.
Amodei informed CBS that Anthropic supports 98%-99% of the military’s potential applications. His worry regarding mass surveillance is that modern AI fundamentally changes the game, even if such use is currently legal.
“That practice is not actually illegal. It simply was not practical before the AI era. So domestic mass surveillance is effectively outpacing the law,” he elaborated. “The technology is evolving so rapidly that it has become disconnected from legal frameworks.”
Concerning autonomous weapons, Amodei expressed that AI is not yet sufficiently reliable to remove human oversight entirely, citing the technical challenge of “basic unpredictability” in current models.
To date, he knows of no real-world instance in which a user has run up against Anthropic’s red lines, but he conceded that it is unsustainable in the long run for a private corporation to make these determinations.
Amodei emphasized that Congress must ultimately establish regulations for AI use, but noted that legislators have been slow to move. The company also clarified it is “not categorically opposed to fully autonomous weapons,” but feels the technology’s reliability is not yet adequate.
For now, Anthropic remains willing to collaborate with the government and proposed that communication channels stay open.
“We are prepared to offer our models to every government branch, including the Department of War, the intelligence community, and civilian agencies, under the conditions we have set with our red lines,” he said.
The blacklisting of Anthropic by Trump and Hegseth came hours before a significant event, in what appears to be shaping up as a lengthy conflict aimed at regime change.
AI has become a vital military asset, particularly for identifying targets and forecasting adversary actions through rapid intelligence analysis.
When CBS asked what he would currently say to Trump, Amodei answered, “I would tell him we are patriotic Americans. Every action we’ve taken has been for this nation’s benefit, to bolster U.S. national security. Our proactive deployment of models with the military was driven by our belief in this country.”
He continued, “However, the red lines we established were drawn because we are convinced that crossing them would violate American values. We aimed to defend American values.”
The Pentagon chief’s supply-chain risk designation remains a concern for Anthropic, as it could hinder the company’s expansion.
Amodei described the action as punitive but minimized its long-term consequences, saying it would not affect the non-defense-related work of Anthropic’s customers.
“We are going to be fine,” he asserted. “The practical effect of this designation is quite limited. The tone of the secretary’s tweet, however, was intended to sow uncertainty, to make people think the impact would be far greater, to generate fear, uncertainty, and doubt. But we will not allow that to work. We will be fine.”