Palantir’s Alex Karp: No indication AI tools were for domestic spying in Anthropic-Pentagon dispute
Palantir finds itself at the center of the continuing dispute between AI company Anthropic and the U.S. Department of Defense over the military’s potential use of Anthropic’s large language models.
The Denver-based data analytics and AI platform Palantir is a major software supplier to the Defense Department and serves as the primary conduit for the Department’s use of Anthropic’s Claude model.
“We are legitimately still in the middle of all this,” CEO Alex Karp stated during an interview on the sidelines of the company’s biannual AIP conference on Thursday. “It’s our stack that runs the LLMs.”
Karp noted he has participated in many conversations with all relevant parties but declined to provide details, saying he does not wish to disclose private conversations or criticize individuals.
However, Karp emphasized one point: The Defense Department is not employing AI for mass domestic surveillance of U.S. citizens and, to his knowledge, has no intention to do so.
“Without commenting on internal dialogues, there was never a sense that these products would be used domestically,” Karp said. “The Department of War is not planning to use these products domestically. That’s a completely different matter… The terms the Department of War seeks are entirely focused on non-American citizens in a wartime context.”
Palantir conducts extensive business with the U.S. government, including the DoD. Anthropic entered a partnership with Palantir in 2024 to provide its AI technology to the DoD through Palantir. Anthropic also started direct collaboration with the DoD last year to develop a version of its technology tailored for the Defense Department.
The heated exchange between Anthropic and the Defense Department has persisted since approximately January, with the two sides disagreeing even on its origin. Recent statements from Undersecretary of Defense for Research and Engineering Emil Michael claim that Palantir informed the Pentagon that Anthropic was asking whether its models were used in the U.S. military operation to apprehend Venezuelan President Nicolás Maduro. (Anthropic has denied this account, stating it has not discussed Claude’s use for specific missions “with any industry partners, including Palantir, outside of routine discussions on strictly technical matters.”) Since then, the parties have been embroiled in a conflict over Anthropic’s ability to impose contractual restrictions on how its models are used.
Anthropic CEO Dario Amodei has written several blog posts on the issue, beginning with a statement in late February claiming the Defense Department rejected safeguards to prevent its LLMs from being used for domestic mass surveillance or fully autonomous weapons. Secretary of Defense Pete Hegseth later labeled Anthropic a “supply-chain risk,” endangering many of the firm’s commercial ties and leading Anthropic to sue the Pentagon over this designation.
‘Totally in favor’ of domestic terms of engagement
Palantir, initially funded by the CIA’s venture capital arm and with software used in overseas counter-terrorism operations, has faced longstanding allegations of aiding government and intelligence agencies in spying on civilians and domestic suspects. Karp has consistently denied these claims for more than ten years and has discussed the need for technical safeguards on technology with potential domestic surveillance applications in the U.S. Early on, Palantir established a “Privacy and Civil Liberties” team—a multidisciplinary group of engineers, lawyers, philosophers, and social scientists—charged with integrating privacy-protecting features into its products and promoting responsible use. This team also implemented internal reporting channels, such as an ethics hotline, for employees to raise concerns about work they deemed unethical.
Nevertheless, civil liberties organizations continue to accuse the company of the opposite—assisting government surveillance. Its relationship with U.S. Immigration and Customs Enforcement (ICE), which started under the Obama Administration, has drawn particularly strong scrutiny and condemnation from external critics and its own staff. This criticism has intensified over the past year as the Trump Administration has directed ICE to conduct aggressive crackdowns in cities such as Minneapolis.
Karp said he is “very sympathetic with arguments against using these products inside the U.S.” and is “totally in favor” of establishing terms of engagement and limits for domestic agencies’ use of artificial intelligence.
“Quite frankly, I think we should self-impose them,” Karp remarked regarding these terms. “The Valley should have a consortium: This is what we’re going to do, and this is what we’re not going to do,” he added.
However, Karp drew a clear distinction between tech companies setting terms with domestic agencies and doing so with the Department of Defense, whose mission is directed at foreign nations and adversaries.
“What we’re talking about now is using products vis-à-vis someone who’s trying to kill our service members,” Karp said, adding that he personally advocates for “wide license” for the Department of Defense specifically.
“If we knew China and Russia and Iran wouldn’t build them, I would be in favor of very heavy—very heavy—legal constraints,” Karp stated. He observed, however, that American adversaries will develop and deploy such technology against the U.S. regardless. “I don’t think this is an opinion. I think this is a fact, and that fact means I think the Department of War should have wide license to use these products.”