Palmer Luckey says Silicon Valley gets the Pentagon wrong: ‘Maintain that this belongs to the people’

Who should have authority over AI? Should the corporations that deploy this powerful technology determine its destiny? Or should that authority rest with the government?

Palmer Luckey, founder of defense firm Anduril—which seeks to modernize the U.S. military—believes the solution is simple: grant that authority to the government. In a recent interview with the New York Post, the billionaire founder offered his perspective on the growing debate over who should decide how AI is utilized by the government.

For the billionaire, the government, and thus the people, should make specific usage decisions. Otherwise, technology firms could endanger democracy.

“We must maintain the stance that this belongs to the people,” he stated. “Anyone arguing that a defense firm should operate outside the law, beyond what legislators and elected officials decide regarding partnerships, is essentially stating they don’t believe in this democratic experiment and desire a ‘corporatocracy.’”

“In every instance, regarding whoever the U.S. government permits or prohibits me from selling to,” he added, “adopting any other stance means essentially allowing corporate executives to wield de facto control over American foreign policy.”

Luckey’s remarks coincide with Anthropic CEO Dario Amodei’s refusal to permit the Pentagon unrestricted use of its AI systems for mass surveillance or to enable fully autonomous weapons operating without human supervision. Consequently, the Department of Defense designated the AI firm a “supply chain risk,” a classification typically applied to hostile foreign companies. Amodei indicated the designation will have minimal impact on business operations, and that the company is working to reverse the classification. Nevertheless, the company continues negotiations with the Pentagon concerning use of its AI models and tools.

However, Amodei, together with Anthropic’s founders—who left OpenAI collectively to establish a company they claim emphasizes AI safety—insists the Pentagon’s demands go too far. “These threats don’t alter our stance: we cannot ethically comply with their request,” Amodei stated last week.

Anthropic didn’t immediately reply to a request for comment.

Silicon Valley versus Washington

The Department of Defense—and individuals like Luckey—believe it’s not up to private contractors to determine use cases, and instead contend that authority belongs to the government. Shortly after Anthropic’s deal collapsed last month, Sam Altman’s OpenAI struck an agreement with the Pentagon covering use of the startup’s AI models and tools. Elon Musk’s xAI likewise agreed to permit Pentagon use of its AI, creating competition for Anthropic’s formerly exclusive partnership.

Anthropic isn’t the first technology firm to resist the DOD. As Luckey points out in the interview, Google withdrew from the Pentagon in 2018, exiting Project Maven, which entailed AI analysis of drone footage, following protests from thousands of employees concerned the program might lead to autonomous weapons.

“You’d end up with a world where Silicon Valley executives possessed greater foreign policy authority than the U.S. president,” Luckey remarked. “That’s extremely dangerous.”

For Luckey, the fundamental question is whether ultimate decisions on AI usage should rest with Silicon Valley or Washington. He believes that, irrespective of the administration in power, technology firms, and the private sector generally, must comply with that government’s foreign policy choices.

Yet even as the Anthropic-Pentagon dispute intensifies, Amodei stated Thursday that the two sides can identify shared interests. “Anthropic shares far more common ground with the Department of War than we have disagreements,” he said.