The deal between OpenAI and the Pentagon raises new questions about AI and mass surveillance

On Friday, just hours after publicly praising rival Anthropic for standing firm against the Pentagon’s demands, OpenAI CEO Sam Altman announced that his company had reached its own deal with the Pentagon. The move came shortly after the U.S. government took the highly unusual step of designating Anthropic a “supply chain risk.”

OpenAI’s decision drew criticism from many AI researchers and tech policy experts, even though OpenAI said its agreement includes limits on the surveillance of U.S. citizens and on lethal autonomous weapons, the same limits that Anthropic had sought in its own contract but that the Pentagon had rejected.

One of the main points of contention was domestic mass surveillance. Experts have long cautioned that advanced AI can take scattered, individually harmless data, such as a person’s location, finances, and search history, and compile it into a comprehensive picture of anyone’s life, automatically and at scale. Anthropic CEO Dario Amodei said that such surveillance poses serious and novel risks to people’s “fundamental liberties” and that “the law has not yet caught up with the rapidly growing capabilities of AI.”

Although OpenAI said in a blog post that its deal with the Pentagon bars its technology from being used for mass domestic surveillance or to direct autonomous weapons systems, the two strict limits that Anthropic had refused to abandon, some legal and policy experts have raised questions about a possible legal gap.

Part of the dispute turns on the fact that large-scale analysis of Americans’ data can be legal under current U.S. statutes, even when it seems indistinguishable from mass surveillance.

“Currently, under U.S. law, it is legal for government authorities to purchase commercially available information from data brokers and other third parties,” said Samir Jain, vice president of policy at the Center for Democracy & Technology. “If you acquire a large amount of data and allow AI to analyze it, you may effectively end up conducting mass surveillance of Americans through that process. It is not currently restricted or prohibited by law.”

OpenAI claims that its “red lines” are enforced through the technical systems it plans to build and through the language of its contract with the Pentagon. According to a blog post released by the company, the contract allows the Department of Defense to use the AI “for all legal purposes, in accordance with applicable law, operational requirements, and well-established safety and oversight protocols,” while explicitly forbidding unrestricted monitoring of Americans’ private information.

The issue is that what is considered “legal” can change. OpenAI’s contract refers to existing laws and Department of Defense policies, but those policies could be altered in the future. “Nothing in what they have released would prevent those policies from being changed in the future,” Jain said.

Some critics argue that existing intelligence authorities already permit forms of surveillance that OpenAI says it prohibits. Mike Masnick, founder of Techdirt, said that the agreement “absolutely allows for domestic surveillance,” citing Executive Order 12333, a long-standing authority that permits intelligence agencies to collect communications outside the United States, which can include Americans’ data when it is incidentally obtained.

Some of the debate focuses on the specific parts of U.S. law that govern different national security activities. The actions of the U.S. military are generally governed by Title 10 of the U.S. Code. This includes the work done by the Defense Intelligence Agency and U.S. Cyber Command to support military operations. But some of the DIA’s work falls under a different part of U.S. law, Title 50 of the U.S. Code, which generally governs covert intelligence gathering and covert action. The work of the Central Intelligence Agency and the National Security Agency also generally falls under Title 50. Some of the most sensitive Title 50 activities, especially covert actions, are carried out largely behind the scenes and require a presidential finding.

In a blog post published over the weekend, OpenAI shared a detailed account of its agreement with the Pentagon. And according to a social media post by Noam Brown, a well-known OpenAI researcher, the company’s head of national security partnerships, Katarina Mulligan, told Brown that OpenAI’s contract does not cover Title 50 work by the intelligence community, one of critics’ major concerns. Representatives for OpenAI did not immediately respond to a request for comment.

However, legal scholars have noted that the distinction between Title 10 and Title 50 activities is becoming increasingly blurred. In practice, the two can appear very similar, and both can involve analyzing data about foreign actors or tracking patterns. But that overlap creates a gray area for companies like OpenAI: a contract that bans Title 50 work does not automatically prevent Title 10 agencies like the DIA from using AI to analyze commercially available or unclassified datasets.

“If they are saying that their system cannot be used for any Title 50 activities, then that reduces the scope of activities for which the AI system can be used,” Jain said. “But that does not solve the problem.”