Controlling AI: A Practical Approach

Artificial intelligence (AI) has the potential to deliver significant benefits to society, but it also poses serious risks. As a general-purpose technology, the same tools that advance scientific discovery could be used to develop harmful weapons. Governing AI effectively means spreading its benefits widely while keeping the most powerful systems out of the wrong hands. The good news is that there are viable strategies for achieving both.

In the 20th century, nations established international institutions to promote the peaceful use of nuclear energy while limiting the spread of nuclear weapons. They did so by controlling access to the key materials, weapons-grade uranium and plutonium, through international organizations such as the International Atomic Energy Agency (IAEA). Today, 32 countries operate nuclear power plants that generate 10% of the world’s electricity, while only nine possess nuclear weapons.

Countries can take a similar approach to AI by controlling access to the specialized chips that are essential for training the most advanced AI models. Several prominent individuals and organizations, including U.N. Secretary-General António Guterres, have advocated for an international governance framework for AI, modeled on the existing framework for nuclear technology.

The most advanced AI systems rely on tens of thousands of specialized computer chips, housed in massive data centers, that churn through vast amounts of data for months to train the most capable models. These chips are complex to manufacture, their supply chains are tightly controlled, and no frontier model can be trained without a great many of them, which makes chips a natural chokepoint for governance.
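To make the scale concrete, here is a rough back-of-envelope estimate of a frontier training run. Every figure in it is an illustrative assumption rather than a reported specification; the point is only that chip count, chip speed, and training time multiply into an enormous total:

```python
# Back-of-envelope training-compute estimate. Every figure below is an
# illustrative assumption, not a vendor spec or a reported training run.
chips = 25_000               # "tens of thousands" of accelerators (assumed)
flops_per_chip = 1e15        # ~1 petaFLOP/s per chip (assumed)
utilization = 0.4            # fraction of peak throughput achieved (assumed)
seconds = 90 * 24 * 3600     # roughly three months of training (assumed)

total_flops = chips * flops_per_chip * utilization * seconds
print(f"{total_flops:.1e} total FLOPs")  # ~7.8e25 at these assumptions
```

At these assumed figures the run lands near 10^26 operations, the order of magnitude that has appeared in U.S. reporting rules, and the arithmetic shows why: without tens of thousands of controlled chips running for months, that much compute is simply out of reach.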

Governments can establish a regulatory system that restricts bulk access to advanced chips to authorized computing providers, and that licenses trusted AI companies to use the computing power needed to train the most capable, and potentially most dangerous, AI models.

This might seem like a daunting task, but only a handful of nations are needed to implement such a regime. The chips used to train the most advanced AI models are concentrated in a small number of places and depend on critical technology from three countries: Japan, the Netherlands, and the U.S. In some cases, a single company holds a monopoly over a key step in the supply chain. The Dutch company ASML is the world’s sole producer of the extreme ultraviolet lithography machines needed to manufacture the most cutting-edge chips.

Governments are already taking steps to regulate these high-tech chips. The U.S., the Netherlands, and Japan have imposed export controls on their chip-making equipment, limiting its sale to China. The U.S. government has also restricted the sale to China of the most advanced chips, which are made using U.S. technology, and now requires cloud computing providers to identify their foreign customers and to report when a foreign customer is training a large AI model that could be used for cyberattacks. It has proposed, but not yet implemented, restrictions on deploying and distributing the most powerful trained models. While some of these restrictions are driven by geopolitical competition with China, the same tools can be used to keep the most powerful AI systems out of the hands of adversarial nations, terrorists, and criminals.
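To illustrate how a compute-based reporting rule like this operates mechanically, here is a minimal sketch. The 1e26-operation threshold echoes the figure used in recent U.S. reporting requirements, but the function, the record format, and the compute estimate itself are hypothetical simplifications:

```python
# Minimal sketch of a compute-based reporting trigger. The threshold
# echoes recent U.S. rules; the record format and estimator are
# hypothetical simplifications for illustration.
REPORTING_THRESHOLD_FLOPS = 1e26

def must_report(run: dict) -> bool:
    """Return True if a customer's training run crosses the reporting threshold."""
    estimated_flops = (
        run["chips"] * run["flops_per_chip"] * run["utilization"] * run["seconds"]
    )
    return estimated_flops >= REPORTING_THRESHOLD_FLOPS

# A hypothetical large training run at assumed figures.
run = {"chips": 50_000, "flops_per_chip": 1e15,
       "utilization": 0.4, "seconds": 90 * 24 * 3600}
print(must_report(run))  # True: ~1.6e26 FLOPs exceeds the threshold
```

However a provider estimates a customer’s compute in practice, the trigger itself reduces to a simple threshold test of this kind.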

The U.S. can collaborate with other nations to build on this foundation and establish a framework for governing computing hardware across the entire lifecycle of an AI model: chip-making equipment, chips, data centers, model training, and the trained models that result.

Japan, the Netherlands, and the U.S. can play a leading role in creating a global governance framework that limits the sale of these specialized chips to countries with established regulatory regimes for governing computing hardware. This would involve tracking and accounting for chips, monitoring who is using them, and ensuring the safe and secure training and deployment of AI.

However, global governance of computing hardware can do more than keep AI out of the wrong hands; it can empower innovators around the world by bridging the gap between accessibility and control. Because training the most advanced AI models demands such intense computing resources, the industry is consolidating into an oligopoly. This concentration of power is detrimental to both society and business.

Some AI companies have begun releasing their models as open source, which is a boon for scientific innovation and levels the playing field with Big Tech. But once a model is open source, anyone can modify it, including stripping out its safeguards.

Fortunately, the U.S. government has begun piloting initiatives to make powerful AI models accessible as a public good for academics, small businesses, and startups. These models could be made available through the national cloud, enabling trusted researchers and companies to use them without releasing them publicly on the internet, where they could be misused.

Countries could even collaborate to create an international resource for global scientific cooperation on AI. Today, 23 nations participate in CERN, the international physics laboratory that operates the world’s most advanced particle accelerator. Nations should consider establishing a similar international computing resource for AI, empowering scientists worldwide to collaborate on AI safety.

AI holds immense potential. However, to harness AI’s benefits, society must also manage its risks. By controlling the physical inputs to AI, nations can effectively govern AI and build a foundation for a safe and prosperous future. It’s easier than many think.