AI’s Oppenheimer moment: Establishing regulations

Nuclear arms control is only one area that needs to catch up in the AI age.

Andrey Suslov/Getty Images

This era of artificial intelligence is coming to be known as the “Oppenheimer moment”: the rapid development of military AI demands concrete regulatory frameworks to prevent the potentially destabilising consequences of a new technology, a challenge akin to the one nuclear arms control faced in its early years, dramatised in last year’s film about the life of physicist J. Robert Oppenheimer.

With AI, the overwhelming view is that technological development has overtaken international law. A comprehensive international legal framework regulating AI is necessary to ensure unhindered peaceful applications and to reduce the risk of accidental or inadvertent war. This is especially important for managing technological competition between the United States and China.

The APEC summit in San Francisco in November could not generate the momentum to establish a dedicated platform for discussing constraints on developing or using AI to direct autonomous weapons, including nuclear capabilities. The United States said it had agreed with China to, among other things, “work together to assess the threats posed by AI”. But domestic political division is evident. A Republican National Committee memo titled “Biden Caves to China’s Xi” alleged that US strategic advantages, mainly those related to limiting AI use in nuclear weapons, were being sacrificed to accommodate China’s growth in AI.

The United States recently passed the 2024 National Defense Authorization Act, which includes a five-year implementation plan to adopt AI applications that accelerate decision advantage for both business efficacy and warfighting capability. China, for its part, has released the Global AI Governance Initiative, “calling for all countries under the principles of extensive consultation, joint contribution, and shared benefits to enhance exchanges and cooperation, work together to prevent risks, and develop an AI governance framework based on broad consensus, to make AI technologies more secure, reliable, controllable, and equitable.”

A crucial step towards guidelines on AI nuclear safety came with US President Joe Biden’s Executive Order on AI. While limited in scope, the order seeks to position the United States as a leader in AI regulation, particularly against the threat posed by deepfakes. Anticipating regulatory measures, major tech companies have expressed support and a willingness to undergo voluntary safety and security testing. The order faces implementation challenges, including hiring AI experts and enacting privacy legislation, and it can be reversed by the next president. Even so, it establishes a foundation for a broader legislative approach by Congress to govern AI technology.

Biden’s order also reflects an effort to slow China’s progress in AI, especially alongside recent export controls restricting Beijing’s access to the powerful computer chips essential for the large language models that underpin AI systems. China’s President Xi Jinping has expressed concern over US investment and export controls, stating that they have “seriously damaged China’s legitimate interests” and deprived “the Chinese people of their right to development”.

Concerns about the military applications of AI are increasing worldwide, and consequential challenges also arise from private-sector defence tech firms. The dual-use nature of AI blurs the line between military and civilian domains. Private firms, such as software companies specialising in data analytics and data-driven decision-making, wield significant influence over military AI, raising concerns about accountability, transparency, and democratic oversight of its military use. The concentration of data and power in these firms raises issues comparable to those states face. AI regulations must therefore ensure corporate accountability in the context of military AI, perhaps by drawing inspiration from existing frameworks.

A starting point would be an affirmative US response to the Chinese call to prioritise ethics in developing AI. This would help strike a delicate balance between legitimate military security needs and humanitarian concerns. Biden and Xi met ahead of the APEC summit, and their discussion signalled a positive step, indicating that the major powers acknowledge the potential threat of AI directing autonomous weapons.

The development of nuclear arms control offers a lesson: major powers move towards arms control for a technology only once they have reached parity in its acquisition and development. Given the speed of AI’s advancement, however, states cannot afford protracted negotiations and dialogue. Confronted with these rapidly advancing technologies, complacency is not viable. Urgent regulatory discussions about AI in the military domain are needed, lest the opportunities for cooperation vanish.

It is time to act so that advances in emerging technologies do not outpace their universal, non-discriminatory regulation in multilateral settings and representative forums. There is also a need to strengthen existing international forums such as the Convention on Certain Conventional Weapons (CCW), rather than establish new governance bodies focused explicitly on AI in the military context. Concrete regulatory measures are essential, too, to navigate political complexities and to ensure responsibility and accountability for private industry in the military AI domain.



