
Australia and the UN: A new agenda for peace

Multilateral responses to the threats of new and emerging tech – from AI warfare to bioweapons – will be crucial to the success of the UN’s new agenda.

UN Secretary-General António Guterres briefs media on digital information integrity on 12 June 2023 (Mark Garten/UN Photo)

Last month, the United Nations released its New Agenda for Peace (NA4P) policy brief, outlining a vision for tackling today’s major challenges to peace and security. Chief among the issues is the weaponisation of new and emerging technologies.

“From artificial intelligence to emerging biological risks,” wrote UN Secretary-General António Guterres in launching the policy brief, “new technologies – and the complex interaction between them – present a host of new threats that go far beyond our current governing framework”.


The same sentiments can be found in the national artificial intelligence (AI) strategy documents that have proliferated around the world in the past few years, including Australia’s Artificial Intelligence Action Plan. Across the various documents, two themes are clear: 1) innovation in emerging technologies is dangerously outpacing existing institutional and governance mechanisms, designed for earlier technologies, that are meant to mitigate conflict; and 2) while AI offers many opportunities for economic and societal gain, its potential for military end-use is considerable.

In the sciences, recent breakthroughs in cyber and AI have driven revolutionary innovations, from health research to the large language models behind ChatGPT. The UK-based company DeepMind has used AI to map the structures of thousands of proteins, leading to the discovery of new drugs to combat malaria and antibiotic resistance, as well as the engineering of new enzymes that break down plastic waste.

The incapacitation of the USS Theodore Roosevelt in March 2020 due to Covid-19 underscored the vulnerabilities faced by military forces through socially transmitted rather than kinetic means (Chris Cavagnaro/US Navy/Flickr)

However, the AI revolution also holds huge potential for harm – particularly in the military domain. In Ukraine, Russia’s war of aggression has seen the deployment of multiple AI systems, from geospatial intelligence to AI-enhanced loitering munitions, leading some to describe the conflict as “a living lab for AI warfare”. Meanwhile, experiments in synthetic bioweapons, in which the engineering of multicellular organisms is combined with AI, have merged the domains of technology and science fiction to create the potential for programmable viruses. The incapacitation of the USS Theodore Roosevelt in March 2020 due to Covid-19 underscored the vulnerabilities military forces face from socially transmitted rather than kinetic threats, and why concepts of national biodefence have come to the fore.

As these very different examples demonstrate, the applications for AI are broad. Because civilian and military systems share foundational architectures – design principles, algorithms, and underlying technologies – many benign, market-based applications can be adapted and employed for military end-use.

It is this concern that has partly driven the inclusion of new and emerging technologies among the six most significant threats facing the world today in the NA4P. Spending trends across major powers demonstrate the demand for new accountability institutions. In the United States, for example, the Department of Defense is requesting US$1.4 billion for AI-related platforms in the 2024 budget cycle. China, according to the Stockholm International Peace Research Institute, is in the longest uninterrupted period of rising military spending of any country, with its military AI spending already outpacing that of the United States.


Now more than ever, says Guterres, dialogue and governance must focus on mitigating the threats of AI and emerging technology competition. “The world is moving closer to the brink of instability, where the risks we face are no longer managed effectively through the systems we have.”

Though seemingly late in presenting its position, the NA4P recommends the establishment of an independent multilateral accountability mechanism for the malicious use of cyberspace by states. It also calls on states to adopt a legally binding instrument to prohibit weapons systems that can function without human control.

This is a reasonable start, but much more is needed.

As a policy brief intended to feed into the Summit of the Future in 2024, the NA4P and its recommendations may not be accepted by member states, especially those wary of binding instruments. Some will deride such regulatory frameworks as an attempt to stymie legitimate defence and economic ambitions in technologies with era-defining potential, such as AI and other critical technologies.

It is perhaps for precisely this reason that the NA4P and the UN have handed responsibility for norm and institution making to states – in particular, urging them to come together in new formats to innovate.

This presents an opportunity for groupings such as the Trilateral Strategic Dialogue (TSD) between the United States, Japan and Australia to take the initiative in driving discussion and building consensus on norms for appropriate cyber, AI, and lethal autonomous weapons development.

The 2023 Joint Statement from the United States–Japan–Australia Trilateral Defence Ministers’ Meeting outlined areas for expanded cooperation in AI and critical technology fields, including the creation of a Research, Development, Test and Evaluation framework to advance trilateral cooperation. This deepening integration across borders will provide valuable lessons for managing and mitigating the challenges posed by offensive autonomous capabilities in the Indo-Pacific region.

Additionally, there is space to leverage these emerging capabilities and cooperation arrangements for the broader provision of public goods, as outlined in the NA4P. A diplomatic agenda that coexists alongside growing defensive AI capabilities will go far towards assuaging regional apprehensions about converging defensive technology alignments and begin the long march towards international norm making and guardrails on the appropriate uses and development of new and potentially destructive technologies.

In 1992, UN Secretary-General Boutros Boutros-Ghali triumphantly welcomed the end of “distrust and hostility” with the conclusion of the Cold War. Three decades later, António Guterres has acknowledged the return of both. This new agenda for peace comes at a time when a deepening schism in ideas about global governance has taken hold. The NA4P is in danger of reflecting today’s contested geopolitics and great power divisions, rather than proposing tangible ways to move beyond them.

Nevertheless, the call for urgent diplomatic intervention remains at the heart of the NA4P. Multilateral efforts to manage a world in transition, including the frameworks for peace the document proposes, must be the responsibility of states. This creates an opportunity for the diplomatic initiatives of states, and more particularly coalitions of states, to help shape the Summit of the Future when it comes to the regulation and governance of new and emerging technologies. The TSD, with Australia taking a leading role, is a grouping well positioned to begin leading in this area.



