Published daily by the Lowy Institute

Artificial Intelligence: The regulatory race to ensure a democratic future

Authoritarian governments have their own designs for AI in this new arena for strategic rivalry.

The "Bombe" at Bletchley Park, a device used to help decipher German Enigma signals during the Second World War (mendhak/Flickr)
Published 2 Nov 2023

Bletchley Park, United Kingdom, is the site of some of the last century’s most consequential technological innovations. It was there that mathematician and computer scientist Alan Turing developed the decoding machines needed to decipher encrypted Nazi messages during the Second World War. The work at Bletchley Park changed the course of the conflict, contributed significantly to the Allied powers’ ultimate victory, and secured the future of democracy for another generation.

It’s fitting, then, that Bletchley Park is the forum for an upcoming global summit aimed at harnessing another, arguably more consequential technological innovation, also pioneered by Turing: artificial intelligence.

This week, the UK government is hosting the AI Safety Summit where it will bring together world leaders, tech titans, scientific experts and civil society representatives to focus on how to mitigate the risks of “Frontier AI”, defined as “highly capable general purpose AI models that can perform a wide variety of tasks and match or exceed capabilities present in today’s most advanced models”. In other words, artificial intelligence models that can exceed the intelligence, power and control of humans.

This is not an idle concern. In May, thousands of tech leaders, normally champions of innovation over caution, signed an open letter calling for a pause on the development of advanced AI, warning that it could pose an existential threat to humanity.

The summit this week marks another development in the broader effort to foster international cooperation to harness and regulate this potentially runaway technology. The United Kingdom, United States and European Union have all put forward their own frameworks for AI regulation and governance. Australia is also finalising its own regulatory framework. Numerous other countries have adopted some form of AI regulatory policy in the past few years, including Singapore’s Model AI Governance Framework, Japan’s Social Principles of Human-Centric AI, and Canada’s directive on the use of AI in government, among many others.

As the power and safety risks emanating from AI increase, so too does the temptation for more mass monitoring and surveillance.

Regulatory approaches differ among jurisdictions, from the risk-based comprehensive legislative framework of the EU AI Act to the recent Executive Order announced by US President Joe Biden, which places reporting requirements on companies developing AI technologies and invokes the Defense Production Act, allowing the government to compel these businesses to act in the interest of national security.

Reconciling and coordinating regulatory approaches among like-minded countries is needed to ensure safety and mitigate the risks, some of them potentially existential, posed by AI. Coordinated regulation is also important to supporting innovation and to harnessing AI’s potential to help address global challenges.

But international cooperation among like-minded countries is equally important to ensuring that AI’s future is a democratic one. These disparate regulatory efforts among like-minded democracies and open societies are unfolding alongside coordination through other international bodies and parallel efforts by techno-authoritarian governments such as that led by the Chinese Communist Party. China, also at the forefront of AI progress, has developed its own frameworks and principles on AI and is seeking to exert its influence over the development of international AI regulations and standards.

This means that not only will there be strategic competition among nations to develop and deploy AI technologies to advance their economic and national security interests, but there remains contestation around the values and principles that govern AI’s development and usage.

And as the power and safety risks emanating from AI increase, so too does the temptation for more mass monitoring and surveillance, and for limitations on AI’s access and use, even within democratic nations. This risks eroding principles central to democracy such as privacy, due process, and human rights.

AI can also threaten democracies in other ways, potentially reducing accountability and trust and increasing inequality. AI technologies must therefore be designed and regulated to be transparent and explainable, qualities that are likewise essential to decisions and actions taken in democratic societies.

For democracy to survive into the next century, regulating the safety of AI technologies is paramount but so too is regulating them based on democratic principles. Democratic principles must not only be integrated into the development and deployment of AI technologies, but these efforts must be internationally coordinated and agreed upon.

Just as human rights principles underpin the institutions and norms that make up the international rules-based order constructed by the Allied powers after the Second World War, so too must they underpin current and future AI development and regulation.

