

Regulating artificial intelligence: How the EU just got us closer

Global tech giants are increasingly speaking out about the potential harms presented by unregulated advances in AI (Getty Images)
Published 18 Jul 2023

Last month, a draft European Union Artificial Intelligence Act was agreed in the European Parliament. Arguably, the draft Act represents the most comprehensive and radical step towards regulating AI in the world. The final Act is expected to be passed later this year, with far-reaching consequences for international regulatory standards.

Altman warned a US Senate Committee that “if this technology goes wrong, it can go quite wrong...we want to be vocal about that”.

The Act comes at a time when key technological leaders in the development of AI are increasingly voicing their concerns about the dangers of AI for humanity. Indeed, the dangers are many. After resigning from Google earlier this year, the “Godfather of AI”, Geoffrey Hinton, warned of the serious harms posed by the technology, arguing: “I don’t think they should scale [AI] up more until they have understood whether they can control it”. OpenAI, the creator of ChatGPT, now markets its generative AI as “creating safe [Artificial General Intelligence] that benefits all of humanity”. Nevertheless, the company’s CEO, Sam Altman, recently called for greater government regulation of AI. In May, Altman warned a US Senate Committee that “if this technology goes wrong, it can go quite wrong...we want to be vocal about that”.

Certainly, there are enormous potential and realised benefits in using AI, especially in fields such as healthcare. Yet mathematicians, computer scientists and global tech giants who had previously been focused on those benefits – including Terence Tao, winner of the 2006 Fields Medal – are increasingly breaking the “taboo” and speaking out about their concerns over the technology’s harmful effects on humanity.

As Australia and other states consider closing the regulatory gap on AI and attempt to create the preconditions for safe, accountable and responsible AI, what can they learn from the draft EU AI Act?

Sam Altman, chief executive officer and co-founder of OpenAI, speaks during a Senate Judiciary Subcommittee hearing in Washington, DC, on 16 May 2023 (Eric Lee/Bloomberg via Getty Images)

Despite boasting the third-largest economy in the world, the European Union has placed human rights ahead of technological advantage. Among other measures, the draft EU AI Act aims to introduce three risk categories. First, where AI models are viewed as posing an unacceptable risk, they are banned. An example is the type of AI-enabled social-credit or social-scoring regime introduced by the government of China. The EU AI Act seeks to prohibit authorities across the EU from introducing such social-scoring systems, as well as “real time” biometric identification systems in public spaces for the purposes of law enforcement.

The draft Act would mean AI systems considered high risk applications would need to adhere to particular legal obligations.

Australia and others could consider adopting a similar approach. This would not only ensure AI is aligned with Australia’s democratic and social values, but also establish that AI systems that perform social crediting are prohibited under Australian laws. It could also deter the malign use of AI, such as that described by legal academic Simon Chesterman as the “weaponisation of AI – understood narrowly as the development of lethal autonomous weapon systems lacking ‘meaningful human control’ and more broadly as the development of AI systems posing a real risk of being uncontrollable or uncontainable”.  

Second, the draft Act would mean AI systems considered high-risk applications would need to adhere to particular legal obligations. This requires that an AI model be developed so that its decisions can be explained, that it is transparent, fair and unbiased, and that it meets accountability standards in how it is trained and operationalised.

Third, models considered to pose no or only minimal risk, which is the vast majority of AI systems, would remain unrestricted. This category would include AI models used in computer games, for example.
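To make the tiered logic concrete, the categorisation might be sketched as follows – a purely illustrative example, with tier names and consequences paraphrasing the summary above rather than the legal text of the draft Act:

```python
# Illustrative sketch only: a simplified encoding of the three risk tiers
# described above. Names and examples paraphrase this article's summary,
# not the legal text of the draft EU AI Act.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright (e.g. social-scoring systems)"
    HIGH = "permitted, subject to obligations on explainability, fairness and accountability"
    MINIMAL = "no additional restrictions (e.g. AI used in computer games)"

def regulatory_consequence(tier: RiskTier) -> str:
    """Return the regulatory consequence attached to a given risk tier."""
    return tier.value

for tier in RiskTier:
    print(f"{tier.name}: {regulatory_consequence(tier)}")
```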

Rather than limit innovation, this process of categorising the risk of AI systems through a legal standard would provide businesses and consumers with clear rules around what is considered trusted, accountable and responsible AI – particularly in a world that is becoming less trustful of the technology. A recent survey by the University of Queensland, entitled Trust in Artificial Intelligence: A Global Study 2023, found that 61 per cent of respondents were wary of AI, and 71 per cent believed AI regulation was required.

By no means is the draft EU AI Act a silver bullet. Yet it is a vital and significant first step to preventing the worst effects of AI on humanity and starts a necessary debate on how we would like to control its future.


Existential risk is a whole-of-society challenge

The 2023 Doomsday Clock is displayed before a live-streamed event with members of the Bulletin of the Atomic Scientists on 24 January 2023, Washington, DC (Anna Moneymaker/Getty Images)
Published 11 Jul 2023

Existential risks have been in the news recently. In May, a group of prominent artificial intelligence experts and industry leaders warned that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” This followed an open letter from experts and industry leaders in March that called for a moratorium on the AI arms race so safety and governance systems can be developed to reduce the “profound risks to society and humanity”.

Nuclear risk has also come into focus following Russian President Vladimir Putin’s disastrous Ukraine invasion, with the Kremlin and its mouthpieces issuing several veiled (and not-so-veiled) nuclear threats and suspending participation in a key nuclear arms reduction treaty with the United States. Former Russian President Dmitry Medvedev said in March that Western military aid to Ukraine would “bring the nuclear apocalypse closer”. In January, the Bulletin of the Atomic Scientists moved the hands of the Doomsday Clock to 90 seconds to midnight – “the closest to global catastrophe it has ever been” – largely in response to these developments.

New research suggests a fifth of global ecosystems could collapse before 2100.

Climate change was also a factor in that pronouncement. New research suggests a fifth of global ecosystems could collapse before 2100, and a recent Intergovernmental Panel on Climate Change report warns of “a rapidly closing window of opportunity” for action, with the impact of decisions made this decade reverberating “for thousands of years”. While there is uncertainty over whether climate change risk is existential (as opposed to “merely” catastrophic), it’s widely accepted that its effects can be drivers for risks that are.

And Covid-19 served as a stark reminder of the chaos that can be wrought by pandemics in a globalised world. When the next one emerges (by accident or otherwise), and if its transmissibility and mortality rates are sufficiently high (a particular risk with “gain of function” and biological weapons research), it could represent an existential threat.

Farmer tractor (Christoph Schmidt/picture alliance via Getty Images)

The above are all examples of what the late Kofi Annan famously termed “problems without passports”, insofar as their resolution is contingent upon collective action. A 2021 UN report from the Secretary-General describes the need to act on existential risk as an “ethical imperative” and states that the very future of humanity depends “on solidarity and working together”. That such a clarion call for multilateralism was required reflects the grim reality that, at a time when it has perhaps never been more needed, there is a deficit in international cooperation.

In relation to contemporary challenges, let alone potential future risks, it’s not uncommon to hear variations of what essentially amounts to fatalistic acceptance that nothing much can be done; that this is just “how it is” in an anarchic international system where the pendulum has swung away from cooperation and towards competition. But if such logic prevails as conventional wisdom over the longer term – if the can is continually kicked down the road – then the unavoidable result will be that existential risk compounds to unsustainable levels. Philosopher Toby Ord estimates the probability that humanity will become extinct, or its potential irrevocably destroyed, at one in six this century.

A whole-of-society approach to existential risk mitigation need not be expensive or laborious.

Fortunately, there are proactive steps that individual governments can take to lead by example and prepare the groundwork for when the climate for international cooperation is more amenable. Models that have been proposed include establishing a parliamentary commissioner supported by multidisciplinary teams (New Zealand); implementing a “Three Lines of Defence” strategy through the creation of an Office of Risk Management and National Extreme Risks Institute (United Kingdom); and setting up an interagency taskforce as “the first critical step towards a national preparedness plan” (United States). A recent report from the Australian National University recommended that Australia institutionalise a “more holistic approach” to risk by establishing a national risk assessment that brings together existing “effective but siloed risk management”. All of these emphasise the importance of a coordinated approach across agencies and departments.

A whole-of-society approach to existential risk mitigation need not be expensive or laborious. Governments can lead the way through official statements and documents that draw attention to the challenge (which has recently occurred, for example, in the United States and United Kingdom). Rather than being seen as a solution in and of itself, a whole-of-society approach should be understood more as a guiding framework. The ability of governments to exercise influence lies not only in departmental directives, but in a galvanising narrative that helps bring academia, business, civil society and other sub-national stakeholders on board, establishing a common baseline for action. A unified front at the national level in turn gives coherence to international efforts.

Small outlays on research could likewise have outsized effects in a field that is profoundly underfunded. The Biological Weapons Convention, for example, “has just four employees, and a smaller budget than an average McDonald’s [US$1.4m]”. Supporting organisations and institutions that work on existential risk is positive sum, with efforts that reduce risk in one area also contributing to a more sustainable risk profile overall.

These might seem small-fry steps when considering the potential stakes, but a coordinated, incremental approach to existential risk is better than a scattergun approach (or none at all). The scale and intergenerational nature of the challenge mean that’s probably the only realistic way forward. It can seem overwhelming – even an indulgence – to look more than a few decades ahead given the challenges we face in the present. But the sooner that work to address existential risk begins, however modest, the less likely the worst outcomes are to materialise.


The Quad: Tackling the spider, not cobwebs, in cyberspace

The systemic consequences of software insecurity have devastated state economies and healthcare systems (Getty Images)
Published 10 Jul 2023

The Quad – the strategic security dialogue between Australia, India, Japan and the United States – is accustomed to dealing with threats to international security, including those emanating from the cyber domain. Indeed, in March 2021, the Quad Leaders made a commitment to “advance security and prosperity” and “address shared challenges, including in cyber space”.

The four governments appreciate the vital nature of cooperation on matters of cyber resilience – a point brought into sharp relief by the Director of the US Cybersecurity and Infrastructure Security Agency who remarked, “Everything is connected. Everything is interdependent. So, everything is vulnerable.”

Governments have hitherto focused on the cobwebs, rather than the spider, in failing to target a major cause of our collective vulnerability as societies and economies: software insecurity.

In this context, the Quad is working to bolster software security under the Quad Cybersecurity Partnership (QCP). In May 2022, the four governments agreed to use their “collective purchasing power to improve the broader software development ecosystem”. In May 2023, they specifically committed to acquiring software that meets certain “high-level secure software development practices”.

In a working paper and policy brief for the German Institute for Global and Area Studies, I argue that these commitments form the missing link in the cyber diplomacy of the Quad governments. These governments have hitherto focused on the cobwebs, rather than the spider, in failing to target a major cause of our collective vulnerability as societies and economies: software insecurity. This is curious as much research from across stakeholder groups indicates that global software security is far from robust and cyber risk is increasingly being transmitted through software supply chains.

The systemic consequences of software insecurity were evident in the NotPetya and WannaCry attacks of 2017, which devastated the Ukrainian state and economy, and the British healthcare system, respectively. Both attacks were enabled by the exploitation of vulnerabilities in the Windows operating system.

And yet, software vendors such as Microsoft continue to market bug-laden code with seeming impunity, requiring end-users to clean up the mess. This is of critical importance when we consider how software is woven into the very fabric of our societies and economies.

Jeanette Manfra, chief cybersecurity official for the Department of Homeland Security, speaks about the WannaCry virus as the United States announces that it believes North Korea was behind the cyberattack, White House, Washington, DC, 19 December 2017 (Saul Loeb/AFP via Getty Images)

It is well past time for governments to intervene and incentivise vendors to invest in the security of their software development life cycles (SDLCs). Broadly speaking, an SDLC is the process by which software is designed, created and maintained.

The Quad is betting on economic incentives – the opportunity for vendors to sell to the four governments. And by committing to amend their procurement regulations, the four governments are seeking to drive change in vendor behaviour faster than would be achieved through legislation, which may take a long time to draft, consult on and enact.

The significance of the Quad’s commitments is that the benefits will flow through to all societies and economies, not just those in the group. This is particularly enabled by two facts: American software vendors looking to sell to Quad governments dominate the global market, and the US government alone proposes spending of US$74 billion on IT for federal civilian agencies in the 2024 fiscal year. In aiming to qualify for US government procurement, software vendors will need to improve the security of their SDLCs, thereby improving the security of all their products. This benefits each of their users, not just their US government customers. The benefits are especially significant when the users are operators of critical infrastructure assets – systems vital to national security.

The Quad’s efforts are carved out of the criticality of software security to our lives and livelihoods.

Implementation of the Quad’s software security commitments will enable and safeguard economic development around the world, given that secure digital technologies are critical to achieving the UN Sustainable Development Goals, in particular SDG 9, which centres on industry, innovation and infrastructure.

The Quad’s policy commitments also uphold the norms for responsible state conduct in cyberspace approved by the UN General Assembly in 2015, namely to “ensure the integrity of the [technology] supply chain”, including by requiring “vendors to incorporate safety and security in the design, development and throughout the lifecycle of ICT products”.

These policy commitments have a decent chance of being implemented because they drive the Quad’s internal credibility, and they build on its agenda to tackle cyber-enabled threats to security, including the central pillars of the QCP. They leverage the political will of each country to uplift software security. And the Quad Senior Cyber Group, which developed the software security principles underlying the commitments, enables the close working relationship required for the four governments to implement them.

We should also recognise that these software security principles are a template for any government to use in its procurement regulations, or indeed for any vendor looking for guidance on how to harden its SDLC. Indeed, the Quad has invited countries to “adopt these principles in pursuit of this shared vision for secure software”. In this fashion, the Quad’s policy commitments will position the grouping as a positive force for encouraging the cyber resilience of societies and economies while tackling the threats to national and international security that cyberspace enables.

The Quad’s efforts are carved out of the criticality of software security to our lives and livelihoods.

And so, this vital work carves the Quad in code.


Don’t believe the hype: the many futures of AI

In recent months, experts have warned of AI’s potentially existential consequences if left unchecked (Getty Images)
Published 5 Jul 2023

The portrayal of artificial intelligence (AI) in popular culture tends to gravitate towards extremes: either humanity benefits from the riches and efficiencies of ubiquitous automation, or the machines rise up and kill us all. These visions of the future are more than just the hypothetical playthings of science fiction; they are powerful social and political forces that can drive real-world change.

In recent months, experts have warned of AI’s potentially existential consequences if left unchecked, the European Union has pioneered the “AI Act” to regulate the technology, and a chorus of tens of thousands – including Silicon Valley trailblazers – is petitioning for a six-month pause on the training of the most powerful AI models. Equally, many technologists see humans as roadblocks to AI progress, with some convinced that producing a “sentient” AI can only benefit humanity. These dystopian and utopian visions are united by a sense of inevitability, which can ultimately overlook the myriad other potential AI futures, and can distract from some of the more immediate issues associated with the technology.

The recent releases of generative AI products such as ChatGPT, DALL-E and competitors have raised questions about what consequences will follow from humanity’s first real interface with AI at scale.

Futures studies – or futurology – would caution against presupposing inevitability. This discipline seeks to dispel the assumptions and biases preloaded into our visions of the future and open up consideration for all possible futures. By so doing, it cuts through the noise and hyperbole and focuses on what matters.

AI is one such object of public debate that would benefit from applied futurology. The recent releases of generative AI products such as ChatGPT, DALL-E and competitors have raised questions about what consequences will follow from humanity’s first real interface with AI at scale. Some of those identified from early interactions with the technology are amplified plagiarism, mis- and disinformation, and offensive, biased and discriminatory outputs.

But more philosophical questions have since followed: will AI take our jobs? Is it sentient? Will it replace us? Futures studies would say these are all possible but would first seek to understand what assumptions are baked in. In this case, it appears most likely to be a poor collective understanding of AI, underpinned by the natural human proclivity to reach for extremes.

Samuel Altman, CEO of OpenAI, testifies before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law in Washington, DC, on 16 May 2023 (Win McNamee/Getty Images)

Fundamentally, AI technologies such as ChatGPT and DALL-E are only as good as the data, models and computing power – or “compute” – infrastructure supporting them. The focus on the symbiosis of these three elements has allowed AI development to scale rapidly over the last few decades, and has become the industry standard for development.

Yet there are signs this method has exhausted its utility. Samuel Altman, the CEO of the company behind ChatGPT and DALL-E, has suggested that the scaling of data, models and compute to produce generative AI is seeing diminishing returns. Marginal improvements to model precision are not justifying the enormous costs required – financial and environmental – to maintain the systems that support generative AI. Future innovations may have to come from elsewhere.

What’s more, when generative AI models are fed the data they themselves produce, as opposed to “real” data, they can suffer what researchers have termed “model collapse”. Synthetic data can mislead a model’s learning process and degrade its ability to produce accurate outputs. In effect, generative AI could become the author of its own destruction.
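As a purely illustrative toy example (not drawn from any published experiment), the dynamic can be sketched with a simple distribution that is repeatedly re-estimated from its own synthetic samples; over generations, estimation error accumulates instead of washing out, and the learned spread tends to drift and contract:

```python
# Toy sketch of the feedback loop described above: a simple Gaussian "model"
# is repeatedly refitted to synthetic samples drawn from its previous fit.
# Sample sizes and generation counts are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(seed=0)

# Generation 0: "real" data with mean 0 and standard deviation 1.
data = rng.normal(loc=0.0, scale=1.0, size=200)

for generation in range(10):
    mu, sigma = data.mean(), data.std()          # "train" on current data
    print(f"generation {generation}: mean={mu:+.3f}, std={sigma:.3f}")
    # The next generation sees only synthetic samples from the fitted model,
    # so each generation inherits the previous one's estimation errors.
    data = rng.normal(loc=mu, scale=sigma, size=200)
```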

The issues of plagiarism, mis- and disinformation and discriminatory outputs are only growing in significance and require interventions to mitigate the consequences they bring.

What becomes of the future of AI if either possibility comes to pass? Both appear to complicate the notion that AI is on an ever-advancing path towards universal application – for good or for ill.

From here, multiple futures open up. We could see a bottleneck in AI development, which could give us time to effectively regulate it, or we could see an outright collapse, which, though solving some of the issues identified above, might reduce our capacity to harness AI for good.

This is not to suggest that nothing should be done now to regulate the technology or offset its negative effects. On the contrary: the issues of plagiarism, mis- and disinformation and discriminatory outputs are only growing in significance and require interventions to mitigate the consequences they bring.

But it is to say that much of the public AI discourse is mired in an outdated, fanciful understanding of what the technology is and is capable of. The visions of the future that result can potentially lead us astray or have us chasing ghosts, and leave major issues, such as those identified above, overlooked.

It is impossible to know where AI will take us in the future. Maybe it will eventually take command of the world’s computers, render us economically redundant, or drive us to extinction. Hopefully, it will be used to benefit all of humanity. Or perhaps it will stall or collapse entirely. For now, we can be fairly certain it will continue to exacerbate the social harms we have already witnessed unless swift and decisive action is taken.

We should therefore caution ourselves when pondering longer-term AI futures, and devote our collective resources to solving the issues it is causing in the present rather than shielding ourselves from distant bogeymen.



How can we regulate AI? Let’s just ask it

There is a fast rising public conversation about the future of generative AI and its impacts on the world (Alexi Rosenfeld/Getty Images)
Published 30 May 2023

In the span of just a few months, ChatGPT, an advanced conversational artificial intelligence language model, has gained over 200 million users worldwide. The tool has catapulted not only its user base but also the public conversation about the future of generative AI and its impacts on the world. Its release has been heralded as a “Promethean moment” – a technology so consequential that it will alter the trajectory of humanity.

Given its rapid development and far-reaching impacts, many, including its creators and developers, have called for urgent regulation of artificial intelligence. Sam Altman, the CEO of OpenAI, which developed ChatGPT, has acknowledged the possibility of existential risk, fronted the US Congress and practically pleaded for legislative oversight. Microsoft President Brad Smith has proposed that the US government should create a new agency to license artificial intelligence systems. Some have gone further. A letter signed by over a thousand tech leaders, including many AI pioneers, called for a moratorium on the development of advanced AI systems, warning that they pose “profound risks to society and humanity”.

ChatGPT, and similar generative AI, can generate human-like responses based on text prompts input by the user. It can “chat” because it has learned language patterns and information from existing data, which it then mimics to produce responses or new content.
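For readers curious how such a system is queried in practice, a minimal sketch using the OpenAI Python client is shown below; the model name and prompt are illustrative placeholders rather than the exact request described in this piece, and the call assumes an API key is configured in the environment.

```python
# Minimal sketch: sending a text prompt to a generative AI model via the
# OpenAI Python client. Assumes the `openai` package is installed and the
# OPENAI_API_KEY environment variable is set; the model name and prompt
# are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "Offer policy recommendations to governments on "
                       "how best to regulate generative AI.",
        },
    ],
)

# The reply is text generated from patterns learned during training.
print(response.choices[0].message.content)
```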

Given its capabilities and the urgent need for regulation, I thought maybe the best use of ChatGPT would be to help us articulate how governments can regulate it and similar systems. So, I asked it, “Can you offer policy recommendations to governments on how it can best regulate generative AI to help protect humanity and preserve democracy?”

Here is what it had to say – no doubt shaped to articulate the views of its creators.

ChatGPT

Certainly! Here are some policy recommendations for governments to consider when regulating generative AI to protect humans and preserve democracy:

  1. Transparency and Disclosure: Governments should require clear disclosure when users are interacting with generative AI systems. Users should be informed that they are not engaging with a human, maintaining transparency and preventing the spread of misinformation or manipulation.
  2. Regulation of Malicious Use: Governments should enact regulations to prevent the malicious use of generative AI systems, such as deepfakes or AI-generated propaganda. Guidelines can be established to define the boundaries and acceptable uses of generative AI in areas such as politics, journalism, and public discourse.
  3. User Privacy and Data Protection: Governments should implement comprehensive data protection regulations to safeguard user privacy and prevent unauthorized access or misuse of personal data. Generative AI systems should adhere to strict privacy standards, and user consent should be obtained for data collection and usage.
  4. Accountability and Liability: Governments should establish frameworks to hold developers and deployers of generative AI systems accountable for their actions. Clear guidelines should be set for identifying responsibility and liability in cases of misuse or harm caused by AI-generated content.
  5. Auditing and Validation: Governments can require regular audits and validation processes for generative AI systems. Independent assessments can evaluate the fairness, biases, and ethical considerations of AI models to ensure they align with democratic values and human rights.
  6. Addressing Bias and Fairness: Governments should encourage AI developers to address biases in training data and algorithms. Guidelines and standards should be developed to ensure fairness, transparency, and accountability in the development and deployment of generative AI systems.
  7. Digital Literacy and Awareness: Governments should invest in educational programs to enhance digital literacy and raise awareness about the capabilities and limitations of generative AI systems. This empowers individuals to critically evaluate AI-generated content and safeguards against manipulation.
  8. International Collaboration: Given the global nature of generative AI, governments should collaborate internationally to establish common standards and regulations. Cooperation can help prevent regulatory arbitrage and address cross-border challenges related to generative AI.
  9. Ethical Guidelines and Human Rights: Governments should encourage the development and adoption of ethical guidelines specifically tailored to generative AI. These guidelines should align with democratic principles, protect human rights, and prohibit the use of AI systems for malicious purposes.
  10. Ongoing Evaluation and Adaptation: Governments should establish mechanisms for continuous evaluation, monitoring, and adaptation of regulations. The rapidly evolving nature of generative AI requires dynamic policies that can keep pace with technological advancements and emerging risks.

It is crucial for governments to strike a balance between regulation and innovation, ensuring the responsible development and deployment of generative AI while preserving democratic values and protecting human rights. Collaborative approaches involving AI developers, civil society organisations, academia, and international bodies can help shape effective and inclusive policies.

This sounds like a pretty good start and a convincing articulation of what we need to do to get AI under control. It’s as clear and convincing as any policy memo I’ve ever written. Maybe, dare I admit, more so?

The content these systems generate has no grounding in reality; they are unable to assess whether the content they create is true or false let alone display reasoning or discernment.

But these policy recommendations are coming from a tool that, when I asked it to generate a bio note about me, a relatively simple task and already available on the internet, came up with a garbled string of inaccurate information, nevertheless written in a convincing human tone. It said I was born in July (not true), in Melbourne, Australia (not true), that I had degrees from UC Berkeley and Harvard University (also not true), that I began my career at CSIS (also no), that I grew up in a family that “encouraged open dialogue” (not exactly), and that I had “exceptional capabilities” and made “extensive contributions” (well, a broken clock is right at least twice a day). Every time I asked it to regenerate a response, it came up with a less accurate biography using even more flattering language.

This made me consider my initial request in a new light. Systems such as ChatGPT are generative, meaning that they continue to “learn” and refine their output from whatever data is fed into them. But the content these systems generate has no grounding in reality; they are unable to assess whether the content they create is true or false, let alone display reasoning or discernment. This will no doubt improve with time, and probably rapidly so, but AI developers haven’t managed to overcome what’s been called the hallucination problem – and it’s not clear if and how they will.

If the AI is hallucinating, then maybe those creators calling for regulation while already unleashing this enormously consequential technology are too.