
AI will shape our world – even our brains – but it can be regulated

There is no shortage of ideas. The challenge is bringing them all together for policymakers and the public to understand.

Australia has a strong national aspiration to be the most cyber secure country by 2030 (Getty Images Plus)

The potential of new and emerging technologies can sometimes feel overwhelming. The pace of technological development means society is constantly on the cusp of emerging issues, often before we’ve dealt with the current ones. Think of the way neurotechnologies can harvest and use brain data, just as society is starting to see the consequences of frequent and serious data breaches and to grapple with existing privacy issues from large-scale data collection. And that’s before we really have a handle on how to tackle online bullying, hate crimes or the generation of mis- and disinformation. Artificial intelligence (AI) takes all these challenges to the next level.

The goal must be to think about the challenges in new ways, to collaboratively identify solutions, and to harness the potential of technologies in ways that improve humanity.

The military use the term “information environment” to describe how people get information and understand their world. It provides a useful framework for understanding and addressing the challenges of AI along three prongs:

  • Content – the information itself.
  • Infrastructure – the platforms and systems of creation, distribution and use.
  • Cognitive resilience – engagement with information and the social context within which it’s embedded.

Thinking about content, infrastructure and human cognition helps us understand technological effects broadly and, more importantly, develop solutions.

Much is happening in different sectors, across tech companies, universities, not-for-profits and civil groups. But some of these efforts still need to be connected to support policymaking and collaboratively develop solutions that ensure the integrity of information, infrastructure and platforms, and support cognitive and human resilience.

There are plenty of pre-AI approaches to the problem of digital content provenance. Since 2021 Adobe and its partners – including Microsoft, Arm, Intel, Truepic and the BBC – have been working on a standard way to verify how a photo or video was captured and to document any subsequent edits. These efforts are organised through the Content Authenticity Initiative and the Coalition for Content Provenance and Authenticity (C2PA).
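To make the provenance idea concrete, here is a minimal sketch of the kind of record such a standard produces: a content hash and an edit history, bound together by a signature so that later tampering is detectable. It is a toy illustration only – the real C2PA specification uses certificate-based signatures and metadata embedded in the file itself, not a shared secret – and every name and key below is hypothetical.

```python
# Toy sketch of a provenance manifest: a content hash plus an edit history,
# bound together by a signature. Real C2PA manifests use X.509 certificates
# and metadata embedded in the file; the shared key here is a stand-in.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical; real systems use PKI, not shared secrets

def make_manifest(content: bytes, edits: list[str]) -> dict:
    """Record what the content is and what was done to it, then sign the record."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "edit_history": edits,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return record

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check that the content and claimed edit history match the signature."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return (hmac.compare_digest(manifest["signature"], expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

photo = b"...raw image bytes..."
manifest = make_manifest(photo, ["captured on device", "cropped"])
print(verify_manifest(photo, manifest))                # True
print(verify_manifest(photo + b"tampered", manifest))  # False: content changed
```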

Google DeepMind recently launched an experimental AI watermarking tool called SynthID. Watermarking AI-generated images and videos – or the inverse, giving authentic content verifiable provenance – is often seen as a potential solution. SynthID uses a neural network to embed a pattern invisible to the human eye; a second neural network can decode the pattern, potentially helping people tell when AI-generated content is being passed off as real, or helping protect copyright.
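The sketch below shows the general shape of the idea using a deliberately crude least-significant-bit watermark. It is not SynthID’s method – SynthID’s pattern is learned by a neural network and designed to survive cropping and compression, which this toy scheme would not – and the key and detection threshold are invented for illustration.

```python
# Crude least-significant-bit watermark: embed a key-derived bit pattern in
# the lowest bit of each pixel (invisible to the eye), then detect it by
# checking how many low bits match the pattern. Chance agreement is ~50%.
import numpy as np

KEY = 42  # hypothetical shared seed, standing in for a model's learned key

def pattern(shape) -> np.ndarray:
    """Derive a pseudo-random bit pattern from the key."""
    return np.random.default_rng(KEY).integers(0, 2, size=shape, dtype=np.uint8)

def embed(pixels: np.ndarray) -> np.ndarray:
    """Overwrite each pixel's least-significant bit; values change by at most 1."""
    return (pixels & 0xFE) | pattern(pixels.shape)

def detect(pixels: np.ndarray, threshold: float = 0.95) -> bool:
    """Flag the image if its low bits match the pattern far above chance."""
    match_rate = np.mean((pixels & 1) == pattern(pixels.shape))
    return float(match_rate) > threshold

image = np.random.default_rng(7).integers(0, 256, size=(64, 64), dtype=np.uint8)
print(detect(embed(image)))  # True: every low bit matches the pattern
print(detect(image))         # False: an unmarked image matches only ~half
```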


It’s great to see the susceptibility of the digital landscape to disinformation and interference on the national agenda, because global efforts to date have not worked. Notwithstanding the inevitable concerns around content moderation and censorship – inherent in any discussion of digital platforms – the proposed Combatting Misinformation and Disinformation Bill 2023 is an attempt to flexibly address some of the platforms’ process shortcomings, although these reforms are in fact quite modest.

Australia has a strong national aspiration to be the most cyber secure country by 2030. Ahead of the government’s forthcoming cybersecurity strategy, there are indications that some of the systems needed to achieve this aspiration are falling into place, although the detail and implementation will be critical. Measures include a plethora of regulatory efforts: the ACCC’s digital platform services inquiry, the eSafety Commissioner’s attempts to reduce online harms, work by the Australian Cyber Security Centre with industry, and work by the Cyber and Infrastructure Security Centre with critical infrastructure owners and operators.

More is needed. Mis- and disinformation relating to Australia’s referendum on the Voice proposal highlights the need for better regulation of social media platforms. Government and society need to better understand how AI has, and will continue to have, consequences for the information environment, and to ensure the right people and organisations – across sectors and borders – are brought together in search of solutions.

Global AI summits, blueprints and standard-setting forums are all much needed. But it will be crucial to see the involvement of leading thinkers in AI creation, use and regulation – as well as in big data, privacy, online harms and cybersecurity – from across industry, think tanks, government and academia, along with human rights researchers. Some argue that large-scale reforms are still needed, and there is no shortage of suggestions. But connecting complex policy change with a clear narrative that reflects the perspectives and expectations of the Australian people is an entirely different challenge.



