Don’t believe the hype: the many futures of AI

Is generative artificial intelligence the source of a utopian future, or will it become the author of its own destruction?

In recent months, experts have warned of AI’s potentially existential consequences if left unchecked (Getty Images)
Published 5 Jul 2023 

The portrayal of artificial intelligence (AI) in popular culture tends to gravitate towards extremes: either humanity benefits from the riches and efficiencies of ubiquitous automation, or the machines rise up and kill us all. These visions of the future are more than just the hypothetical playthings of science fiction; they are powerful social and political forces that can drive real-world change.

In recent months, experts have warned of AI’s potentially existential consequences if left unchecked, the European Union has pioneered the “AI Act” to regulate the technology, and a chorus of tens of thousands – including Silicon Valley trailblazers – is petitioning for a six-month pause on AI model training. Equally, many technologists see humans as roadblocks to AI progress, with some convinced that producing a “sentient” AI can only benefit humanity. These dystopian and utopian visions are united by a sense of inevitability that crowds out the myriad other potential AI futures and distracts from some of the technology’s more immediate problems.

Futures studies – or futurology – cautions against presupposing inevitability. The discipline seeks to dispel the assumptions and biases preloaded into our visions of the future and to open up consideration of all possible futures. In so doing, it cuts through the noise and hyperbole and focuses on what matters.

AI is one such object of public debate that would benefit from applied futurology. The recent releases of generative AI products such as ChatGPT, DALL-E and their competitors have raised questions about what consequences will follow from humanity’s first real interface with AI at scale. Among the consequences already identified from early interactions with the technology are amplified plagiarism, mis- and disinformation, and offensive, biased and discriminatory outputs.

But more philosophical questions have since followed: will AI take our jobs? Is it sentient? Will it replace us? Futures studies would say these outcomes are all possible, but would first ask what assumptions are baked into the questions. In this case, the chief assumption appears to be a poor collective understanding of AI, underpinned by the natural human proclivity to reach for extremes.

Samuel Altman, CEO of OpenAI, testifies at an oversight hearing of the US Senate Judiciary Subcommittee on Privacy, Technology, and the Law, examining rules for artificial intelligence, 16 May 2023 (Win McNamee/Getty Images)

Fundamentally, AI technologies such as ChatGPT and DALL-E are only as good as the data, models and computing power – or “compute” – that support them. Scaling these three elements in tandem has allowed AI development to advance rapidly over recent decades and has become the industry’s standard approach.

Yet there are signs this method has exhausted its utility. Samuel Altman, the CEO of the company behind ChatGPT and DALL-E, has suggested that the scaling of data, models and compute to produce generative AI is seeing diminishing returns. Marginal improvements to model precision are not justifying the enormous costs required – financial and environmental – to maintain the systems that support generative AI. Future innovations may have to come from elsewhere.

What’s more, when generative AI models are trained on the data they themselves produce, as opposed to “real” data, they can suffer what researchers call “model collapse”. Synthetic data can mislead a model’s learning process and degrade its ability to produce accurate outputs. In effect, generative AI could become the author of its own destruction.
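A toy sketch can make the mechanism concrete. The snippet below is an illustrative simulation only – the vocabulary, probabilities and sample sizes are invented, and real systems are vastly more complex. It repeatedly fits a simple word-frequency “model” to samples drawn from the previous generation’s model:

import random
from collections import Counter

random.seed(1)

# A toy "model": a word-frequency distribution. The vocabulary and
# probabilities here are invented purely for illustration.
real_data = {"common": 0.60, "frequent": 0.30, "rare": 0.09, "very_rare": 0.01}

def sample(dist, n):
    # Draw n words from a distribution (a model "generating text").
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights, k=n)

def fit(samples):
    # "Train" a new model by estimating word frequencies from samples.
    counts = Counter(samples)
    return {word: c / len(samples) for word, c in counts.items()}

model = fit(sample(real_data, 100))   # generation 1: trained on "real" data
for generation in range(2, 11):       # generations 2+: synthetic data only
    model = fit(sample(model, 100))
    print(f"generation {generation}: {sorted(model, key=model.get, reverse=True)}")

Once a rare word fails to appear in one generation’s synthetic sample, its estimated probability falls to zero and it can never return. This progressive loss of the distribution’s tails is one mechanism researchers point to when warning about models trained on data increasingly polluted by earlier models’ outputs.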

What becomes of the future of AI if either possibility comes to pass? Both appear to complicate the notion that AI is on an ever-advancing path towards universal application – for good or for ill.

From here, multiple futures open up. A bottleneck in AI development could buy time to regulate the technology effectively; an outright collapse, while resolving some of the issues identified above, might reduce our capacity to harness AI for good.

This is not to suggest that nothing should be done now to regulate the technology or offset its negative effects. On the contrary: the issues of plagiarism, mis- and disinformation and discriminatory outputs are only growing in significance and require interventions to mitigate the consequences they bring.

But it is to say that much of the public AI discourse is mired in an outdated, fanciful understanding of what the technology is and what it is capable of. The resulting visions of the future can lead us astray or set us chasing ghosts, while leaving major issues, such as those identified above, overlooked.

It is impossible to know where AI will take us. Maybe it will eventually take command of the world’s computers and render us economically redundant, or even extinct. Hopefully, it will be used to benefit all of humanity. Or perhaps it will stall, or collapse entirely. For now, we can be fairly certain it will continue to exacerbate the social harms we have already witnessed unless swift and decisive action is taken.

We should therefore be cautious when pondering longer-term AI futures, and devote our collective resources to solving the issues the technology is causing in the present rather than to shielding ourselves from distant bogeymen.

All views expressed are those of the author.
