
What’s the worst that could happen? Tackling existential risk

A pandemic is bad. But humanity is inviting greater dangers by toying with technologies that could end the species.

What might the future hold? Northern Iraq, 2016, when smoke filled the air from burning oil wells set alight by the Islamic State (Martyn Aim/Corbis via Getty Images)

What would happen if you decided to cross the road without checking the traffic? Odds are that you’d survive unscathed. But do it enough times and you’re likely to come a cropper.

As a species, humanity is now playing with technological innovations that pose a small but real risk of ending our existence. Thousands of nuclear weapons pointed at major cities. Biotechnology that could allow the creation of deadly pathogens. Computer technology that could create a machine that is smarter than us and doesn’t share our goals. And all the while, climate change could set off unstoppable feedback loops.

As a teenager, I joined Palm Sunday anti-nuclear rallies. As an adult, I’ve been a strong advocate of climate action. But when I entered parliament in 2010, existential risk wasn’t high on my radar. My priority was people’s quality of life, not the end of life itself.

I’ve come to believe that catastrophic risk is a vital issue. In my new book What’s the Worst That Could Happen? Existential Risk and Extreme Politics (MIT Press), I quote Oxford philosopher Toby Ord’s estimate that the chance of a species-ending event in the next century is one in six. In effect, humanity is playing Russian roulette once a century. If we keep it up for another millennium, there’s a five in six chance that humans never make it to the year 3000.

Like a person who keeps crossing the road without checking for traffic, humanity will eventually get hit.
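For readers who want to check the compounding, here is a minimal sketch of the arithmetic behind that five-in-six figure, assuming Ord’s one-in-six estimate applies independently to each of the next ten centuries:

```python
# Back-of-envelope check of the "Russian roulette" claim, assuming
# Toby Ord's one-in-six existential risk per century, applied
# independently over ten centuries (one millennium).
risk_per_century = 1 / 6
centuries = 10

survival = (1 - risk_per_century) ** centuries  # survive all ten "rounds"
print(f"Chance humanity reaches the year 3000: {survival:.0%}")  # ~16%
print(f"Chance it does not: {1 - survival:.0%}")  # ~84%, roughly five in six
```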

That’s tragic for those who perish and for those who would never get to experience life at all. We’ve got another billion years or so before the sun engulfs Earth. That’s enough time for another 30 million generations of humans. Not bad for a species that’s only been around for about 10,000 generations so far. Far from being the stuff of science fiction, ensuring the safety of the human project should be a vital responsibility for all of us today.
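Those generation counts are easy to verify. A quick sketch, assuming a roughly 30-year human generation (the 30-year figure is my assumption, not the article’s):

```python
# Rough check of the generation counts, assuming a ~30-year human
# generation (that length is an assumption, not from the article).
years_until_sun_engulfs_earth = 1_000_000_000  # ~1 billion years
years_of_homo_sapiens = 300_000  # rough age of our species
generation_years = 30

print(years_until_sun_engulfs_earth // generation_years)  # ~33 million generations ahead
print(years_of_homo_sapiens // generation_years)  # ~10,000 generations so far
```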

What are the biggest risks? Naturally occurring hazards aren’t trivial. They include supervolcanoes such as the one that formed Yellowstone National Park, asteroids of the kind that wiped out the dinosaurs 66 million years ago, and naturally occurring pandemics such as the Black Death. Such dangers are real and merit an appropriate response. In September, NASA’s Planetary Defense Coordination Office will carry out an experiment in which it intercepts a nearby asteroid (not one that threatens Earth) and attempts to knock it off course.

But the biggest risks are the ones our technologies have wrought. Unexpected climate change feedback loops – such as the melting of the Greenland and Antarctic ice sheets – could lead to long-term temperature rises of 6°C or more. Nuclear missiles kept on hair-trigger alert might lead to a miscalculation that ends in large-scale nuclear conflict. The misuse of genetic technologies could see terrorists produce a bug that spreads as quickly as measles but is far more deadly. And if computers become smarter than humans, we need to ensure that the first superintelligence doesn’t regard humanity the way most of us regard the world’s insects.

Underlying all of this is the rise of populism: the philosophy that politics is a conflict between the pure mass of people and a vile elite. Since 1990, the number of populist leaders holding office worldwide has quintupled. Most are right-wing populists, who demonise intellectuals, immigrants and the international order.


As Covid-19 demonstrated, populists’ angry approach to politics, scorn towards experts and disdain for institutions made the pandemic much worse. The same goes for other catastrophic risks. Donald Trump’s unilateral withdrawal from the Iran nuclear deal and the Paris climate accord heightened both of those dangers. Forging an international agreement on artificial intelligence safety will likely prove impossible if populists run the show.

What can we do about it? For each existential peril, there’s a handful of sensible solutions. To reduce the threat of bioterrorism, we should improve the security of DNA synthesis. To tackle climate change, we need to cut carbon emissions and help developing nations follow a low-emissions path. To lower the chance of atomic catastrophe, we should take missiles off hair-trigger alert and adopt a universal principle of no first use. And to improve the odds that a superintelligent computer will serve humanity’s goals, research teams should adopt programming principles that require advanced computers to be observant, humble and altruistic.

Beyond this, everyone who cares passionately about the future of humanity should view populism as a cross-cutting danger and consider how to stem its rise. This means sustaining well-paid jobs in communities hit by technological change. Ensuring that the education system is accessible to everyone, not just the fortunate few. And reforming democracy so that electoral outcomes represent the popular will. Instead of angry populism, the cardinal Stoic virtues – courage, prudence, justice and moderation – can guide a more principled politics and ultimately shape a better world.
 

Andrew Leigh is a member of the Australian parliament and the author of What’s the Worst That Could Happen? Existential Risk and Extreme Politics (MIT Press).

