Christchurch was the tipping point. Clouds have been gathering for years over the big social media giants; but history might look back and see the March 2019 murder of dozens of innocents in a Christchurch mosque, live streamed to thousands on social media, as the moment momentum and support tilted away from Facebook.
New Zealand Prime Minister Jacinda Ardern is attempting to build a multilateral coalition to pressure the big social media companies to act on violent content at a meeting of digital ministers from G7 countries in Paris this Wednesday.
Her "Christchurch Call" pledge, drafted with French President Emmanuel Macron, asks the social platforms to examine their software that directs people to violent content, according to The New York Times. Representatives from Facebook, Google, Twitter and Microsoft were expected to attend.
The newspaper said on Sunday that Britain, Canada, Jordan, Senegal, Indonesia, Norway and Ireland were expected to sign the non-binding pledge; the US was not. By Tuesday, Australia was reportedly preparing to join.
Imagine having to build a global coalition to beg major media companies not to broadcast a massacre. It’s a sign of how completely social media has remade our world this past decade; of the power of the tech giants, and the unforeseen consequences of the tech boom’s disruptive logic.
Social media has expanded democracy by giving more people a public voice. But in most other respects, techno-utopianism has run headlong into reality. There is plenty of evidence the technology premised on connecting people has provided a perfect platform for political polarisation and division; viral misinformation has engendered deadly mob violence; platforms born of a fundamental commitment to free speech have become powerful tools for authoritarian states to undermine democracies.
Some people argue there’s no point blaming the technology. Algorithms don’t divide people, people do!
Those arguments miss the point: as in the gun control debate, the damage is being multiplied exponentially by the technology. It’s easy to forget there is nothing neutral about the US corporate platforms that now stand in for the virtual town square in so many countries. They were born from a particularly hubristic cultural context: the male-dominated, techno-libertarian and faux-globalist ethos of Silicon Valley. Social media is operated for profit, and it exploits insights from behavioural psychology to keep us hooked. The companies have an existential interest in maximising engagement, and have learned – or their algorithms have – that outrage and provocative or extremist views do that more efficiently than nuance, kindness or moderate debate.
Research suggests Google’s YouTube algorithm feeds users of mainstream political content ever more extreme and fringe viewpoints in playlist recommendations. And even after Facebook began taking action to restrict harmful content, there is new evidence it is still introducing Islamist extremists to one another through its "suggested friend" function, a feature designed to entrench people on the platform by linking them with like-minded people.
These companies provide a service that lots of people use and enjoy; but they don’t exist for us to “share” content with our friends or engage meaningfully in public debate. They exist to harvest our data to sell to advertisers, marketers and other agents of influence; to feed machine-learning algorithms; to maximise return to their shareholders. Unfortunately, our civil societies are their data farm and laboratory, in real time.
It’s true Facebook, Twitter and YouTube have finally accepted some responsibility for the problem, instituting a range of measures aimed at limiting the spread of disinformation and employing human and AI fact-checkers and moderation systems.
But the companies’ profit motives are not well aligned with stripping out fake users or restricting engagement with extremist content. Or, as Ardern told media on her arrival in Paris, it is reasonable to ask whether these companies are “monetising hate”. We can’t know that without more transparency around their algorithms. That is unlikely to happen voluntarily, which is why regulation is required.
The free market and private algorithms are not appropriate modes for managing political problems of extremist radicalisation, covert manipulation of public opinion or electoral interference.
Many nations around the world have concluded that the public sphere must reassert a regulatory role; the problem is how to do it within reasonable limits. No one wants anything resembling the Chinese model. Australia’s “knee-jerk” reaction has been widely criticised by the tech industry and lawyers as rushed and ill-defined. Facebook would ultimately prefer regulation to a proposal that’s gaining traction in the US: breaking it up under anti-monopoly laws.
Ardern has been careful to try to work with the tech companies, and to restrict focus to terrorist material in an attempt to steer clear of free speech concerns. She knows one little country can’t do it alone, but a coalition of like-minded democracies working carefully might figure out how to force some accountability on the US tech giants.
Because, as Canadian researcher Natasha Tusikov has pointed out, someone makes the rules governing our virtual town square, and it should be democratic governments.