The recent descriptions of Russian interference in American politics, in both the Democratic primaries and the presidential campaign, are a stark reminder of the opportunities social media provides for information warfare.
This is particularly worrying given both the mountains of reports published since 2016 and Facebook’s undertakings to address what it calls “coordinated inauthentic behaviour”. The responsibility lies variously with the foreign agents of influence, Facebook (and other social media behemoths), and the American political class.
States that support and actively engage in influence campaigns cannot be let off the hook. The blame for their deeds lies with them. But as long as these types of campaigns remain effective and cost-efficient, they will continue. So, while calling out the Russians and Iranians is necessary, that alone is insufficient. The onus is on the social media companies and the targeted polities to react.
Facebook’s response to foreign interference is characterised by inadequate ambition and unimpressive results; its capacity to do better is limited by both democratic principles that protect speech and by the logic of data capitalism.
The inadequacy of Facebook’s ambition arises out of its limited focus. It targets “coordinated inauthentic behaviour”, networks of actors who hide their real identity, location, and/or purpose, and operate in concert. Over the last two years, Facebook has announced the detection and removal of pages and groups on the platform engaging in foreign or government interference. Hundreds of accounts on Facebook and Instagram (owned by Facebook), followed by hundreds of thousands of social media users, have been scrubbed out. The most recently removed accounts were operating from Russia, primarily targeting Ukrainian audiences, and from Iran (targeting Americans).
This sounds like the heralding of a new era, in which we might be more confident that democracies have developed some immunity to the kinds of malicious campaigns that marred the 2016 US presidential election. In the run-up to the 2020 US presidential election, what could be more welcome than the reassuring news that Facebook is able to detect and remove malicious foreign manipulation?
The weakness of Facebook’s record becomes apparent when we compare the relatively small numbers of accounts that have been removed with reports on the large – and growing – numbers involved in online campaigns.
And these are only the ones we know about – the actual figures are likely to be a lot higher. One clear reason for this is that considerable efforts are being made to overwhelm systems of detection and removal, whether through sheer weight of numbers, ever more cleverly disguised fake accounts, or stealthier disinformation strategies.
To further complicate matters, even if dodgy accounts are removed, dodgy content will remain. Facebook does not target false or misleading content unless it violates its community standards (namely hate speech, violent or cruel content, sexual activity, or nudity). It consistently claims to be the intermediary, not the speaker, and thus not liable.
Facebook has repeatedly refused to moderate political speech – instead, its response has been to provide a little more transparency around political advertising (including via the Facebook Ad Library report) and more protection for candidates’ accounts. Such measures are fine, but neither gets to the nub of the matter.
To be fair, it’s a wicked problem: preventing political interference in a democracy, which relies in large part on free speech, requires curtailing political speech. Protecting free and open political speech as a fundamental element of democratic societies can expose those societies to pernicious, damaging propaganda. The difference between propaganda in the modern era and its previous iterations is clear: first, the scope, speed, and scale of information flows; second, the networked structure of communication; and third, the capacity for creative identity formation in a digitalised online space.
But Facebook’s interests are not aligned with any attempt to curtail content creation and information sharing. In fact, it’s the opposite – Facebook profits from content that gets attention and prompts a response, and some of the most shareable, popular content on social media is that which makes people angry.
Fury is a potent political weapon, and a powerful driver of online interactivity, which generates data on users’ preferences, attitudes, and likely behaviours. This data, and the capacity to use this data to predict future behaviour, is what makes Facebook (and others) immensely profitable, and thus powerful. This is the basis of data capitalism (also known as surveillance capitalism).
No right-minded company would risk damaging its main profit-generating business, but as wealth and power accumulate, so does the risk of public and political backlash. This is one reason Facebook has entered the debate.
Regulation might be a way forward, but the American political classes have seemed unable to make much headway here. Free speech principles are one reason. Another is that some political candidates will actually benefit from the current state of play.
For Australians, these things matter because, first, America matters to everyone, and, second, the Australian Senate has announced a select committee to inquire into the risks social media poses to Australian democracy. The lessons from observing America can be applied here. One key lesson: whatever has been tried so far is not working.