The Australian Senate has appointed a select committee to inquire into the risks posed to Australian democracy by foreign interference via social media platforms such as Facebook and Twitter.
We need this. That much is clear from a growing pile of international inquiries and reports, including by the US Senate Intelligence Committee, the UK House of Commons, the University of Oxford’s Computational Propaganda Project, Harvard University’s Defending Digital Democracy Project, the RAND Corporation, and NATO’s Strategic Communications Centre of Excellence.
Perhaps it has been ever thus – misleading propaganda and political lies are nothing new – but aspects of social media make it a devilishly useful delivery platform for an insidious, potent array of weaponised forms of information used to cajole, harass, hoodwink, and otherwise exploit publics.
Probably the most infamous form of social media influence/interference campaigning is that based on so-called “fake news”, used to discredit opponents, sow social discord, and weaken political support for foreign policy seen as adversarial.
These campaigns work because, like it or not, we are all vulnerable to forms of psychological manipulation. This is the uncomfortable fact underlying all forms of digital marketing, based on micro-targeting and repeated, automated experiments in persuasive communications. Moreover, the drivers of such campaigns don’t need to fool all of us all of the time – just enough of us enough of the time.
Misinformation campaigns are supported by vast numbers of automated fake accounts (bots) and by actual people managing large numbers of fake accounts under various aliases (sock puppets). The purpose of these is to spread manipulative misinformation more widely and, through sheer repetition, boost its believability.
These kinds of fake accounts can be easily purchased from a black market in social marketing that exists on the open web. Like good wine, an aged account is more expensive, as it is less easily detected.
Some fake accounts are even curated, with real friends, unique (sometimes computer-generated) profile pictures, and verifiable phone/email accounts, so as to be almost indistinguishable from actual accounts; these come at a premium and can require an introduction to obtain.
Bots and fake accounts are used by covert operatives for purposes beyond misinformation. As Clint Watts and others have pointed out, among the more notorious actors are:
- hecklers (who harass and discredit opponents until they give up);
- honey pots (who, like the eponymous seductive secret agents, attract through sexual flirtation and/or intellectual flattery, seeking to persuade or simply to trick a social media user into clicking on a tempting and apparently innocuous link that exposes their system to malware); and
- hackers (who use this malware to take over a user’s account or retrieve contact lists of supporters, who might then receive a real or faked picture designed to embarrass and discredit).
No one is 100% safe from any of this. But until now (touch wood), Australia appears to have escaped relatively unscathed, especially compared to the US and UK, both of which were exposed to large and sophisticated misinformation campaigns conducted by the Russian government from the St Petersburg offices of the innocuous-sounding Internet Research Agency. Australia may have been targeted by some of the same Russian accounts, in an attempt to influence public debate in the wake of the downing of flight MH17.
China, albeit less active internationally than Russia, has been implicated in campaigns to influence Taiwanese politics. Other prime suspects are Iran and North Korea. None of these nations has apparently sought to use social media to influence Australian publics, yet. (Cyber hacking is another matter.)
But Australia remains as vulnerable as anyone else, for several reasons. Social media is widely used as a source of news and information. There are no laws requiring political campaigns to be truthful. Many non-state actors, including extremist groups and international con artists, have enjoyed considerable success using social media to peddle their wares, including in Australia. The volume and range of online content make scrutiny and intervention extremely difficult. The internationally networked nature of social media platforms creates jurisdictional headaches. And the social media companies are powerful and resistant to regulatory shackles.
Many of these wicked problems are intrinsic to online political communication in general; the use of social media misinformation campaigns by foreign actors, as a form of “hybrid” or “grey zone” warfare, raises two additional questions.
First, is this a task for Australia’s intelligence and military agencies – either to defend against these threats, or to develop offensive capabilities? (We know that Australia has used cyber warfare against ISIS.) The British army’s 77th Brigade, for example, is tasked with using forms of informational warfare to influence behaviour and counter propaganda, and GCHQ’s “Joint Threat Research Intelligence Group” has a powerful arsenal of digital weapons.
Second, even if the threat of interference in Australia could be mitigated, there remains the risk that foreign actors may seek to interfere maliciously in the public affairs of those with whom we have a strategic relationship or interest. Some, for example our Pacific Island neighbours, have scant resources and may benefit from Australian assistance. But what form could such assistance take?
The Senate select committee is scheduled to deliver its report in May 2022. Not before time.