Last month Pew asked 1,156 experts in computer science, media studies and related fields whether they thought the 'global information environment' would improve over the next decade. 49% said yes, 51% said no. While the two near-halves disagreed on the future prospects for fake news, both groups stressed the need to strengthen the public's information literacy as well as to bolster public-serving media outlets.
These kinds of solutions - an educational focus on critical thinking and digital literacy; better funding for public-serving journalism, whether through government or philanthropic support - are sensible and imaginable responses to the demand for fake news. But how should governments and other interested parties go after the supply? Earlier this month the Financial Times columnist Edward Luce drew a comparison between the dynamics underlying the war on drugs and any potential 'war on fake news':
A generation ago America declared a war on drugs. Tens of billions of dollars and dozens of broken cartels later, most US law enforcement veterans admit defeat. The problem lay in the demand for drugs, rather than its supply. The same applies to fake news. People seek it out.
But while misinformation, like drugs, has existed for millennia as gossip, mythology, pseudoscience and a host of other forms, what has changed is the advent of the digital environment that now hosts misinformation and the technological processes that now dominate its delivery. To date, governments have had little say in the structuring or regulating of these processes, and citizens have only had their say as consumers of social networks supplied by an increasingly monopoly-like industry. For their part, shareholders seem broadly unconcerned.
So how should governments and other interested parties attempt to go after the supply of fake news? The path forward is unclear. Attempts by Facebook to partner with fact-checking organisations such as Politifact or Snopes to 'flag' fake news items may not be having a net positive effect, according to one study – and Facebook's current fact-checking program only flags an estimated 100 stories a month. Any solution, if there is one, is likely to be multifaceted, technically complex and continuously evolving. For now, it might include relatively non-technical initiatives such as:
- Requiring greater transparency around online political advertising, as would be stipulated by the US Honest Ads Act and is planned by Twitter and Facebook. In Australia, the Guardian and ProPublica last week launched a makeshift database of political ads on Facebook, populated by users who encounter and catalogue the ads.
- Making digital platforms liable for libellous content when the author of that content cannot be ascertained, as argued by Leonid Bershidsky.
- Addressing the monopolistic nature of tech giants – the more algorithms fake news producers need to game, the harder their job.
- Instituting a greater level of conscious user choice in news consumption. As Mark Little and Áine Kerr argue:
Google, Facebook, and Twitter are here to stay. The challenge is to enhance the experience of these ubiquitous platforms through an added layer of conscious control. The personalisation of news challenges the public good, largely because the artificial intelligence that drives it is both unseen and clandestine. Our 'filter bubbles' reflect our unthinking instincts, rather than our conscious intentions.
Australia has a stake in how the regulation of this space might unfold. Algorithms and platforms devised in Silicon Valley or Dublin undergird our political system and determine not just the style of content that the Australian public consumes, but how it is delivered. Aside from this year's headline-grabbing Senate Committee, the Australian government has some policy engagement in this area. The Department of Foreign Affairs and Trade's International Cyber Engagement Strategy emphasises Australia's commitment to 'raise concerns about cyber-enabled interference in democratic processes' through bilateral and multilateral opportunities, and to a 'multi-stakeholder approach to Internet governance'. But this framing casts digital platforms as passive, static hosts of cyber-enabled interference, rather than as potentially active propagators.
In January, Denmark established a digital ambassador to engage with tech giants. Australia currently has an Ambassador for Cyber Affairs, charged with executing aspects of the strategy mentioned above. Should it also have a digital ambassador? In principle, rewarding corporations that attain discourse-warping levels of influence with diplomatic recognition would be an odd choice – direct engagement with tech giants through the Department of Communications would be a more suitable fit (the formulation and release of a public strategy for such engagement would be welcome).
Perhaps a more suitable role for DFAT would be articulating Australia's interests and potential solutions in the relevant international and multilateral forums – but what forums would those be? As one anonymous Pew respondent rather hyperbolically put it:
The internet is the 21st century's threat of a 'nuclear winter,' and there's no equivalent international framework for nonproliferation or disarmament. The public can grasp the destructive power of nuclear weapons in a way they will never understand the utterly corrosive power of the internet to civilised society, when there is no reliable mechanism for sorting out what people can believe to be true or false.
Australia could be at the forefront of developing such an international framework, as it has been for other successful multilateral forums. But it would require the Australian government to adopt a technical, reasonably concrete position on what it would like to see happen. Recognising misinformation as a key civic priority and reaching a lasting political consensus on it, already a difficult proposition, is made more difficult by the rapidly evolving technological and cultural aspects of the problem and by the fact that information disorder benefits elements of our polity. Moving beyond these hurdles to a broad international consensus among democracies on what is in many respects a global problem would require even more political will.
But the alternative would be abandoning the problem of misinformation supply and hoping that, under severe public and government pressure, the tech giants are willing and able to figure it out on their own. The companies themselves appear divided on their role in moderating the platforms they have created – as an anonymous vice president for public policy at 'one of the world's foremost entertainment and media companies' told Pew:
The small number of dominant online platforms do not have the skills or ethical centre in place to build responsible systems, technical or procedural. They eschew accountability for the impact of their inventions on society and have not developed any of the principles or practices that can deal with the complex issues. They are like biomedical or nuclear technology firms absent any ethics rules or ethics training or philosophy. Worse, their active philosophy is that assessing and responding to likely or potential negative impacts of their inventions is both not theirs to do and even shouldn't be done.