X marks the spot where free speech comes at a cost

Originally published in The Age


Accusing the Australian government of censorship, Elon Musk and his company X – formerly Twitter – have vowed to challenge the take-down order from the eSafety Commissioner. X has been told to remove a video – widely circulated on the platform – that shows the recent attack on Bishop Mar Mari Emmanuel at the Assyrian Christ the Good Shepherd Church in Wakeley.

Musk’s refusal to comply and apparent desire to take the matter to court has set up yet another showdown in the battle between content moderation and free speech. But more than that, it’s put further scrutiny on the role social media has played in creating all sorts of harm to society and what government should do about it.

Our recent research project at the Lowy Institute – Digital Threats to Democracy – delved into whether social media companies such as Twitter/X have simply become easy scapegoats for our societies’ ills or whether they are, in fact, posing significant threats to the health of our democracy and social cohesion.

Our project has found that while digital communications technologies have brought some benefits and advantages to the way people work, live and communicate, the economic logic driving big tech companies does not factor in the protection of democracy, nor indeed the protection of other social and public goods. In fact, it can actively undermine them.

Social media platforms and other computer-mediated communication tools have enabled extremists to organise and communicate in broader and more efficient ways. They have played a significant role in spreading disinformation and fomenting polarisation. All of this has diminished open societies and democracies like Australia.

Musk crying “free speech” is a useful distraction. The debate around who is allowed on digital platforms and what they are allowed to say or do obscures the larger issue. Many of the threats digital technologies present to democracies stem not only from what they can do and the type of content and information allowed to circulate, but also from the economic logic behind the companies that own these technologies.

The business model behind many social media companies and digital platforms is driven by the attention economy, where highly polarising and arousing content is often prioritised because it drives revenue to the digital platforms hosting that content.

Digital technologies have also commodified the public. By providing “free” services, users have become sources of data and content that these companies can then monetise. Big tech’s business models rely on this collection of our personal information and its monetisation, what Harvard professor Shoshana Zuboff famously termed “surveillance capitalism”.

Silicon Valley tech companies are now some of the biggest in the world and operate on the scale of billions of users. But they also operate under a patchwork of government oversight and incoherent regulation across multiple jurisdictions, which has done little to impact their monopolistic positions and outsized economic power, let alone protect users.

The prime minister has admonished social media companies to take more responsibility for their content, and vowed further regulation. But if government is set on tackling these problems, it would do better to move away from contentious and ultimately futile debates around content and instead focus on the anticompetitive behaviour that has allowed big tech companies to garner so much market power. Digital communications companies such as Twitter, Meta and Google have become a combination of de facto service providers as well as advertising and data licensing companies, not just communications platforms, and they should be regulated as such.

We also have to acknowledge that content moderation still matters. But when it comes to regulating it, there is a crisis of legitimacy all around. Those making and enforcing the rules on social media – whether big tech or government – will lack legitimacy in the eyes of some, if not most, sectors of society. Many people believe governments regulate content primarily in response to partisan pressures and interests. Nor is anyone happy that a handful of tech CEOs can set the rules and norms for so much of the world’s communication and expression.

But it’s much easier to diagnose this wicked problem than to figure out what to do about it. Besides regulating transparency and anticompetitive behaviour, as well as the underlying business models and economic logic of digital platforms, governments could bring average users and citizens into the equation when considering more contentious issues around content moderation and deplatforming.

Deliberative mechanisms such as “platform councils” – forums made up of average digital users and tech experts – can help achieve a more legitimate consensus on the uses and governance of digital platforms. They would allow responsibility and risk around content moderation and user access to be shared among the technology companies developing and running digital platforms, the governments tasked with regulating them, and the people using them. Similar processes such as citizens’ assemblies, citizens’ panels or consensus conferences can be convened to inform government regulation and legislation of not only social media companies but artificial intelligence and other emerging technologies that promise to pose even more complex challenges to democracy.

Ordinary citizens must be provided the opportunity to contribute to regulatory decisions. Where piloted, digital deliberative democracy has proven to be legitimate and popular. Most participants wanted tech companies to use this deliberative format as a way to make decisions in future.
