
Meta’s decision to ditch fact-checking gives state-sponsored influence operations a freer hand

The social media giant’s move to user-based content moderation is a perilous step that risks enabling state-backed disinformation attacks.

The very mechanism being pursued by Meta to protect users from misinformation can itself become a tool of manipulation (Kirill Kudryavtsev/AFP via Getty Images)
Published 13 Jan 2025 

Meta’s recent decision to dismantle its professional fact-checking program marks a significant shift in the company’s approach to moderating content across its platforms – including Facebook, Instagram, and Threads. The world’s most widely used social media company argues the move is a return to its “roots” of prioritising free expression. However, Meta’s adoption of the “community notes” model created by X could have far-reaching consequences for national and regional security.

Meta has vast global reach, with more than 3 billion users on Facebook alone – its largest national audience is in India – and billions of people use at least one Meta product every day. Meta CEO Mark Zuckerberg has conceded that the change means “we’re going to catch less bad stuff”. But the problem isn’t simply that Meta, like X before it, is abandoning paid, independent fact-checking. It’s the model they’ve chosen to replace it with.

Users will need to sign up to participate as contributors, with priority given to early sign-ups as the program becomes available. Contributors’ correction notes will then be rated by other users to determine whether a note succeeds. The specifics of eligibility are not yet clear, but if X’s approach is the yardstick, there may be requirements around account authenticity and adherence to platform guidelines.

Meta is effectively shifting responsibility for content verification onto its users, many of whom lack the necessary expertise to distinguish between truth and falsehood.

Short of taking down illegal and restricted content – acts of terrorism, child sexual abuse, murder, self-harm, rape, torture or kidnapping – I am not a fan of any content moderation dragnet that sweeps up genuinely held citizen views, no matter how controversial. Freedoms of expression, association and religion are among the bedrocks of democracy.

But so too are values like tolerance, equity, inclusion and the rule of law. These principles must therefore be balanced, as unrestrained freedom of expression can give rise to hate speech, misinformation, and actions that harm vulnerable individuals or groups.

We must also protect the right to voice differing opinions while curbing foreign efforts to meddle in the discourse of sovereign, democratic societies. We are not serving the greater good if, on the one hand, we stifle legitimate discourse in order to counter online disinformation, or, on the other, we allow foreign malign influence to run rampant because we choose to prioritise free speech.

In the context of swelling foreign information manipulation and influence, the shift to a public review system will leave a void that could be easily exploited by state-sponsored actors. In particular, this policy change could dramatically amplify the effectiveness of state-sponsored influence operations in several important ways.

First, a diminished ability to identify coordinated campaigns.

Fact-checking programs provided a systematic and structured approach to identifying and countering disinformation, playing a vital role in finding coordinated inauthentic behaviour, a hallmark of state-backed online operations.

Decentralised, user-based content monitoring will make it harder to track and expose the covert activities of state-sponsored actors. The new model simply cannot match centralised, coordinated counter-disinformation efforts in terms of scale or effectiveness. Countries with lower digital literacy rates and fewer correction contributors will be left particularly exposed to the manipulation of their information environments.

Second, speed to action.

During critical periods such as elections, military crises, or times of domestic unrest – whether because of natural disasters, political crises, or social factors – a rapid response to disinformation is essential. We’ve seen how quickly disinformation can spread: against the Rohingya in Myanmar, hateful content in Sri Lanka, child abduction rumours leading to lynch mobs in India, and coordinated violence by militia groups in Ethiopia. All on Meta’s Facebook and WhatsApp platforms.

A well-funded, state-sponsored disinformation campaign will be agile and opportunistic. Heightened investment in computational disinformation techniques, using sophisticated algorithms and automated tools to spread disinformation on a massive scale, will make it easier to flood social media with coordinated messaging. The shift to community-driven moderation risks creating delays and inconsistencies in addressing harmful narratives, leaving societies vulnerable to hostile interference.

Third, the new model creates opportunities to drive engagement with, rather than counter, disinformation content.

State-sponsored disinformation seeks to amplify division and fuel polarisation; there is therefore no incentive to retract messages. While some authentic users might be more willing to retract their message in response to community notes, others, particularly those with entrenched views or participating in organised campaigns, will dig in their heels in order to drive increased interaction with their content.

Finally, the creation of new tactics for spreading false content.

A threat actor might become a correction contributor and flag legitimate content based on strategic interests, such as to discredit a political opponent. The absence of impartial adjudicators means that content moderation becomes subject to the deliberate goals of coordinated groups or those with the loudest voices, further undermining the integrity of public discourse. As such, the very mechanism being pursued by Meta to protect users from misinformation can itself become a tool of manipulation.

In regions such as the Indo-Pacific, where territorial disputes and geopolitical tensions are high, Meta’s decision could have significant ramifications. State actors, particularly China, have long used social media platforms to shape narratives around contentious issues, including territorial claims in the South China Sea. A user-driven “wisdom of the crowd” model could make Meta’s platforms even more susceptible to manipulation by state-backed actors seeking to influence public perception of these issues.

State actors with sophisticated influence operations, such as those in Russia or China, are already adept at manipulating algorithms and leveraging social media networks for strategic purposes. Meta is opening the door to state-backed strategic manipulation, creating opportunities for operatives to game both the platform’s algorithms and the community moderation system, with less risk of detection.



