Published daily by the Lowy Institute

Can violent extremist content online be eliminated?

The Christchurch Call is an opportunity for change — but there is much more to be done.

Understanding how the online environment amplifies hatred is one priority of the Christchurch Call (Barefoot Communications/Unsplash)
Published 27 May 2021 

Two years ago, New Zealand’s Prime Minister Jacinda Ardern and France’s President Emmanuel Macron called together governments and tech companies in an effort to eliminate terrorist and violent extremist content online.

The first Christchurch Call to Action summit was held in Paris in May 2019, two months after a gunman murdered 51 people at two mosques in Christchurch, New Zealand. The killer live-streamed his rampage for almost 17 minutes on Facebook, and the video was shared about 1.5 million times in 24 hours.

Sixteen nations answered the Call, along with major internet companies, including Facebook, Twitter and Google – whose YouTube platform had been directly implicated as likely contributing to the online radicalisation of the Christchurch shooter.

With the second Christchurch Call summit held this past weekend (there was no summit in 2020), it is timely to consider what has and has not been achieved.

The Call needs to go beyond placing tech companies in charge of defining and dealing with extremist content. 

There has been some notable progress. More than 50 governments now support the Christchurch Call – including the United States (which announced its entry last week). Among the successes cited by Ardern and Macron are new protocols to rapidly find and remove violent extremist content.

The Global Internet Forum to Counter Terrorism (GIFCT), set up in 2017 by major social media companies and given the job of preventing terrorist exploitation of digital platforms, plays a major role under the Call. Its remit has expanded to include all violent extremist content. There is now more cooperation between governments, companies and other stakeholders, and more research is being done into the drivers of online radicalisation and violent extremism.

But gaps remain. 

The role of algorithms

The power of tech companies and their business models, guided by “recommender algorithms”, remains problematic, and harder questions of how to define violent extremism and address the “broader ecosystem” of extremist thought require more work. 

Two priorities for the next phase of the Christchurch Call are to “better understand ‘user journeys’ and the role algorithms and other processes may play in radicalisation”; and to look “at how the online environment may amplify hatred and glorification of terrorism and violent extremism”. 

Social media platforms such as Facebook and YouTube are geared to maximise engagement, and their recommender algorithms tend to suggest increasingly extreme content. Targeting users in this way can create dangerous echo chambers, which can be pipelines to further radicalisation. 
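To make that dynamic concrete, the toy sketch below ranks videos purely by predicted engagement, with nothing in the scoring to push back against extremity. Every name, label and number in it is hypothetical – it is an illustration of the incentive structure, not a description of any platform’s actual recommender system.

# Illustrative toy example only: an engagement-maximising ranker with no
# counterweight for extremity. All values here are hypothetical.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    extremity: float             # hypothetical label: 0.0 mainstream, 1.0 highly extreme
    predicted_watch_time: float  # model's estimate of minutes the user will watch

def engagement_score(video: Video, user_affinity: float) -> float:
    # Score purely by predicted engagement. If a user has engaged with
    # borderline content before (high user_affinity), more extreme videos
    # score higher simply because they are predicted to hold attention longer.
    return video.predicted_watch_time * (1 + user_affinity * video.extremity)

def recommend(videos: list[Video], user_affinity: float, k: int = 3) -> list[Video]:
    # Return the top-k videos by engagement score.
    return sorted(videos, key=lambda v: engagement_score(v, user_affinity), reverse=True)[:k]

if __name__ == "__main__":
    catalogue = [
        Video("Cooking basics", extremity=0.0, predicted_watch_time=4.0),
        Video("Political commentary", extremity=0.4, predicted_watch_time=6.0),
        Video("Conspiracy deep-dive", extremity=0.9, predicted_watch_time=9.0),
    ]
    # A user who has previously engaged with borderline content
    for video in recommend(catalogue, user_affinity=0.8):
        print(video.title)

The point of the sketch is simply that when engagement is the only objective, the most extreme item rises to the top of the ranking by default; nothing in the logic needs to “intend” radicalisation for the pipeline to form.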

In 2018, more than a year before the Christchurch shootings, for example, a Wall Street Journal investigation found YouTube’s recommendations were often leading users to channels featuring conspiracy theories, extremist content and misinformation. This was occurring “even when those users haven’t shown interest in such content”. 

The echo chambers created by recommender algorithms can lead to increasing radicalisation (Michael Dziedzic/Unsplash)

YouTube, Facebook, Twitter and others all reported to the Christchurch Call they had taken steps to limit their algorithms promoting extremist content. But how big were those steps? That’s unclear, because their algorithms are secret. 

Without transparency, researchers have a limited ability to understand how these algorithms have widened the pipeline to extremism. 

Defining “extremist content”

But exactly what counts as violent extremism is also problematic. 

Removing content online can provoke a backlash – it can be perceived or portrayed as a curtailment of “free speech”, and in itself can encourage radicalisation. This has been a concern for the US with regard to First Amendment rights. While many celebrate the US joining the Call, the Biden administration has nonetheless made it clear that while it sees value in “promoting credible alternative narratives” to counter extremism online, it will not “restrict free expression”.

But there is no standard agreement about what constitutes violent extremism – even the social media platforms tasked with removing it define violent extremist material differently from one another. Facebook removes content that “glorifies violence or celebrates the suffering or humiliation of others”. Twitter uses three criteria to define violent extremist groups, supposedly informed by “national and international terrorism designations”; groups must meet all of the following:

  • Identify through their stated purpose, publications, or actions as an extremist group;
  • Have engaged in, or currently engage in, violence and/or the promotion of violence as a means to further their cause; and
  • Target civilians in their acts and/or promotion of violence.

A further issue is the problem of smaller social media sites, which may operate under less scrutiny and can become platforms for content banned by larger sites. In May, the vice-president for growth of video-sharing platform Odysee told site moderators that a “Nazi that makes videos about the superiority of the white race” is not grounds for removal and that the company did not have to explain its content policy.

A deeper concern also lies with the Global Internet Forum to Counter Terrorism, which is governed by an Operating Board that only includes the big tech companies. The role of civil society, governments and international organisations is relegated to its International Advisory Board. 

Such an arrangement seems to grant the tech companies an inordinate amount of power to determine what counts as extremist content. The Call needs to go beyond placing tech companies in charge of defining and dealing with this problem. 

Alternative narratives

While social media is a vehicle for spreading ideas, good or bad, terrorist and violent extremist content exists in a wider context. Simply removing violent content fails to deal with what leads to the creation of such content in the first place. 

The original Call recognised the need to build “resilience” and inclusiveness in societies, the importance of “counter-messaging” and providing alternatives to violent extremism. The second Christchurch Call reiterates that alternative, positive narratives to counter violent extremism must also be part of the response.

This, of course, brings into play the complexity of radicalisation more broadly. While social media and online platforms can disseminate extremist messages, the ideas that drive them draw on societal divisions that can be overtly or subtly promoted in the public sphere. The second Call nods to this in its focus on civil society. But it could also go further in calling out the discourse of public figures such as politicians.

The Call is a positive step, but it needs to emphasise a more holistic and interconnected approach.
 

Main photo courtesy Barefoot Communications/Unsplash



