Youth involvement in terrorism worldwide has surged. In 2024, the number of terrorist incidents in the West rose to 52, up from 32 the previous year. But the most alarming trend is the age of those involved.
In the United Kingdom, under-18s accounted for 42% of terror-related arrests in 2024. Nearly two-thirds of Islamic State-linked arrests in Europe in 2024 involved minors. Since 2020, Australian authorities have investigated 37 individuals aged 17 or younger for violent extremism. A first-of-its-kind joint report by the Five Eyes intelligence grouping underlined how seriously agencies now view this trend.
Online sources have fuelled the radicalisation of some younger people. Although the measure was not designed solely with youth radicalisation in mind, Australia became the first country to enforce a nationwide curb on social media for children under 16, placing the onus on platforms, not users, to verify age and enforce restrictions. Policymakers abroad are watching Australia closely: Denmark, France, and Spain have all announced plans to introduce similar policies.
Among the harms the ban aims to address – harms to mental health, cyberbullying, body dysmorphia, and exposure to harmful content generally – radicalisation stands out as both acute and rapidly escalating. The question facing policymakers is whether platform restrictions can meaningfully reduce this threat.
Social media’s documented role in youth radicalisation suggests an obvious answer: restrict access to platforms, restrict access to harmful content. Europol identified social media and messaging applications as the main tools used to spread terrorist content. Algorithms have been highlighted as particularly impactful. Designed to maximise engagement, algorithms filter out opposing viewpoints and create self-reinforcing echo chambers that entrench worldviews within users. When such worldviews are framed within the context of hate, the end result is a feedback loop that amplifies polarisation.
However, this framing assumes young people are passive victims of algorithmic manipulation. They are not. Radicalisation is a complex and individualised process, but, like any social phenomenon, it has both supply and demand dimensions.
The supply side consists of content, narratives, and networks that promote violent ideologies. The demand side consists of the psychological and social conditions that make those ideologies attractive in the first place. There is a human need to matter, to be respected, and to belong. When those needs go unmet through legitimate channels, individuals become more receptive to groups and ideologies that promise to fulfil them in a quest for identity and significance.
For decades, there has been a predictable “hump of despair” in middle age, when life satisfaction dips before rebounding. That pattern has now inverted. Young people are reporting lower wellbeing, increased pessimism, higher loneliness, lower social connection, and higher levels of distrust towards institutions and authorities than any previous generation at the same age.
This constellation of factors creates precisely the conditions extremist groups exploit, providing young people with a sense of belonging and purpose. Extremist narratives do not need to manufacture grievances; they need only offer simple explanations for why the system has failed and promise belonging to those who feel excluded from it. Radicalisation is not a conveyor belt, and vulnerability does not guarantee that an individual will radicalise. Nevertheless, an initial set of grievances can provide the mobilising impetus for an individual's radicalisation journey and explain why they gravitate towards extremist narratives.
Supply side restrictions alone will therefore prove insufficient. Even if social media age restrictions successfully limit access to radicalising content on major platforms, they cannot address why that content is appealing in the first place. Prohibition without addressing underlying demand creates displacement, not prevention.
However, the social media ban can help buy time. If countries are going to delay social media access until 16, policymakers should use that window to address the underlying "demand side" conditions that make extremist narratives appealing and to build cognitive and emotional resilience to this content online.
This means media literacy education that goes beyond "check your sources" to cover algorithmic manipulation, recognition of recruitment tactics, and strategies for managing online conflict. Simple, targeted tools can help: 30-second "prebunking" videos that pre-expose students to common manipulation tactics, or structured classroom exercises in which students analyse real recruitment materials to identify persuasion techniques. These approaches can meaningfully improve young people's ability to recognise and resist harmful online content – to know when emotional vulnerabilities are being exploited, to identify when narratives shift from describing problems to dehumanising specific groups, and to maintain critical distance even when content resonates with genuine frustrations.
However, simply being able to identify harmful content does not address the vulnerabilities that make such content resonate. For this reason, social media restrictions also need to work in concert with investments in belonging. In many cases, radicalisation exploits young people's unmet needs for significance, identity, and community. Successful prevention requires creating legitimate pathways to meet those needs: community programs, mentorship, mental health support embedded in schools, and opportunities for young people to contribute meaningfully to society.
Australia and the countries considering similar social media bans have already developed demand-side youth intervention strategies. Australia's recent Counter-Terrorism and Violent Extremism Strategy explicitly identifies youth vulnerability as a priority, committing $106.2 million over four years to countering violent extremism (CVE) initiatives. Internationally, similar frameworks exist where it has been recognised that CVE strategies need to undercut the demand-side drivers of radicalisation.
Yet, such strategies often struggle in application. In Australia, for example, only a minority of students experience meaningful media literacy lessons in the classroom, with teachers struggling to use curriculum documents to develop rich learning experiences. Digital literacy exists in the national curriculum as a general capability, but implementation lags behind policy intent.
Australia’s social media policy is a test case. While restricting platform access is politically popular and immediately actionable, addressing the vulnerabilities that make extremist narratives attractive is harder, slower, and less visible. Social media bans can buy time, but that time is only valuable if it is used effectively.
