
In a high-stakes environment, AI is failing asylum seekers

As governments automate security processing, AI mistranslation is quietly eroding rights and accountability.

AI systems are ill-equipped to handle the dynamic and human complexities of asylum decisions (Getty Images)

Artificial intelligence (AI) is rapidly being integrated into international asylum procedures, with early trials promising faster processing, reduced backlogs and more uniform outputs. These efficiencies appeal to governments facing political pressures to streamline complex administrative systems. Yet in migration contexts shaped by power imbalances and security concerns, this seemingly efficient solution hides a profound risk: AI systems, which are driven by pattern recognition rather than understanding, are ill-equipped to handle the dynamic and human complexities of asylum decisions.

As the use of AI expands, it risks supplanting human judgment and eroding accountability at a time when security and justice depend on precise and ethical assessment. The result is a fundamental shift in governance, in which the line between humanitarian protection and security management is blurred. This shift threatens to lower the baseline of human rights and procedural standards that asylum systems are meant to uphold.

AI translation is already embedded across security sectors, including military, defence and intelligence operations. It enables governments to analyse vast amounts of multilingual data quickly, supporting predictive analysis and decision-making. However, the trade-off between efficiency and accuracy has not been adequately addressed. Asylum procedures are an easy target for standardised AI translation because the people subjected to these systems lack the linguistic power to challenge them. This dynamic enables governments to deflect sociopolitical scrutiny and present technological adoption as administrative progress. Applying standardised technological tools to these environments undermines the very human experiences the systems are meant to protect.

Asylum decision-making sits at the intersection of law, national security and human rights. Each case requires careful interpretation across language, culture and personal experience. Credibility often hinges on subtle details such as tone, phrasing and hesitation, as well as a deeper understanding of cultural nuances such as idioms and expressions. These are not variables that automated systems can reliably capture, a limitation acknowledged even by the AI companies developing these tools. Asylum claims have already been rejected on the basis of mistranslated social media posts, in which the intended meaning was lost in automated rendering. This is problematic because it embeds standardised bias ever deeper in the asylum process, risking entrenched inequality and unlawful discrimination under the guise of technological neutrality. While shorter timelines may reduce uncertainty for applicants, they can also mask systemic failures that undermine the core human rights values asylum systems are built to uphold.

The limits of AI translation are well documented. The National Accreditation Authority for Translators and Interpreters (NAATI) has emphasised that these systems do not “understand” language; they reproduce patterns based on training data and are prone to “hallucinations”, or fabricating content. When that data is incomplete or biased, outputs can be inaccurate or misleading. In low-resource languages, translations may be incoherent, and even fluent outputs can contain errors that non-speakers cannot detect. AI cannot recognise context, adapt to conversational flow, or interpret cultural nuance. It cannot detect distress, confusion or hesitation, factors often central to asylum interviews.

These limitations have real-world consequences. Cases have emerged in which individuals face court proceedings without access to a human interpreter, relying instead on machine translation to communicate life-altering decisions. In such situations, individuals may not fully understand their rights, the proceedings or the outcomes imposed on them. Small translation errors can have disproportionate effects, and when automated tools are allowed to redefine credibility standards, the failure becomes systemic. In one reported case, an Afghan woman’s asylum claim was denied after an automated system changed the pronoun “I” to “we,” fundamentally altering the meaning of her testimony. Such cases illustrate how standardised systems fail to accommodate human experience, producing injustice at scale.

Language underpins due process, and in its absence individuals cannot present claims, challenge evidence or understand decisions. Machine translation reduces communication to approximation, and approximation is insufficient in high-stakes legal and security contexts. The risks also extend beyond asylum systems – mistranslation in military environments can have immediate and severe consequences for both civilians and personnel. Replacing human interpreters with flawed automated tools displaces human responsibility and introduces both operational and ethical risks. The growing involvement of private technology companies further complicates accountability. When external actors design and implement these systems, states can attribute errors to technological limitations rather than policy choices, thereby creating gaps in oversight. AI systems risk transforming political decisions into technical outputs – embedding bias, obscuring responsibility and making discrimination harder to identify or penalise.

Despite these concerns, efficiency continues to drive adoption around the globe. Governments facing large backlogs are increasingly prioritising speed over individual case analysis. While reducing delays matters, it should not become the defining objective in decisions that may determine whether someone is returned to persecution or violence. When efficiency becomes a proxy for success, the integrity of the entire system is compromised: exploitative practices often begin with the vulnerable, and discrimination and procedural injustice become harder to detect.

The shift from human translation to artificial intelligence endangers the human dimensions of security procedures and threatens to undermine human rights through perpetuated bias and mistranslation. Policymakers must therefore establish strict, mandatory safeguards that preserve human oversight and responsibility, ensuring that efficiency does not overshadow justice and that technological standardisation does not erode the human rights protections at the heart of asylum and security governance.
