Published daily by the Lowy Institute

Time for an ASEAN agreement on lethal AI

The use of autonomous weapons in the region’s flashpoints could narrow the window for human intervention to seconds.

Lethal autonomous weapons systems have become cheaper, more capable, and easier to field (Getty Images)
Published 30 Sep 2025 

The recent Cambodia–Thailand border skirmishes ended with leaders meeting face to face, producing a ceasefire agreement. Now picture the next round of clashes with loitering munitions and ground robots in the mix. Events could move from detection to detonation in seconds, leaving no real space for a human to intervene. That is the danger Southeast Asia faces as AI-integrated lethal autonomous weapons systems (LAWS) become cheaper, more capable, and easier to field in the very places where miscalculation is most likely.

The problem with LAWS

Lethal autonomy is a spectrum. At one end, a human must approve the shot. In the middle, machines make targeting decisions but a human can still abort. At the other end, the system can select and strike without meaningful human input. There is an inherent trade-off: less human oversight buys speed and persistence, but it also raises the odds of misclassification, narrows off-ramps once a sequence begins, and blurs accountability when things go wrong.

These autonomous systems are increasingly popular in conflict hotspots. In the 2020 Nagorno-Karabakh war, Azerbaijan paired surveillance drones with loitering munitions such as the Harop to hunt air defences and armour; software did much of the cueing and persistence. In Russia’s war on Ukraine, both sides use Lancet-style loitering munitions and small quadcopters with growing amounts of onboard perception and automated hand-offs. Perimeter and border technology is moving the same way. South Korea’s SGR-A1 sentry along the demilitarised zone can detect, track, challenge, and engage. A UN report on Libya described Kargu-2 drones possibly pursuing retreating fighters without requiring data connectivity between the operator and the munition.

Widespread use of LAWS carries severe risks. Misclassification is likeliest where uniforms, auxiliaries, and civilians mix, which is exactly the scene at borders and in many of Southeast Asia’s potential flashpoints. Machine-paced escalation compresses off-ramps: alerts, manoeuvres, and strike windows arrive faster than commanders can insert judgment. Accountability drifts from clear chains of command into vendor settings, logs, and software patches. And diffusion is relentless: the same commercial parts that power loitering munitions and small ground vehicles are now available to actors with uneven doctrine and discipline. These are governance problems as much as engineering ones, and they will arrive before our institutions are ready.

Why Southeast Asia must act now

The recent Cambodia–Thailand clash shows how easily tense encounters can escalate. Along maritime boundaries, from the Gulf of Thailand to the waters off Natuna, patrol vessels already operate in close proximity to foreign counterparts, sometimes exchanging warnings at visual range. In the South China Sea, the density of coast guards, naval units, maritime militia, and civilian traffic creates an environment where a misread signal or aggressive manoeuvre can readily spiral in unpredictable ways. When systems with the ability to operate autonomously are introduced, the window for political leaders to intervene narrows to seconds.

In Russia’s war on Ukraine, both sides use Lancet-style loitering munitions (Getty Images)

Another reason to act now is that the Association of Southeast Asian Nations (ASEAN) militaries will likely remain behind the stronger powers in autonomous systems for years to come. The danger is not simply being out-matched, but being forced to operate within someone else’s doctrine. That doctrine will arrive embedded in imported hardware, pre-loaded software, and training syllabi, which reflect the priorities and values of its originators, not the realities of Southeast Asia’s crowded sea lanes or porous land borders. Thus, without agreed limits, smaller states risk inheriting a pace of engagement and a threshold for force that erodes their ability to control events in their own territory and adjacent waters.

Above all, this is an opportunity for ASEAN to assert centrality by shaping the region’s rules and norms before others do. The diplomatic groundwork already exists: as this year’s ASEAN chair, Malaysia brought military AI into the foreign ministers’ text, with the 58th AMM joint communiqué welcoming February’s ASEAN Defence Ministers’ statement on AI and expressing concern about the “negative consequences and impact of autonomous weapon systems on global security and regional and international stability”. To build on this foundation, ASEAN must now take bold, practical steps to turn its AI governance vision into enforceable regional rules and secure regional stability.

Time to agree

Rather than banning autonomy outright, the region should pursue a narrow, time-bound agreement that curbs the riskiest applications while preserving human authority over lethal force. Modelled on the Southeast Asia Nuclear Weapon-Free Zone (SEANWFZ), this agreement would be focused, region-specific, and designed to influence behaviour in a narrow context.

The agreement should draw a firm line against systems that can select and strike human targets without further oversight. It must mandate a human veto for any lethal action, ensuring decisions about life and death remain with people as much as possible. Basic transparency measures, such as test notifications and an incidents hotline, would build trust and reduce the risk of miscalculation in the South China Sea or along contested borders. Lastly, agreeing on a moratorium of at least three to five years on the riskiest LAWS deployments would give ASEAN states time to develop doctrines, legal frameworks, and safeguards.

The framework can be lean. Requiring only a threshold number of ratifications for entry into force would allow a coalition of committed states to lead, setting a regional norm. External powers operating in Southeast Asia could be invited to observe and respect these rules, amplifying the treaty’s norm-shaping influence. The goal is not perfection but establishing that in this region, lethal autonomy comes with clear limits and accountability.

This is ASEAN’s chance to act before a crisis, not after. Without action, competitive pressures and imported technologies will erode human oversight, ceding control to software by default. The proposed agreement would keep human responsibility and judgment at the heart of the most consequential decisions, preserve space for political intervention in fast-moving conflicts, and assert ASEAN’s centrality in shaping its security landscape. By acting now, ASEAN can ensure its rules, not others’, govern AI’s role in Southeast Asia’s battlefields.
