How much autonomy is too much when it comes to waging war?

The use of autonomous weapons in war raises concerns about humanitarian consequences and the application of the law.

Photo: Defence Images
Published 17 Dec 2018 

It’s a perfect storm: the confluence of rapidly developing technologies and the search for new ways to wage war.

For humanitarian organisations working in war zones, such as the International Committee of the Red Cross (ICRC), the emergence of new weapons always gives pause for thought. New weapons require reflection on their humanitarian consequences, as well as on how existing law applies to them.

International humanitarian law (IHL) is the body of rules that applies in war to protect those who are not participating in hostilities, including civilians, detainees, and the wounded. Where these “laws of war” and new weapons intersect, a number of ethical and legal questions arise. Lethal autonomous weapon systems – sometimes called “killer robots” – are a prime example.

As the ICRC sees it, autonomous weapons are those that can select and attack a target without human intervention. Loitering munitions, like the Harpy, are a case in point. A hybrid of a drone and a missile, they can, once launched, operate for hours, loitering over a defined geographic area without needing to communicate back to home base. Programmed to detect specific signals, they can lock on and attack all by themselves.

But if these attacks occur in an armed conflict, the law of war applies, as it does to all weapons. The attack must be proportionate, all feasible precautions to avoid or minimise civilian harm must be taken, and the weapon must not cause unnecessary suffering or superfluous injury, nor be indiscriminate in its targeting.

Can a machine do all that? No. It can carry out a set of programmed instructions, and may even learn from its environment, but it is a human who must apply, and ultimately comply with, the law of war.

And that raises a key question. What is the minimum level of control a human needs to maintain over a weapon in order to comply with the rules of war?

It is far from the only question, but it is one that the ICRC is encouraging states to answer. Other legal questions seek to define the terms of the debate – autonomy, human control – as precisely as possible. On the ethical side, the question is whether a person’s moral agency can or should be delegated to a machine at all – particularly in decisions to kill or injure other humans.

Certainly, there are many things machines already do better than humans. A machine eye can “see” things imperceptible to a human eye, for instance. Some even argue that autonomous weapons could result in humanitarian benefits and better legal compliance – by removing the “fog of war” from decision-making, processing information faster, and increasing precision.

But context-dependent decision-making, mercy, and compassion are beyond the reach of machines. These are unique, complex judgements that can only be made by humans. Take, for example, the rule prohibiting attacks against combatants who are “out of the fight” because they have surrendered or are injured. A machine might be able to identify someone as a soldier, using sensors and image recognition technology. It might even be able to detect whether a combatant is wounded, by looking at heat signatures or patterns of movement. But how does a machine determine whether that combatant is committing a hostile act?

It’s easy to get lost in the complexity of these questions. Yet for the ICRC, the equation is simple. New weapons technologies do not exist in a legal vacuum. Nor are they inherently good or bad; it depends on how they are used. But questions of legal compliance and ethical acceptability need to be addressed urgently, as technology threatens to outpace international discussions.

One practical means of addressing the questions posed by autonomous weapons already exists. Article 36 is barely five lines of text sitting in the first Additional Protocol to the Geneva Conventions. It is a pretty common-sense obligation, requiring states bound by the Protocol to determine whether new weapons or methods of warfare that they develop or acquire would violate international law.

Its purpose is to ensure that states don’t develop or wield weapons that would violate their international obligations, particularly international humanitarian law. But of the 174 countries bound by this Protocol, fewer than 20 are known to conduct these kinds of legal reviews. In the Asia Pacific region, that includes Australia and New Zealand.

The reality remains that very little is known about how many states are complying with this obligation, or how. It’s an issue the ICRC actively pursues in its work around the world, seeking to build trust and confidence in legal reviews as a mechanism for keeping pace with the new weapons rapidly becoming available.

Encouraging countries to talk to each other about weapons reviews in light of advancing technology is a foremost priority. But even national legal reviews cannot take the place of common international understandings about human control and the limits of autonomy in weapon systems.

Now is the moment for countries to agree upon strong, practical and future-proof limits on that autonomy.

* This article was updated following publication. 



