Chapter 3

Autonomy in weapons systems

‘Autonomy in weapons systems’ – a primer

A tank with a small cannon and the inscription ‘Milrem Robotics’ in a meadow with tall grass. Small trees can be seen in the background.

Type-X UGV.

Source: Milrem Robotics, CC BY-SA 4.0

Type-X is a 12-tonne-class unmanned ground vehicle (UGV) under development by the Estonian manufacturer Milrem Robotics. Its modular design allows it to carry various weapons systems, such as autocannons, mortars and anti-tank missiles. Its ‘Follow me’ mode allows autonomous movement, shifting the first step of the targeting cycle from human to machine.

Arriving at a multilateral consensus on new binding international law regarding autonomy in weapons systems is difficult, mostly due to the enormous military significance ascribed to the issue. This pertains to consensus among the five permanent members of the UN Security Council, but also to other countries with technologically advanced militaries such as, to name just three examples, Israel, Japan and Australia. This hurdle itself is not new, of course; it was observed in other regulatory processes in the recent past, such as those on landmines, cluster munitions and blinding laser weapons, with the latter achieved within the framework of the United Nations in Geneva. That said, blinding lasers always represented an exotic niche capability that states could forego at no perceived significant military cost. Landmines and cluster munitions, too, had specific fields of use and were dispensable, or at least partly substitutable, in the eyes of many states. This is not the case with weapon autonomy. Its impact is perceived to be game-changing for militaries around the globe, comparable to the transition from the horse to the internal combustion engine, and sometimes even mentioned in the same breath as gunpowder and nuclear weapons.1

A reliable autonomous completion of the targeting cycle that complies with the obligations of international humanitarian law (IHL) is not yet technologically feasible. Few, if any, representatives of the relevant technical fields believe it currently possible for a machine to reliably discriminate between lawful and unlawful targets (e.g. between combatants and civilians – which can be extremely difficult even for humans, because it depends on context and an understanding of social meaning) and to make assessments as to the appropriateness of using military means. Moreover, pattern recognition systems based on deep neural networks, which represent the current state of the art in automated image recognition, have proved extremely susceptible to manipulation.2
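To illustrate this susceptibility, the following is a minimal sketch in Python (using the PyTorch library) of the ‘fast gradient sign method’ (FGSM), one well-known class of adversarial attack on image classifiers. The tiny linear model, the random input image and the chosen numbers are placeholders for illustration only, not a real targeting system; the principle, however, is the same: a perturbation too small for a human to notice can change what the machine ‘sees’.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder classifier: one linear layer over a 28x28 greyscale image.
# A real image-recognition system would be a deep convolutional network.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # placeholder input image
label = torch.tensor([3])                             # placeholder 'true' class

# Take the gradient of the loss with respect to the *input*, not the weights.
loss = F.cross_entropy(model(image), label)
loss.backward()

# Nudge every pixel a small step in the direction that increases the loss.
epsilon = 0.05  # small enough to be near-invisible to a human observer
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

with torch.no_grad():
    print("prediction before attack:", model(image).argmax(dim=1).item())
    print("prediction after attack: ", model(adversarial).argmax(dim=1).item())
```

With this untrained toy model the prediction may or may not flip on any given run; against trained deep networks, however, perturbations of this kind have been shown to reliably change classifications while remaining imperceptible to humans – which is why such systems are considered manipulable in adversarial settings such as warfare.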

There is considerable legal debate concerning weapon autonomy. Since errors stemming from software and hardware, the fog of war or enemy interference are unavoidable, Lethal Autonomous Weapons Systems (LAWS) carry the risk of causing undue harm to civilians and of creating an unacceptable ‘responsibility gap’ in the event of IHL violations. And since the current body of law addresses humans, prompting them to make decisions, simply delegating those same decisions to machines, which are not subjects under the law, seems improper – and makes the creation of new law a prerequisite.3

Ethical Implications

Some researchers contend that viewing weapon autonomy solely through the lens of IHL misses the key point – namely that the delegation of kill decisions infringes on a more fundamental norm: human dignity. The narrow focus on discrimination implies that, as long as civilians remain unharmed, attacking combatants using algorithms could be acceptable. But combatants, too, are imbued with human dignity – and being killed by a mindless machine that is not a moral agent infringes on that dignity. Algorithmic targeting of humans reduces them to data points and strips them of their right to be recognised as fellow human beings when they are wounded or killed. This matters, especially from a wider societal point of view, because modern warfare, particularly in democracies, is already decoupling societies from war in terms of political and financial costs. A society that also outsources moral costs by no longer even concerning itself with the act of killing, with no individual combatant burdened by the accompanying responsibility, risks losing touch not only with democratic norms but with fundamental humanitarian norms as well.4

Security Implications

A strategically relevant implication of machines completing the targeting cycle autonomously is that it might become impossible for humans to intervene as a circuit breaker if operations at machine speed go awry. Weapon autonomy runs the risk of unpredictable outcomes, with a real possibility of swift and unwanted escalation from crisis to war or, within existing armed conflicts, to higher levels of violence. This risk of ‘flash wars’ is perceived as the biggest incentive for regulation in many countries around the world. And it is not a problem of the distant future. At the Dubai Airshow in 2019, the then chief of staff of the US Air Force, General David Goldfein, presented the simulated engagement of an enemy navy vessel via a next-to-fully automated kill chain. The vessel was first picked up by a satellite, then target data was relayed to airborne surveillance as well as command and control assets. A US Navy destroyer was then tasked with firing a missile – the only point in this targeting cycle that involved a human decision – with the rest of the ‘kill chain … completed machine to machine, at the speed of light’.5

The Arms Control Debate at the United Nations

Awareness of the implications of weapon autonomy first grew in expert circles and developed into an established field of research in the 2000s. An important milestone in achieving wider recognition of the subject was the formation of the International Committee for Robot Arms Control (ICRAC) in 2009 and its first international conference in Berlin in 2010.

Group portrait with a total of 32 white people in three rows, including three women.

Group photo of the participants at the first ICRAC conference, October 2010.

Source: Courtesy of ICRAC

ICRAC is a global network of scholars (of which Frank Sauer has been a member since 2010) working on the topic from the vantage points of their various disciplines. In 2012, the US Department of Defense presented the first doctrine on autonomy in weapons systems, lending additional credibility to the issue but at the same time also drawing criticism.

Prompted by ICRAC and the concerns voiced by the scientific community, the non-governmental organisation (NGO) Human Rights Watch, a key player in past humanitarian disarmament processes, began forming a global civil society coalition of NGOs – the Campaign to Stop Killer Robots. Its first goal was to get the issue on and further up the arms control and disarmament agenda of the UN in Geneva. It succeeded in doing so with extraordinary swiftness in 2014, and the Convention on Certain Conventional Weapons (CCW) became the diplomatic and scholarly focal point of the global discussion surrounding autonomy in weapons systems.

The issue of weapon autonomy has now been discussed at the CCW for more than ten years under the rubric of ‘Lethal Autonomous Weapons Systems’ (LAWS). It should be noted that the term ‘LAWS’ is problematic. After all, neither ‘lethality’ nor ‘autonomy’ is a decisive factor in the debate. The military application of non-lethal force raises concerns as well (take the prohibition of blinding lasers as just one example), and the term ‘autonomy’, philosophically speaking, inappropriately anthropomorphises machines that have limited agency and are incapable of reasoning and reflecting, let alone taking on responsibility. The term ‘automation’ could just as well be used to describe what is happening, namely the delegation of critical functions from humans to machines. This is what the focus should be on, irrespective of the semantic battles still being fought around the issue.

Picture taken from far back in a large conference room. Many people are seen from behind, looking towards a stage and a very large projector screen in the middle of the picture. In front of the screen on the stage is a small panel with several people who cannot be individually identified.

Informal expert meeting on LAWS at the CCW 2016.

Source: Frank Sauer

Luckily, the functional understanding introduced above has gained considerable traction at the UN level. It has been adopted by the United States in its doctrine, by the International Committee of the Red Cross (ICRC) in its position papers and by a majority of civil society organisations, scholars and diplomats. Generally speaking, we are slowly but surely seeing more convergence in diplomatic talks, resulting in much less ‘talking past each other’ with regard to the regulatory challenge and how to address it. Convergence is also observable regarding the contours of a potential regulation: a two-tiered approach combining prohibitions and regulations is widely discussed as a promising structure.

One, specific applications of weapon autonomy are unacceptable to many members of the international community and would thus be prohibited. The ICRC and numerous states suggest that this pertains to all weapons with human target profiles as well as those with potentially unforeseeable or indiscriminate effects on the battlefield because they are uncontrollable.

Two, autonomous application of force against target profiles other than those intended to represent humans, such as various military objects, is acceptable but requires certain limits and constraints, that is, positive obligations to minimise ethical risks, ensure compliance with IHL and address security and safety concerns. Those limits and constraints can be temporal or spatial and are, generally speaking, subsumed under the notion that ‘meaningful human control’ must be preserved in the design of a weapons system, along with adequate tactics, techniques and procedures for use.

It is, however, crucial to note that there is no such thing as ‘one-size-fits-all meaningful human control’. Operationalising human control is a case-by-case process, and human control has to be ‘baked into’ a system at the design stage. Understanding the human element as human control both ‘by design’ and ‘in use’ thus gives rise to a need to differentiate. On the one hand, defending against lifeless incoming munitions is an application of autonomy that can rely on a weapon’s critical functions being performed without human intervention, provided that this decision is delegated to a machine set up with the spatial and temporal limits appropriate to the operational context. On the other hand, selecting and engaging targets in a cluttered environment at points in time that are hard to ascertain in advance requires much greater human judgment and agency. In other words, in this case humans have to decide with what, when and where to engage, particularly when an application of military force could endanger human life.

In sum, while the conceptual struggles around weapon autonomy are less of a pressing concern nowadays, the struggle to muster the political will to enact regulation at the UN level is still ongoing.

Footnotes

  1. Sauer, Frank. 2021. “Stepping back from the brink: Why multilateral regulation of autonomy in weapons systems is difficult, yet imperative and feasible”, in: International Review of the Red Cross 102 (913): 235–59.

  2. Sauer, Frank. 2022. “The military rationale for AI”, in: Schörnig, Niklas/Reinhold, Thomas (eds): Armament, Arms Control and Artificial Intelligence: The Impact of Software, Machine Learning and Artificial Intelligence on Armament and Arms Control, 27–38.

  3. International Committee of the Red Cross. 2021. ICRC Position on Autonomous Weapons Systems, Geneva.

  4. Rosert, Elvira/Sauer, Frank. 2019. “Prohibiting Autonomous Weapons: Put Human Dignity First”, in: Global Policy 10 (3): 370–75.

  5. Video: ‘Here’s How the US Air Force Is Automating the Future Kill Chain’, Defense News, 2019, available at: https://www.defensenews.com/video/2019/11/16/heres-how-the-us-air-force-is-automating-the-future-kill-chain-dubai-airshow-2019/.