Why We Need to Stop Distinguishing Current Weapon Systems from Autonomous Weapon Systems

By Nurbanu Hayır

In 2018, a group of experts working under the framework of the Heinrich Böll Foundation published a report on autonomy in weapon systems. As the report is a policy recommendation to the German government on the legality of autonomous weapon systems (AWS), it reflects on their definition. After defining AWS, following the International Committee of the Red Cross, as “any weapon system with autonomy in the critical functions of target selection and engagement,” the report summarizes specific characteristics of some weapon systems that it says “keep them distinct” from fully autonomous systems “that raise concerns” under international law. It enumerates these characteristics as (1) use of the weapon system in “highly structured and predictable environments,” (2) inability to “dynamically initiate a new targeting goal,” (3) constant human supervision, and (4) anti-material uses of the weapon system, in order to argue that such systems do not qualify as AWS.

This article claims that these distinguishing characteristics muddle the debate on what AWS are and whether they are illegal. Weapon systems with autonomy in their critical functions, i.e., systems that can “select (i.e. search for or detect, identify, track) and attack (i.e. intercept, use force against, neutralise, damage or destroy) targets without human intervention,” should be defined as autonomous weapon systems irrespective of these characteristics. The characteristics do not show that a particular system lacks autonomy in its critical functions; they only suggest that the use of such an AWS might be legal under International Humanitarian Law (IHL).

The purpose of this article is not to argue that everything that qualifies as an AWS is illegal, but rather that everything that qualifies as an AWS should be regulated under international law. We should not allow AWS to escape regulation through a distorted definition. Considering that an essential part of the global discussions is whether AWS require the development of new norms under IHL, defining AWS as broadly as necessary is crucial for determining the scope of application of these new rules.

1. Use of the weapon system in highly structured and predictable environments

The use of a system in highly structured and predictable environments is likely to decrease the likelihood of misidentifying targets. Nevertheless, this characteristic should not be treated as a reason to refrain from defining such systems as AWS, but rather as an element to consider when deciding whether the use of a particular AWS is legal in casu.

Autonomy is the ability to operate independently of a human operator. It is a product of artificial intelligence, a field of study that has allowed machines to take over functions initially performed by humans. One way of achieving this is hand-coded programming, where coders define every rule beforehand; this raises no predictability issues unless an exceptional malfunction occurs. However, this method is increasingly giving way to machine learning, a technique that provides more autonomy to machines. To put it very roughly, machine-learning algorithms, which work through series of combinations to solve a function much as in an algebra class, allow the machine to make its own decisions after receiving, with human assistance, data about the environment and the task it must perform. This has increased predictability issues, since not everything can be pre-programmed by the coder and machine-learning models are not transparent enough for humans to untangle. The machine works through thousands of combinations when deciding, and humans eventually lose track due to the limits of their cognition. Thus, although humans set the goal for the machine, they cannot foresee the pathway by which the machine reaches its decision, which may lead to an unpredictable result.
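
To illustrate the difference in rough, non-technical terms, the following toy Python sketch contrasts a hand-coded rule, whose conditions a human can read back line by line, with a minimal machine-learned classifier whose behaviour is encoded only in fitted numeric weights. It is purely illustrative: the feature names, thresholds, and data are hypothetical and bear no relation to any real weapon system.

    # Minimal illustrative sketch (not any real targeting software): it contrasts a
    # hand-coded rule with a toy machine-learned classifier whose behaviour lives
    # in opaque numeric weights. All feature names and thresholds are hypothetical.
    import math

    def hand_coded_rule(speed: float, length: float) -> bool:
        # Every condition was written in advance by a programmer and can be audited.
        return speed > 200.0 and length > 10.0

    def train_learned_rule(samples, labels, steps=5000, lr=0.1):
        # Toy logistic regression: features are assumed pre-scaled to roughly [0, 1].
        # The result is a pair of weights and a bias -- numbers fitted to data,
        # not rules a reviewer can trace the way they can trace hand-written code.
        w1, w2, b = 0.0, 0.0, 0.0
        for _ in range(steps):
            for (x1, x2), y in zip(samples, labels):
                p = 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b)))
                w1 -= lr * (p - y) * x1
                w2 -= lr * (p - y) * x2
                b -= lr * (p - y)
        return w1, w2, b

    # The learned numbers separate this toy data, but reading them reveals little
    # about why a new, unusual input will be classified one way or the other.
    data = [(0.9, 0.8), (0.8, 0.9), (0.1, 0.2), (0.2, 0.1)]
    labels = [1, 1, 0, 0]
    print(train_learned_rule(data, labels))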

This foreseeability issue is particularly important because systems are likely to misidentify targets given the limits of current technology. Machines’ perception of the environment remains radically different from that of humans: they recognize an object from grids of light and dark squares (pixels), whereas humans see and interpret objects in a cognitive way that machines cannot match. When such perception is used for target recognition in weapon systems, it carries serious risks of machines “misidentifying” targets. Target recognition is as important as target engagement in determining whether a weapon system qualifies as an AWS. Although a human may intervene in the target engagement phase, where target recognition is completely independent of humans, the decision to engage will rely heavily on the target recognized by the autonomous function. Autonomous performance of the critical function of selecting targets should therefore be sufficient to define the system as an AWS.
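
The point about machine perception can likewise be sketched in a few lines of illustrative Python: to the machine, an “image” is only a grid of brightness numbers, and a naive recognizer built on such numbers can be satisfied by the wrong pattern. The 5x5 “images” and the threshold rule below are entirely hypothetical.

    # Illustrative sketch only: a machine "sees" an image as a grid of brightness
    # numbers and applies arithmetic to it; the 5x5 "images" below are made up.
    def looks_like_target(image, threshold=0.5, min_bright=6):
        # Naive recognizer: count pixels brighter than a threshold.
        bright = sum(1 for row in image for px in row if px > threshold)
        return bright >= min_bright

    vehicle = [[0.9 if 1 <= c <= 3 and 1 <= r <= 2 else 0.1 for c in range(5)] for r in range(5)]
    glare   = [[0.8 if (r + c) % 2 == 0 else 0.1 for c in range(5)] for r in range(5)]

    print(looks_like_target(vehicle))  # True
    print(looks_like_target(glare))    # also True: a mere pattern of bright pixels
                                       # satisfies the rule, illustrating how
                                       # pixel-level perception can misidentify objects.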

Thus, the use of the weapon system in highly structured and predictable environments should not prevent it from being defined as an AWS.

2. Inability to “dynamically initiate a new targeting goal”

Initiating a new targeting goal based on an objective introduced to a system is a good example of near-General AI, which could perform practically all the functions traditionally performed through human cognitive abilities. Today’s AI is Narrow AI, which can perform only some of the functions that a human can. Reflecting a near-General AI standard, the United Kingdom defines AWS as weapon systems “capable of understanding higher-level intent and direction.” However, the ability to select and attack targets, which is possible with Narrow AI, is sufficient to raise questions of compliance with the IHL principles of distinction, proportionality, and precaution, without any need for a General AI system. For instance, a weapon system that is fed image and speed parameters so that it can autonomously recognize and engage a target raises questions under the principle of distinction, as it is uncertain whether it can properly distinguish between lawful and unlawful targets. Although such a system is not capable of understanding the goal of the operator’s command, it nevertheless raises concerns under IHL.

Thus, as above, the limits of today’s technology should not prevent defining a system as an AWS. For instance, although a system may be incapable of “dynamically initiating a new targeting goal,” it may still have the autonomy to recognize or engage a target, which is likely to cause issues under IHL independently of the high-level complexity required by some States.

3. Constant human supervision

Although human supervision may, in some cases, rule out autonomy entirely, the fact that a system allows for human supervision does not render it non-autonomous per se. There are many weapon systems that are able to operate in autonomous mode and sometimes do. More importantly, human supervision may be exercised over functions unrelated to targeting. A good example is active protection systems (APS), which are designed to protect armored vehicles at speeds that exceed the human capability to detect targets. Though human supervision remains possible, the aim behind APS is to engage targets faster than humans can, so they usually operate autonomously, without human supervision, in target engagement. Human supervision is thus deliberately limited in the targeting functions, and such a weapon system should still be defined as an AWS.

Further, it is unclear how much reliance the human operator will place on the weapon system. Concerns about automation bias also support the view that human supervision, unless it rules out the system’s ability to operate independently, cannot be a ground for disregarding autonomy in current weapon systems’ functions.

4. Anti-material uses of the weapon

IHL protects civilians and civilian objects under the principles of distinction, proportionality, and precaution, which apply to both the design and the use of weapon systems during armed conflicts. If a weapon system is constrained by design so that it is not directed against humans, there will arguably be no issues concerning the protection of civilians themselves during armed conflict. Yet civilian objects (e.g., an operational hospital) might still be threatened. Further, civilian presence is independent of the characteristics of the weapon system. Thus, the target type cannot be a ground for claiming that a weapon system is not autonomous; it is, at most, a factor that may render the use of that weapon compliant with IHL.

Further, some weapon systems are not constrained by design to anti-material targets, but their deployment area happens to be scarcely populated by humans. This is the case for the US Phalanx Close-In Weapon System (Phalanx), deployed in naval areas with almost no civilian presence. Phalanx’s targeting software can select and attack its targets. The fact that it does so in naval areas does not mean that it lacks autonomy in its critical functions; it signifies only that its use in autonomous mode is likely to comply with IHL rules. Even so, there are instances where Phalanx misidentified its targets and fired on friendly forces.

Hence, the fact that a system is used as an anti-material weapon is not only sometimes unrelated to its design; it also does not always mean that consequences in violation of IHL are impossible.

Conclusion

The Heinrich Böll Foundation’s summary of the characteristics that distinguish current weapon systems from AWS demonstrates a tendency in the debate on the definition of AWS that should be eliminated: the definition of an AWS must be independent of the criteria that are likely to render its use legal under the norms of IHL on the use of such weapons. Use of a weapon system in “highly structured and predictable environments,” the system’s inability to “dynamically initiate a new targeting goal,” “constant human supervision” over the weapon system, and “anti-material uses” of the system are merely factors that increase the likelihood of the AWS complying with IHL. They do not mean that a particular system lacks autonomy in its critical functions of target selection and attack. This is particularly important to clarify because once a system is excluded from the definition of AWS, it can no longer be brought within the scope of application of the emerging rules on AWS.

Views expressed in this article are the author’s own and are not representative of the official views of Jus Cogens Blog or any other institute or organization that the author may be affiliated with. 

Weapon Systems with Autonomous Functions and the Martens Clause: Is the use of these weapons in line with the principles of humanity and the dictates of public conscience?

By Clea Strydom

[Clea Strydom completed her B.A. Law and LL.B at Stellenbosch University, South Africa, before writing her LL.M dissertation on the International Humanitarian Law implications of weapon systems with autonomous functions through the University of Johannesburg, South Africa.]

Introduction

States are increasingly implementing artificial intelligence (AI) to pursue autonomy in weapon systems for armed conflict for various reasons, including faster reaction times, faster data collection and processing, and the ability to use robots instead of risking human combatants’ lives. There are, however, concerns that weapon systems with autonomous functions cannot be used in compliance with International Humanitarian Law (IHL), that it is unethical for machines to lethally target humans, and that their use could lead to an accountability gap. Therefore, there has been an ongoing debate about whether to ban the development of these weapon systems. The mere fact that these systems have autonomy is not the issue on which the legal debate is focused; rather, it is the delegation of critical functions, i.e., acquiring, tracking, selecting, and attacking targets, to weapon systems that is of concern. The ICRC has correctly identified that “ethics, humanity and the dictates of the public conscience are at the heart of the debate about the acceptability of autonomous weapon systems.”

Weapon Systems with Autonomous Functions

Autonomy in weapon systems should not be seen as a mere development of conventional weapons; rather, it is a paradigm shift in weapons technology that could change warfare drastically. Autonomy in weapon systems does not denote a specific new weapon but rather a shift of control over critical functions from humans to weapon systems. This concerns a change in how warfare is conducted. While the most widely used terms are Lethal Autonomous Weapon Systems (LAWS) or Autonomous Weapon Systems (AWS), ascribing autonomy to the whole system is problematic. It should be kept in mind that autonomy is not a type of technology but a characteristic of technology, related to certain functions rather than attached to the object itself. Because of the problems with ascribing autonomy to the system as a whole, Andrew Williams suggests referring to “autonomous functioning in a system” in general, or “systems with autonomous functions” when referring to a specific platform or system. The author has therefore adopted the term weapon systems with autonomous functions (WSAF), as it indicates that the whole machine is not autonomous, but rather that it can perform certain functions with varying degrees of human interference, depending on factors such as the system’s design or intelligence, the external environmental conditions in which it will be required to operate, the nature and complexity of the mission, and policy and legal regulations. It must be kept in mind that while autonomy in weapon systems is being pursued by several States, weapon systems that can perform critical functions autonomously are still a thing of the future. Therefore, the debate, including the advantages and disadvantages of autonomy in weapon systems, is at this stage still speculative.

The Martens Clause

The Martens Clause made its first appearance in the 1899 Hague Convention II and has since been included in Article 1(2) of Additional Protocol I to the Geneva Conventions:

“In cases not covered by this Protocol or by other international agreements, civilians and combatants remain under the protection and authority of the principles of international law derived from established custom, from the principles of humanity and from the dictates of public conscience”.

The International Court of Justice, in its Legality of the Threat or Use of Nuclear Weapons Advisory Opinion, confirmed that the principle contained in the Martens Clause is customary IHL and held that it “proved to be an effective means of addressing rapid evolution of military technology”. Concerning WSAF, the crux is whether the delegation of life and death decisions to a robot would be in line with the dictates of public conscience and the principles of humanity.

Professor Michel Veuthey has highlighted the importance of public conscience in IHL, identifying that it can trigger the codification of IHL principles, act as an impetus for the implementation and enforcement of IHL, and provide a safeguard for all situations not provided for or considered in the law. On the other side of the argument, Michael Schmitt argues that the Martens Clause only applies in the absence of applicable law in the Geneva Conventions and Additional Protocols or in international agreements such as treaties, and that since 1899, when the Martens Clause first appeared, the law relating to weapons has developed to such an extent that it covers all existing and future weapons. As a result, the role of the Martens Clause has diminished. He argues that it is unlikely that any weapon would be found to contravene the Martens Clause if it has been found to comply with IHL and applicable treaties. However, Robin Geiss points out that the IHL principles applicable to weapons are framed in a human-centric manner and might not be able to deal sufficiently with autonomy in weapon systems; the Martens Clause could therefore be used to create new laws or act as a safety net, as Veuthey suggests.

Even if it is accepted that a weapon could be banned based on the Martens Clause, several questions with no clear answers arise: first, how does one determine what the public conscience is, and secondly, which public? It is unlikely that the global public shares a common ‘conscience’. The public conscience and the principles of humanity are not timeless or universal. Several researchers have conducted surveys to try to determine public opinion on the weapon systems in question. Political scientist Michael Horowitz found that public opinion depends on context. In the first round of questions, Horowitz’s survey found that 48% of participants were opposed to “autonomous weapons”. However, once he put the use of the weapons in context and highlighted their benefits, opposition dropped to 27%. In a survey by American roboticist and robo-ethicist Ronald Arkin, participants acknowledged that “autonomous weapon systems” have a role to play, but the majority felt that they should not be allowed to use force. IPSOS, a global market research and public opinion company, has conducted various surveys on views of “killer robots” for Human Rights Watch (HRW) and the Campaign to Stop Killer Robots (which have called for a ban on “weapon systems that can perform critical functions autonomously”). Interestingly, the latest survey, conducted between November 2020 and January 2021 across 28 countries, shows a correlation between opposition and the age of the respondents, with average opposition of 54% among those under 35 years of age and 69% among those aged 50 to 74. This can be indicative of several factors, including that the younger generation is more accepting of technology and that the older population is more likely to have had first-hand experience of the horrors of war.

HRW believes that States should consider these views when reviewing “autonomous weapons”. The perspectives do not create binding rules but may influence treaties and decisions to deploy the weapons. It is important to keep in mind that opinions change over time. While 50 years ago we could not imagine unmanned remote-controlled systems being an integral part of military arsenals as they are today, we have come to accept them to a large extent. Surveys need to be seen in the context of their time, the way the questions are framed, and, in this case, advances in technology. As autonomy in weapon systems develops and the technology becomes more advanced, views on them will change. Armin Krishnan notes, in his book Killer Robots: Legality and Ethicality of Autonomous Weapons, that with “social conditioning” views on WSAF will evolve.

Regarding the principles of humanity, there is concern about the importance of human agency in life and death decisions. Much anxiety exists about losing human control over weapon systems and war in general, which raises questions beyond compliance with the law and asks whether the deployment of such weapon systems is in line with our values. Delegating decisions about life and death may dehumanize armed conflict even further. The concern is that allowing weapon systems to lethally target humans means that those targeted are not treated as unique human beings, which is an affront to human dignity; the late Professor Heyns referred to this as “death by algorithm”. It has also been argued that the anthropocentric formulation of IHL principles implicitly requires human judgment over decisions regarding force.

Conclusion

To date, the Martens Clause has never been used to ban a weapon. It must be kept in mind that at this stage the debate is still very speculative. Weapon systems that can perform critical functions autonomously, however, offer numerous advantages, and it is unlikely that States will refrain from developing and deploying weapons that would give them the upper hand on the basis of personally held views. What the Martens Clause does is remind us that decisions on whether and how to design, develop, and use WSAF must be made in a way that safeguards our values instead of rendering them unsustainable.

Views expressed in this article are the author’s own and are not representative of the official views of Jus Cogens Blog or any other institute or organization that the author may be affiliated with.