By Nurbanu Hayır
In 2018, a group of experts convened under the framework of the Heinrich Böll Foundation published a report on autonomy in weapon systems. Because the report is a policy recommendation to the German government on the legality of autonomous weapon systems (AWS), it turns on how AWS are defined. After defining AWS, drawing on the International Committee of the Red Cross, as “any weapon system with autonomy in the critical functions of target selection and engagement”, the report summarizes specific characteristics that it says “keep” some weapon systems “distinct” from the fully autonomous systems “that raise concerns” under international law. It enumerates these characteristics as (1) use of the weapon system in “highly structured and predictable environments”; (2) inability to “dynamically initiate a new targeting goal”; (3) constant human supervision; and (4) anti-material uses of the weapon system, and it argues that systems exhibiting them do not qualify as AWS.
This article claims that these distinguishing characteristics muddle the debate on what AWS are and whether they are illegal. Weapon systems with autonomy in their critical functions, i.e., systems that can “select (i.e. search for or detect, identify, track) and attack (i.e. intercept, use force against, neutralise, damage or destroy) targets without human intervention”, should be defined as autonomous weapon systems irrespective of these characteristics. The four characteristics do not show that a particular system lacks autonomy in its critical functions; they show only that the use of a given AWS might be legal under International Humanitarian Law (IHL).
The purpose of this article is not to argue that everything qualifying as an AWS is illegal, but that everything qualifying as an AWS should be regulated under international law. We should not allow AWS to escape regulation through a distorted definition. Considering that an essential part of the discussions held globally is whether AWS require the development of new norms under IHL, defining AWS as broadly as necessary is crucial to determining the scope of application of these new rules.
1. Use of the weapon system in “highly structured and predictable environments”
The use of a system in highly structured and predictable environments may decrease the likelihood of misidentifying targets. Nevertheless, such conditions should not be treated as grounds to refrain from defining these systems as AWS, but rather as elements to consider when deciding whether the use of a particular AWS is legal in casu.
Autonomy is the ability to operate independently of a human operator. It is the product of Artificial Intelligence, a field of study that has allowed machines to take over functions initially performed by humans. One way of achieving this is hand-coded programming, where coders define every rule beforehand; this yields no predictability issues unless an exceptional malfunction occurs. This method, however, is increasingly giving way to machine learning, a coding technique that provides machines with more autonomy. Put very roughly, machine-learning algorithms allow the machine, once humans have supplied data about the environment and the task it must perform, to reach decisions of its own. This has increased predictability problems, since not everything can be pre-programmed by the coder and machine-learning models are not transparent enough for humans to untangle. The machine works through thousands of numerical combinations when deciding, and humans eventually lose track of them due to the limits of their cognition. Thus, although humans set the goal for the machine, they cannot foresee the pathway by which the machine reaches its decision, which might lead to an unpredictable result.
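To make this contrast concrete, here is a minimal, purely illustrative Python sketch; every feature name, threshold, and data point is invented for this example and drawn from no real weapon system. It shows how a hand-coded rule is legible line by line, whereas a learned rule lives in trained numerical weights that no programmer authored:

```python
import numpy as np

# 1) Hand-coded: every rule is written by a human and can be audited line by line.
def hand_coded_is_target(speed_m_s: float, length_m: float) -> bool:
    # Thresholds fixed in advance by an engineer: behaviour is fully
    # predictable short of a malfunction. (Values are hypothetical.)
    return speed_m_s > 200 and length_m < 10

# 2) Learned: the decision rule is fitted from example data instead of written.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))                       # hypothetical sensor features
y = ((X @ np.array([1.5, -2.0])) > 0).astype(float)  # labels from a hidden rule

w = np.zeros(2)
for _ in range(200):                                 # crude logistic-regression training
    p = 1.0 / (1.0 + np.exp(-(X @ w)))               # predicted probabilities
    w -= 0.1 * (X.T @ (p - y)) / len(y)              # gradient-descent step

def learned_is_target(features: np.ndarray) -> bool:
    # The "rule" now lives in the numeric weights w, produced by training
    # rather than authored; no line of code states why a given input
    # crosses the decision threshold.
    return bool(1.0 / (1.0 + np.exp(-(features @ w))) > 0.5)

print(hand_coded_is_target(250.0, 8.0))          # True, and the reason is legible
print(learned_is_target(np.array([1.0, -1.0])))  # True, but the "why" is buried in w
```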
This foreseeability issue is particularly important because, given the limits of current technology, systems are prone to misidentifying targets. Machines’ perception of the environment remains radically different from that of humans: a machine recognizes an object through hundreds of small squares of numerical brightness values (pixels), whereas humans see and interpret objects in a cognitive way that machines cannot match. When such perception is used for target recognition in weapon systems, it has serious repercussions in the form of misidentified targets. Target recognition is just as important as target engagement in determining whether a weapon system qualifies as an AWS. Even where a human may intervene at the target engagement phase, if target recognition is completely independent of humans, the decision to engage will rely heavily on the target recognized by the autonomous function. Autonomous target recognition in the critical function of selecting targets should therefore be sufficient to define the system as an AWS.
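As a minimal illustration of this difference in perception (the array, the toy recognizer, and the threshold below are invented for this sketch and taken from no real system), the following shows that an object reaches a machine only as a grid of pixel numbers, and that a numerically tiny change a human would never notice can flip the classification:

```python
import numpy as np

# To a machine, a frame is nothing but numbers: a toy 4x4 grayscale "image".
image = np.array([
    [  0,   0, 200,   0],
    [  0, 200, 200,   0],
    [200, 200, 200, 200],
    [  0,   0,   0,   0],
], dtype=float)

def recognizer(frame: np.ndarray) -> str:
    # A deliberately crude stand-in for a recognizer: it reduces the whole
    # pixel grid to a single score and compares it to a threshold.
    return "target" if frame.mean() > 87.0 else "no target"

print(recognizer(image))        # "target"  (mean brightness is 87.5)

# Shifting every pixel by a visually imperceptible amount crosses the
# threshold the other way: misidentification with nothing "looking" different.
print(recognizer(image - 6.0))  # "no target"
```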
Thus, the use of the weapon system in highly structured and predictable environments should not prevent it from being defined as an AWS.
2. Inability to “dynamically initiate a new targeting goal”
Initiating a new targeting goal based on an objective introduced to a system would be a hallmark of near-General AI, i.e., a system that can perform practically all the functions traditionally carried out through human cognitive abilities. Today’s AI is Narrow AI, which can perform only some of the functions a human can. Illustrating a near-General-AI standard, the United Kingdom defines AWS as weapon systems “capable of understanding higher-level intent and direction.” However, the ability to select and attack targets, which is possible with Narrow AI, is sufficient to raise questions of compliance with the IHL principles of distinction, proportionality, and precaution; no General-AI system is needed. For instance, a weapon system supplied with image and speed parameters so that it can autonomously recognize and engage a target raises questions under the principle of distinction, as it is uncertain whether it can properly distinguish between lawful and unlawful targets. Although such a system is not capable of understanding the goal behind the operator’s command, it nevertheless raises concerns under IHL.
Thus, as above, the limits of today’s technology should not prevent defining a system as an AWS. Even though a system may be incapable of “dynamically initiating a new targeting goal”, it may still have the autonomy to recognize or engage a target, which is likely to cause issues under IHL independently of the high-level complexity required by some States.
3. Constant human supervision
Although constantly exercised human supervision may rule out autonomy entirely, the fact that a system allows for human supervision, or is supervised only from time to time, does not render it non-autonomous per se. Many weapon systems with autonomy are able to operate in autonomous mode and sometimes do. More importantly, human supervision may be exercised over functions unrelated to targeting. A good example is active protection systems (APS), which are designed to protect armored vehicles at a speed that surpasses the human capability to detect targets. Though human supervision is possible, the very aim behind APS is to engage targets faster than humans can, so they usually operate without human supervision in target engagement. Where human supervision over the targeting functions is deliberately limited in this way, the weapon system should still be defined as an AWS.
Further, it is unclear how much reliance the human operator will place in the weapon system. Concerns about automation bias also support the view that human supervision, unless it rules out the system’s ability to operate independently, cannot be a ground for disregarding the autonomy in current weapon systems’ functions.
4. Anti-material uses of the weapon system
IHL protects civilians and civilian objects under the principles of distinction, proportionality, and precaution, which apply to both the design and the use of weapon systems during armed conflicts. If a weapon system is constrained by design not to be used against humans, there will arguably be no issues concerning the protection of civilians during armed conflict. Yet civilian objects (e.g., an operational hospital) might still be threatened. Further, civilian presence is independent of the characteristics of the weapon system itself. Thus, the target type cannot be a ground to claim that a weapon system is not autonomous; it can at most render the use of that weapon compliant with IHL.
Further, some weapon systems are not constrained by design in this way; their deployment areas simply happen to be scarcely populated by humans. This is the case for the US Phalanx Close-In Weapon System (Phalanx), deployed in naval areas with almost no civilian presence. The targeting software of Phalanx can select and attack targets on its own. The fact that it does so in naval areas does not mean that it lacks autonomy in its critical functions; it signifies that its use in autonomous mode is likely to comply with IHL rules. Even so, there are instances where Phalanx misidentified its targets and opened friendly fire.
Hence, the fact that a system is used as an anti-material weapon is sometimes unrelated to its design; and even where it is a matter of design, it does not always mean that consequences in violation of IHL are impossible.
Conclusion
The Heinrich Böll Foundation’s summary of the characteristics that distinguish current weapon systems from AWS demonstrates a phenomenon in the debate on the definition of AWS that should be eliminated: the definition of an AWS must be independent of the criteria that are likely to render its use legal under the IHL norms on the use of such weapons. The use of the weapon system in “highly structured and predictable environments”, its inability to “dynamically initiate a new targeting goal”, “constant human supervision” over the weapon system, and “anti-material uses” of the system are merely factors that increase the likelihood of an AWS’s compliance with IHL. They do not mean that a particular system lacks autonomy in its critical functions of target selection and attack. This clarification is particularly important because once a system is excluded from the definition of AWS, it can no longer be included in the scope of application of the emerging rules on AWS.
Views expressed in this article are the author’s own and are not representative of the official views of Jus Cogens Blog or any other institute or organization that the author may be affiliated with.