
11.04.18
Drawing a line: Article 36 statement to UN talks on killer robots
By Elizabeth Minor
At the UN in Geneva this week, states are discussing issues of concern and ways forward on the issue of Lethal Autonomous Weapons Systems, under the framework of the Convention on Certain Conventional Weapons (CCW). Read our intervention below, given on the third day of talks by Maya Brehm, Advisor at Article 36.
Statement by Article 36 to the Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS) of the Convention on Certain Conventional Weapons (CCW)
Geneva, 11 April 2018
Thank you, Chairperson,

Article 36 is encouraged by the detail of the discussion this week and by the clear move to a state-led, policy-orientated discussion.
We are also encouraged by the many states that have expressed their preference for centering this discussion on human beings and the need to ensure the necessary human control over the use of force.
Our rationale for participating in this discussion is the concern that developments leading toward increasing autonomy in weapon systems will bring about technologies, applications and human-machine configurations that diminish human involvement in the use of force to an unacceptable level in ethical, legal, security and other regards.
The key policy objective for us, therefore, is to draw a normative line against such unacceptable developments.
We believe that the best “criteria” to demarcate the acceptable from the unacceptable have to do with human control. It is a human-control-based line that should provide the basis for a prohibition of that which is unacceptable. And an understanding of human-control requirements should guide the use of systems that are not prohibited.
In light of the concerns expressed by many states parties and given that the majority of them have expressed that human decision-making, control and/or judgment are critical to the acceptability and the legality of weapon systems with autonomy in critical functions, any policy response should give focused attention to the characteristics of the required human involvement.
We have previously published key elements of meaningful human control that we think are important. These include aspects of reliable and predictable machine design and development – as well as operational requirements, such as a commander’s ability to understand the likely time and place of any actual applications of force, sufficient to make legal assessments and moral judgments. We also recognize the importance of systems and their uses being embedded within a framework of accountability.
We are pleased to see aspects of this reflected in statements by some delegations this week. We have heard some states recognising that human control is to be manifested at various stages, including during development (for example in the form of programming constraints) and during various stages of use (notably in the targeting process).
We strongly agree that human control cannot fully be encoded into a machine, and that how target objects are encoded into a system, and how that is understood by an operator, impacts human control in practice.
We would also note again that the legal framework of International Humanitarian Law applies to humans, and requires human legal assessments in relation to individual attacks, on the basis of the circumstances prevailing at the time of the attack. We must not allow a movement toward technologies to undermine that structure of human legal application.
This has important implications for the level at which a military objective is defined, and over what period of time a system can perform functions without active human intervention.
We are encouraged by the debate here today, and we encourage delegations to elaborate further on how they consider the necessary or “sufficient” level of human control can be described. We would benefit from further explanation of what is understood by terms such as “ultimate” or “final decisions” or “weapons release” – terms that have been used by certain delegations but where different interpretations could make a significant difference to the positions expressed:
For example, is a human making an “ultimate” decision regarding the use of force if they “turn on” an AWS without knowledge of where, when and what, specifically, it may strike?
Or does making an ultimate decision regarding the use of force imply human knowledge of the particular person(s) or object(s) against which force can or will be applied, including their location at the time of a strike?
There are many potential interpretations between these two poles.
Likewise, it should be clear that affirming that “humans should continue to be able to make ultimate decisions” is, of course, very different from stating that they must make such decisions.
We welcome a more detailed and focused discussion developing here and believe this discussion is fundamental to reaching a collective normative understanding about where appropriate constraints on autonomy in weapon systems could and should lie to prevent unacceptable developments.
Thank you.