The following are comments by Richard Moyes, Article 36, at the UN CCW GGE on Lethal Autonomous Weapons Systems, 10 March 2022.

Thank you Chair,

I wanted to comment briefly on the direction of this conversation in broad terms, and perhaps to suggest some areas where states could productively focus their attention.

The first point I want to make relates to our understanding and description of the problem – of the challenge before us.  I appreciated the comments made by Chile yesterday – that appealed for us to situate this issue in a wider context regarding the role and regulation of new technologies in society. The extent to which we allow machines to use sensors and algorithms to analyse the world, to apply pre-set labels to things and people, and by that process of classification to apply force to them, has a bearing on our future relationship with technology in many areas of life.

Such technologies implicate the dignity of people subject to automated processing. They create uncertainties as to whether machine classification will accord with human intent – uncertainties that grow over time and over greater geographical scope of application. This in turn connects with challenges for the application of existing legal rules: whether a commander has sufficient understanding of system functioning and the context of use to make the legal judgements necessary in an attack, and for each such attack, and to enable responsibility and accountability.  It raises questions, under the law, as to how “an attack” is to be understood. We have then wider possible risks – of inadvertent engagements, of military escalations – that extend the scope of concerns beyond moments of individual use. And wider still, but connecting back to the individual, we have questions about how in our society we will establish limits on the power given to algorithms and automated processes to pass judgements on our lives.

I raise these issues at some length now because having an understanding of a problem is necessary for shaping a response to that problem.  It is notable in some of the proposals that have been tabled in this meeting that there is no actual engagement with the challenges that autonomous weapons technologies raise.  Again, we see substantial comments on “risk mitigation”, for example, without any description of what the actual risks are. In other areas of professional life, in situations where lives are at stake, this wouldn’t be considered adequate or prudent.

So I would encourage states to retain a focus on explicitly recognising and articulating these broad challenges.

And there are some seeds that have been planted here.  In the paper introduced by the United States, for example, we have an explicit acknowledgement that certain autonomous systems may present a risk of unintended engagements against civilians, or civilian objects, as a result of unintended bias in artificial intelligence capabilities.

This seems to be an acknowledgement that would benefit from fuller articulation – beyond a mere noting of potential challenges to IHL earlier in the document.

How do states consider the legality of fielding systems that may contain unintended biases that will result in the targeting of civilians – and how does this relate to the concept of ‘doubt’ regarding military or civilian status under IHL?

The second point I wanted to make relates to the basic structure of response to this issue – in terms of the shape of a policy or legal response.

By the end of last year there was a clear majority of states supporting a structure of response that was founded on recognising some AWS as unacceptable and others as needing to be used under specific regulations in order to enable compliance with existing legal rules and to ensure accountability and responsibility.

This ‘two-tier’ structure of response was evident amongst states that supported a legal instrument and those that preferred a policy instrument or a politically binding document.

And some of the key lines of demarcation in that structure were also subject to broad agreement.  There was a recognition that systems that cannot be used with sufficient control, or that are not sufficiently predictable, would present unacceptable moral and legal challenges.

There were significant differences of wording around this point – differences with practical implications.  But there was broad conceptual agreement that a line could and should be drawn here.

There was also significant conceptual agreement on key regulations or “positive obligations” that might be considered necessary – requirements to understand the target profiles that a system uses, and to constrain the area and duration of autonomous targeting in order to allow a human commander to make a meaningful judgement about the outcomes of use in a particular context.

For all the apparent disagreement in this forum there are basic structures here that many delegations embrace, and that most, if they could escape their political anxieties, would struggle to honestly reject.

Given the vague mandate of this group, it is clearly valuable to have concrete proposals on the table; we appreciate the work that states have put into such documents, and we hope that further such proposals will be developed over the period ahead.  At Article 36, as part of Stop Killer Robots, we are clear in our perspective that new legal rules are needed on this issue.  And it seems as though this is the position of the majority of CCW parties, even if this forum does not allow work directly in that direction – and we particularly welcome proposals and interventions that promote the need for new legal rules.

We would encourage all delegations to work further to explicitly recognise the broad challenges posed by autonomous weapons – as this is a necessary starting point for an adequate and coherent response. And we would encourage delegations to further reinforce and build upon the structure of prohibitions and positive obligations that can provide the basis for a response to this issue.

Finally, we would urge states to give greater consideration to the challenges posed in targeting humans directly with autonomous weapons systems.  These problems include the risks associated with unintended bias in AI processing that are acknowledged in some of the papers, as I have noted already.  Addressing directly the specific issues associated with machines targeting people is vital to the credibility of a response to this issue in the long term.

Thank you Chair.

Featured image: A night view of the “allée des drapeaux” of the United Nations in Geneva. 14 February 2014. UN Photo / Jean-Marc Ferré