Episode Summary

In this episode, we look at arguments for why we believe it is a problem to have machines targeting people.

Transcript

Uldduz Sohrabi: [00:00:00] Hello and welcome to Article 36’s autonomous weapons podcast series, where we raise a critical voice on weapons. My name is Uldduz Sohrabi and I’m your host today. With me in today’s episode are the managing director at Article 36, Richard Moyes, and one of our main advisors, Elizabeth Minor. If you’re new to Article 36 and would like to get to know our team better, you can find out more about us and the work we do on our website ARTICLE36.ORG.

[00:00:53] Our goal through this podcast series is to bring a better understanding of the concerns surrounding autonomous weapons systems, and to explain the position of Article 36 and its policy framework. Specifically, in this episode we’ll look at the arguments for why we believe it’s a problem to have machines targeting people.

[00:01:19] Now, what it means for a machine to kill someone is ambiguous. It is linked to the notion that there must be some human proximity to that decision, but where that proximity lies is still uncertain. Still, it appears to be a common moral response to feel that machines should not be allowed to kill people. So why don’t we start by clarifying some of the concepts in our discussion today?

[00:01:47] Elizabeth, could you explain what we mean by targeting people, and what scope of weapons we’re actually talking about?

Elizabeth Minor: [00:01:57] Thanks Uldduz. So we’re arguing that there should be a specific prohibition on systems that target people within the whole structure of regulation for autonomy in weapons systems that we think states need to negotiate as an international treaty, which we’ve been discussing in this podcast series.

[00:02:13] So just as a quick reminder of that, we want to see an international treaty negotiated that contains both prohibitions on systems that can’t be meaningfully controlled and on systems that target people, as well as positive obligations to ensure that meaningful human control is kept over other systems. If we think back to our first podcast, on applying force based on sensors, the scope of systems that we think needs to be regulated by this instrument is weapons systems that, after they’ve been activated by a person (so after the human intervention), apply force to targets based on processing inputs from their sensors and matching these to a pattern of information we’ve been referring to as the target profile, which is a simplified representation of an intended target object.

[00:03:00] So a heat shape or a radar signature, for example: it’s the encoding of a human concept, like a tank or a person, in a pattern of data that a system can be tasked to sense and then apply force to. So when we’re talking about a prohibition on systems that target people, we’re talking about these sensor-based weapon systems that are designed or used to sense machine-encoded representations of people as their target objects.

[00:03:27] These kinds of systems might be designed to sense and target any person; a target profile based on a heat shape might serve that purpose, for example. Or their target profiles might be narrower: a system might be designed to use biometrics to target particular individuals, or groups of people with certain characteristics that might be sensed.
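To make the idea of a target profile a little more concrete, here is a minimal, purely hypothetical sketch in Python. It is not drawn from any real weapon system; the profile fields, thresholds and sensor values are invented for illustration. It simply shows what “matching sensor readings against an encoded pattern” can amount to: a band of numbers that any human-sized, human-warm object will satisfy, with no notion of who that person is or what their status might be.

```python
# Hypothetical illustration only: a toy sketch of matching sensor input
# against a machine-encoded "target profile". Invented for this episode's
# explanation; it does not describe any real system.

from dataclasses import dataclass


@dataclass
class TargetProfile:
    """A simplified, machine-encoded representation of an intended target."""
    min_heat: float      # e.g. lower bound of an infrared signature (illustrative)
    max_heat: float
    min_height_m: float  # a crude 'person-sized' band (illustrative)
    max_height_m: float


def matches(profile: TargetProfile, heat: float, height_m: float) -> bool:
    # The system never "sees" a person; it only checks whether incoming
    # sensor numbers fall inside the encoded pattern.
    return (profile.min_heat <= heat <= profile.max_heat
            and profile.min_height_m <= height_m <= profile.max_height_m)


# Any human-sized warm object satisfies this broad profile, regardless of
# whether the person detected is a combatant, a civilian, or wounded.
person_like = TargetProfile(min_heat=30.0, max_heat=45.0,
                            min_height_m=1.2, max_height_m=2.1)
print(matches(person_like, heat=36.5, height_m=1.7))  # True
```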

[00:03:48] So we’re talking about weapon systems that reduce people to objects in order to strike them, essentially. And prohibiting these systems is really one of the key lines of prohibition that we want to see within a treaty. Just to clarify as well: other objects can contain people, for example vehicles and buildings, and we’re not addressing these with this particular component of the regulation.

[00:04:11] For this component about targeting people, we want to concentrate specifically on drawing this kind of moral line against systems that sense and reduce people to objects. Those other systems that might target objects containing people are going to be addressed by other areas of the regulation we want to see.

Uldduz Sohrabi: [00:04:28] I know that various concerns have been raised about autonomy in weapons systems. But I wonder if we could talk about how those different concerns intersect with the issue of targeting people specifically. How do these themes fit together?

Richard Moyes: [00:04:44] Yeah, thanks, Uldduz. We’re often talking about issues in this space in framings around international law and around military decision-making. But I think, perhaps more than other areas of our discussion, this issue of targeting people really resonates with much wider themes in our society about our relationship as people with technology.

[00:05:07] I think for us it’s important to see this issue in that context: how we make decisions about how we kill each other, and how we use technology to mediate between us in the application of force and in the prosecution of violence, has a bearing on how we understand our relationship with technology in other spaces of our lives, away from conflict.

[00:05:27] So when we start talking about machines that are using sensors to identify people basically as objects to apply force to, we’re talking about algorithmic, automated decision-making which affects people’s lives and which brings harm to people.

[00:05:44] And I think that has a whole set of implications in terms of underpinning moral concerns, and also perhaps practical, legal implications out into the wider society. Elizabeth talked about it before in terms of these systems using sensors and basically reducing people to objects, recognizing that target profiles are just a simplified pattern of sensor data. Machines don’t see people as people; they just see a pattern of incoming data, they match that against some sort of programmed profile, and force is applied. There is no recognition of our shared humanity in that process.

[00:06:20] So where we start to see a reliance on sensors to apply force to people, we’re always seeing a form of dehumanization: essentially a non-recognition of the reality of humanity, a reduction of people to objects, an objectification of people as a result. And for us that of course has implications for civilians, but it is also an issue in terms of military personnel. We’re in favour of a blanket prohibition on systems that would target people in this way because these moral concerns apply to civilians who might be inadvertently targeted, but I think they also apply to military troops. So there’s an underpinning moral concern for us in that, but there’s also a set of precautionary orientations for us in approaching this area. If you start to claim that you can distinguish between the people you want to kill and the people you don’t want to kill with these systems, basically asserting that the system is going to identify the ‘baddies’ and it’s not going to hurt the ‘goodies’, you’re immediately looking at systems that are going to be built on datasets which could be prejudicial in their contents, where you get bias embedded in the datasets or bias embedded in the algorithmic processing.

[00:07:29] The process of identifying people to be targeted, or even more so identifying certain types of people to be targeted, will almost immediately open up risks of prejudicial orientation. And it’s probably beyond risks; it’s probably the occurrence of prejudicial targeting. This is then also linked, of course, to how we use labels in society, how we identify certain people or certain groups of people and apply labels to them already. We see in some military practices a movement towards assuming that men of a certain age, or people who could be identified as men on the basis of certain physical characteristics and who fall within a certain age group, are considered combatants as opposed to being considered civilians.

[00:08:15] So already we see certain gender signifiers being taken to suggest that they could be an indicator of combatant status rather than civilian status. I think we would just see more and more of that with any move towards the use of systems that apply force to people using sensors.

[00:08:31] And as a result, we would just see a hardening of these labelling structures in society, which I think has very negative implications. So a blanket prohibition across this whole space is, for us, both morally justified and a precautionary orientation towards certain underpinning broader moral and social concerns, as well as legal concerns that maybe we’ll come on to as well.

Elizabeth Minor: [00:08:56] It’s important to really emphasize that in this space, a lot of the concerns that have been brought up in relation to autonomous weapons relate to worries about bias and discrimination, and how these might be reproduced, perpetuated and augmented by autonomous technologies and decision-making. I mean, we know that our technology isn’t neutral, right? Anything that is produced by humans, created and programmed and set to certain tasks, is not an objective thing. It’s something that we make, and anything that we put into it, including all the oppressions and biases that are current in our society at the moment, is going to be reproduced in the output of these technologies.

[00:09:37] We already know, on a simple level for example, about the unreliability of facial recognition technologies, and that their error rates increase for women and for people of colour. So I suppose we really have to ask ourselves, if there are systems targeting people for the application of force, who is going to be affected by that, and how much is it going to perpetuate and reproduce the oppressions that we already see?

Richard Moyes: [00:10:02] We’re talking about classification and categorization machines, and controlling classifications and categorizations in society is one of the main ways of controlling structures of power and dominance in society.

[00:10:15] So the idea of these machine systems classifying people and applying force to people in this way just feels like a very direct route to embedding existing power structures all the more distinctly in place. And I think we should be resistant to that.

Elizabeth Minor: [00:10:33] Another concern, I suppose, in the area of systems targeting people is this: if you think about all the problems that we have with increasing autonomy in weapons systems, and with sensor-based weapon systems in general, around agency, accountability and responsibility with automation in the use of force, and around the application of force being further from human input and control, many of these concerns feel the most pressing and the most acute when it comes to the targeting of people, when it is directly over the life and death of particular individuals. It raises very profound moral questions around dignity and dehumanization.

[00:11:11] So we think, for various ethical and moral reasons, and in order to rule out and address some of these problems of potential bias and discrimination in the use of such systems, it would just make sense to have a blanket prohibition on systems that target people.

Uldduz Sohrabi: [00:11:29] Besides problems such as the data bias you mentioned, what other risks and challenges might there be on the legal side with systems targeting people?

Richard Moyes: [00:11:42] I don’t think we tend to argue that a prohibition on targeting people is straightforwardly demanded by the terms of existing law. We tend to argue that it’s really required on moral and precautionary grounds, but some of those precautions are really about preserving the legal structure and the current mode of legal application. First of all, it’s vital to recognize that it’s people who make decisions under the law; the law applies to people, and in the case of armed conflict it’s people, the human users of weapon systems, who are responsible for applying legal rules. This means that certain distinctions, such as whether a person is a combatant who can be targeted or a civilian who is not to be targeted, are made in real time by humans on the battlefield. And they have a degree of flexibility and uncertainty to them: somebody can shift from being a combatant who could be targeted to being somebody who can’t be targeted. If they’re injured and incapacitated, or if they lay down their arms and surrender, then their identity shifts.

[00:12:49] So people’s identities, in terms of the validity of applying force to them, of targeting them, are not somehow intrinsic to them; they’re contingent, they’re contextual and they’re uncertain. Systems that purport to make distinctions between people as targetable or not targetable are almost always going to be, in some way, fixing in place and making static determinations and distinctions that actually should remain dynamic and uncertain to a degree. And although people might argue that that uncertainty is perhaps a risk, perhaps a problem, perhaps something we could improve on, I think that’s a rather naive view of the law, because it’s in that uncertainty, and in the active engagement with that uncertainty, that you get the process of human moral application and moral judgment coming to bear each time actual legal decisions are made.

[00:13:41] So in a way, every time we erase that uncertainty and fix those labels in place, we’re also erasing the space for active, live human moral application in the situation, which is fundamental to the legal structure. And I think we need to rely on ourselves as people to want to maintain that frequency and presence of human moral engagement, as a basis for our attempt at some social organization in a circumstance which is inevitably brutal and difficult. So avoiding the fixing in place of these labels is, I think, critical to protecting and preserving the law in the long term.

Uldduz Sohrabi: [00:14:20] Well, today we know that there are weapons systems such as DoDAAM’s Super aEgis II, the South Korean sentry gun that has the technical capability of targeting an object or a human without human intervention.

[00:14:35] If we recall, the senior research engineer for the system said that although it’s used with a human-in-the-loop function, it wasn’t initially designed this way; it was at all of their customers’ request that they decided to implement human input before any launch of force. DoDAAM said that users have so far chosen to implement self-imposed restrictions because they’re concerned that the gun may potentially make a mistake.

[00:15:08] A possible argument is that those weapon systems would only be used in areas where there are no civilians. What are some of the problems with this argument, and what risks would a system such as the Super aEgis II pose if it were eventually to be deployed with its full autonomous capability?

Elizabeth Minor: [00:15:30] Yeah. So I think there are possibly a few kinds of problems and issues here, if the argument is made that a system like that, which could automatically sense and apply force to people, could be used in areas where there are no civilians. Let’s assume, to start with, that an area where there are no civilians could exist.

[00:15:48] I think you still run into some legal technicality issues there in the use of these kinds of systems, which relate to what Richard was talking about. Even if there’s an area which only has military personnel, some of those people can’t be targeted. So for example, if a soldier is wounded on the battlefield, it’s no longer legal to apply force or to strike…

[00:16:08] So this comes back to whether it is accepted, within the current structures of the law that we have, to design and deploy a system where you’re giving the system the task of recognizing people who are no longer targetable, as opposed to soldiers who still are.

[00:16:27] And I think we would argue that you can’t give these decisions of deliberative legal reasoning to machines; that just makes no technical or legal sense. So you can’t delegate that task, because that’s always a human commander’s decision to make.

[00:16:41] So I think it would be problematic in that way to imagine an area where civilians don’t exist and to assume that you could therefore just have a system that targets people without it being legally problematic. Secondly, I think the idea of having an area in which there aren’t civilians is a problem too.

[00:16:59] I suppose there are areas from which military forces might seek to exclude civilians, for the danger to themselves and in order to conduct combat operations there. These have sometimes been referred to as kill boxes, where autonomous weapons systems could operate. This is problematic because it pushes towards putting the burden on civilians to exclude themselves from certain zones, putting the responsibility onto them. Whereas in the law, and in how we think things should be conducted, the responsibility is on parties to a conflict to protect civilians and to distinguish between civilians and combatants. And again, that’s a human piece of decision-making, not something that can be in any way delegated to a machine.

[00:17:40] So I think there are various problematic ideas in this area of using systems that target people in places where there aren’t civilians. I think it’s interesting as well, with your example of the sentry gun used in South Korea, that the operators using the system don’t want to use it in this fully automatic mode, because they see that this could be problematic in different ways.

[00:18:03] As you mentioned, it might target the wrong people, for example civilians or wounded soldiers. Generally, we see that at the moment there are not many of these sensor-based weapon systems that target people around. There are anti-personnel landmines, but there has been significant adoption of a global norm against the production and use of those weapons through the Anti-Personnel Mine Ban Convention, in large part because of the issues that we’re talking about.

[00:18:30] With other technologies like the sentry gun, militaries are currently choosing not to operate them in this fully autonomous way. So I feel that points us in the direction that there’s already an emerging norm here that this isn’t acceptable.

[00:18:44] And there’s an opportunity for us at the moment to draw a strong line against it, with a prohibition on systems targeting people within regulation on autonomy in weapons systems, to make that a really clear red line.

Richard Moyes: [00:18:57] The point that Elizabeth made about this producing a very clear red line, I think is extremely important for us.

[00:19:03] I think when it comes to regulating technologies in this space, there are inevitably going to be some rules and some obligations which are quite broad and open-ended, and which are going to require collective norm-setting in order to shape the technological boundaries in the right direction.

[00:19:21] But to just take off the table the idea that we would have systems that use sensors to target people, to say that we as a society don’t want that, sets a very clear standard. And this is important for thinking about how legal development can come about, because it tends to mean that if a group of states were to develop a treaty that contains this provision amongst others, they would be able to exert, from the clarity of that rule, significant normative effect on states that stand outside the treaty and don’t choose to join it, because there would be a clear sense that something had been identified here that was morally repugnant and whose boundaries were quite straightforward. So I think it’s important for thinking about the political dynamics of a legal instrument that it could have this clear external communicative, normative effect. And the other point that Elizabeth made, which is also critically important in practical terms, is that we haven’t, as an international society, really adopted weapons that function in this way to a significant extent.

[00:20:18] We did with anti-personnel mines, and that was clearly recognized as a huge moral and humanitarian disaster. So we can learn our lesson from that and draw a line across this now, while we have a chance, when it hasn’t been widely adopted and where there aren’t huge investments in it. I think there’s a real opportunity, and that’s something we really need to seize in terms of the timing of a political process on this issue.

Elizabeth Minor: [00:20:43] Drawing this line within the broader set of systems that we’re talking about is really the simplest way of eliminating some of the sharpest concerns with autonomy in weapons systems. And it helps us get to grips with some of the moral lines that are at stake in this area.

[00:20:58] And thinking about the idea of systems operating in areas where there aren’t any civilians, we should remember that the key dehumanization points still stand as an objection to systems that function in this way.

[00:21:10] So whilst there’s a range of factors that come to bear in terms of claims to use systems in areas where civilians aren’t present, there is still that dehumanizing function at work that we think we should be rejecting.

Uldduz Sohrabi: [00:21:25] So we’re arguing for a ban on systems targeting people. But do you think a ban is demanded by existing law or are you seeing a ban as a necessary moral response?

Richard Moyes: [00:21:37] I think we see a ban on systems that target people using sensors as a necessary moral response, and as a necessary precautionary response to a wider set of risks, some of which involve the law. So that’s a little bit different from saying that existing law straightforwardly demands that we come to this conclusion.

[00:21:58] That’s definitely not the way I would argue the law; I tend to be quite conservative in my legal analysis: when I pretend to be a lawyer, I pretend to be a conservative lawyer. So it’s a moral issue first and foremost, in terms of these dehumanization concerns, but then a precautionary orientation.

[00:22:16] Precautionary in relation to issues of bias in datasets or bias in algorithms, and the fixing of social labels about who might be targetable if we were starting to claim to be able to divide people up; and precautionary against the erosion or corrosion of the law as well, in terms of the fixing of categorizations that the law depends on and that we think really should remain flexible and dynamic.

[00:22:43] So this is to do with how we identify somebody as a civilian who should not be targeted versus a combatant who could be targeted. Fixing those kinds of labels in place is not how the law should function, and the more we fix them through these kinds of systems, the more that fixity is going to become embedded and adopted throughout the legal fabric.

[00:23:03] And this also, I think, relates back to some of the issues we were discussing around the idea of targeting people so long as civilians are excluded from the area. Well, this starts to buy into broader structural responses that involve putting the burden on the civilian population to get out of the way of combat. And if we start to think, well, we can just drop some leaflets and then we don’t need to care whether there are any civilians in this area, or we can put up some warning signs and then it’s up to the civilians to get out of the way, then we’re always risking placing the burden of responsibility onto the civilian population to avoid harm to themselves, rather than taking upon ourselves, as the users of force in a given situation, the full responsibility to avoid and prevent harm to people who should be protected.

[00:23:50] So it’s precautionary towards the law in terms of both the fixity of legal labels and the balance of the law’s functional responsibility to protect the vulnerable, rather than just being a structure that enables the military to act as they will.

Uldduz Sohrabi: [00:24:08] Right. So Article 36’s position argues for a prohibition on systems that target people as one of the components in a broader structure to regulate autonomy in weapons systems. So remind us, Elizabeth, what are the other problems or lines of regulation this will need to address? Let’s remind ourselves how this all fits together.

Elizabeth Minor: [00:24:32] The starting point for the treaty and the structure of regulation that we want to see is this broad scope of weapons systems that apply force to targets based on processing inputs from sensors after human intervention. Within this broad scope, we think there should be, on the one hand, some clear prohibitions.

[00:24:51] So firstly, this ban on systems that sense and target people, which we’ve been talking about today. Secondly, we also think there should be a ban on systems that don’t allow for meaningful human control, because they can’t be effectively understood by their operators. On the other hand, within this structure we also want to see some positive obligations to ensure meaningful human control over the systems that are left within this broad scope of the treaty.

[00:25:17] So these would be systems that don’t target people and that it might be possible to use with meaningful human control, but which you could also use without this kind of control, and so we need to address that with some further obligations. We need all of this structure of positive obligations for meaningful human control, as well as prohibitions, to make sure we achieve all our goals in this area: getting regulation that effectively rejects the automation of killing and the digital dehumanization we’ve been talking about today, and that also demands meaningful human control over any use of force.

[00:25:53] I just want to acknowledge that in this discussion about autonomous weapons, we’re talking about these sensor-based weapons, right? So we’re not talking about broader problems of, for example, mass data processing and target suggestion, where the application of force and the target selection is by a person.

[00:26:09] These kinds of processes are already being undertaken by militaries. They do obviously raise questions for human control and on the ethical side, and they potentially have all these similar problems of bias, unreasonableness and injustice.

[00:26:24] That’s not something that’s included within the scope of what we’re talking about today, but it is hugely problematic. And even if it’s not addressed directly in the instrument that we’re talking about, as Richard was saying, if we establish these kinds of clear lines, such as the prohibition on systems that target people, I think that can exert important normative force on other conversations around other problematic practices within the military sphere. But more generally, there’s also a significance for conversations around algorithmic decision-making for various things that affect people’s lives and for decisions on particular people, which are going on outside of this discussion on autonomous weapons and the military sphere.

[00:27:06] And I think it’s very significant to think about the implications of prohibiting the targeting of people in that context, right? It feels very important that we could at least draw this line against algorithmic killing; that’s really a minimum, I would think, in this area and in the context of addressing algorithmic decision-making on people and the problems it might bring in other areas such as policing, justice, even your credit score, employment and all these kinds of things. So I feel that although what we’re talking about is in a way very narrow, it can have a much broader significance, which is important.

Uldduz Sohrabi: [00:27:46] And that’s all from us at Article 36 on this episode on targeting people in our autonomous weapons series. We hope you found this discussion useful. Thank you and goodbye.