Episode Summary

Applying force based on sensors creates a direct connection between an algorithmic decision and the execution of kinetic force onto a target. In this episode, we'll be gaining insight into Article 36's position on banning and regulating some weapons systems that apply force based on processing sensor inputs.

Episode Notes

This episode was recorded in March 2021.

Transcript

[00:00:00] Uldduz Sohrabi: Hi, and welcome to Article 36's autonomous weapons podcast series, where we raise a critical voice on weapons. My name is Uldduz Sohrabi, and I'm your host today. Through this podcast, we'll be discussing issues surrounding autonomous weapon systems with Richard Moyes, who is the managing director at Article 36, along with our colleague Elizabeth Minor, who is one of our main advisors.

[00:00:47] If you’re new to Article 36 and you would like to get to know our team better, you can find out more about us and the work we do on our website ARTICLE36.ORG

[00:01:00] Today, we’ll be dealing with a specific topic within the ongoing autonomous weapon discussion, which addresses applying force based on sensors. But before we get into that, Richard, would you like to give us a bit of a background to this podcast series and what we think we’ll be covering?

[00:01:22] Richard Moyes: Yeah, thanks Uldduz. Well, we’re going to be talking about autonomous weapons or autonomy in weapon systems, which is very much a live area of discussion in international disarmament and international debate regarding weapons.

[00:01:36] This is an area where Article 36 has done a lot of work, developing policy positions and policy thinking. We’re also part of the Campaign to Stop Killer Robots, which is working internationally with states and other organizations looking to develop a legal treaty to provide some constraints on the extent to which autonomy can be developed in weapon systems.

[00:01:58] And this is really an issue that's responding to moral concerns, legal concerns and security concerns deriving from technological developments and technological possibilities in weapon systems. We see these concerns, and the risks of a sort of dehumanization in the use of force and a loss of some essential control over the use of force, as demanding a legal response.

[00:02:24] So we're working together to try to precipitate and encourage states to develop a legal treaty. As for this podcast series, we're thinking podcasts might be a new way of getting our audience to engage with the subject matter, perhaps easier than reading long texts. I'm not entirely convinced that we've proved that to be the case yet, but we're looking to develop an introduction to the conversation, to enable people to engage with our thinking on this issue, and both to lay out our thinking and to point to some of the open questions and some of the areas of tension in this debate.

[00:03:02] Ultimately, of course, we want to encourage states towards a position of policy agreement and policy coherence on this, in favour of a legal treaty. So we're encouraging people in that direction.

[00:03:18] Uldduz Sohrabi: Now, applying force based on sensors means that there's a direct connection between a sensor, an algorithmic decision and the execution of kinetic force onto a target, which could be either a non-living target or a person. In this episode, we'll be concentrating on this broad boundary, and we'll be getting an understanding of what sorts of technologies we're talking about in the discussion of autonomous weapons, as well as the basis it gives us for a structure of legal regulation.

[00:03:56] Elizabeth, let’s start by setting the scene.  What is the basic boundary? Why does Article 36 think that sensor-based targeting is so important? And how does this fit with other perspectives in the conversation generally?

[00:04:12] Elizabeth Minor: To be honest, there's quite a wide range of conceptualizations of what we're talking about, and of what the problem is, in the autonomous weapons conversation.

[00:04:20] And it can be a little bit hard to get a grip on. I know when I first started working on this issue, it took me a while to work out what the kinds of issues are and what the technologies and processes are that we're talking about. I mean, when you listen to states' conversations in this area, the idea of what we're talking about when we talk about autonomous weapons ranges from something that's very narrow and futuristic, basically the Terminator of AI-enabled weapon systems that can set their own goals and, you know, go out and kill people, to a broader idea of automatic weapon systems, and even to concerns being raised in this space that go beyond direct consideration of weapons systems to a broader range of practices in data processing and decision-making that give more functions over to algorithmic processing and erode the role of deliberation and human judgment in the use of force. So there's a lot going on in the space, right? So in this context, thinking about sensor-based targeting is a helpful boundary.

[00:05:24] So we're talking here about a broad category of weapons systems where, after a system is activated or put in place or sent out by a person, the system will process data collected by its sensors to determine when and where and against what it's going to apply force, based on a profile of a target that's encoded into it.

[00:05:47] So this target profile could be, for example, an object with a certain heat, shape or radar signature to represent a military vehicle, or it could potentially involve the encoding of representations of a person or people, again using indicators like shape or, perhaps in the future, you know, biometrics.

[00:06:05] So systems using this kind of process, where force is basically applied, after initial human engagement, by a system processing sensor input data, already exist. For example, missile defense systems use this kind of process: you turn them on for a short amount of time, and they detect and destroy incoming projectiles automatically.

[00:06:25] So it's not a new category of things, but there's an issue of the expansion of how these kinds of system processes are used: using them over broader locations or durations, using them for different kinds of targets, and creating target profiles in different ways, for example using advanced computational techniques.

[00:06:47] So this is what we're thinking about. Basically, with these systems there's not a human decision about exactly where and when force is applied; with sensor-based targeting, this is uncertain.
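
As a purely illustrative sketch of the idea of a target profile described above, and not of any real weapon system, the comparison a system makes between incoming sensor readings and an encoded profile might look something like the following. All names, fields and threshold values here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TargetProfile:
    """Hypothetical encoded target profile: a simplification of a target
    expressed as sensor thresholds, not a human judgement about the world."""
    min_heat_c: float       # minimum infrared signature, degrees Celsius
    min_length_m: float     # minimum apparent length, metres
    min_radar_db: float     # minimum radar return, decibels

@dataclass
class SensorReading:
    heat_c: float
    length_m: float
    radar_db: float

def matches(profile: TargetProfile, reading: SensorReading) -> bool:
    """The system only compares numbers against the encoded profile;
    it has no concept of what the detected object (or person) actually is."""
    return (reading.heat_c >= profile.min_heat_c
            and reading.length_m >= profile.min_length_m
            and reading.radar_db >= profile.min_radar_db)

# A civilian vehicle on a hot day could produce readings that satisfy the same
# thresholds as a military one: the discrepancy between the encoded profile
# and the real world that the discussion goes on to describe.
vehicle_profile = TargetProfile(min_heat_c=60.0, min_length_m=4.0, min_radar_db=10.0)
print(matches(vehicle_profile, SensorReading(heat_c=72.0, length_m=5.5, radar_db=12.0)))  # True
```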

[00:06:57] Uldduz Sohrabi: Is there already a general consensus on how autonomous weapons systems are being defined internationally?

[00:07:05] Elizabeth Minor: I’d say no. And it’s something that some countries raise in the debate, right? That we don’t  have a definition of what autonomous weapons systems are at the moment.  From our perspective, we think it’s not really a problem, not to have a definition at this stage.

[00:07:18] Definition making is a kind of political thing that you would expect to see in a treaty negotiation process. But it would be good to have a broad understanding of the wide scope of systems that we're dealing with in this area. So we think that taking sensor-based targeting as that kind of broad boundary, or category we're talking within, is useful. I think it's useful because, if we scope by systems that use a particular process, it means we don't need to limit our response to a list of particular technologies, which might become redundant and make our regulation obsolete when future technology comes along.

[00:07:56] And we also don't need to get stuck on defining boundaries which have come up in the conversation, like what is an automatic versus an automated versus an autonomous weapon, and what the different conceptual and technical levels of these are. Instead, we can talk about the issues of how we keep human control and how we address the possible erosion of human dignity with the automation of killing, which we think are at the heart of why we're talking about this, rather than making technical definitions.

[00:08:26] Uldduz Sohrabi: So just to recap, we're looking at a broad scope of weapons that we think should be discussed together on the basis of the processing of sensor inputs. Now that you've drawn this boundary, can you say more about what problems we have found within this broad scope of technologies, of systems that apply force based on processing sensor inputs?

[00:08:50] Richard Moyes: This broad boundary, as Elizabeth says, captures pretty much all of the different conceptions of what might be an autonomous weapon, from the very complicated future conceptions to the systems that we have today.

[00:09:02] So all of these systems in some way, have this function of relying on sensors and the automatic processing of sensory information to determine specifically where and when force  is applied. And systems that work in that way have certain utilities  for the military and I think we need to recognize that’s why states might want to use systems that function in this way. Because if  the user doesn’t know exactly where a target is, then having a system that can detect the target using sensors gives them a certain capability.  So we can recognize that there’s some military utility to be found in this.

[00:09:37] I think as soon as you start using sensors to determine specifically where and when force will be applied, whether that's an explosive force or a bullet being fired, as soon as it's the sensor that is determining where and when force is applied, I think you automatically get certain concerns and certain tensions arising from that. There's uncertainty about what the sensors themselves are going to detect as a target, and Elizabeth referred to target profiles. We're going to talk more about that, but there's some uncertainty about whether that target profile really matches up with what you want to apply force to, or whether there's an error in the situation.

[00:10:15] And there's uncertainty about specifically where and when force will occur, which means there's going to be some uncertainty about what the circumstances, or the conditions, are in the place where that force actually strikes something. So these tensions might be manageable in certain systems, and in some systems they might not be manageable.

[00:10:37] And I think that’s an important challenge for us in this conversation. So there are  problems potentially that flow from this structure and when it comes to what is being targeted I think we recognize that machines don’t see things, they don’t see people, they don’t see the world in the way we do.

[00:10:55] They’re just comparing incoming sensor information data, whether that’s an electronic signal or some other measurement of the outside world. They’re just comparing that against some programmed conception or simplification of what a target might be. So there can be errors in that, there can be some discrepancy between what a system is programmed to identify and what a thing looks like and what it really is in the world. When it comes to the targeting of people, this raises significant moral concerns because this is ultimately a reduction of people to a set of data points, a set of ones and zeros.

[00:11:31] The system might be applying force to a person, but there's no human recognition that this is what is happening, that this is a human being that is being subjected to force. In terms of the uncertainty of time and place, the uncertainty about exactly where and when force will occur: if you have a longer duration of system functioning, or if your system is functioning over a wider area on the ground or in whatever space of operation it is working in, then you get greater and greater uncertainty about what the actual circumstances are going to be in the location where force occurs.

[00:12:08] And this unpredictability creates risks from a legal perspective. It starts to challenge our ability to make legal decisions about what's happening in a place, and ultimately it raises significant predictability concerns. We're talking about systems here that don't need to be extremely complicated.

[00:12:24] We already have systems that use sensors to identify targets. They may be projected into an area and may look at an area of ground for a short period of time. These current systems might be looking at a relatively limited area of ground, a few hectares say, and they may be functioning for a relatively short period of time: only a few seconds of scanning that area of ground. But if you took that same function and, as I think Elizabeth said, expanded it over a longer period of time and a wider area of space, then a military commander's ability to make informed judgments about what is likely to happen from that use is significantly reduced.

[00:13:05] These dynamics, the uncertainty about what is being targeted and the uncertainty about specifically where and when force will be applied, raise for us two key questions that we need to work together in this debate to answer. The first question is: are certain systems functioning in this way unacceptable?

[00:13:26] And if they would be unacceptable, they should be prohibited. So what are the lines of unacceptability in relation to these two key problems? And secondly, for those things that we're not considering unacceptable, how are we going to maintain control over them, and, in practice, how are we going to ensure that meaningful human control is maintained in the use of those systems?

[00:13:48] Because, as I say, we already have systems that function in this mode to some extent. If we're not suggesting that those systems are straightforwardly unacceptable, we at least need to exert some constraints on them and some obligations on their use or on their parameters, to ensure that they don't expand in their functioning to a point where they've stretched the legal fabric, or stretched our conceptions of human control, to where ultimately it's yet another form of dehumanization.

[00:14:16] Uldduz Sohrabi: We've highlighted systems that would target people as being of particular concern, and we've also highlighted the idea of systems that cannot be used with meaningful human control. Looking at the first of those two questions, how is Article 36 thinking about systems that need to be prohibited?

[00:14:38] Elizabeth Minor: The first area, as you mentioned, Uldduz, is to do with systems targeting people.

[00:14:43] So within the broad scope that we're talking about in this episode, systems that apply force to targets based on processing inputs from sensors, we think it's really important to have a specific prohibition on systems that sense people, rather than other objects, as targets. Just to clarify here, we're not talking about systems that sense objects that might contain people,

[00:15:05] like vehicles or buildings, or, again, about the computer-generated recognition of human targets that someone then reviews, however inadequate that review might be. We're talking about the encoding and sensing of people themselves in systems for the application of force. We think there's a real moral line here.

[00:15:23] It just shouldn't be acceptable to reduce people to objects in the way Richard was saying, to be sensed in this way, or to encode into systems indicators of our humanness, or specific characteristics that people or groups might have. We think that's very problematic, and it raises a lot of profound questions around dignity and dehumanization.

[00:15:45] We're going to be exploring this area more in a future podcast about targeting people. But I think it's also important to say that targeting people with sensors in this way raises a lot of threats around the discriminatory impacts that would result, intentionally or unintentionally, for racialized and other marginalized people.

[00:16:08] This would be a huge problem with these kinds of systems. So, for all of these reasons, we think it might be best just to shut this all down by not doing it, basically, and having just a very clear prohibition. If we prohibited sensor-based targeting of people, we'd address this range of issues, at least for this narrow group of systems. And I think we're increasingly thinking it's important to draw a line against this kind of practice in this area of weapons systems and hurting people and killing people, to set a strong precedent in the broader context as well. There are a lot of concerns in our world around algorithmic decision-making and people's lives.

[00:16:47] And it feels like, really, as a bare minimum, we should at least rule out sensing people for the application of force. We also think it might be relatively straightforward for most states to agree to this kind of prohibition, because there are not really many of these kinds of systems in use right now.

[00:17:04] So, hopefully, this is something we can do quite straightforwardly, but this is, you know, far from the only line of regulation we need within this scope and within this area. Richard, maybe you have more to say here, I think.

[00:17:19] Richard Moyes: Yeah. Thanks, Elizabeth.  I think,  within this broad category of systems that use sensors to determine when and where to apply force, the first thing we think we should take off the table is systems that are designed and intended to target people. Elizabeth set that out very, very articulately.  But the other line of prohibitions and other obligations that we think is necessary is to take off the table these systems that  can’t be used with meaningful human control. I mean, this is perhaps a little bit more conceptually tricky.

[00:17:49] I think it seems fairly self-evident, in a way, from a moral perspective, that there's a distinct problem with the idea of systems that can't be controlled. And from a legal perspective, you can't make a legal judgment about the implications of a system, or the effects that a system would have on the legality of an attack in conflict,

[00:18:08] if you don't have sufficient control over what that system is actually going to do once you put it into operation. So it seems reasonably straightforward, at a sort of instinctive level, that systems that can't be effectively controlled should not be allowed. I think it's fair to say that most sensible military people also don't really want systems that can't be effectively controlled. Militaries are institutions that tend to quite like control, and militaries want to know what the force effects of systems are going to be if they're going to use them in a sensible way.

[00:18:42] So, yeah, always with these debates around weapons, there can be a risk that we somewhat assume that people who want constraints on systems, and militaries on the other hand, are directly in opposition to each other, and I don't think that's necessarily the case on these points. Certainly, systems that can't be effectively controlled present a moral problem for the user, because you are essentially applying force in the world without being able to be sure what is going to happen.

[00:19:08] And they present legal problems. As I’ve suggested you won’t be able to make effective legal judgements. They probably present practical  security concerns as well, that  you don’t know what sort of reaction your system is going to elicit from other people in the space.

[00:19:25] So probably these problem aspects are relatively straightforward in conceptual terms. The greater difficulty is how you determine the boundary of effective control, or meaningful human control, in the language that we like to use. How do you articulate how that boundary should be understood, or the sort of rules that should shape or articulate that boundary?

[00:19:48] I mean, Elizabeth earlier mentioned target profiles. These, for us, are the sort of programmed models that a system uses to identify what it is going to apply force to. Constraints on target profiles in some way could be a way of doing this. If the target profiles a system uses are going to change after that system has been put into use by a human operator, that would seem to be something that makes a claim to meaningful control rather difficult. Similarly, if a system user doesn't actually understand how those target profiles have been constructed in the first place, if the user of a system doesn't know on what basis a system will apply force, then that also seems to present a problem of control. And finally, if a system is inevitably going to be operated over a very long period of time or over a very wide area, then that also presents certain control-related problems.

[00:20:45] So whether those constraints can be built into the prohibitions, or whether they need to be subject to obligations on use, is, I think, a question we can come on to. In that respect, tying a couple of these threads together, we've already seen in the world a prohibition on anti-personnel landmines. Well, for a start, anti-personnel landmines, as the name suggests, are systems that use sensors to target people, and this was the first line of prohibition that Elizabeth was setting out. And I think we could say that, as an international community, there was a movement towards using systems that use sensors in this way to target people, it caused a huge amount of harm, and most states have now stepped away from that.

[00:21:25] Most states have joined the prohibition on anti-personnel mines and have rejected that method of system functioning. In terms of the concept of control, one of the key problems with anti-personnel mines, of course, was that they continued operating over a very long period of time. And as a result of that, not only were they harming people, but they were also having these distinctly uncontrolled effects, both through the direct harm to individuals and through the impact they had on land use and development and the like.

[00:21:54] All of the prohibitions we're talking about, prohibitions on targeting people and prohibitions on systems that can't be effectively controlled, sit within this broad scope of systems that use sensors to determine where and when force would be applied. Anti-personnel mines perhaps provide a bit of an early precursor example of systems that function in that way.

[00:22:17] And I have to say, our experience of that example has been one of distinct humanitarian problems, which very much illustrate the types of concerns that we see already developing in this space and that we're working to prevent from expanding in conflict in the future.

[00:22:33] Uldduz Sohrabi: I think the point that you raised here is very important, and it captures the way I've come to understand target profiles: in these simpler terms, a machine will only ever be able to interpret a representation of reality, based on a set of data that is at best second-hand from a human, and that representation is not reality itself. Going back to your earlier explanation, you've talked about systems that need to be prohibited because they're unacceptable, but that still leaves us with the other systems that target on the basis of sensors. How do we make sure that those systems are used with meaningful human control in practice?

[00:23:21] Richard Moyes: Yeah, I think that's one of the reasons why we think it's important to draw this broad boundary around systems that use sensors in this way: there are these systems, some of which are currently in use, that we think do need to be subject to certain constraints in their use, in order to ensure that they continue to be used with meaningful human control.

[00:23:42] Now people may argue that many of the systems currently in use, such as the type of missile defense systems that Elizabeth pointed to, are being used with sufficient human control. And I think those systems certainly can be used with sufficient human control.

[00:24:00] But they're already being used with certain constraints on their operation. And those constraints involve the commander understanding what those systems are going to target in practice, understanding the location where they're going to apply force, and understanding the duration or the time period within which they're functioning in this way. So these are some of the key parameters that we don't want to see expanded, and where, if they're just expanded endlessly, we start to see a sort of stretching of the legal fabric, a dilution of the human role in legal decision-making and operational decision-making.

[00:24:37] So we want to see obligations in a legal instrument on systems that function in this way: obligations to understand the system that you're going to use and how it functions, obligations to understand and to constrain the location where you're going to use it, and obligations to constrain and control the duration of time, the period of time, within which that system is functioning.

[00:25:02] So these are the key factors: location, duration, target specification. We developed that as a rhyme to help us remember our key policy messages on this: location, duration, target specification. Constraints on those areas, possibly constraints in other areas too, but those are the key themes, we think, that need to be controlled in order to enable a system to be used with sufficient human control.

[00:25:27] Of course, that's just enabling reasonable use of a system. It doesn't mean that every use of a system is therefore straightforwardly acceptable or legal. But what we're arguing is that, as a minimum, you need to understand how a system works and you need to be able to control its location and duration of functioning. If you can't control those factors, then you can't reasonably make a claim to have evaluated the situation and then to be applying force legally in the subsequent situation. So we see these as necessary conditions for a system to be used in a morally appropriate way, in a legally appropriate way.
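
As a purely illustrative sketch of how the location, duration and target specification constraints described above might be checked before a system is activated, and not a representation of any real system or proposed legal text, something like the following could apply. All names and limit values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class UseParameters:
    """Hypothetical parameters a commander would have to set and understand
    before activating a sensor-based system."""
    area_hectares: float      # spatial extent the system may scan (location)
    duration_seconds: float   # how long the system may operate (duration)
    profile_fixed: bool       # target profile cannot change after activation
    profile_understood: bool  # user understands how the profile was constructed

# Hypothetical limits standing in for 'location, duration, target specification'.
MAX_AREA_HECTARES = 5.0
MAX_DURATION_SECONDS = 30.0

def unmet_control_constraints(p: UseParameters) -> list:
    """Return the constraints that are NOT satisfied (empty list if all are met)."""
    failures = []
    if p.area_hectares > MAX_AREA_HECTARES:
        failures.append("location: area too large to evaluate the circumstances of use")
    if p.duration_seconds > MAX_DURATION_SECONDS:
        failures.append("duration: operating period too long for the context to stay known")
    if not p.profile_fixed:
        failures.append("target specification: profile may change after activation")
    if not p.profile_understood:
        failures.append("target specification: basis for applying force not understood")
    return failures

print(unmet_control_constraints(UseParameters(3.0, 20.0, True, True)))        # []
print(unmet_control_constraints(UseParameters(400.0, 86400.0, False, True)))  # three failures
```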

[00:26:07] Uldduz Sohrabi: So we’ve talked about a broad scope of weapons systems here today, that target on the basis of sensors without human decision making on specifically where and when force will occur. And you’ve also described some lines of prohibitions and some of the obligations we think are needed. Just for us to bring this discussion to a close, could you both comment on where this sort of approach fits into the international conversation and why we think this provides an important way forward?

[00:26:41] Elizabeth Minor: So in the scope of the conversation about autonomous weapons, it's probably clear from this discussion that we're not talking about a really clear set of hardware, of some particular weapons technologies, which we might be able to define more easily in relation to other regulations. So it's a bit unlike some other weapons policy conversations: we're talking here about systems and configurations and uses. And I think that characteristic of the conversation brings this need for the structure that we're talking about, of prohibitions and positive obligations on the use of systems, and for thinking about a broad scope and a range of systems that use a particular process, rather than trying to talk about whether it's the drones or missile defense, you know. Talking about sensor-based targeting helps us with tackling and conceptualizing this problem.

[00:27:29] And because it’s  a bit of a complicated area, it also brings us to this structure with a few different elements if we want to regulate this properly.  So, thinking about the scope of sensor-based weapons helps us  also ground our discussion in the present and in current practices around human control, like Richard was saying, and in other norms, like the anti-personnel mine prohibition,  but it allows us to build from this to a framework that will be relevant for the future and give us clear standards going forward.

[00:27:57] So the international discussion, as we said, has been going on for eight years in Geneva now amongst states at the international level, first in the Human Rights Council and then in the Convention on Conventional Weapons. I think there's been a lot of progress in understanding these subjects, and also quite a convergence around human control as the key issue where work is needed.

[00:28:19] And I think there's a lot of content which you could plausibly bring together to support a lot of the things that we're talking about in the approach that we set out here. So there are, in the policy conversation, quite a lot of statements to the effect that systems which are too complex to understand, or which will set their own goals, aren't things that states want, and states either see these as possibly already illegal or as something that is problematic. There are quite a lot of, you know, proposals and discussion on what the different elements of human control might be, a lot of which reference these concepts we've been talking about to do with time and space and understanding, and which can give us a good basis for moving forward, which I think we really need to do at this point.

[00:29:03] Something else we think is important is to move forward now, setting clear norms and standards, even if not all states want to participate in this. We need to set the tone now of what we think are the unacceptable things in this area, what the lines are, and to set these restrictions, which can also inform the terms of further conversations and agreements between others that might not participate. So there's always value in doing this.

[00:29:28] Richard Moyes: I think Elizabeth captured it very, very well there. Starting from systems that use sensors to determine where and when force is going to occur gives us a very concrete starting point for seeing where the tensions and the problems flow from, in relation to how the systems we are considering work. And so we don't need to start from an abstract concept of autonomy, or what is autonomous, or what autonomous means. We can just start from the very concrete fact of systems that work in this way, that have an automatic relationship between sensors, analysis of sensor data and force application. They straightforwardly create certain tensions, and it's from those tensions that the problems we are concerned about all flow. So in a way, I think this approach allows us to get into the debate without getting tied up on the meaning of abstract terms, which are ultimately not necessarily problems we need to solve in order to determine the lines of legal and policy obligation that should bear in this area. We're going to talk about more of these points in a bit more detail in future podcasts, so we can explore that further. But in terms of the political dynamics, as Elizabeth said, our expectation really is that there will be a convergence of policy orientation amongst states and other actors over the year ahead.

[00:30:51] I think it'll be a convergence that goes in this direction, because ultimately it's founded on a sense of where we are and of how technology actually works, or how systems actually work. It gives us a more solid footing for arguing about things and for describing situations. If the conversation instead flows to definitions of autonomy, or what is autonomous versus what is automatic and the like, then those debates are going to struggle to be resolved in a rational kind of way.

[00:31:20] Whereas I think here we can proceed with some confidence. So our expectation is that we'll see a policy convergence over the period ahead. The civil society Campaign to Stop Killer Robots will be working hard to encourage that, and we're looking forward to working with states in developing a legal response in this area.

[00:31:39] I think socially there needs to be a legal response to these concerns. Elizabeth alluded to it earlier: these questions about algorithmic decision-making in matters of life and death have a bearing not just in conflict, but a bearing that exerts itself through the wider society.

[00:31:58] If we don't take killing people on the basis of algorithmic decisions off the table, then there is no limitation, based on harm, in other areas of life where algorithms are bearing on people and people's experience of the world. So I think socially we need to develop a response to this, and I'm sure ultimately states will come together and do that.

[00:32:20] And yeah, we're looking forward to it and expecting it over the next couple of years.

[00:32:27] Uldduz Sohrabi: And that’s all from us at Article 36 for this episode. We hope you found this discussion useful. Thank you for listening and goodbye.