Episode Summary

This episode addresses another aspect of how human control can be sufficiently maintained. Article 36 puts forth its arguments for why it is crucial for the user of a system to understand the system in use, and outlines what an assessment of sufficient understanding could look like.

This episode answers the following questions:

  • Will weapons with autonomous functions do what we intend for them to do?
  • To what degree should the user understand the system that is being used?

Episode Notes

This episode was recorded in March 2021. 

Transcript

Uldduz Sohrabi: [00:00:00] Hello and welcome to the Article 36 Autonomous Weapon Podcast series, where we raise a critical voice on weapons. My name is Uldduz Sohrabi and I’m your host. With me in today’s episode is the managing director at Article 36, Richard Moyes, along with our colleague Elizabeth Minor. If you’re new to Article 36 and you would like to get to know our team better, you can find out more about us and the work we do on our website, ARTICLE36.ORG.

[00:00:50] Now, our goal through this podcast series is to bring a better understanding of concerns surrounding autonomous weapons systems, and to explain the position of Article 36 and its policy framework. Will weapons with autonomous functions do what we intend for them to do? And to what degree should the user understand the system that is being used? In this episode, we argue that one crucial way to retain meaningful human control is to have a sufficient understanding of a system, because without that we lose predictability.

[00:01:28] Richard, could you start by explaining what is meant by predictability and understanding of technology in the discussion of autonomous weapons systems?

Richard Moyes: [00:01:38] Thanks, Uldduz. I’m not sure we’re going to go so far as to define these terms in some sort of technical way, but for us there’s a recognition that it’s the human users of systems who are responsible for using them appropriately in specific contexts. I think we’ve spoken before about the need to create some boundaries of time and space around the context in which a system is going to be used, in order to make judgements about what will happen. For a user of a system to make judgements in an informed and effective way, they not only need to understand the context; they also need to understand the technology, the system, that they’re using, and they must be able to think about, predict and anticipate how that technology is going to function and interact with that context of time and space.

[00:02:23] So in a way, understanding the technology and how it functions is a critical building block for simply being able to anticipate what will happen if you use that system in a particular context. Of course, you also need to bring in the contextual information in an individual case in order to make an informed judgement.

[00:02:41] So there’s always some combination of understanding the technology, as a building block, and understanding the particular context of use. But you need an understanding of the technology in order to be able to predict and anticipate what might happen, and you need to be able to predict and anticipate what’s going to happen in order to really be able to say you have sufficient, effective, meaningful control over a system that you’re using.

[00:03:05] Because if you’re claiming to make judgements about the system, but you can’t in any useful way predict what’s actually going to happen, then I think it’s difficult to argue that those judgements are going to be sufficient, meaningful or substantive.

[00:03:18] So effectively, if you can’t sufficiently understand the system, and you can’t sufficiently predict how it will operate, then you can’t make decent legal judgements about it. And in our view, systems that present that problematic set of characteristics shouldn’t be allowed.

[00:03:35] They would fall within the prohibitions that we would like to see in a legal instrument. So part of our orientation to this is asking what the necessary levels of understandability and transparency in a system are for it to be used with sufficient control; another part concerns systems that are not considered unacceptable: how can we make sure that they are used with sufficient control?

Uldduz Sohrabi: [00:04:01] Perhaps you could tell us why Article 36 has brought this element forward as one of the main concerns in the policy discussion on establishing what constitutes meaningful human control.

Elizabeth Minor: [00:04:14] As Richard said, understanding systems and how they operate is key here: what will they do, will they do what is expected, and will they do it in a way, and for reasons, that we can understand? This is central to control of a system, and to using it in a way which we would say constitutes meaningful human control, in thinking about the structure of regulation that we generally want to see in this area. Just to recap on that, our starting point is a broad scope of weapon systems that apply force to targets based on processing inputs from sensors after human intervention: the sensor-based weapon systems that we’ve been talking about throughout this podcast series. Within this broad scope, we want to see clear prohibitions on systems targeting people, which we discussed in a previous podcast, and also on systems that don’t allow for meaningful human control.

[00:05:04] And on the other hand, we also think there need to be positive obligations to ensure meaningful human control over the rest of this scope of sensor-based systems, in their design and use. Systems that can’t be effectively understood, and where we can’t predict what they’re going to do, really fall within these areas of meaningful human control where we want to see regulation. So we want to see a prohibition on systems that can’t be effectively understood by their operators. If you think about what’s key for a commander using one of these weapon systems to understand, they need to know, for example, what will fall within the target profile of their weapon system.

[00:05:44] They also need to know, in terms of objects that are not intended to be hit, what is excluded, how the target profile has been constructed, and whether this is understood to the degree that the human operator will know how the system will operate in its environment.

[00:05:58] We’re not talking about understanding in the sense that everyone needs to be a programmer and understand the technology in that way, but the user needs to be able to understand the effects. And if a system is not sufficiently predictable to meet these requirements, basically for legal decisions and on ethical grounds as well, then we think those kinds of systems should be prohibited.

Uldduz Sohrabi: [00:06:21] So Article 36 has raised some key elements to consider in its policy proposal. These include those we’ve discussed in this podcast series: targeting people, controlling time and space, and the use of target profiles. Now, one of the elements that has already been raised leans more towards legal mechanisms than towards direct technology concerns around autonomous weapon systems.

[00:06:49] And that’s the question of accountability. How does the understanding of the system in use link to accountability?

Richard Moyes: [00:06:59] I think accountability and responsibility are important themes in this discussion. We’ve emphasised that for us, and I think now for all the significant actors in this debate, there’s a recognition that the law applies to people.

[00:07:12] It’s people who are responsible for making legal judgements and determinations in the context of attacks in armed conflict. In terms of accountability and responsibility, for us the user of a system needs to have a sufficient understanding of that system to make appropriate judgements about its use.

[00:07:31] And it’s only if they have that understanding that they’re really in a position to be responsible for the effects that result from the use of that system. So there needs to be sufficient understanding in the user for them to be responsible for the effects that are being created by the functioning of that system in practice.

[00:07:51] Now within a bureaucratic system, you could have an arrangement that says this person is accountable and they’re going to carry the can, even if they don’t have that level of understanding of the system that they’re using. But in that situation, it feels to me like you’ve got a somewhat unacceptable tension between the application of accountability within a bureaucratic system and the reality of moral responsibility on the part of the person concerned. So I think we need to keep responsibility and accountability tied together, in moral terms, in an appropriate way. And that really means we need the users to have sufficient understanding to take responsibility themselves for the actions and effects that result from their use of a particular system. In conjunction with that, you have the bureaucratic structures, whether that’s the military or another organisation that is organising the use of such systems, applying that responsibility in the appropriate place.

[00:08:45] Of course, things can go wrong with machine systems, right? You can have mechanical problems or computing problems, in which case the person who put that system into use in that context may not be the person who is responsible for those effects. In an existing military structure, if you have a mechanical problem, if something goes wrong with a bomb that’s being dropped, the user of the system was in good faith expecting it to function in a certain way and something went wrong. Then you would go back to the producers and the manufacturers. So there is a chain of actors who are also implicated to some extent here, but the fundamental responsibility in relation to that system functioning as it is anticipated and expected to do has to sit with the user.

[00:09:27] And that requires a sufficient level of systemic understanding on the part of those people.

Uldduz Sohrabi: [00:09:34] I understand you’re saying that you don’t have to be a programmer in order to work these systems, and that predictability is linked to understanding the outcome of what the system would do.

[00:09:47] What does a sufficient level of understanding of technology and predictability look like in Article 36’s policy framework proposal?

Richard Moyes: [00:09:57] You can’t be expected to understand every line of computer code within the system that you’re using, and that’s already the case with complex systems that people use anyway. We are capable of talking to each other through this online platform without any of the three of us fully understanding its internal workings, but we have a functional understanding of the system that we’re using here to communicate.

[00:10:21] And I think that’s very much the same test that you would want to use in these contexts: the user of the system has to be able to give a sufficient working account of how it functions. A critical line would be, as Elizabeth suggested, in relation to target profiles: the user of a system should be able to understand the external conditions, the external circumstances, that will trigger a system into applying force at a particular time and in a particular place. So they should be able to give us a meaningful explanation of the external conditions and characteristics that will trigger the application of force by a system.

[00:11:00] And in a way, this parallels obligations under the European GDPR data protection legislation, where, in the context of automated decisions, if you’re subject to an automated decision as a person, you have the right to a meaningful explanation from a person of how that decision came about and how it was undertaken.

[00:11:20] I think we can take that as a parallel test into this space: the users of systems should be able to give a meaningful explanation of what the conditions were that triggered an application of force by the system, and how. And they should have that meaningful understanding of those conditions and contexts prior to using the system, of course, as part of their ability to understand and predict what’s going to happen in a particular context.

[00:11:44] This, in turn, perhaps relates to certain more detailed ideas. As an overarching rule, those kinds of rules provide the basic structure. But flowing from that, we could think that this would present particular problems if a system were able to, say, change or develop its target profiles after being put into use. That would seem to be straightforwardly unacceptable, because the human user wouldn’t be able to give an account of those profiles in advance of the system being used.

[00:12:14] Similarly, if you have target profiles that are built on certain forms of current machine learning, where you can’t actually identify the external conditions or characteristics that are triggering the application of force, then those structures of machine learning might be considered an unacceptable basis for developing target profiles, because they don’t really allow the user of a system to describe what will trigger the application of force.

[00:12:40] Other than, perhaps, by reference to similarity to the abstract object or abstract circumstances that is the intended target. So say we’ve trained the system to recognise military vehicles, tanks, as targets, but we don’t know how it does it: we’ve trained it using a machine learning algorithm and we don’t know what it is actually ‘seeing’, as it were. I’m using air quotes for ‘seeing’, which obviously doesn’t translate to audio very straightforwardly. But if we imagine the machine is seeing tanks, yet we don’t actually know what it is seeing as being the characteristics of a tank, then we’re not really in a position to give an account of what will trigger that system into applying force in the real world, other than, in some abstract way, things that look like tanks. That doesn’t really provide us with any kind of meaningful explanation of what’s going on, beyond the level of: this is our intent to attack tanks, and it will attack things that look like tanks in some totally abstract and unexplainable way.

[00:13:43] So in those technical structures, I think we start to see configurations that would challenge overarching rules requiring the users of systems to be able to give an explanation of the conditions and characteristics that will trigger an application of force, or rules that say it’s unacceptable to use systems where you can’t understand and explain what in the context will produce an application of force.

Uldduz Sohrabi: [00:14:09] How does the discussion currently look on this element? Is there an emerging agreement on predictability being key as part of the assessment of meaningful human control, to start with? And perhaps, is there a recognition of what level of predictability is considered sufficient?

Elizabeth Minor: [00:14:31] So I would say there is a broad recognition, in the international debate amongst states at the CCW, of the need for operators or users of weapon systems to understand those systems, and for systems to behave as they’re intended.

[00:14:46] I think there’s also quite a lot of prominence given to these concepts of predictability, explainability, transparency, these kinds of terms, as well as some linking of this to legal compliance. A group of states that got together to produce a joint paper last year, for example, stated that they think control is necessary for the application of the law. And if you can’t sufficiently understand a system, then it’s hard to claim that you’re meaningfully controlling it. So I think the key for us here is recognising that sufficient understanding is needed to enable control.

[00:15:22] And that means that some types of systems are going to be unacceptable and need to be prohibited, because that sufficient level of understanding isn’t possible. Richard was talking about systems that use machine learning processes to generate target profiles, the problems there might be with knowing what exactly such a system is going to hit or not, and the black-box nature of that process.

[00:15:46] We think there would need to be lines of prohibition there, and also lines of positive obligation to ensure sufficient understanding over the other systems that remain. And I think there’s some nuance in the debate here. Some people in this discussion suggest that we can reach some level of perfection with the technology in terms of its functioning and its predictability; for others, the emphasis is more on the human role in understanding and control, which is where we think things should be located.

[00:16:20] I think, possibly, within the international discussion we haven’t got to the right level of nuance yet, and maybe don’t have the technical details in the policy debate. So, just to bring up something that is linked to this, I think, and which also comes up in the time and space issues that we talked about in another podcast: there’s quite a lot of enthusiasm in the debate for mechanisms for contact with, recall, supervision or self-destruction of sensor-based weapons that are sent out. Which is, in a way, fine and useful, but it’s an area in which, a lot of the time, the thinking and the discussion goes towards technical fixes for potential imagined problems, rather than towards the human principles that we need to put in place for adequate control and appropriate use before you consider all of that.

[00:17:11] So I feel like we want to continue this discussion with that kind of human role in understanding being at the centre of what we’re talking about.

Richard Moyes: [00:17:20] I think, linked to what Elizabeth was saying, for us, when we move towards states mapping out a legal instrument on this issue, they’re going to need to focus on the broad principles, rules and obligations first and foremost. We don’t necessarily want to get into the details, in a legal instrument, of different types of machine learning functioning in certain ways; rather, we need to articulate the characteristics of understandability, explicability and controllability that must be there for the human user in order for us to feel that a system is appropriate and adequate.

[00:17:55] And I think framing those requirements in human operator terms is going to be key. More detail can come out in subsequent discussions about how those rules are applied in practice, in terms of how they relate to specific technical configurations, structures or approaches, but getting the broad, human-usable rules on the table is going to be the key.

Uldduz Sohrabi: And that’s all from us at Article 36 for this episode on understandability and predictability in our autonomous weapon series. We hope you found this discussion useful. Thank you for listening and goodbye.