Episode Summary

Target profiles are the way that autonomous weapons identify targets. They are the patterns of information from a system’s sensors that are supposed to match an intended target – or target type. This episode discusses how these systems work, exploring the problems that arise when the complexity of the world, including people, is reduced to a pre-programmed pattern of data.

Episode Notes

This episode was recorded in March 2021.

Transcript

Uldduz Sohrabi: [00:00:00] Hi and welcome to the Article 36 Autonomous Weapons Podcast series, where we raise a critical voice on weapons. My name is Uldduz Sohrabi and I’m your host. With me in today’s episode is again the managing director of Article 36, Richard Moyes, along with one of our main advisors, Elizabeth Minor. Our goal through this podcast series is to bring a better understanding of concerns surrounding autonomous weapons systems and to explain the position of Article 36 and its policy framework. If you are new to Article 36 and would like to get to know our team better, you can find out more about us and the work we do on our website ARTICLE36.ORG.

[00:01:05] Now, target profiles are the way that autonomous weapons identify targets. They’re the pattern of information from a system’s sensors that is supposed to match an intended target or a target type. They’re also a simplification of reality. For example, a machine does not genuinely recognize a human person as a person, as we would one human to another; rather, it detects only a pattern of sensor information that it is programmed to label as a target, whether that is complex biometrics of how a human body might move or simply that something weighs more than a few kilograms.

[00:01:50] This episode talks about how autonomous weapons systems work, exploring the problems that arise when the complexity of the world, including people, is reduced to a pre-programmed pattern of data.

So Richard, why don’t you start by telling us what is meant by target profiles, and what systems are we seeing today that use target profiles?

[00:02:17] Richard Moyes: Thanks, Uldduz. I think that introduction really explains it very well. In general here, we’re talking about systems that apply force on the basis of sensors, and we’re talking about systems where the sensors are really determining the time and the location where force will be applied.

[00:02:34] These are systems that are processing sensor data, sensor signals. It’s those sensor signals that are going to determine specifically when and specifically where the force occurs, and target profiles are basically the patterns of sensor information – the pattern of sensor signals that tells the machine now is the time to apply force.

[00:02:53] So in any system where you have this direct connection between sensors, sensor processing by the machine, and force application, there must be some encoding of what actually triggers the application of force. And that’s what we’re talking about when we talk about target profiles. The target profile is a concept that really helps us to understand how these systems work, basically in their most fundamental terms.

[00:03:17] It’s part of the system’s design, part of its encoding or coding. It’s really one of the key mechanisms by which users relate to systems, because for the user of a system the target profile is really the representation of the target. It’s a sort of model of what that weapon system is intended to be activated against.

[00:03:36] So, to some extent, it’s vital to using a system in an effective way, or in an appropriate way, to understand how that target profile functions in reality. When we look at systems that function in this broad way that we already have, well, landmines are a very obvious example. They’re a very simple example.

[00:03:57] And in the case of landmines, the target profile is usually some measure of weight. For an anti-personnel mine, it might be a relatively low weight, five or 10 kilograms of pressure needed to activate the mine. For an anti-vehicle mine, you’d usually have a higher weight needed, maybe 30, 50 kilograms or more.

[00:04:14] And so we see there just that basic difference in the target profile, as a measure of weight, then used as the indicator of a difference between an anti-personnel weapon and an anti-vehicle weapon, in that case. Weight is a very crude measurement or basis, of course, for distinguishing these things in the world. More sophisticated systems, such as anti-missile systems, may use radar signatures to identify targets – radar signatures which are moving at a certain speed and in a certain direction. The target profile might be some combination of those elements. Anti-ship systems use sonar or acoustic signals. So depending on what sensors a system uses, the target profile must be some pattern of information that is formed out of that sensor data.

[00:05:00] Uldduz Sohrabi: So it feels like target profiles are related to labelling things or modelling. How do you think that target profiles relate to the idea of labels?

[00:05:13] Richard Moyes: Yeah, I think that’s right, because the target profile isn’t the real thing, of course; it is just a pattern of information in whatever language the sensors of that particular system use.

[00:05:25] It’s a pattern of information that I think then necessarily has to be a sort of simplification of the real world. It’s a sort of simplified model of the real world in the language of those sensors. To some extent, these are almost always generalizations of a form as well, because we’re putting things into categories.

We’re putting things into types that can be labelled, where those labels can be associated with a particular pattern, a simplification of sensor data. So we’re grouping things into types; we’re classifying them on the basis of certain characteristics.

[00:06:01] And so it’s a classifying process, a labelling process. And of course, these are done in advance. So they’re kind of pre-established generalizations about the world. And some of the risks of these systems are a little bit embedded in that.

[00:06:16] Elizabeth Minor: And it’s also about the encoding of human concepts into something that a machine can process. So I think, as we’ve discussed in other podcasts as well, a machine doesn’t see an object or a person or something like that. So a target profile is kind of the translation of a human concept of what a user wants to apply force to with that weapon into a pattern of sensor data that a machine processes.

[00:06:45] Richard Moyes: I think that’s a really, really good point. It means that we have multiple layers of translation, reduction, and simplification here. We have the human concept being translated into a pattern of data. And then we have actual patterns of data in the real world being compared against that model.

[00:07:03] So you’re at some degree of separation by the time you get to the actual calculations of whether those calculations match up with the original intent.

[00:07:14] Uldduz Sohrabi: I mean, with this distance between the intent and calculations of the intent itself, as you explained, I can imagine that there are problems that will occur. Now, since this specific sub-function that we refer to as the target profile addresses how autonomous weapons systems work, could you explain where in the process of the development and use of autonomous weapons systems this occurs?

[00:07:44] Richard Moyes: Yeah, I think that’s right. I think it’s important to see that concept of the target profile as a central element of how these systems must work. There must be target profiles if a system is going to apply force; for us, the target profile is simply the pattern of machine data from sensors and other conditions that triggers that application of force. So to start, we recognize that they must be there. In practical terms, they’re encoded in the development, design, and production process of the weapons.

[00:08:13] So in that part of the design process, people are thinking: how are we going to make this weapon work in the way we want it to? And then they’re thinking, well, with these sensors, we could encode certain objects of a particular type in this particular way. They might be looking at armoured vehicles like tanks, and they might be thinking, well, using infrared and other sensors, we could identify the heat of this vehicle’s engine.

And then that could become the basis for the target profile that we’re going to use. So during the development phase, these are being decided upon in order to achieve certain effects. But then they become fundamentally important in the phase of use, when the system is actually in operation in the real world. That’s the point where this target profile, or these target profiles, are being constantly compared against the world through the system’s sensors, and they are the basis upon which that system is going to apply force in the outside world. So they’re fundamental to the design of the system, but also to using the system in an appropriate way. Understanding what is going to trigger an application of force – whether that’s only the things you’re intending to target or whether that’s something else – is a fundamental tension in these systems.

[00:09:27] But when we look at systems like landmines again, they provide a very obvious example, where we talk about a weight of over five kilograms. That’s a very, very crude representation of a human being. It has very little about a human being encoded in it; it’s simply a very broad simplification of what the presence of a human might be.

[00:09:48] So always in the use of these systems, we have that tension between the target profile in relation to its intended target and the reality of what else it might actually be activated against.

[00:10:00] Elizabeth Minor: Yeah. Because of this reduction and simplification, like Richard was saying, into something that a machine can process, for any system that’s using a target profile there’ll be things that are intended and things which are unintended objects that will match, that will fall within it, and that the system will apply force to. And that’s one of the fundamental problems here. And that comes from this process of simplification and reduction and labelling of human concepts – of what a user of a weapon system might want to apply force to – into something that is recognizable by a machine.

[00:10:34] Uldduz Sohrabi:  Why are target profiles important for discussions about autonomous weapons systems?

[00:10:41] Richard Moyes: They just help us to talk really straightforwardly about how systems that function in this way actually operate, where, a little bit too often, the debate on autonomy in weapons systems gets a bit too abstract and a bit too futuristic.

[00:10:54] I think here we can see target profiles as a concrete building block of how these systems work. They’re very important to the users of weapons; as we’ve said, they are the way in which the developers and users of weapons seek to understand what those systems will do. So they’re important for developing weapon systems to have a certain effect. They’re fundamental to allowing this opening up of space and time, which basically enables the users of weapons to try to strike targets, to strike objects, when they don’t know specifically where those objects are. So the target profile enables this application of force even if there’s some uncertainty about specifically where the target is, or when it is, or even perhaps whether it is in an area at all.

[00:11:38] They’re also, I think, important for us because they can give us some mechanism for thinking about rules relating to the control of systems with autonomy, or systems that function in this way. They point us, I think, towards some of the key problematics in this space: to what extent does the target profile really match up with the specific objects that are the intended targets?

[00:12:02] What is the tension between that simplification, that modelling of the world, and the reality of the world? What are the implications of reducing living things, or people, to these kinds of simplifications? So the encoding process is fundamental to some of the problems we see in this area.

[00:12:20] And on the other hand, if we’re opening up this space in terms of where and when force will occur, well, then the target profile is also tied up with, bound up with, the questions around where and when force will actually occur. And therefore, what are the conditions in the locations where force will occur?

[00:12:38] What can we know about those conditions? That, I think, is another fundamental aspect of the problematics in this space. So we think it’s a very useful conceptual tool for thinking about rulemaking, for formulating rules, and it probably needs more attention from states. Some states have already indicated that controls on target profiles could be an important way of regulating and controlling systems in this area. We think they can help us to think effectively about policymaking in this area.

[00:13:07] Elizabeth Minor: And I suppose some questions in that area of thinking about rules, and how we might use that concept in that area, might be, like Richard has said: what types of target profile are there?

[00:13:19] And what kinds of objects or phenomena do we apply target profiles to? And are some of those unacceptable, such as reducing people, in this simplified way, to objects to be recognized by machines – which we think is a serious problem, of course. There’s also the question about, you know, the ways in which target profiles are developed; you were asking before about where this is situated in the process of development and use of an autonomous weapon system. And one of the issues in this area of debate is the possibility of systems which, after their activation by a human, could then develop what they are targeting – basically, develop their target profiles after they’re put into use, through machine learning or other techniques. We think, obviously, this is deeply problematic. And then, on the other side, a key question in the area of autonomy in weapons systems, to do with control and thinking about meaningful human control, is the understanding of systems.

[00:14:18] And then thinking about how much you understand about the target profile – how it has been constructed and what objects it might apply force to – is very important in that area. So we think it’s really potentially useful in all these different ways, and it unlocks a lot of thinking in this area which is useful for policy.

[00:14:38] Uldduz Sohrabi: So the way that I understand target profiles, when I hear you talk about this central element of the discussion, is that target profiles are essentially not a new concept; they’re already a way in which systems operate today. The problem that I’m hearing, though, is that this can become problematic when we raise the practicality of these target profiles and how they’re being controlled. Could you comment and give further examples of where they are being used today, in addition to the mines that Richard mentioned?

[00:15:17] Elizabeth Minor: Yeah. So there’s quite a lot of systems that already use this kind of process. Richard mentioned a few before, such as different kinds of mines.

Also, for example, missile defence systems will use a target profile to represent an incoming missile, which the system is turned on for a short period of time and set to destroy. And there are other systems which are, I suppose, more on the advanced end of things, which lead us to raise more questions about concern and growing autonomy in weapons systems – such as loitering munitions, which are systems set to hover over a target area for a certain period of time and, when they detect something with a particular radar signature, to dive in on that and apply an explosive force to it. I suppose giving a couple of these examples also helps us to understand the kinds of problems of systems which use target profiles, and so, you know, some problems with target profiling.

The problem of unintended objects is present with both of those kinds of systems that I’m talking about, right? So missile defence systems in the past have applied force to civilian aircraft in error, for example, because those objects also fell within the target profiles that they were using. That’s an unusual scenario in the use of those weapons, because normally they’re used under quite tightly controlled circumstances of time and space. But of course, this shows us a lot of the issues that there are with systems that use target profiles, and with autonomous weapons.

[00:16:56] And similarly with systems like loitering munitions: they could be in an area over a period of time where other objects could enter that space, such as civilian vehicles or other things which weren’t perhaps the intended target but could match the target profile within that system.

So it shows some of the problems with this process and this type of weapon.

[00:17:18] Uldduz Sohrabi: Some governments argue that constructing and defining target profiles using more advanced technologies and processing techniques could increase the precision of weapons and that this in turn would reduce civilian harm. Comparing this to other weapons in armed conflict, how does the argument of increasing precision sit with what Article 36 is framing as a problem?

[00:17:49] Elizabeth Minor: Well, I think possibly one thing to bear in mind that’s important is that greater precision in weapon systems doesn’t automatically mean, and isn’t automatically intended for, the reduction of civilian harm.

We’ve seen in other contexts, of course, with the use of precision weapon systems and munitions – there have been a few studies in relation to the use of drones, for example – that there isn’t necessarily a reduction of civilian harm at all, and sometimes there’s an increase. I mean, it depends on how you use these weapons systems rather than, often, their technical characteristics.

And, I suppose as well, you can have precision weapons which, for example, have a large area of effect. So, for example, a precision-guided 2,000-pound bomb will still destroy an entire building. It will hit that target exactly where you want it to, but it could still cause major harm to civilians – everyone who would be killed within the area effects of that weapon. There’s also a personal favourite, which is the concept of a precision nuclear weapon, which exists for some reason.

[00:18:54] And I mean, thinking about the characteristics of nuclear weapons – it’s a nuclear weapon, right? It’s among the largest area-effect weapons ever invented, causing harm down the generations to people and the environment. The fact that the central point of impact of your nuke will be exactly where you want it to be isn’t going to significantly reduce those effects.

[00:19:17] So I think that’s one thing to think about in this area: precision doesn’t necessarily mean better outcomes for civilians, and it kind of depends on what you’re comparing it to as well, I feel.

[00:19:28] Richard Moyes: Those are very, very good points. Perhaps I’ll strike a slightly more conciliatory tone towards militaries.

[00:19:35] Our policy orientation is not to say that all systems that use target profiles must be prohibited. As we’ve noted, there are numerous systems in use today that use target profiles and that are not associated with distinct humanitarian concerns. Others, like anti-personnel landmines, have of course been prohibited, and they’re prohibited because of the problems of how they function and the way in which they were used.

[00:20:03] Anti-vehicle mines have also caused substantial humanitarian problems, even though they’re not straightforwardly prohibited at present. But in general, our policy orientation is not to say that all systems that use target profiles are unacceptably problematic – though they all do present certain tensions.

[00:20:19] And we’re definitely saying you should take off the table systems that target people, that use target profiles to represent human people, because there’s a dehumanizing reduction of people through that process – the process of processing, in a way. And we’re saying you should take off the table systems that the user can’t effectively control because they don’t really understand the way the target profile functions.

[00:20:43] They can’t actually constrain the functioning of the weapon in a certain way. So there’s a line of prohibition there in relation to how target profiles function. But for other systems, we think the key thing is that they are used with meaningful human control in practice. That doesn’t mean you can start applying force in any context simply because you can make a claim to precision.

[00:21:05] The key thing is that when you’re making decisions about the use of force, you’re in a position to make informed, appropriate judgments based on an understanding of the context and on a sufficient understanding of the system that you’re going to use. And I think, as a sort of mental model, the target profile is always a bundle of assumptions about the outside world.

[00:21:26] It’s a simplification of the outside world, and so it’s a set of assumptions about the context where you’re going to apply force. So when we talk about controlling the time and space in which these systems are used, that’s really a question of ensuring that they’re only used within an area of time and an area of space where the assumptions being invested in that target profile actually are going to hold valid. And I think that sense of seeing this as a set of assumptions, and as a tension built into the system – and then, in the use of that system, the need to control those assumptions and to ensure that they don’t spread out into problematic effects for the surrounding population – that’s the key dynamic of a policy orientation in this space.

[00:22:15] Uldduz Sohrabi: Another problem that’s been raised in relation to autonomous weapons is the militaries’ or user’s ability to understand a system that they choose to use. Now, I understand that a weapon user might not understand every detail of how a system works. You and I are talking through our computers. We don’t really know the inner workings of these machines, but we know how to use them. So I’m wondering how target profiles can help us think about a user’s understanding of a system. What actually needs to be understood and how might the concept of target profiles provide a way for helping to make systems understandable?

[00:23:02] Richard Moyes: I think ensuring that systems are appropriately understandable – effectively understandable – is fundamentally important to ensuring that they can be used with meaningful human control.

[00:23:13] So I think we recognize that people are not going to absolutely understand perhaps all of the details of computer code in a system that they’re using. It would be nice to think that somebody somewhere did understand the computer code, but the user of a system probably needs to have an effective understanding, or an appropriate understanding, of the systems that they’re going to use.

[00:23:34] When we think about the target profile, that does mean they need to know what will trigger an activation of force in relation to that target profile. That means not just thinking that because it’s an anti-vehicle system, it’s only enemy vehicles that it targets. You have to have some understanding of whether civilian vehicles would also trigger applications of force against that target profile.

[00:23:57] So understanding the characteristics of things that will trigger force – both the military things you’re intending to target and other things that may then produce a risk of false positives in the use of the system – I think that’s extremely important. That should be recognized as a sort of baseline level of understanding.

[00:24:15] We do have concerns about systems where, say, machine learning is used to construct target profiles. Machine learning can be used for object recognition; it can be effective in certain ways at developing the capability of a system to identify certain objects.

[00:24:31] But the problem is that we, as the users, don’t necessarily know what it is that the computer, the algorithm, is actually identifying. It’s not seeing a tank or an enemy aircraft in the way a human would see it. It’s simply seeing some pattern of data that it identifies and codes as that object.

[00:24:52] And if we have no understanding of what that actual information is that it is going to act upon, it makes it very difficult for us to know what other things in the outside world might also trigger that application of force. So a system could be developed that was extremely reliable at identifying tanks – extremely good at identifying tanks as tanks.

But for us to know what other things it might also identify as tanks out in the real world – which is extremely diverse and complicated, and very difficult to model in any kind of test conditions – suddenly, although we can know that it’s very good at identifying tanks, we can have very little understanding of the actual error states that it might produce, of the false positives that it might trigger.

[00:25:36] And that creates an additional degree of unpredictability, even though certain aspects of that system’s functioning could be considered statistically reliable.

[00:25:47] Uldduz Sohrabi: Do we have an idea of the extent to which we can make a target profile more understandable and more precise?

[00:25:56] Richard Moyes: We have to recognize there is a limit to how precise one can be about specifying what a thing is. The identity of things is in any case mutable. So, you know, a human person – of course, we want to take targeting of people off the table altogether – but a human person can be a soldier in one context and a civilian in another. And so there’s a contingency around these identities that is always a factor.

I think we need to perhaps recognize as well that being as precise as possible has its own problems. In terms of matching the profile to a thing, it’s always going to be a simplification, and the narrower your target profile, the more specific its boundaries, then perhaps the more likely it is that when a system encounters an intended target in the real world, for some reason – it’s just at the wrong angle, or it’s just in the shade of a tree, or external factors affect the way that object presents itself – the target profile just doesn’t match against that object in that specific circumstance.

[00:27:03] So I think what I’m saying is, in a sort of raw engineering mode, there’s a degree of generality that is needed if you want to be reasonably confident that your system actually is going to apply force in certain circumstances. I think current systems that are identifying, say, the heat shape of armoured fighting vehicles – well, they’re expensive weapons, right?

If you fire it at a set of enemy tanks – I’m anthropomorphizing, but you don’t want your weapon looking at the tanks and thinking, ‘well, it looks like a tank, but it doesn’t quite match my target profile, so we’re just going to fall to the ground and self-destruct’ –

[00:27:38] because that’s going to be hundreds and hundreds of thousands of dollars for nothing. So it’s just to suggest that for weapon developers and for the users, the closeness of target matching isn’t necessarily the primary motivation.

[00:27:52] Elizabeth Minor: And there’s this balance there, isn’t there, for weapons users and for states, between having the intended effect and also the risks to unintended objects and people.

[00:28:03] And I think you can see that a little bit in the debate on autonomous weapons, with some countries who are interested in having these more advanced technologies, with these more advanced target profiles, realizing that there will be this problem of also affecting civilian objects and vehicles and so on.

[00:28:20] And instead of, I suppose, confronting that problem as one of the developments in this area of weapons systems, and of the rules we should put on that, some countries and others are thinking, ‘oh, well, how could we possibly techno-fix that by also encoding into these systems that they definitely wouldn’t hit, you know, an ambulance or a civilian car or something like that.’

[00:28:40] So adding a layer of programming complexity to try and eliminate those things. And I think, again, from an engineering perspective and from a legal perspective, that has multiple problems with it. But just the fact that that’s in the conversation, I think, shows some of the problems that we’re talking about with target profiles and understanding in this area.

[00:29:02] Uldduz Sohrabi: All right. Just to summarize this discussion today, would you take us through the points that we’ve made in this episode – how we can think about target profiles and how they help us structure the sorts of rules that we want to see?

[00:29:19] Richard Moyes: Sure. I’ll give it a go. I feel like we’ve covered quite a lot of ground in this conversation.

[00:29:24] I mean, back to our beginnings: we’re talking about systems that use sensors to determine specifically where and when force is going to be applied. So it’s the system that is doing that, through sensor information, machine calculation, and then the application of force. For us, all systems that function in that way must use target profiles as a conceptual element of how those systems work. They all contain some form of target profile or target profiles. And then, when we start to think about the sort of rules that we want to see, we can see how target profiles relate to that. So a key prohibition for us is not using systems that function in this way and that would target people.

[00:30:04] And so the targeting of people is then a function of the target profiles that a system uses. People in this context, as we’ve seen with the anti-personnel mine example, can be represented in ways that have very little ‘people-ness’ about them. That’s part of the problem. That’s why we see it as dehumanizing, essentially. But it’s the target profiles that a system uses, that are intended to apply force to people in that context, that are a critical problem for us. Target profile types are one of the key ways in which we think systems can be regulated.

[00:30:35] Another line of prohibition for us would be not allowing systems that can’t be effectively controlled. Well, that’s partly about understanding the target profiles that a system uses. It’s about making sure that those target profiles are understandable in the first place – that they’re not produced through opaque processes that don’t actually allow the users to know what it is in the real world that is going to trigger an application of force.

[00:30:58] And it’s about making sure that those target profiles are not going to change once the system has been put into use – that they’re not going to be modified or adapted within the system itself without some human authorization and understanding of what those changes are. So target profiles, again, are important to understanding systems.

[00:31:15] And that’s a line of prohibition for us: if they’re not going to be understandable, they shouldn’t be allowed. And then finally, a key element of our policy orientation is an obligation to ensure meaningful human control over the use of systems. I think understanding the target profiles that a system has is fundamentally important – understanding those target profiles and then ensuring that the context of use is controlled and limited appropriately in relation to the target profiles that are being used.

[00:31:44] That’s a central element of how human control can be exerted over the use of systems in practice. So for us, target profiles are central to the rules that we want to see. We think the concept is just useful for understanding that this is how these systems work, and therefore bringing it more clearly into the debate allows us to talk about policy and law in this area in a more straightforward and direct kind of way.

[00:32:14] Uldduz Sohrabi: And that’s all from us at Article 36 for this episode on target profiles and labelling, as part of our autonomous weapons podcast series. We really hope that you found this discussion useful. Thank you for listening and goodbye.