On 10 November, the National Defence Committee of the Belgian House of Representatives held a public hearing on autonomous weapons, which considered two resolution proposals – one to add killer robots to Belgium’s law on prohibited weapons, and one calling for Belgian leadership in the negotiation of an international legal instrument on autonomous weapons.

Article 36’s Elizabeth Minor spoke at the hearing alongside others including the Belgian Red Cross, PAX, and the Belgian defence ministry. Her remarks are reproduced below:

 

Article 36 presentation to Belgian House of Representatives on regulating autonomy in weapons systems

Elizabeth Minor, Advisor

I am speaking from Article 36, a UK-based NGO that works for stronger international standards on weapons and civilian protection. We are part of the International Steering Committee of the Stop Killer Robots campaign, which we help to coordinate.

Thanks very much for inviting me to speak at this public hearing this morning – we really appreciate the opportunity to give our views on the resolution proposals you are discussing. From our perspective, it is very positive to see these proposals to take steps forward in addressing the dangers presented by increasing autonomy in weapons systems with new law, both nationally and internationally. We think such responses are urgently needed, and that a new international legal instrument regulating autonomy in weapons systems needs to be negotiated now.

Building on others’ comments about the concerns raised by increasing autonomy in weapons systems, and where things currently stand in international discussions, I mainly want to focus on what we think an effective regulatory response to the problem of autonomous weapons should look like in more detail, in terms of the basic building blocks or structure that could make up a new international treaty.

In order to do this, it’s important first to talk about the scope of systems we want to see regulation of:

  • As you know, when autonomous weapons systems are being discussed, we are not talking about one particular class of weapons, or particular pieces of hardware – so this is not about drones or sentry weapons specifically
  • It is more useful to think about autonomous weapons systems as systems that use a particular process of functioning
  • So for us, the broad scope of systems that we want to see states negotiate a new international legal treaty on covers weapons systems that detect and apply force to a target based on the processing of data from sensor inputs – so after their activation or emplacement by a person, there is a period of time during which the system will detect targets and apply force without further approval – the operator will not determine specifically where, when, or against what force will be applied
  • This period of uncertainty underpins the range of concerns expressed about autonomous weapons, from the threat posed to meaningful human control over the use of force, to the moral challenges of machines automatically taking a life. On the threat to meaningful human control, the greater an area or period of time an autonomous weapons system functions over, for example, the less control an operator will have and the less certainty that it will do what is intended

There are a lot of different conceptualisations of autonomous weapons systems in the international debate and general discussion, but we think that this broad scope of systems functioning in this way – so systems that apply force through a process of matching sensor inputs to a ‘target profile’ of characteristics following a system’s activation, emplacement, or deployment – encapsulates a lot of definitions being used and underpins the discussion and concerns raised.

We think this process of functioning is a more productive focus for moving forward in legal and policy terms than, for example, the ‘technical’ differences between automatic, automated and autonomous weapons, or whether systems are ‘offensive’ or ‘defensive’ – the heart of the issue is human control over systems with a certain process of functioning, rather than these other distinctions.

If you are looking at this broad scope of systems – which covers a lot of things, including weapons technologies that already exist, like loitering munitions and high-speed interception systems, as well as systems in development, or that we are concerned may be developed, which would place more complex functions and unsupervised operation in the hands of machines – two questions arise that the international community needs to work together to solve:

  • Firstly, which systems within this scope are fundamentally unacceptable for moral, ethical or legal reasons and so need to be ruled out by states
  • And secondly, how can human control be meaningfully maintained over the rest of the systems in this area, to adequately uphold legal obligations and more profound moral and ethical principles

To answer these questions effectively, we think that an international legal treaty needs to have two key components – a set of prohibitions for types and sets of systems that are clearly unacceptable, and a set of positive obligations to ensure that weapons systems are never used without meaningful human control.

In terms of prohibitions, there are two key lines that we can see should be clearly drawn at the moment (there may be others):

  • Firstly, we believe that autonomous weapons that target people should be prohibited. We think there is a clear moral red line in reducing people to objects to be sensed and killed or injured by machines. It is an affront to dignity, a fundamental digital dehumanisation. Using systems autonomously to target people would also pose severe dangers to civilians, as it would not be possible to ensure they could be used in accordance with the principle of distinction. However, we think that for most states this would be quite an easy line to draw: there are not currently many automatic anti-personnel systems in use, anti-personnel mines are banned, and operators of existing weapons systems with the capability to automatically apply force to people, such as certain sentry gun systems, are currently choosing not to use them in this mode
  • So that is the first line of prohibition we think it is really important to draw – on systems targeting people
  • The second prohibition we think needs to be included is of systems that cannot be meaningfully controlled by their human operator. This might be, for example, weapons systems whose outcomes cannot be sufficiently understood because of the complexity of their functioning, such as the use of ‘machine learning’ to construct profiles of targets to detect. Or it might be systems tasked to ‘learn and improve’, such that the conditions under which they apply force may change during use
  • If what, where or when a system will apply force is too far beyond the acceptable control or understanding of the person using it, it should not be designed or used, and such systems should be prohibited

So those are some specific prohibitions we think would be important in an international treaty. However, to make sure that no weapons systems are used without meaningful human control, we think that a new law will also need positive obligations over the remainder of the systems within the broad scope of autonomous weapons I was talking about. These positive obligations would cover systems that do not target people and that it might be possible to use with meaningful human control, but which could also be used without such control.

  • The kind of positive obligations for meaningful human control that we would want to see would include limits on the parameters of use of systems, such as the type or size of the area and the duration over which they are used. Many of these requirements for control might reflect current military practices applied to weapons systems like missile defence systems or sensor-fused weapons. The speaker from the Red Cross usefully laid out many of these components, including limits on location, duration of functioning and target specification, and the need for operators to adequately understand the effects of a system.

In summary, we are calling for a treaty with prohibitions on systems that cannot be meaningfully controlled and on systems targeting humans, in addition to regulation of all other autonomous weapons systems to ensure meaningful human control.

In considering the proposal to add prohibitions to Belgium’s national law, we would invite you to consider this structure of regulation. The proposal already aims to prohibit the operation of systems without meaningful human control, which is one part of what we believe regulation needs to address. I would urge you also to consider adding a specific prohibition on autonomous weapons systems that target people alongside this.

Internationally, at the Convention on Conventional Weapons, we want to see a mandate to negotiate a protocol to that treaty, which would be legally binding. We agree with the proposed resolution that now is the time for international law, and that we should act to set norms as soon as possible. We should not wait for all countries to be ready, as some wish to obstruct the negotiation of a treaty because of the military developments they are pursuing. The international community needs to state now what the international standards around autonomous weapons systems should be, in order to put pressure on these developments; and any law agreed will necessarily influence further conversations that all countries will have.

Countries that are ready for action should take it, and leadership from Belgium both through nationally agreed measures and through stepping up to lead treaty negotiations internationally would be extremely valuable.

Thank you.

 

Featured image: The Belgian parliament hearing room. Credit: Twitter/Stop Killer Robots