On 15-16 September 2021 the government of Austria hosted the online Vienna Conference on Safeguarding Human Control over Autonomous Weapons Systems. With strong statements from the Foreign Minister of Austria and the Disarmament Minister of New Zealand calling for action towards new international law to regulate autonomy in weapons systems, the conference was another indication of growing political will and policy coherence amongst states around the need to address increasing ‘autonomy’ in weapons systems with a framework of prohibitions and positive obligations.

Article 36’s Richard Moyes contributed to the Vienna Conference’s panel on ‘international law and safeguarding human control’ – his contribution to the meeting is reproduced below:

Comments on the Panel: International Law and Safeguarding Human Control

by Richard Moyes, Managing Director, Article 36

15 Sept 2021

Thank you chair,

And thanks to Austria for the opportunity to participate on this panel – and for the initiative in convening these discussions.

The international policy discussion has made significant steps forward this year. And we can now see the shape of a response:

A combination of prohibitions and regulations:

  • a prohibition on systems that cannot be used with meaningful human control,
  • a prohibition on autonomous systems targeting people, and
  • positive obligations to constrain the area and duration of system use, and to ensure understanding of systems, so that existing rules of law can be meaningfully applied.

We might not yet all agree on these lines – but we have the potential now for much more productive conversation around this structure. So additional opportunities for interaction, like this, are all to be welcomed.

I thought I would use these comments:

  • first to reflect on some of the wider dynamics we see in the legal discussions around autonomous weapons
  • but then to raise some more detailed legal issues at stake within the structure that I just outlined.

Wider legal points

In terms of the wider dynamics, I just want to place this conversation in a broader social and historical context.

And the first part of that is just to recognise that the CCW was established 40 years ago to provide a flexible framework within which the problems raised by certain weapons technologies could be addressed in law. So the CCW, and other legal treaties addressing certain weapon types, are, first and foremost, a recognition that the general rules of IHL do not deal sufficiently with the issues raised by specific technologies.

And the CCW, in its preamble, sets its goals towards détente, the ending of arms races, peace and disarmament, and it reaffirms “the need to continue the codification and progressive development of the rules of international law applicable in armed conflict.”

So there is a recognition here that the law is not considered finished, and that it needs progressive development.

Indeed, CCW states are officially “wishing to prohibit or restrict further the use of certain conventional weapons”…

It is noticeable also that the Protocols of the CCW don’t simply repeat the existing general rules. Sometimes in our conversations it seems that certain delegations believe nothing can be said unless it is already agreed in the law. But you cannot progressively develop the law by simply repeating the law that is already agreed. We can recognise this as a rhetorical, political posture of course – intended to prevent progress – but we shouldn’t mistake it for an intellectually credible position.

So, my first point is that the preamble of the CCW makes it clear that it is a framework for developing the law. Those states calling for the start of negotiations of a legal Protocol are straightforwardly in line with the purposes of the convention – and we shouldn’t lose sight of that.

The second point I want to note is that there have been legal developments in our wider society that respond to automated decisions that affect people.

The General Data Protection Regulation (GDPR), adopted by the European Union, recognises that rapid technological changes have brought about new challenges. And so, it affirms that people “have the right not to be subject to a decision based solely on automated processing” which produces legal or similarly significant effects.

And it places those that undertake automated processing under an obligation to provide “meaningful information” about the logic involved in that processing in advance, as well as to be responsive to objections to such processing.

I am not arguing that this legislation directly creates legal obligations for European Union states regarding autonomy in weapons systems. But I am noting that all states of the European Union, at least, have recognised that the technological development of automated decisions requires a legal response in order to protect the interests of people.

It is a technological development raising new challenges that were not adequately catered for in existing law.

And of course, this recognition of people’s rights in the face of automated decisions was not in place during the drafting of the Additional Protocols to the Geneva Conventions, or of the CCW’s original Protocols, back in the 1970s.

So my second point is that there is a recognition by many states that exposing people to harm from automated decisions requires a specific legal response to protect the rights of those people.

Specific issues

So my comments so far have really been about the validity of seeking new law in response to the challenges presented by autonomy in weapons systems. Now I wanted to raise some of the legal issues at stake in the structure of prohibitions and regulations that is now central to this policy conversation.

Firstly, I wanted to note how adopting new law on this issue is protective of the existing legal framework – and should be welcomed as a mechanism for preserving important existing presumptions in the law.

And secondly, I will comment on some of the legal issues around targeting people, and why we would argue for prohibiting the targeting of people on a precautionary basis, including precaution against a sort of ‘erosion’ of the law.

In terms of protecting existing law, there seems to be some developing agreement around certain positive obligations that might be necessary in order to ensure meaningful human control in the use of systems.

There seems to be some general recognition that controlling the area and duration of a system’s functioning is necessary. And I think we should see this as vital to protecting existing legal rules against being interpreted in broader and broader terms.

So with the law being directed at people, and people then having obligations to make certain legal judgements and determinations in undertaking an attack – it follows, at least implicitly, that those people need to be able to make judgements based on relevant, up-to-date and substantive information regarding the objective being sought and the context within which a system will be used.

You can’t claim to have made a substantive legal judgement whilst not knowing anything about the context. And you can’t know the context without it having some spatial and temporal definition.

Now we can recognise that this may need to be framed with some flexibility – different situations are more ‘knowable’ than others – but recognising that area and duration need to be limited sufficiently for commanders to apply existing legal rules should be straightforward, and I think we are seeing agreement here.

And we should be recognising that this is vital to maintaining the substantive case-by-case application of human moral and legal judgement, which is central to the structure of the law, and which military lawyers should be seeking to protect.

There is a related issue, though, which I don’t think has been as generally agreed: the need for obligations “to understand” the technology being used. This comes from the same logic – that meaningful legal judgements require understanding of the system that will be used and the context in which it will function.

And this perhaps links back to the GDPR legislation I referred to earlier – which requires those authorising automated decisions that might affect people to be able to provide meaningful information on the logic behind that processing.

So again, here I think we should be working towards obligations for the users of systems to have a meaningful understanding of how their systems work, and to be able to give a meaningful explanation of, for example, the external conditions that will trigger an application of force by the system.

After all, being able to explain what will trigger an application of force is vital to understanding the risks a system poses.

And these same arguments link to the prohibition we argue for on systems that cannot be used with meaningful human control – which we could understand as systems that cannot be sufficiently understood, or whose use cannot be sufficiently constrained to allow an understanding of the operational context.

Targeting people

Finally, I wanted to comment on our arguments for a prohibition on autonomous weapons systems targeting people.

We would not argue that this prohibition is directly required by existing legal rules. Rather, this is where new law is needed precisely because existing law does not rise to the moral and practical challenges at stake.

We would argue for this position on a moral or ethical basis – that it is dehumanizing to have machines that kill people automatically on the basis of sensors – and that, as a matter of precaution, such a prohibition would secure our collective best interests for the future.

Although we might not argue that the law demands this prohibition – there are major legal problems that can be avoided by such a rule.

Any claim that a machine can distinguish between people who may be targeted and people who should be protected is fraught with risks. This is not simply an issue of machine capability, but a more fundamental issue of the responsibility of people to make legal judgements, recognising the changeability of people’s identities under the law.

If we start to fix certain sensor-detectable characteristics as indicators of targetability then we automatically risk overriding protections.

And we risk bringing in bias and prejudice, deliberately or inadvertently, if combinations of physical characteristics associated with age and gender, for example, become proxies for combatant status.

Alternatively, it might be argued that states would prefer autonomous weapons that target all people, but whose use would be limited to areas where no civilians are present.

Again, this puts at risk combatants in such areas who are hors de combat, or people seeking to surrender. And more broadly, it pushes the burden onto the civilian population to avoid such areas.

These issues all threaten an erosion of existing legal protections and presumptions, in order to accommodate a technology that is dehumanizing. Instead we should take confidence from the fact that automated killing of people has not been adopted – despite the technology being readily available, in sentry guns and the like. We should maintain that rejection.

Conclusion

So I think in these comments I have mainly been looking to highlight that both the legal framework and developments in our wider society are in favour of a legal response.

And that in some of the more detailed legal discussions, the rules that are proposed are fundamentally reinforcing and protective of existing legal orientations. And many of the underpinning legal arguments are still about preserving the protections afforded by the law.

So – it is time for the negotiation of a legal instrument. We can develop a flexible, practical instrument that preserves human control, and human application of the law – but an instrument also that draws a clear line against machines killing people. Thanks.

 

Featured image: the High-Level Opening Panel of the conference featuring Alexander Schallenberg, Federal Minister for European and International Affairs of Austria; Izumi Nakamitsu, United Nations Under-Secretary-General and High Representative for Disarmament Affairs; Phil Twyford, Minister of Disarmament and Arms Control of New Zealand; and Gilles Carbonnier, Vice President of the International Committee of the Red Cross (ICRC). Photo: MFA Austria/Twitter