
03.09.21
CCW Group of Governmental Experts August 2021 meeting analysis
By Richard Moyes
The CCW Group of Governmental Experts (GGE) meeting, held from 3 to 13 August 2021, served to further confirm that the international conversation on autonomous weapons has been transformed in 2021.
Whilst substantial disagreements still exist, the basic conceptual approach to the subject matter and the shape of the necessary policy response are now clear. This has not been the case in previous years when the conversation was characterised by contradictory orientations to basic ideas and people talking past each other. This stabilisation of the policy conversation – around a combination of prohibitions and regulations across a broad concept of autonomous weapons – is a major political achievement, and it is a necessary step for the successful development of a legal instrument.
Half-way through this August GGE, Belgium’s Ambassador Pecsteen circulated a ‘Chair’s paper’ on ‘Draft elements of possible consensus recommendations…’. This provided a basis for the second week’s discussion and was a very positive move, despite weaknesses in the paper itself. The Chair’s text fell significantly short of what is needed in reality: it did not clearly call for a legal response, it failed to acknowledge the need to prohibit systems targeting people, and it rejected only ‘fully autonomous weapons’ defined so narrowly as to be implausible. But the paper clearly reinforced the structure of prohibitions and regulations as central to the debate – and it served to frame the subsequent conversation constructively around that.
This is creating an interesting political dynamic across the community of states.
Within the central structure that is emerging in the discussion there are two main camps:
- There are now groupings of increasingly confident progressive states, broadly aligned with the position of the Stop Killer Robots campaign and the recent position of the International Committee of the Red Cross (ICRC), calling for a legal response that includes both prohibitions and positive obligations, and demonstrating clear command of the policy arguments within this structure. Whilst states in this camp are not completely like-minded, there is a clear sense of common aspiration. See, for example, Costa Rica et al. and Brazil et al.
- Working within the same basic structure, but rejecting a legal response at this stage, a smaller cluster of states (see, for example, France & Germany) is seeking a political outcome, possibly as soon as the CCW Review Conference. It feels like they would be pleased with a rhetorical rejection of ‘fully autonomous weapons’ – but with that term defined to refer to systems designed to operate outside any structure of human control (an approach also followed in the text circulated by the Belgian chair). This formulation is awkward because the notion of a system being deliberately developed to operate outside any form of human control is implausible. This grouping is not supporting a prohibition on systems targeting people, but does see the need for additional positive obligations to maintain control over other systems (though not in legal form).
A sceptical reading might fear that this latter position, if adopted by consensus in the CCW, could perhaps be sold politically as a significant achievement – asserting ownership of the term “fully autonomous weapons” whilst undermining it, and forestalling a proper legal response whilst requiring no meaningful constraint on autonomy in weapons systems in practice. However, there are already Guiding Principles in the CCW and, elsewhere, a political declaration (led by France and Germany) based on those Guiding Principles. It seems unlikely that adopting a weak political statement would forestall a serious legal response for long.
In substance, the position being promoted rightly embraces a recognition that certain characteristics of autonomy can render systems unacceptable. This is a strategically vital point, and it places a significant burden of responsibility on the definition of that boundary line. The formulation “systems designed to operate outside any framework of human command and control” (Chair’s paper) does not stand up to detailed scrutiny, and so is unlikely to be able to carry the weight of responsibility assigned to it.
- Practically, it can be argued that ‘using’ a system implies some form of control. But if the mere act of ‘activating’ a system (at a particular point in time) represents at least the potential for integration of that system into a framework of command and control, then the definition implies no constraint beyond that. On a plain reading, it seems to apply only to systems that operate prior to any point of deliberate human use – which is practically implausible.
- Morally, it allows space for systems that can be used within a military command structure but which cannot be practically understood by the user, and so are foreseeably prone to producing unpredictable effects. Such systems fail to maintain a relationship of moral accountability between a human user and the consequences of their actions.
- Legally, the existing rules of law need to be applied by human legal agents in individual ‘attacks’. The issue is then not the place of a machine within a ‘framework of command and control’ but the characteristics of that system as a tool for use by human commanders in attacks. The proper tests here concern:
- the understandability of the system’s functioning,
- the reliability of that functioning, and
- sufficient constraint on the space and duration of its use,
so as to enable predictive contextual analysis that can provide a morally reasonable basis for legal judgements.
So, to be capable of being sufficiently ‘predictable’, a system must be understandable, reliable, and limitable in time and space. It is the characteristics of systems within a framework of human command and control that are important – not fantastical notions of systems operating wholly independently.
Crafting a definition of systems that cannot be used with meaningful human control will present challenges – and, as suggested above, may require several components. But we should avoid approaches that place the problem concept in the realm of the fantastical. Whilst urgency in the current conversation is to be applauded, negotiations are needed to give specific lines of prohibition a morally, legally, and practically considered form.
However, the political groupings noted above are all at least participating in the same conversation!
Elsewhere, the UK, Russia, India, the USA and others reject a legal response, but also reject the sort of structure that the previous groups are working within (perhaps anxious that conceptual convergence may lead to actually achieving something). The states noted here are by no means united on the content – but they are united in standing outside the structure towards which others are converging, and in seeking to avoid any sense that agreement on that structure is being reached.
Various political and semantic techniques are employed in order to break up the constructive dynamic, including: emphasising that existing law is adequate; insisting on using the phrase “emerging technologies in the area of lethal autonomous weapons systems” (instead of just simplifying to “autonomous weapons”); worrying that the chair is exceeding his mandate; and generally suggesting that the conversation can properly do nothing more than reiterate things that have been agreed in previous years. However, these techniques are significantly less effective in a context where most of the other participants are clearly having a serious conversation instead.
As we approach the Review Conference in December, certain states will be increasing the pressure towards the adoption of some form of political ‘outcome’ that can be presented as a success. Such a success might fulfil short-term political aspirations. To some it might be sold as impeding the development of a separate treaty process (in a forum that the most militarised states can’t straightforwardly control). However, it is unlikely to be an outcome that does justice to the genuine social challenges at stake in establishing limits on the role of machines in decisions to harm people.
The CCW discussions in August are establishing the foundations of a legal response – even if the direct route to that outcome is not yet visible. Over the period ahead in the CCW, there will be proposals for political outcomes that can be presented as a success, arguments over the ‘mandate’ for further work, and late nights in which ‘important states’ perform the rituals of ‘tough negotiations’, all in preparation for the ‘blame game’ in which certain camps try to pin responsibility on each other for their collective inability to fit a square peg into a round hole. But against the background of those performances a legal instrument is taking shape – and it is one that responds to serious moral challenges that are of importance to society. We should remain confident that such an instrument will find legal expression.
Featured image: A night view of the “allée des drapeaux” of the United Nations in Geneva. 14 February 2014. UN Photo / Jean-Marc Ferré