As states prepare for international discussions regarding ‘lethal autonomous weapons systems’ (LAWS) at the United Nations in November, the UK has provided a slightly updated statement of military doctrine in this area.

The UK’s Taranis stealth UAV. (Flickr/Qinetiq https://www.flickr.com/photos/qinetiq/4789729740/)

In a new publication on Unmanned Aircraft Systems, the UK’s Ministry of Defence this week acknowledged that its definition of an ‘autonomous weapon system’ differs from those of other states. In doing so, it took a step towards recognising that such definitions remain one of the key areas for international negotiation on the issue. The UK’s proposed definition of a lethal autonomous weapons system appears to have gained little traction in the international policy debate thus far, and this acknowledgment could open up pathways for more substantive and fruitful discussions with other international actors.

At the same time, the UK unfortunately missed an opportunity to squarely address the ethical, human rights and humanitarian law challenges posed by lethal autonomous weapons systems, and in doing so to contribute actively towards a global standard that would prevent their development and use. Over the past decade, rapid advances in technology have powered increasing autonomy in the ‘critical functions’ of weapons – such as the identification, selection and engagement of target objects. This has broad implications for the relationship between human and machine decision-making on issues of life and death, raising the prospect of weapons systems that strike targets without meaningful human control. This in turn presents new and distinct challenges regarding their regulation and use.

The UK did reconfirm its commitment to refrain from developing LAWS – a commitment that would be very welcome were the UK definition of LAWS not so far-fetched. It is a commitment that is sorely undermined by the UK’s narrow and futuristic definition of autonomous weapons systems as those that are “capable of deciding a course of action without depending on human oversight and control” and also have “the ability to understand higher-level intent.” By maintaining a definition that focuses on systems so sophisticated that they are not yet technologically achievable (and, according to the UK paper, may never be achievable), the UK has failed to engage substantively with the concerns surrounding weapons systems already on the cusp of development.

It is an approach that seeks to situate the problem so far in the future that it gives free rein to the development of systems in which targeting choices are made repeatedly by computers and algorithms, even if designers would never claim these systems “understand” the “intent” of the operator. This approach hampers international scrutiny of weapons system developments that might have a fundamental bearing on military conduct and on the reliance on computer algorithms to make life-and-death decisions.

The Ministry of Defence did, however, emphasise again the UK’s commitment to ensuring “human control and oversight”, despite having rejected calls for an international agreement setting out such a requirement. But again the UK failed to set out what it understands by meaningful human control, and what its key constituent elements might be. Nor did the Ministry of Defence give any clarity on the threshold of human control the UK deems necessary to ensure that any weapon, or use of a weapon, falls within the law.

The repeated assertion that current international law is adequate to contain the development and use of LAWS is similarly unhelpful, particularly as the UK has not set out how the law should be interpreted or applied to the use of these weapons. Instead, the UK has proven both dismissive of the unprecedented challenge posed by LAWS, and deaf to the urgings of civil society, leading artificial intelligence experts, and senior UN officials to support a prohibition on LAWS, based on a collectively agreed definition.

Urgent attention is needed to assess the acceptability of systems that are already under development, particularly those on the cusp of use. The UK’s current policy orientation on LAWS is out of step with technological developments, and avoids addressing head-on the central concerns over the development of weapons with increased autonomy. It also fails to elaborate key terms such as “human control” beyond stating that a human will set the parameters for a machine’s operation.

Such issues are the substance of debates on the role of artificial intelligence and autonomy in weapons, and the UK should encourage and openly engage with them in international discussions if it is to avoid being side-lined. The UK now needs to build upon its recent recognition that its definition of “autonomy” differs from those of other states, and work towards a collective understanding of both autonomy and, most pressingly, meaningful human control.
