In the results of an inquiry published this week, the House of Lords Select Committee on Artificial Intelligence challenged the UK’s futuristic definition of autonomous weapons systems as “clearly out of step” with those of the rest of the world, and demanded that the UK’s position be changed to align with these within eight months. This could represent a significant step towards enabling more constructive participation from the government in international discussions on these systems.

The UK currently defines autonomous weapons systems as those “capable of understanding higher-level intent and direction” – which places them in the realm of science fiction, far beyond the parameters within which most states are debating these systems.

The Lords also recognised, importantly, that a “lack of semantic clarity could lead the UK towards an ill-considered drift into increasingly autonomous weaponry.” By setting the bar at what some would consider “unicorns on rainbows”, the UK risks adopting weapon systems that operate without acceptable human control over the selection and engagement of targets, but that nevertheless would not fall within its definition of an autonomous weapon.

The committee concluded that the government’s definition limited its ability to participate meaningfully in the international debate on Lethal Autonomous Weapons Systems. It highlighted that this not only restricts the UK’s ability to show international moral and ethical leadership on this issue, but also that it “hamstrings attempts to arrive at an internationally agreed definition.” Definitions are currently one of the key areas of international negotiation on this issue at the Convention on Certain Conventional Weapons.

The committee further noted that “[i]t was generally agreed that the level of human control or oversight over these weapons was at the heart of the issue.” The need to retain an acceptable level of human control emerged as a key area of convergence during discussions at the Convention on Certain Conventional Weapons’ Group of Governmental Experts meeting on Lethal Autonomous Weapons Systems in Geneva last week. Article 36 has long advocated for a human control-centric approach to governing increasing autonomy in weapon systems, and for drawing a clear normative line, in the form of a legally binding instrument, to prohibit the development and use of weapons systems over which humans would no longer exert meaningful control over the use of force.

The report requests the convening of a panel of “military and AI experts” to agree on a revised form of words for the UK’s definition. In convening this panel, the government should consider including a diversity of experts who have been closely involved in the international debate, as well as UK tech companies that have been engaging in this conversation, and ensure gender diversity in the panel’s composition.

Finally, the report recommended the creation of an “AI code” giving guidelines of conduct, and listed five starting principles for developing this. One of the principles was “[t]he autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.”

Collectively, this analysis and these recommendations represent an important contribution from the Select Committee, one that could help steer the UK government towards a more productive orientation to international processes to address the legal, moral and ethical problems posed by increasing autonomy in weapons systems.

Read more

Drawing a line: Article 36 statement to UN talks on killer robots

Tech companies and civil society urge UK to take action to prohibit autonomous weapons

The new UK joint doctrine note, definitions of autonomy, and human control


Photo: Eric Huybrechts