We are working to avoid a dehumanised future, where machines can be tasked to kill and apply force without people understanding or being fully responsible for the consequences.

International discussions on ‘autonomy’ in weapons systems now need to focus in detail on the specific components of a solution. Discussions are already building a common understanding of the prohibitions and other obligations needed to preserve human dignity and ensure meaningful human control.

Regulating ‘autonomy’ in weapons systems effectively

Our new leaflet ‘Regulating autonomy in weapons systems’ uses graphics and visuals to explain a basic model of how a treaty addressing autonomous weapons could be structured – and illustrates how that structure responds to the problems that increased autonomy in weapons systems raises.

For us, there are two key problems that we need to work together to solve:

  • firstly, determining which systems within the scope of discussion are fundamentally unacceptable; and
  • secondly, establishing how human control can be maintained over the remaining systems in this area, in order to adequately uphold both legal obligations and more profound moral and ethical principles.

An effective structure for international legal regulation would cover a broad scope of sensor-based weapons systems that apply force through a particular process: matching sensor inputs to a “target profile” of characteristics following the system’s activation, emplacement, or deployment. Within that scope, it would prohibit certain configurations – such as systems that target people, and those that cannot be meaningfully controlled – and impose positive obligations for meaningful human control over the rest.

This publication lays out that structure, and looks at how different types of systems within the scope of current international discussion would be affected.

Download this resource in: English, Español, Nederlands, Deutsch, Magyar

Please contact us if you are interested in translation into other languages.
