By Dr. Matthew Bolton, Thomas Nash and Richard Moyes
Widespread adherence to the international prohibitions on biological and chemical weapons illustrates a strong global consensus that not all weapons are acceptable.
In banning anti-personnel landmines, the global humanitarian community acted against a military technology that has caused extensive suffering to civilians, and one that raises particular moral concerns because of the way it functions.
Weapons that are triggered automatically by the presence or proximity of their victims can rarely be used in a way that ensures distinction between military and civilian. Despite eventual successes on anti-personnel mines and, more recently, cluster munitions, technology develops faster than humanitarian consensus can form. A pressing challenge is the rapid evolution of military systems able to select and attack targets autonomously, a trajectory that points towards fully autonomous armed robots.
Although the relationship between landmines and fully autonomous armed robots may seem a stretch, in fact they share essential elements of DNA. Both provide a capacity to respond with force to an incoming 'signal' (whether the pressure of a foot or a shape on an infra-red sensor). Whether static or mobile, simple or complex, it is the automated violent response to a signal that makes landmines and fully autonomous weapons fundamentally problematic: it is killing by machine.
Some may dismiss the development of autonomous military robots as 'science fiction', but it is coming ever closer on the 21st-century battlefield, with a variety of systems already developed and deployed that require (and are given) less and less human decision-making and direct control. For a useful overview of the growing role of military robots, watch Peter W. Singer's TED Talk or read his book Wired for War. The US unmanned aerial systems roadmap for 2009-2047 is quite explicit about developing the capacity for autonomous targeting, as well as autonomy in many other aspects of operation. Other technological developments, such as remote-controlled drones, semi-autonomous weapons and ground vehicles, distance humans from the act of killing and may further change the psychology of warfare, in parallel with the risk of automated targeting.
Academics such as Noel Sharkey, Peter Asaro, Robert Sparrow, Jürgen Altmann and others have urged the international community to begin serious deliberations about how to regulate and control the use of robotic weaponry, founding the International Committee for Robot Arms Control (ICRAC) as a forum for developing such an agenda.
Whilst an expanded role for robots in conflict looks unstoppable, we need to draw a red line at fully autonomous targeting. A first step may be to recognize that such a red line needs to be drawn across the board, from the simple technology of anti-vehicle landmines (still not prohibited) to the most complex systems under development. This is not to ignore challenges to such a position; consideration might need to be given, for example, to how automation functions in missile defence and similar contexts. But certain fundamentals seem strong: decisions to kill and injure should not be made by machines, and, even if it will at times be imperfect, the distinction between military and civilian is a determination for human beings to make.