As the technologies that are used within new weapons become increasingly complex, questions about their moral and legal implications arise. Advances in sensing, data processing, robotics and machine learning, amongst other areas, are producing concerns that are being discussed internationally as issues of increasing ‘autonomy in weapons systems’.

This new paper looks at:

  1. the problem of ‘opacity’ in the context of ‘autonomy’ – with opacity presenting a barrier to a user’s understanding of a system and therefore, inter alia, to predictability of outcomes, and accountability for outcomes; and
  2. the notion of ‘explicability’ as one form of response to that problem.

The purpose of this paper is to outline a way of thinking about how some of the challenges arising from greater complexity in weapons systems could be approached by using the concept of explicability, understood as a basic ethical principle.

It is a principle, now finding expression in civil law and policy, that relates to the intelligibility of the inner workings of technologies and that can help to enable accountability for their use. The paper reflects on issues of ‘opacity’ in the context of autonomy in weapons, including in relation to legal obligations and requirements for accountability. It then considers ethics and the notion of ‘explicability’ as they have been approached in policy and legal responses to machine decision-making in everyday life. Finally, it suggests implications of those emerging ethical orientations for issues arising in the military context.

Its key messages are:

  • Where technologies work in ways that are ‘opaque’ – such that their functioning cannot be effectively understood or explained – this raises challenges for predicting specific outcomes and ensuring adequate accountability. Such challenges are particularly acute in the context of autonomy in weapons because the outcomes involved include severe harms.
  • In the civilian space, policy and legal responses to new technologies have recognised these challenges and have imposed obligations for ‘explicability’ both as a system requirement and as part of any response to people who experience harm from automated data processing.
  • In the context of autonomy in weapons systems, establishing a legal requirement for ‘explicability’ (as one component of a legal response) would prohibit certain forms of system functioning. It would also provide a basis for scrutiny of technologies under development (such as in national weapon reviews) and would facilitate legal judgements and accountability around the use of systems that are not prohibited.

Featured image: Illustration of a ‘black box’ © bb-studio/Article 36
