27.03.26
AI-based targeting in armed conflict
By Albane Guyot & Richard Moyes
Article 36 calls on states to halt the use of AI-driven systems as a basis for target recommendations in armed conflict until the necessary international rules and guardrails are in place. Systems that process data to generate, identify or recommend targets for possible attack present distinct challenges and grave risks. Additional rules are needed to ensure meaningful human judgement, control and accountability in the use of force, to uphold human dignity and to protect international law as a framework structured around human decision-making.
These rules should prevent AI-driven systems from serving as the primary basis for life-and-death decisions. They should recognise that people have a right to reject their personal data being channelled into these military systems, and they should require traceability and transparency from states regarding the AI-targeting functions they are using and any harms that are caused.
Article 36 has set out elsewhere the specific prohibitions and positive obligations that should be applied to ensure meaningful human control and to protect human dignity in the context of autonomous weapons systems. Both AI-driven targeting and autonomous weapons systems threaten to undermine legal frameworks that are based around moments of morally engaged human decision-making.
Generating targets through AI systems risks attacks based on biased datasets. These systems may also promote surveillance, treating people en masse as objects to be processed, and systematic social labelling for the purposes of violence. We should, as individuals, have the right to prevent our personal data from flowing into these systems. Private sector actors should be required to protect those rights, but they also have the opportunity to show leadership in this challenge.
International discussions on AI in the military domain have, so far, presented a broadly permissive framework, which risks validating a slide towards further automated violence. The international community needs to take action to arrest this slide: halting the use of AI systems to generate targets for attack and developing a credible framework of rules that rises to the moral, legal and humanitarian challenge that these systems present.
Whilst certain states have asserted the potential benefits of these technologies for improving the protection of civilians in armed conflict, the pattern so far has been towards an increased volume of attacks with scant human evaluation. Imagined benefits for civilians in hypothetical scenarios should not outweigh the actual patterns of civilian harm experienced in real conflict situations.
Later this year, states have an opportunity to move to the negotiation of a legal instrument on autonomous weapons at the Review Conference of the CCW. In a context of conflict and widespread civilian harm, new military technologies and orchestrated attacks on international law, it is vital that they take this chance.
States should also now give specific attention to the humanitarian impact, and the ethical and legal hazards, presented by AI-led targeting systems – and to the rules and policies needed to address these concerns. Whilst this issue presents grave challenges, it also represents an opportunity for positive collaboration, amongst states, international organisations, civil society and industry, to limit the functions afforded to AI for the good of human society.