My latest for the Wavell Room -
AI targeting and the deaths of civilians
The Israel Defense Forces are, apparently, using AI to boost their targeting abilities in operations against Hamas in Gaza. Are you surprised? Me neither. Nor am I really surprised at the critical tone of the reporting. The Guardian, here, points to 'concerns over data-driven "factory"'. Democracy Now up the ante, describing a 'mass assassination factory … increasing civilian toll'. And +972 Magazine, the Israeli and Palestinian magazine which broke the story, also goes for a 'mass assassination factory'.
What’s going on? I think several things. First, unease at automation (those 'factories') is being conflated with unease at civilian casualties. Second, there’s a palpable sense of injustice for some observers that one side in this conflict wields cutting-edge military technologies, and the other side doesn’t. Finally, for some critics, I suspect any concern is folded into a broader sense of the injustice of Israel’s actions in response to Hamas’ attack.
We'll all have our own views on the latter - but this isn't an article about that. It's about the first concern: the ethical implications of AI targeting, including its effects on civilians. Is it invariably morally wrong to use AI to target? To assess that, we'll need to explore the ethics of targeting in general, and then see how AI alters them, if it does. Not easy, but doing so might allow some reflection on AI targeting that's applicable more broadly than to just this particular conflict. Let’s try.
First - what makes for ethically sound targeting? By custom and in international law, groups should only fight wars when doing so is justified (usually by reference to self-defence); and they should then strive to keep any military action in proportion to those ‘just’ war aims. Note, though, that being proportionate emphatically doesn’t mean limiting oneself to the weapons available to the opposition, or to the levels of violence they can muster. So, in this case, the imbalance in technologies might be uncomfortable for some – but it’s not illegal per se, or even unusual.
What about the ‘mass assassinations’ of the headlines? It's never clear who exactly is being assassinated - but if they are combatants, that's the wrong term: they're simply being killed in combat. And if they are civilians, it's also incorrect, unless you could prove their killing was deliberately intended as the point of the operation. We'll come back to that. I guess that by 'assassinated' the journalists are both nodding at the precision of Israel's weapons and implying an injustice of some sort - perhaps that the IDF mean deliberately to kill civilians when they could otherwise avoid it.
Is there such an injustice here? And what role, if any, is AI playing in it? Well, belligerents should certainly make efforts to discriminate between legitimate and illegitimate targets – and to seek to avoid civilian casualties. But they must inevitably also make macabre calculations about how much anticipated but unintended killing will follow their action – this is the so-called ‘doctrine of double effect’, under which foreseen but unintended harm can be permissible, provided it is proportionate to the intended military aim.
The acceptable number of innocent third parties killed is subjective - that is, there’s no universally agreed figure of how many anticipated but unintended deaths are permissible in pursuit of a given aim. In practice, the number has varied from place to place and time to time. If anything, liberal societies have become increasingly restrictive about what’s acceptable, in contrast to earlier historical eras where force was sometimes applied far more permissively. Arguably, that sentiment has been part of the logic driving the development of increasingly precise weapon systems.
Still, civilians are inevitably killed and maimed, even in high-tech wars. Working out how many should be risked is an altogether grim matter, but it's essential, because wars are often fought in and around human settlements. Macabre or not, someone must make the calculation, keeping in mind the ends sought – often including, especially for liberal states, the need to demonstrate moral legitimacy. It's those ends that govern proportionality, remember, not some need to balance against enemy action.
In this instance, +972 Magazine reported that the IDF has dramatically increased the number of anticipated civilian deaths it permits. If so, that’s very likely because they view doing so as being in proportion to the cause of their war – urgent self-defence against Hamas. Note that, right or wrong, this has nothing to do with AI.
So, on to the unease at automation itself. How does using AI change the equation? On one hand, if AI systems can serve up more legitimate targets, then more such targets will be attacked. And – yes – all else being equal, more civilian casualties will ensue in any given time period, purely because of the greater tempo of operations. On the other hand, if AI processes improve reliability (and we don’t know that in this case), fewer illegitimate targets will be hit by mistake. Additionally, greater tempo might bring about more rapid results (again, it’s a big if in this case; but there’s a reason militaries strive for operational tempo – to stun enemy combatants into submission). If so, and if the combat is more rapidly concluded, it’s possible that fewer civilians will die than in a long attritional campaign.
Now, +972 Magazine also reported a related claim - that the more permissive targeting
'is mainly intended to harm Palestinian civil society: to “create a shock” that, among other things, will reverberate powerfully and “lead civilians to put pressure on Hamas,” as one source put it.'
That's one implicit logic of 'strategic' bombing. It would, if true, be more morally troubling - it's permissible to anticipate harm to civilians, not to aim at it. (Sidebar - it's also strategically dubious: the results in air power history have been meagre.) The other logic is that shock and tempo shatter cohesion for combatants, not civilians. It's possible that both are factors. Where civilians and combatants are mingled together, as in Gaza, the two logics are practically inseparable. So, if the targeting aims at shocking civilians - and is enhanced by AI - then the AI is part of the ethical problem. Alternatively, if it's enhancing the prosecution of military targets, then it's defensible, perhaps even admirable, compared to the attritional alternatives. Perhaps some in the IDF are aiming to shock civilians and prompt an uprising against Hamas, as +972's source suggests. I'm sceptical. More likely, tempo and shock are aimed at Hamas. Either way, it's the intention, not the technology, that's at fault.
War necessitates a gruesome calculus. Or it would, if it were possible to calculate any of it – the whole thing is riven with ifs and maybes. How can we know in advance what enhanced operational tempo might achieve? How can we know whether AI processes improve reliability in general – or, specifically, whether these AI processes do? We simply don’t know what’s involved in the IDF’s ‘Gospel’ targeting system. It’s likely some combination of image and signals analysis, with automated mapping – probably serving up targets to human analysts for further processing. How much acceleration in targeting has that allowed? I don't know, and I bet you don't either. And that's before we get to the actual target.
So, it’s hard to reach firm conclusions on all this amidst the confusion of a highly charged conflict, especially from afar. That uncertainty should, perhaps, give a little pause to those who readily conflate AI with illegitimacy in warfare.