
I recently addressed the autonomous weapons gathering hosted by the Austrian government. There were 140-odd states there, along with a large audience of interested third parties. I think it fair to say that the balance of opinion among the latter was against AI weapons, and perhaps more broadly against the use of AI in national security. Sentiment favoured regulation, an international treaty, even an outright ban.
I was the skunk at the party. I’m extremely sceptical about the possibility of effective arms control here. In a later post, I’ll explain why. But first, I was asked whether AI weapons made war more likely, by reducing it to killing from afar, on a screen.
I was sceptical here too, and made three arguments:
- There’s a plausible idea that increased ‘killing distance’, whether physical or psychological (or both), makes killing easier. You find it in SLA Marshall’s work on Second World War infantry, and it’s the central theme of Dave Grossman’s much-cited book On Killing. There’s something to it: most humans, happily, find killing people face-to-face very hard. Insofar as technology increases that distance, it ought to make killing less troubling.
And yet, the most brutal genocide of the modern era, in Rwanda, saw hundreds of thousands hacked to death with agricultural implements by tens of thousands of their neighbours. Conversely, when the fate of the world hung in the balance during the 1962 Cuban missile crisis, empathy for distant strangers and ideological enemies worked to prevent disaster, despite the high-tech weapon systems of the day permitting vast killing at tremendous distance. So killing distance matters … but only sometimes.
- Second point: there’s a logic that says the depopulated battlefield enabled by pervasive autonomous systems will make states more willing to go to war, and to escalate once in it, because there’s less risk to their own soldiers.
Does this stand up? I think not. At least not in the case of liberal western societies, where the development of increasingly sophisticated, precise weapon systems has been driven in part by the twin engines of liberal values and liberal risk aversion. The same liberal values that undergird the innovative ecosystem responsible for developing cutting-edge AI also extol the value of life – that of our own soldiers, and even of the societies against which we fight. Not always, of course – but certainly often enough to make me doubt that AI weapons by themselves will encourage adventurism. As for other, less-liberal societies: I think marginal quality really matters for military AI, and I doubt very much that theirs will be as good as ours – an argument I make at length elsewhere.
If you don’t buy all that, I’ve another argument for you: layered deterrence. AI systems are only part of the combined arms package states are developing. You might fancy that your AI weapons give you an advantage, and with minimal risk to your own people – but the enemy gets a vote too, and they may choose to reply asymmetrically, using their own comparative advantages. If those include possession of nuclear weapons, it would take a bold leader to gamble that AI alone will deliver success.
- Argument three, then. It’s suggested that AI systems encourage disproportionate action – including by making success look easy and cheap. Much attention has been given to Israel’s use of AI in targeting – via its so-called Lavender and Gideon systems. Did these systems result in more civilian deaths? Did they encourage a belief in the IDF command that technology would lead to victory against a less sophisticated enemy?
Well, they certainly haven’t self-evidently contributed to Israeli strategic success. Rather, they are part of a rising tide of international opprobrium. Tactical success is harder to gauge – but it’s likely that the increased tempo of AI-cued operations has inflicted severe damage on Hamas’s military strength. As for the increase in civilian deaths and injuries – and this was my point in the conference hall – the responsibility lies with the humans who set the risk parameters for their machines, not with the machines per se. With the same risk appetite per strike, the aggregate toll will be higher with Lavender, simply because of the increased tempo it enables. We can’t blame automation for that. And we can’t really assume that increased tempo is per se a bad thing either: militaries prioritise it for good reason – it can shatter the cohesion of enemies, and perhaps deliver quicker tactical success. I wrote more about that here.
In closing, I did offer one very big reason to be unsettled by AI weapon systems: their role in conflict initiation and escalation. We understand the dynamics of war as fought by humans, at least somewhat. We have a handle on human psychology – there is a huge body of interdisciplinary scholarship that explores the often emotional decision-making involved. ‘Machine psychology’, by contrast, is in its infancy. In Clausewitzian terms, the ‘grammar’ of war with AI is opaque. Strategists must always ask, ‘What did they mean by that?’ And with machines in the mix, it’s not always clear. We know already that machines do surprising things, and gauge risk in ways rather different from humans. Right now, those decisions are confined to the ‘toy universe’ of games like poker and Diplomacy, or to narrowly tactical military choices. In the near future, they will inevitably feature on a larger, strategic canvas. On that, at least, I can agree with the unease many at the conference evidently felt.
One last point: I was struck by the military naivety of some of those present. I don’t care for gatekeeping – we are all students of war, in one way or another. And in complex debates like this, you can’t be an expert in the full range of relevant topics – whether that’s law, technology, strategic culture, or military tactics. I do think, though, that if you want to participate meaningfully in these discussions, it’s really important to understand something of military strategy and tactics. What, for example, is a frigate, and how might it be used in conflict? Why do states spend large sums on acquiring frigates? Questions like that are an essential part of the debate, without which opposition to AI weapons is simply an article of faith.
Next – some thoughts on arms control.