Re-aiming REAIM
What the big international conversation about military AI gets wrong, and what it should be talking about.
Street art in downtown Seoul
I’m in Korea for the second big REAIM conference on responsible military AI. Seoul is great – visit if you can!
REAIM (Responsible AI in the Military Domain) is a large gathering of states and others. There’s a REAIM Commission, and I’m one of its Global Commissioners. I have to say, though, there’s nothing like an international gathering earnestly discussing the development of norms to bring out my inner Dr Strangelove.
Many of the states attending signed a ‘blueprint for action’ that was understandably broad and non-committal. It, for example, stressed the importance of establishing appropriate standards and norms in this area – and more such stirring stuff. Plenty of people would like states to go much further, and faster. Some are keen on binding international law, regulating, perhaps even banning some types of military AI. Others think we may need changes to existing humanitarian law, to address the novelty of weapons that choose their targets with no direct human control, or even oversight.
They are out of luck. Tighter regulation will not happen, because the great powers don’t want it to happen. And they don’t want it to happen because of the security dilemma – there is so much uncertainty about where AI is going, how much fighting power it might confer, and on whom, that states are reluctant to bind themselves. An arms race is most definitely afoot.
That’s not great news. Arms races are a risky business. Uncertainty and miscalculation amidst rapid shifts in geopolitical power – we’ve been here before. There’s a spicy new twist too – escalation dynamics involving non-human decision-makers are even less well understood than when groups of humans square off. The ‘grammar’ of war, to borrow Clausewitz’s terminology, is uncertain; more so since the inner workings of modern AI are, to put it mildly, opaque.
And then there’s the rapid development of AI itself. We can be too anchored by the AI of today to pay sufficient attention to what’s coming, fast. Some of the ‘existential risk’ discussion is rather odd, and there’s more than a whiff of religiosity to some views of a superhuman AI coming to do in humanity. But it’s obvious now that frontier AI is becoming more powerful and flexible at an extraordinary pace. The implications for national security will be profound, extending deep into the fabric of societies, states and international relations.
Surely these are the sorts of things we should be discussing at REAIM. Instead, the discussion is still dominated by tactical applications of AI – ‘killer robots’ – and the extent to which these can be brought into compliance with international humanitarian law. This is a pity.
Why the focus on regulation and essentially tactical AI? I think it reflects an earnest desire to limit the scourge of war in this new arena. And perhaps it also reflects the attitudes of some state actors who are keenly aware they will struggle to compete. AI presages social and geopolitical dynamics that will transform societies and states themselves – AI innovation is heavily concentrated in a few places, and many justifiably fear being subject to the dictates of a new AI imperialism. Perhaps some have read Thucydides’ sobering reminder that the strong do what they can and the weak suffer what they must. Alas, regulation as a Lilliputian attempt to tie down the powerful is doomed to failure. Talk about norms can easily pick off the low-hanging fruit, like noting concern and the need for greater trust – but it’s hard to progress far beyond that.
In Seoul, however, I detected some encouraging stirrings. The ‘blueprint for action’ references stability only in passing, but I heard about geopolitics, escalation dynamics, and existential risk more than once – and not just when I was talking about them myself. In the months ahead, I think we’ll see more talk about the geopolitical and strategic dimensions of AI, perhaps even in forums like this one. I hope so.