Arms racing and the prospect of regulating autonomous weapons
More reflections on the inter-state AI conference in Vienna
At the Vienna conference on autonomous weapons, I was asked about arms racing – was there one in AI, and if so, what were the implications for regulation?
The mood in the hall was against AI weapons, and in favour of regulation. Waves of applause broke out every time the dangers of AI were raised, or someone insisted on the imperative of doing something about them – a treaty, perhaps even an outright ban. My unwelcome take – it won’t be possible.
Yes, I accept that norms matter in international affairs. Values change – we no longer permit slavery; anti-personnel mines are banned; there’s a powerful nuclear taboo, and a (weakening) norm against assassination. Realities in international affairs, as elsewhere in human affairs, are often socially constructed.
But that’s not the same as saying that realism is just an ideology – as one participant suggested, advocating instead a doctrine of legalism, with treaty law as the final arbiter of who does what. There’s a reason the League of Nations failed – it sought to compel the most powerful states to bend the knee to the law, without granting them any power of veto. Another participant critiqued deterrence as shaky and probabilistic. I agree – there are no certainties when it comes to deterrence, even with thermonuclear weapons. We’ve had a few near misses. But, as I said from the stage, I’d rather have deterrence, however imperfect, than attempt to stop war through legalism, like the infamous Kellogg-Briand Pact of the late 1920s that simply sought to outlaw it.
Why am I so sceptical? There are a bunch of reasons, and I highlighted some in my remarks:
First, there’s the problem of definitions – AI is a marketing slogan as much as a discrete technology. It covers different philosophies, different hardware, and different applications. What, exactly, is it you propose to regulate?
Many of these applications are dual use – useful in military and civilian domains. Indeed, many military applications are identical to the civilian ones – both, for example, engage in recruiting, education and healthcare. Or there’s product and concept design. Improving these things with AI is a long way from the front lines, but nonetheless has an impact on military power.
Perhaps what matters is the sharp end, especially the ‘killer robots’ that dominate discussion in forums like this one. Perhaps. But the problem is one of boundaries – what about systems that do the targeting? They don’t pull the trigger, but they are definitely part of the ‘kill chain’ – as the larger system assembling lethal force is often termed. But then what of the logistics system that provides ammunition? Or the AI that designs the platforms, or hones the armed forces via richly detailed simulation? Aren’t these part of the kill chain too?
There’s much discussion of ‘meaningful human control’ – and many states aspire to preserve that in the application of lethal force. But there’s little agreement on what constitutes ‘meaningful’ here. The pilot of an F-35 ultimately launches the missile. But before that there’s plenty of scope for the AI in the aircraft’s intelligence fusion computer to bound the pilot’s rationality. Does that diminish the pilot’s responsibility? If so, by how much? Should we ban the F-35? Boundary problems are hard.
There’s another reason that international agreement won’t be forthcoming – the security dilemma is incredibly powerful here. There’s tremendous uncertainty about what’s coming next in AI; about what AI can do for fighting power; and about who is going to do best at innovating and instrumentalising it. (A clue – the loudest calls for regulation are coming from weaker actors who fear being disadvantaged).
From the outside, it’s hard to know exactly what capabilities adversaries have – similar-looking robotic platforms will perform very differently, depending on the quality of the code and the concepts through which they are employed. A remotely controlled drone might be upgraded to full autonomy with a simple code update. Meanwhile, the signature of a military AI programme is very different from that of a nuclear or biological weapons programme. It may draw heavily on dual use technologies. It won’t necessarily need specialised, heavy plant, like centrifuges, or rare earth metals with limited utility elsewhere. It won’t be easy to monitor from afar, for example by satellite surveillance. And developing cutting-edge AI might be far more challenging than simply pinching it via industrial espionage.
All-in-all, the prospects for monitoring compliance look bleak; defection from any regime looks comparatively simple; and the gains might be very great. So, choosing to constrain yourself via treaty obligations looks extremely risky.
I made a third point – about the offensive potential of AI weapon systems. Nuclear weapons are commonly considered powerful ‘defensive’ weapons – their utility comes from the prospect of retaliatory punishment. A small arsenal, securely held, might be enough to deter very large conventional forces. With AI weapons, by contrast, the picture is less settled. My own view is that they, at minimum, muddy the nuclear waters – perhaps undermining that settled defensive equilibrium in much the way that anti-ballistic missile technologies threatened the effectiveness of retaliatory strikes. More broadly, I think there’s scope to argue that they constitute an ‘offensive’ weapons technology – enhancing the ability to concentrate massed force accurately and rapidly. That would give possessors of the best AI a possibly significant first-mover advantage. Bottom line: offensive weapons incentivise arms racing. More so when there’s so much uncertainty. Better safe than sorry.
Overall then, reasons to be pessimistic about regulation abound.
A final observation – I’m a commissioner for the newly established Global Commission on Responsible Military AI. Isn’t my scepticism here at odds with that role? Only if you conflate ‘responsible’ with ‘banning’. Which I don’t. Surely it’s far more responsible to have a grown-up, informed discussion about the challenges I’ve outlined here, and others besides, than it is to stridently call for a ban, irrespective of the difficulties. It’s idealism that’s irresponsible, in my view, not me.