I just spent a few days talking about the strategic implications of AI for the UK at the fantastic country house you see above. As ever, I can’t say what everyone else said – but I can certainly tell you what I did. I was asked to set out a few provocations for discussion, and here they are:
The AI we have today is not the AI of next month, still less the years ahead.
Our discussion is overly anchored in the tech of today. There’s no need to invoke the world of The Terminator, but clearly things are moving swiftly. My prediction for the immediate future: AI will continue to improve its social intelligence, becoming better at understanding and anticipating humans.
Strategic implications abound:
I highlighted the likelihood of rapid shifts in power balances, and the instability and conflict that often follow such shifts. Different countries will innovate and instrumentalise AI at different rates.
The ways in which military AI proliferates will play a big part in this process. In particular, I highlighted profound changes in the traditional defence export model. If you are exporting fighting power as a service, rather than a platform, you have a very high degree of control over clients. You could even think of this as a form of imperialism. Regimes will become reliant on service providers, who can, if they choose, dial up and down that fighting power at will, in the way that Elon Musk might dial up and down the range on Teslas via software updates. Better to be a provider than a purchaser in this world. The UK, of course, is both.
The world I see emerging is of an Empire of the F-35. Liberal minded states, bound together by shared technology and a vision of the norms that govern it. This world overlaps with some existing alliances, though not perfectly (as with Turkey, for example). I anticipate new alliance structures emerging, or evolving, from this new reality, in the way that the EU emerged from pooled coal and steel.
The national organisations we currently have are, of course, not best suited to this world. Change will come, but bureaucratic inertia is a factor too. I highlighted three challenges:
How do you hold inventory of ‘stuff’ when the technology is changing so fast? No point having warehouses full of millions of a particular tactical drone when it will be outmoded in months. We need the industrial capacity to create things extremely quickly. Lots of investment in automation needed!
The British deterrent is overly concentrated in one delivery system, making it vulnerable to, among other things, technological advances in sensing and data processing. We need to restore a second leg of our triad, via air launched cruise missiles.
We need to develop sovereign capabilities at the frontier of AI research – for the largest, most capable models, we are currently reliant on a handful of large, American providers. That is, for me, too risky.
There’s lots of talk about AI governance and the need for responsible AI. This is, on the whole, good. There’s a widespread and earnest desire to limit the scope of war and tackle the risks and inequities of AI adoption. But we are too eager to equate ‘responsible’ with governance and regulation. In national security it can be wildly irresponsible to constrain ourselves when others do not. Existing arms control regimes, for chemical and biological weapons, or landmines and cluster munitions, are (in part at least) a reflection of their limited strategic utility for great powers. They are banned because they don’t much matter. AI does matter, and will not be banned.
This is the security dilemma, and it is not an imaginary construct. Uncertainty – over whether AI favours offence or defence, over the ease of defection from arms control regimes, and over how readily AI proliferates – will limit the scope for international regulation.
The challenge for democracies is to square this dynamic with our own values. Via our export of AI-as-a-service we will have a powerful tool to extend those values to client states. We need to maintain our innovative ecosystem and our martial edge, while preserving the essential tenets of democracy – private conscience and the public square. I’ll have more on that in book form soon…
So there it was – my opening gambit for what turned into a great discussion. One final observation – there is some good thinking going on about the strategic implications of AI, but not nearly enough. We need to stimulate wider debate on this topic. Not on the challenges of aligning AI, or the ethical dimensions of AI in warfare, or the inequities of it for the global south, or any of the many other worthy facets of the debate – there’s plenty of discussion on all these themes, and that’s great. What we need is more informed discussion of the implications of AI for national security and the defence of the realm. Weigh in!