I wrote a short brief, with some concrete recommendations - let me know what you think… First - here’s what the AI reckons the review will look like - all very Tenet!
This paper addresses the implications of Artificial Intelligence for British national security and defence across the following four sections: (a) the MOD’s approach to AI; (b) the geopolitics of AI; (c) the tactical implications of AI; and (d) the implications of strategic AI and Artificial General Intelligence. A summary of recommendations concludes the paper.
A: The MOD’s Approach to AI
The UK government has begun to address the implications of AI for national security. There is much excited discussion of AI in defence, and some progress. Yet the scale and pace of change in AI research far outstrip the present ability and appetite of UK Defence.
More ambitious changes are needed across all areas of activity, including organisation, budgets, concepts, procurement, and personnel management. The United States, by contrast, is making bolder bets on AI, as with its Replicator programme and the pausing of its crewed NGAD fighter aircraft programme.
A paradox in the UK is that widespread talk of revolutionary change, including by senior defence leaders, is not matched by the adoption of AI technologies. The result is a sense of ‘performance theatre’: AI is much discussed, but little seen.
A large gap exists between basic research and fielded capabilities. Much research of military utility is dual-use and developed outside the existing defence industrial base: it is Google DeepMind, not BAE Systems, that produces cutting-edge AI. Small defence AI companies like Faculty and Helsing face multiple, formidable barriers to competing with the legacy defence ‘primes’. At the same time, these small companies lack the scale and resources to invest in cutting-edge AI technologies, whilst the primes lack sufficient incentive to do so.
Recommendation: The MOD needs to accelerate change and accept more risk. There are many possible remedies, but to highlight one: the MOD needs internal ‘venture capital’ on a larger scale than at present, something along the lines of the CIA’s In-Q-Tel.
More powerful sovereign British computing capabilities are needed.
AI architectures are changing. Scale continues to deliver performance in transformer architectures, and the UK will need significantly more sovereign computing if it is to reduce reliance on foreign capabilities, including on US-based corporations. The UK particularly needs more high-performance computers and more closed, secure computing facilities. It should also continue to build resilience in supplies of semiconductors and semiconductor manufacturing, especially mitigating the risks of reliance on Taiwan and exposure to China. Overall, the UK needs to diversify both hardware and technical skills beyond the gravitational pull of Google DeepMind.
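An illustrative aside, not part of the brief itself: the empirical ‘scaling laws’ behind the claim that scale keeps delivering performance are usually expressed as a power law, for instance in the form reported in the Chinchilla work:

$L(N, D) \approx E + A\,N^{-\alpha} + B\,D^{-\beta}$

where L is model loss, N the parameter count, D the volume of training data, and E, A, B, α, β are empirically fitted constants. The practical consequence for sovereign computing is that gains continue to flow, predictably, from more compute and more data; access to large-scale hardware is therefore itself a strategic variable.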
Recommendation: The UK needs to revisit its recent decision on the exascale computer to be housed at Edinburgh. Scrapping the project was a strategic blunder.
B: The Geopolitics of AI
The government is involved in international conversations about the responsible use of AI in the military domain (REAIM), and in discussions about the risks of AI, including existential risk (the Bletchley summit process). These discussions will increasingly overlap as AI technologies become more flexible and capable.
International regulation of military AI (e.g. via arms control regimes) is improbable, given the security dilemma. Instead, the UK’s focus should be on norm formation, domestically and with like-minded liberal allies. New legal instruments are not needed to accommodate more battlefield autonomy, contra the extended, ongoing and largely fruitless discussions in intergovernmental forums. But a shared understanding with allies (e.g. of operational concepts and risks/vulnerabilities) is vital. With adversaries, dialogue to improve mutual understanding (e.g. of escalation dynamics) may also be beneficial. The UK should obviously continue to lead on these processes.
Recommendation: The UK should remain engaged in REAIM and the CCW, but a new forum is needed to bring together like-minded democracies, encompassing NATO and AUKUS members alongside broader partners such as Singapore and Israel.
Exports and influence
AI will shortly alter the existing model of defence manufacture and export. Increasingly, fighting power will be generated by software, not physical platforms. Exporting software-intensive capabilities allows fine-grained control over client states. The UK both imports and exports such technologies. Its sovereign ability to use its armed forces is constrained by its technological reliance on the US. In turn, however, AI offers the UK a tool to exert unprecedented control over its own clients. The notion of ‘empires of AI’, where technology generates functional political influence, is not far-fetched.
C: New Military Tactics are Coming
The next developments in AI are likely to produce more sophisticated, general-purpose AI, capable of tactical military tasks in complex environments (e.g. via swarms and humanoid robots). Smaller, edge-based computing (e.g. using neuromorphic chips or liquid neural networks) will allow greater onboard processing in electronically contested environments. These tactical developments will have implications for military structures and concepts, including leadership and mission command.
More capable AI will favour distribution and mass. Combined arms will remain a staple of warfare, as will the competitive cycle of measure versus counter-measure; but the particulars will change: for example, towards rapid iteration of capabilities and towards disposable physical platforms carrying constantly updated software. Learning at the edge may itself become feasible.
The current, ad hoc approach to AI adoption within UK Defence is likely untenable. AI leads in the MOD, i.e. in the DAU and DAIC, are too junior for the scale of change ahead. There are no direct analogies for AI, because AI describes decision-making technologies, rather than a domain- or activity-specific technology like a weapon or platform. Nor is AI a general-purpose technology in the sense of steam, electricity, or internal combustion. Its utility comes from complementing or substituting for human decision-making across many aspects of defence. In so doing, it alters the essence of war. It reaches across organisational boundaries and hierarchies. So I favour at least a Vice Chief for AI, able to drive and harmonise change across the services. AI today, in contrast to my expansive take, is treated as a subset of data/digital, which badly undersells the scope for change.
Among the pressing challenges are:
Holding or generating inventories of platforms and munitions that might rapidly become obsolete.
Developing effective tactical concepts that keep pace with rapid technological change. To highlight one facet of this: appropriate tactical formations. Squadrons, companies, and ships’ companies, with the hierarchies and skills associated with them, are the essential building blocks of military capability today. But they originate in structures designed for the effective command of men under arms. They may not be the most appropriate for the control of autonomous systems.
And one very specific challenge posed by AI: ensuring the UK deterrent. The CASD (continuous at-sea deterrent) SSBN patrol is a minimal deterrent, resting on a single delivery system. AI challenges the UK’s assured second-strike capability, in theory at least. It is time to review our insurance premium. I favour air-launched cruise missiles.
Recommendation: Re-establish a second leg of the nuclear triad, to mitigate the risk AI poses to the SSBN patrol.
Recommendation: Establish a new post of Vice Chief of the Defence Staff for Artificial Intelligence and Emerging Technologies.
D: The Implications of Strategic AI and Artificial General Intelligence
The next generations of AI will have strategic utility, including improvements in foresight, creativity and ‘theory of mind’ (the ability to gauge other, human, agents). This will make AI more useful in intelligence analysis, simulation, red teaming, force generation and other areas.
The implications reach beyond defence, and even national security, to the whole relationship between citizen, society and governance. In just a few months, generative AI, owned and controlled by US companies, will begin conducting lifelong conversations with individuals, registering and retaining semantic and emotional meaning.
Definitions of AGI (‘Artificial General Intelligence’) are hotly contested. Nonetheless, most expert estimates of when AGI might be attained are converging towards the present. My own definition focuses on AI’s ability to understand and interact with humans. By that yardstick, AGI is not far off.
There are clear implications for freedom (especially of conscience) and for public expression, both cornerstones of democratic societies. There are further implications for the relationship between the state and the use of force. These implications are all fundamental to the British state, and deeply entwined with national security. I hope that serious discussion of these issues is occurring in the British national security establishment, but I am unaware of any. If it is not, it must begin as a priority.
Recommendation: The UK needs new forums to address the implications of AGI for democracy and the state. A citizens’ convention or a Royal Commission are possibilities worth exploring.
Summary of Recommendations
The MOD needs internal ‘venture capital’ on a larger scale than at present, something along the lines of the CIA’s In-Q-Tel.
The UK should remain engaged in REAIM and the CCW, but a new forum is needed to bring together like-minded democracies, encompassing NATO and AUKUS members alongside broader partners such as Singapore and Israel.
The UK needs to revisit its recent decision on the exascale computer to be housed at Edinburgh.
Re-establish a second leg of the nuclear triad, to mitigate the risk AI poses to the SSBN patrol.
Establish a new post of Vice Chief of the Defence Staff for Artificial Intelligence and Emerging Technologies.
The UK needs new forums to address the implications of AGI for democracy and the state. A citizens’ convention or a Royal Commission are possibilities worth exploring.