‘Ivy Mike’ detonates in the Marshall Islands, 1952 - the first thermonuclear test.
Someone has asked for my views on the ways in which new technologies impact deterrence (nuclear and otherwise). I thought you might be interested too – so the following sketches my thinking, in three parts.
1. My first big idea is that deterrence remains primarily a matter of psychology. Longstanding themes will remain salient, even as we encounter new technologies. Three in particular:
Emotion matters. Two big emotions – fear and anger – have played a part in deterrence over the years. In particular, the dominant response to the prospect of any sort of nuclear use seems to be acute fear, giving rise to a clear nuclear-use taboo.
We’ve also seen plenty of cases where anger has prompted retaliation and escalation, with the risks discounted. The classic example is the UK’s strategic bombing campaign against Germany in WW2 – fury and vengeance driving a morally dubious adoption of firebombing, whose effect on the war was at least debatable.

Status often undergirds those emotions. I think of the initial fury and demands for retaliation in the ExComm during the Cuban Missile Crisis, the result of the sheer insult of Khrushchev’s decision to send missiles to the Caribbean. Happily, a pause for deliberation allowed passions to cool. Khrushchev’s initial decision was itself motivated by status – his sense of humiliation, personal and on behalf of the USSR.
Lastly, from psychology, there is the idea that reality is an outward, top-down projection of the mind, not the reverse. So a small group constructs a warped, ‘groupthink’ picture of what’s going on, because the minds that make up the group evolved to stay close to their peers, not to earn their opprobrium.
No amount of new technology is going to eliminate these psychological factors – so long, that is, as humans remain involved in decision-making. But technology isn’t irrelevant. It changes the parameters of any confrontation, within which humans decide. What is the speed, range or destructive power of the systems involved? These alter the time available for deliberation, and the participants’ sense of the risks involved, including by shaping their emotions. What, for example, do they mean for the balance between outrage and fear?
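To make the point about deliberation time concrete, here is a back-of-envelope sketch: flight time is roughly range divided by average speed. The systems and figures below are illustrative round numbers of my own choosing, not authoritative performance data – the point is only that the parameters of a weapon system set the clock within which humans must decide.

```python
# Rough deliberation time implied by a weapon system's range and speed.
# Ignores boost/terminal phases and trajectory shape; figures illustrative only.

def flight_time_minutes(range_km: float, speed_km_s: float) -> float:
    """Approximate flight time in minutes for a given range and average speed."""
    return range_km / speed_km_s / 60

# Hypothetical, round-number cases (not real performance data):
cases = {
    "Long-range ballistic missile (~10,000 km at ~7 km/s)": (10_000, 7.0),
    "Shorter-range missile (~2,000 km at ~6 km/s)": (2_000, 6.0),
    "Hypersonic glider (~3,000 km at ~2 km/s)": (3_000, 2.0),
}

for name, (rng, spd) in cases.items():
    print(f"{name}: ~{flight_time_minutes(rng, spd):.0f} minutes")
```

Even this crude arithmetic shows decision windows measured in minutes, not hours – which is why speed and range shape the psychology of the humans in the loop.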
2. Second big idea – nuclear weapons are not revolutionary. They alter deterrence, but they don’t transmute it into something fundamentally different. Perhaps you’re dubious? Certainly, there’s been plenty written on the revolutionary nature of nuclear weapons, including the psychological revolution they prompt. Theorists have been particularly impressed by the changing balance between defence by denial and by punishment, as well as by the extent to which they render the positive use of force redundant. Nuclear war is unwinnable, so strategy becomes about the threat to use force, and the struggle to make that threat credible, given the disturbing consequences. There is, in many such schemes, a clear firebreak between the logic of conventional deterrence and the somewhat Alice-in-Wonderland logic of nuclear deterrence.
But these matters are all debatable. Consider the balance between denial and punishment. Both are hardy perennials of strategy, regardless of nuclear weapons. In the nuclear era, superpowers stressed punishment via retaliation; but they still worked hard to deny valuable targets – by dispersing and hardening weapons; by developing contingency plans to ensure government survived large-scale nuclear attack; and by investing in ballistic missile defences. Conversely, before nuclear weapons, belligerents certainly stressed denial, for example by fortification, but they also took prestigious hostages, against whom to retaliate (punishment). And whilst they lacked weapons of comparable destructive power, they possessed, in the assassin and the poisoner, viable first-strike capabilities. Nuclear weapons may shift the balance – but not invariably.
Today, the nuclear taboo is robust. Perhaps we take it a little for granted, after so long. We shouldn’t. An initiator may yet judge nuclear war to be winnable, and on acceptable terms. As President Biden has cautioned, there are real dangers of tactical nuclear weapon use by Russia in its war with Ukraine. Such use raises more questions than answers. Deterrence would have failed – but what form would punishment take? Almost certainly not immediate escalation to general (all-out) war, even if that remained ominously in the background. But many other steps on the escalation ladder are available.
The moment the divide between conventional and nuclear conflict is crossed will be frightening. All parties will reconsider their stakes, and perhaps recalibrate. We know all this already, but until we are put in that moment, we can’t accurately feel what it will be like, or judge how we will respond. What we can say at this juncture is that nuclear weapons, used or unused, are part of ongoing calculations about deterrence. And that’s true too of the moment after a bomb detonates – psychological calculations about what it takes to compel and deter will continue, with the added twist that one side, at least, has proved willing to up the ante.
3. And my third big idea – no possible technology, save one, will alter the fundamental psychological tenets of deterrence. Can you guess which one?
There are plenty of candidate technologies. Some, like hypersonics, complicate the boundary between conventional and nuclear weapon systems, and so are seen as destabilising and potentially radical. Uncrewed systems, anti-satellite weapons, ballistic missiles with conventional warheads – there’s plenty going on today. But if you accept my premise that nuclear weapons themselves don’t obviate the core features of deterrence, then it’s clear that these technologies won’t either.
They’ll certainly introduce new complications and variations. How can we tell whether an attack underway is nuclear, if the weapon system can carry conventional warheads too? Can we attack the enemy’s command and control systems, if they are also part of its nuclear command system? Will my strategic missile defence network guard against this new hypersonic threat? Insofar as these technologies mean more uncertainty or complexity, or demand decision-making at greater pace, they alter risk. Exactly how, though, is debatable: greater complexity increases the scope for system failures, or loss of control. But it also gives adversaries more possible responses to provocation and makes calculating their moves harder, which might dampen adventurism.
Regardless, at bottom, deterrence retains its psychological essence, a matter for human judgment. Until it doesn’t. Artificial Intelligence, of course, is the radical proposition I teased. No one is proposing, I think, to outsource nuclear decision-making to machines, but that doesn’t mean AI won’t impact deterrence, and in a radical way – by altering its human foundations and introducing a wholly alien cognition.
How? I see at least three possibilities:
First, AI will construct the milieu within which human decision-makers deliberate. It will increasingly constitute their information environment – writing content, collecting and parsing information, even joining group deliberations (as a red team challenging received wisdom).
Second, AI will directly control aspects of the national security systems – shoals of undersea drones detecting enemy submarines; automated cyber-agents seeking exploits in enemy C2 systems; autonomous missile detection and defence systems. We can expect ever more of this, and for it to creep up from tactical activities to operational and strategic ones. Perhaps in combination with those other new technologies this will create entirely new, autonomous escalation risks, blurring the distinction between conventional and nuclear; between (pre-emptive) attack and defence.
Third, and most sci-fi, AI may impact deterrence via the emergence of entirely novel intelligences. Even if such AI lacks intrinsic motivation, it may still surprise us in seeking to fulfil the goals we give it. And as AI blends with bio-technology, bio-computing, and augmented intelligence, so the scope for novelty increases. This sort of intelligence may have its own ideas about what to do. I’ve written about this already and will again.