The distant future: Star Date 2040
Imagining warfare beyond the frontiers of today’s science
To London this week, for more idle speculation on the strategic implications of emerging technology. Chatham House Rule on everyone else’s contributions, but happily I still own the content of my own mind* and can share it with you.
The timeline was limited to the next couple of decades, which on one hand is nothing, given the sclerotic pace of government procurement, but on the other is eons away, given the quite remarkable acceleration of AI and biotechnology research. This sort of discussion is, regardless, usually anchored firmly in the present day. We are all creatures of our times, which is why Buck Rogers imagines the 25th century as seventies roller-disco kitsch. Ten years ago, there’d have been much chat about wearable tech, directed-energy weapons and big data. Nowadays there’s invariably much discussion of hypersonics and quantum computing. And the AI chat at the moment is almost entirely ChatGPT-related. Zzz.
Still, it’s possible to break the shackles of presentism if you try. Sometimes it’s better to look to fiction than science for useful guidance. In 1913, HG Wells imagined the destructive potential of atomic energy and worked through some strategic possibilities. Remarkable. In contrast, leading physicists like Ernest Rutherford and Niels Bohr didn’t think a bomb possible – in Bohr’s case, not until very late in the day, when British agents spirited him out of occupied Copenhagen to lend a hand. (For that matter, US military officers involved in the early wartime discussions were lukewarm on the idea. As Henry Ford probably didn’t say, there’s no point asking the customer what they want – it’s a faster horse, not a car.)
What’s the alternative? What we want is to be right at the cusp where ‘unknown unknowns’ start to shade into known ones.
And so, some ideas for you:
1. AGI, or superintelligent AI, obviously features in these sorts of discussions. By now, it’s a ‘known unknown’ – you can’t miss the deluge of articles about AGI and our impending doom. My take, though, is shaped by this terrific story from Ted Chiang, written a quarter of a century ago. It’s the near future and AI has moved beyond the realm of human understanding. It’s doing stuff we can’t even imagine. All we know is that our lives are OK — this isn’t the malign intelligence of AI doomster imagination. And so, we catch the titular ‘crumbs from the table’ of AI. Humanity stands up a branch of ‘meta-science’ dedicated to understanding what the hell AI is up to, but ultimately it’s just too difficult, and we have to sit back and enjoy the ride. Will we understand AI materials science in 2040? Its languages? Mathematics? Theoretical physics?
Perhaps the inscrutable AGI boffins will even take on some of my more far-out ideas:
2. Like this one: the accelerated self-domestication of humans. Richard Wrangham argues that Homo sapiens evolved to become less violent. In a nutshell, the gains of cooperation were sufficient that we selected against selfish, dominant, violent males. By ganging up and killing them. Revenge of the beta male!
Now, what if you could somehow accelerate that process via genetic engineering – some blend of AI for modeling DNA and protein effects, allied to CRISPR-Cas9-like tech for the genetic editing? It’s feasible – we’ve already seen heritable tweaks to human DNA, controversially, in China. ‘Homo pacificus’ might be the result of my hypothetical changes — reversing the usual trope in discussions of augmented humans, where they are engineered to be more effective warriors. That sounds agreeable, no? Deploying science to end all wars – and all violence, for that matter. Who wouldn’t sign up?
Of course, cheats would have a compelling military advantage over the less bellicose new humans. Perhaps they could somehow weaponize the domestication process against enemies? Less obviously, if we could do it, there’d be some profound changes. Human sociability is intimately connected to our dominance hierarchies: what happens to our groupiness if we dial that down? And what happens to creativity, or ambition, if we remove our urge to gain status? I don’t know of a science fiction work that follows this plot – do you? I might have to write it myself!
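As a back-of-envelope illustration of why ‘accelerating’ matters (every number here is made up, and real behavioural genetics is vastly messier), a one-function selection model shows the timescales: a modest fitness cost on an aggression-linked variant takes hundreds of generations to push it to rarity, while an engineered, much larger cost does it in a couple of dozen.

```python
# Toy one-locus haploid selection model (all parameters hypothetical):
# p is the population frequency of a 'high-aggression' variant carrying
# a fitness cost s relative to its more peaceable alternative.
def generations_to_rare(p=0.5, s=0.01, threshold=0.01):
    """Count generations until the variant's frequency drops below threshold."""
    gens = 0
    while p > threshold:
        p = p * (1 - s) / (1 - s * p)  # standard selection update
        gens += 1
    return gens

slow = generations_to_rare(s=0.01)  # natural-style 1% cost: hundreds of generations
fast = generations_to_rare(s=0.2)   # engineered 20% cost: a couple of dozen
print(slow, fast)
```

At roughly 25 years per human generation, that’s the difference between millennia and a few centuries – which is presumably where the gene-editing shortcut comes in.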
3. Here’s one inspired by the awesome Kurt Vonnegut, whose wildly funny and dark science fiction I love. Cat’s Cradle was published in 1963, in the shadow of thermonuclear weapons. In it, a scientist has developed ice-nine at the behest of the military: a form of ice that freezes any water it touches at room temperature. The result is a King Midas-like substance that instantly ossifies everything, eventually destroying the world.
Is there a real discovery to be had in theoretical physics or chemistry, with similarly disturbing strategic consequences? I don’t know. I don’t even know what that would look like – but let me offer two hints from my reading.
In Reality+, the philosopher David Chalmers makes the case that we could well be living in a simulation. It’s a familiar sci-fi trope, but Chalmers gives it a new tweak: what if, beneath the reality we experience, it transpires that the elementary particles we know from physics – quarks and bosons – were actually ‘bits’ of information, like those that move through our computers, rather than matter and forces? And why not? We don’t actually know exactly what they are made of. What is mass, when it comes to it, other than a relationship between objects – or information? So if we are indeed in a simulation, then the particles would simply encode the data on which we run – just as they do in our own, much less detailed, simulations.
Here’s another, related possibility – what if all the electrons in the known universe are just one and the same electron – whipping about unbelievably quickly in time and space, perhaps even backwards in time: so quickly as to create the illusion of many of them. What if the information they contain is in the pattern of movement, rather than the substance of the particle? That might explain a great deal – quantum entanglement, for example. It’s an old idea from Richard Feynman’s supervisor and colleague, John Wheeler. Sadly, it looks like the physics doesn’t stack up, at least in our corner of the universe. Or does it?
Anyway – weaponize that. Intercept the universe/single particle on its blistering, meandering course, and so control it. Good luck! But just suppose you could. That’s a weapon that would instantly end the universe, not just the planet. It’s the ultimate deterrent. Again, surely a concept calling out for a short story, at least.
4. One last yarn from the hazy boundaries of science fiction and science fact. ‘Teleportation’ at the subatomic level is apparently feasible – raising the prospect of using quantum entanglement not just for computers, but also for transferring quantum states across vast distances (with the help, it must be said, of an ordinary classical signal). So far, so intriguing.
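For the laboratory-scale version, the protocol is concrete enough to simulate. Here’s a minimal numpy sketch of textbook quantum-state teleportation (nothing here is specific to any of the works discussed): Alice entangles her message qubit with her half of a shared Bell pair, measures both, and Bob’s corrections depend on her two measurement bits – which is why a light-speed-limited classical channel is still required.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard
X = np.array([[0, 1], [1, 0]], dtype=complex)                # bit flip
Z = np.array([[1, 0], [0, -1]], dtype=complex)               # phase flip

def cnot(control, target, n=3):
    """CNOT on an n-qubit register as a permutation matrix (qubit 0 leftmost)."""
    dim = 2 ** n
    U = np.zeros((dim, dim), dtype=complex)
    for b in range(dim):
        bits = [(b >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        U[sum(bit << (n - 1 - k) for k, bit in enumerate(bits)), b] = 1
    return U

def teleport(psi, m0, m1):
    """Teleport single-qubit state psi, post-selecting Alice's measurement
    outcome (m0, m1) - the two classical bits she must send Bob."""
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # shared pair
    state = np.kron(psi, bell)                   # qubits: [message, Alice, Bob]
    state = cnot(0, 1) @ state                   # entangle message with Alice's half
    state = np.kron(np.kron(H, I2), I2) @ state  # Hadamard on the message qubit
    bob = state.reshape(2, 2, 2)[m0, m1, :]      # Bob's amplitudes, given (m0, m1)
    bob = bob / np.linalg.norm(bob)
    if m1:                                       # Bob's classical corrections
        bob = X @ bob
    if m0:
        bob = Z @ bob
    return bob
```

Whichever outcome Alice happens to get, Bob recovers the original state exactly – but only once her two bits arrive, at light speed or slower.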
But what about using it to actually travel vast, interstellar distances? In a fantastic book, Into the Silent Land, the neuropsychologist Paul Broks pens a short sci-fi chapter, driving at what makes us authentically ourselves. In his imagined world, you can travel in space by being mapped in ultra-high definition – at the subatomic level. This information is ‘teleported’ vast distances (as I recall, he doesn’t actually use entanglement for that, so I’m adding it in). This allows the reconstruction of a second ‘you’. Meanwhile, the original you is abruptly and painlessly executed – you have instantly materialised elsewhere. How does that feel? Blink and you miss it.
So far, so agreeably wacky. But then disaster – the execution machinery fails. There can only be one! So which is the real you? Who should the Galactic ethics council order killed?
The story is a thought-provoking meditation on self, authenticity, and the relationship between the material of the body and the subjective experience of mind. It’s also, of course, indebted to the Ship of Theseus and Trigger’s Broom. But for anyone contemplating the future of warfare, it’s more prosaically a cracking scheme of manoeuvre.
There’s only 17 years until 2040… Those AGI scientists had better get a wriggle on.
* Not strictly true, it transpires. Contractually, my employer does.