A friend asked if my views on AI had changed over the years. He was reading a book I published in 2018, which seems an age ago in AI terms. By then, I’d already been working on AI for a few years – seriously since about 2012, unseriously since I saw WarGames as a youngster in the mid-80s and raced upstairs to hack the Pentagon on my Atari 800.
There are some big continuities in my thinking, not least in my overall approach. I came to AI through psychology, and that lens has shaped my ideas throughout. I still see AI in terms of minds and I’m most interested in explanations at the level of traits and behaviours. I know enough neuro- and computer science to understand something about the underlying processes behind those traits; certainly enough to recognise that there are profound differences between biological and machine intelligence, but also to appreciate the commonalities. AI experts sometimes chastise others for anthropomorphising machines, rather than describing what they’re doing in mechanistic or mathematical terms. ‘They’re just statistical models making probabilistic associations in data’ is a common refrain. Well, sure, but at a cellular level (and even at the molecular and atomic levels too) that’s exactly what the human brain is doing. It’s probabilities all the way down, as Terry Pratchett might have said.
That’s not the only continuity in my thinking. But there’ve been many changes too, and it’s been good to look back. Too often, domain experts don’t. They plough onwards, unwittingly shifting their ground with the mistaken belief not only that they’ve always been right, but also that they have always thought about things much as they do now. This, in psychological terms, is hindsight bias.
The alternative is equally bad – a dogmatic maintenance of their overarching position, whatever the subsequent weight of countervailing evidence. The signatures of that mindset are goalpost shifting (‘sure, an AI just did that thing I said it never would, but it’s not an important yardstick anyway’) and confirmation bias (seeking out only evidence that fits their rigid worldview). This sort of mindset is prevalent amongst the cynical school of AI expertise. Close followers of Twitter-AI can doubtless insert names of the guilty parties.
In short, it’s good to change your mind and also good to look back and acknowledge your shifting position; or at least I think so. To business… What did I say, and what do I think about it now? I want to highlight three big ideas where my views have shifted:
First – from the start, I made a distinction between tactics and strategy, and the sort of intelligence that would be needed for each. I thought that the key distinction was about combat versus the higher level of war. And I thought that skills like creativity and imagination were where humans trumped AI, and would for a long time to come. These human-like skills would need something more like Artificial General Intelligence to emerge before machines could supplant humans.
Nowadays, I’d make a slightly different point – it’s not so much the difference between tactics and strategy that matters, but the degree of human involvement in the problem at hand. Greater human context moves the problem to less structured territory, less susceptible to machine computation and goal optimisation. That’s why aerial combat is the ideal arena for autonomous weapon platforms, not ground combat in cities. But even within aerial combat, there’s still human context – it’s not just a game of Space Invaders when there are humans around.
Where military problems are primarily about manoeuvre and the control of accurate, timely fires – that’s prime AI territory. But where the task is more about weighing human meaning, that’s harder for machines. This is true whether we’re talking about the political goals of large societies, or the individual person viewed through a targeting scope: who are they, what do they want, how badly? Those are challenging questions.
Are machines any nearer to solving those problems, and grasping human context and meaning? I think so – and that’s my second point:
From the start, I made a distinction between different types of mind – contrasting human and AI. The whole point of that 2018 book was to explore the evolutionary origins of human intelligence, specifically the connections between conflict and mind. I argued that human traits like theory of mind, empathy (and indeed a range of cognitive heuristics) are connected to our intense sociability and the need to understand what others are thinking. I contrasted that with AI, which doesn’t share that evolutionary context. But I suggested too that similar rationales – such as the need to coordinate with other agents, or the need to integrate multi-modal cognitions – might in time produce machine versions of human-typical traits.
In similar vein, I suggested that both humans and machines had limited autonomy – machines are constrained by their designers and the ‘reward function’ those designers impart; humans are constrained by their phylogeny and (being massively social beasts) by the company they keep. But within these bounds there was scope for agency, for humans and machines alike. We humans often feel like the captains of our ship, masters of our fate – even if we overemphasise that agency. Machines too might, I thought, develop their own subordinate goals, in the course of carrying out the ones we tasked them with – and these could be surprising and perhaps dangerous.
That far at least, I’d still maintain. But I’ve changed my mind on one big thing. I increasingly think mind-reading is a game machines can play too. In that respect, at least, the gap between human and artificial minds might not be as wide as I suggested. Back in 2018, I argued that AI had no ability to grasp meaning, with its rich psychological hinterland. Rather, it was good at associations and correlations; a statistical engine par excellence. But transformer models weren’t about when I wrote that book. The seminal paper dates from 2017, and GPT-2 was my first real encounter with them, while I was writing Warbot, my next book. I remember asking a friend at DeepMind whether there was an algorithm that could summarise the book for the introduction – it would be fun to have a machine write it. He raised an eyebrow. But here we are, only five years later, and we all know what’s possible now.
The big question in AI today, I reckon, is the extent to which language (at which transformers excel) is sufficient for meaning. If so, I think we are well on the way to AGI. In part, that’s a question about machine theory of mind: can machines model other minds using language? There’s lots to say on this – more than I said in 2018 – but my short answer is that language models will get us quite a long way, certainly much further than I suspected ‘connectionist’ AI would get us back in 2018.
And we’ll get there more quickly too – which is the third point. In 2018, there was plenty of cynicism in AI-land about Ray Kurzweil’s famous prediction of a looming singularity, or Bostrom’s dystopian thoughts about a superintelligence racing away. Now the pendulum has swung to the other extreme. AI ‘doomsters’ are in Congress and Downing St, warning about the risks for humanity. I’m much less sanguine about that than I was then, because I think language does allow machines to construct useful ‘world models’ and to learn and improve those models. I still can’t make the imaginative leap to a world of HAL and Ex_Machina, where machines have intrinsic motivation. But I accept the gap between intrinsic motivation and subordinate goals might not be as clearcut as I’d like.
In contrast, I think I was overly optimistic about the short term. Of course, that’s a common failing of forecasters – overestimate the short-range impacts, underestimate the long-range ones. In the short run, ‘bureaucracy does its thing’, as does wider culture, to filter and channel the adoption of AI. I reckon I was as guilty as the next technological determinist here. Even with a war in Ukraine to nudge things along, and an intense arms race brewing with China, inertia, habit and tradition are powerful forces.