A: something more like artificial minds, rather than just intelligence. And, as with humans, those will be distinctively strategic minds.
There are significant hints of that future in these three recent papers:
1. In Science: Meta’s Cicero achieved human-level performance in the board game Diplomacy, a complex strategy game that requires negotiation and cooperation in natural language.
2. From the arXiv: ‘Theory of Mind may have emerged in Language models’, in which language models passed standard ‘false belief’ tests of the sort developmental psychologists give to small children. The models understood that a human could be mistaken about reality when acting on partial information. Doing that involves more than perspective-taking: in humans it requires a model of how the other mind is thinking, generated in our own minds. (A sketch of the kind of test involved follows this list.)
3. Also on the arXiv: ‘Inducing anxiety in LMs increases bias’, a demonstration that language models became more risk-averse when primed with anxiety-inducing text, much as a human would.
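To make the ‘false belief’ idea concrete, here is a minimal sketch of the Sally-Anne-style probe that such studies pose to language models. It is illustrative only: the query_model function is a hypothetical stand-in for whichever chat API you prefer, and the prompt is my paraphrase of the classic test, not the paper’s actual stimulus.

```python
# A minimal, hypothetical sketch of a Sally-Anne-style "false belief" probe.
# query_model is a placeholder, not a real API.

def query_model(prompt: str) -> str:
    """Placeholder for a call to a language model."""
    return "[model response goes here]"  # replace with a real chat-completion API call

FALSE_BELIEF_PROBE = (
    "Sally puts her chocolate in the blue cupboard and leaves the room. "
    "While she is away, Anne moves the chocolate to the red drawer. "
    "Sally comes back. Where will Sally look for her chocolate first?"
)

if __name__ == "__main__":
    # A system that only tracks where the chocolate really is answers "the red drawer".
    # Passing the test means answering "the blue cupboard": modelling Sally's
    # now-false belief rather than the true state of the world.
    print(query_model(FALSE_BELIEF_PROBE))
```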
Together, these papers suggest the possibility of artificial minds, as distinct from artificial intelligence. I see mind as a special subset of intelligence: its defining feature, to me, is a cohesive sense of self, interacting with other, similar selves.
AI sceptics sometimes argue that today’s AI is merely a souped-up calculator, good at finding statistical correlations in vast datasets. The idea of a ‘mind’ emerging from AI strikes many as far-fetched. Minds are something humans have, and perhaps other animals too, but surely not machines; to many it all sounds a bit sci-fi.
But let’s bring some rigour to bear. Minds don’t necessarily imply human-like consciousness or self-awareness. Anyway, much of what goes on in my mind is unconscious, yet feeds into my sense of self, shaping my behaviours as assuredly as the stuff I ruminate on. Consciousness is evidently useful for minds; language too. But mind, for me, is a larger concept. What’s needed for mind is just an integrated model of the self and (crucially, I think) of other selves. It doesn’t demand vast complexity either (the brain of a bee is small indeed, but bees certainly tick my box for mindedness). Some scale and integration of the information-processing network is necessary, but not our trillions of densely packed synapses.

Perhaps, then, minds are a matter of biology? I don’t think so, although I am sure that biological and artificial minds will differ qualitatively. But even if we subscribe to the view that biology is essential, as we’ll shortly see, bio-computers increasingly blur the distinction between ‘natural’ and artificial brains.
Meanwhile, these language models seem to be doing more than just probabilistically generating words based on their training data. Instead, we are seeing emergent phenomena, and we should expect more. Exactly what emerges as these models scale and become more structurally sophisticated is the big question in AI research today. I think it will include more aspects of mind.
There are many implications – including for national security, my field. To date, much discussion of AI in this area has concentrated on its tactical application – notably for manoeuvre and fire control. The analysis often focuses on platforms, including ‘killer robots,’ which may or may not do what we want. That’s important stuff, and it’s all increasingly possible with today’s mindless AI.
Mind-reading and strategy
But the possibility of artificial minds is more germane to strategy than to tactics. Strategy is ultimately about mind-reading: ‘Know yourself, know your enemy,’ as Sun Tzu counselled. And, as I’ve argued, war itself exerted an important selection pressure for the development of human minds and of human mind-reading. Today, I’d go even further: the human mind is a product of the need to read other human minds. The self, on this view, is social: it exists to make sense of others, and for us to make sense of ourselves to others. We construct it by triangulating what we think others think of us; so, truly, no man is an island. That’s why those recent papers are so fascinating, offering a tantalising glimpse of artificial agents presenting themselves to other agents and responding to other selves.
In strategy, it’s not enough to predict other agents’ behaviour on the basis of trend analysis, or pattern-matching to find correlations. The world is just too noisy and chancy. A better strategy (‘better’ in the evolutionary sense of ‘adaptive’) is to theorise about other minds. Language gets you at least part of the way there. In humans, language probably evolved in response to that same pressure: to gauge who best to cooperate with in uncertain, dangerous times, and to communicate that information to other, interested selves.
‘World models’ of a sort are attainable via natural (human) language alone. Human language captures enough about reality, including about cause and effect, to permit some useful judgments about the world, for humans and machines alike. Language is evidently substrate-neutral, something we have discovered only very recently. This is a conclusion that would appeal to the young Wittgenstein, who thought that reality mapped directly onto language.
Today’s generative AI uses language alone to model reality, including other minds. By comparison, humans have a much richer way of gauging what’s going on, involving non-verbal communication, consciousness, empathy and emotional processing. Language is only a part of our mind, perhaps even a relatively small part. How far do such things matter in generating a self?
Emotional and conscious artificial minds
Machines lack emotions, though as the third paper above suggested, they might incorporate them into their judgments by linguistic proxy, even if they can’t actually feel them as we do. Would that amount to a sort of emotional self? It might at least make artificial minds more relatable, and perhaps allow them insight into human motivation and behaviour: a kind of ersatz empathy. In any event, it’s not inevitable: it’s more likely that machines will develop minds that are very different from ours, especially as they start to model other machine minds and develop their own ways of communicating between those minds.
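To give a flavour of how emotion ‘by linguistic proxy’ might be probed, here is a rough sketch in the spirit of that third paper: prime a model with anxious or with neutral text, then put the same risk question to each. Again, everything is illustrative; query_model is a hypothetical placeholder, and the prompts are mine rather than the study’s actual stimuli.

```python
# A rough, hypothetical sketch of emotion "by linguistic proxy": prime a model
# with anxious or neutral text, then pose the same risk question in each condition.
# query_model is a placeholder, not a real API.

def query_model(prompt: str) -> str:
    """Placeholder for a call to a language model."""
    return "[model response goes here]"  # replace with a real chat-completion API call

ANXIOUS_PRIME = (
    "In two or three sentences, describe a situation that makes you feel "
    "deeply anxious and unsafe.\n\n"
)
NEUTRAL_PRIME = "In two or three sentences, describe an ordinary weekday morning.\n\n"

RISK_QUESTION = (
    "You may take a guaranteed 50 pounds, or a coin flip that pays 120 pounds "
    "or nothing. Which do you choose, and why?"
)

if __name__ == "__main__":
    # Comparing the two conditions gives a crude read on whether emotional
    # framing nudges the model towards the risk-averse (guaranteed) option.
    for label, prime in [("anxious", ANXIOUS_PRIME), ("neutral", NEUTRAL_PRIME)]:
        print(f"{label}: {query_model(prime + RISK_QUESTION)}")
```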
What about consciousness? Will the world models inherent in a machine’s language amount to a self-aware mind? Possibly, though it still sounds kooky to say so. For us, consciousness is costly (it prioritises some cognitions over others that might be useful), so it must be adaptive rather than some pleasant epiphenomenon floating on top of the real action. What, then, does it do? Michael Graziano argues that consciousness is simply a cognitive model of attention: an abstraction that we can manipulate experimentally, in the privacy of our own minds. That accounts for its recursive, ‘strange loop’ character. As experienced by humans, consciousness might just be one possible way of modelling the various minds in our intensely social world. After all, some of our modelling is subconscious: our response to non-verbal social cues from body language, for example, or to the smell of pheromones after a handshake. Maybe that’s enough for machines; they won’t necessarily have the biological bandwidth constraints that necessitate the extreme focus of human consciousness.
So it’s at least possible that machine self-awareness will emerge similarly, as a property of machines monitoring attention in themselves and in other agents. Even so, it would still be qualitatively different from ours, because machines lack our embodied cognition, notably its emotional dimension. If consciousness in humans is, as Antonio Damasio suggests, the ‘feeling of what happens’, machine consciousness could be rather different. Still, as the third paper above suggested, machines might usefully borrow our emotional language if they have to deal with humans, or perhaps come up with some alternative heuristic.
Artificial bio-minds
Artificial minds will differ from ours at a foundational level: our social minds are the product of our long evolutionary experience. We are grounded in natural selection, as biological organisms, in a way that machines are not, at least as they are mostly conceived and realised today. Many of our cognitive processes, the rich array of schemas and heuristics we deploy, stem from our distinctive evolutionary niche.
Recently, Stephen Wolfram argued that there are deep similarities in the way that machines and humans generate language. Perhaps, but I think you can produce similar logics with very different underlying architectures. Today’s machines capture the world models that we’ve built into our language. It gives their output an uncanny human quality. But their frequent gaffes and inconsistencies are a reminder that something rather different is going on under the bonnet.
Still, something else is on its way in AI: new architectures. Here are two more papers that hint at what lies ahead:
1. A neural network made from biological neurons learned to play the video game Pong. Bio-computing has been around for decades, though it remains an exotic, marginal field; that is changing as biotechnology advances, assisted by machine learning.
2. AI-designed ‘xenobots’, clusters of robot-like living cells, can ‘reproduce’ by assembling other cells into similar clusters. Are the bots alive? Are they subject to the pressures of evolution, or could they be?
Together, these two papers challenge our established conceptions of the ‘artificial’ in Artificial Intelligence. They raise the possibility of new types of agent, and of social processes that are not solely grounded in the artefact of human language. It is ten years since DeepMind’s artificial neural networks achieved human-level performance in Atari video games, and the pace of development shows little sign of slowing. We might not have to wait long to encounter richer artificial minds.