I was invited to a fascinating working dinner this week on the subject of machine consciousness. I’m sworn to secrecy about other contributions – but I can tell you what I said. Human-like consciousness won’t happen in machines. We can’t design it, and it won’t emerge. Mic drop.
Why? My argument had four parts:
1. First, information integration (which human brains do superbly and which you can also do in artificial neural networks) may be necessary for consciousness, but it’s not sufficient.
(Information integration, as consciousness aficionados well know, is one of the key theories in a rather crowded field.) But there is no compelling reason to suggest that if you build any old network and make it big and integrated, consciousness will somehow pop up. The internet isn’t conscious. The galaxy isn’t either.
And then, in the next three parts, the crux of my argument. Why won’t machines be conscious? TLDR: evolution.
Consciousness in humans evolved for a reason (or reasons). It’s not an epiphenomenon, or an evolutionary ‘spandrel’ – nice, but essentially irrelevant. It’s costly, so it must serve a purpose whose benefit outweighs that cost. Machines aren’t grounded in that evolutionary context, and don’t necessarily share those budgetary imperatives.
I highlighted three evolved aspects of human consciousness. One that machines don’t need to do (focus); a second that they can already do just fine without consciousness (model); and a third that they can’t do and won’t until we make ‘living machines’ (feel).
2. Second, focus. Human consciousness involves focused attention – we bring something to mind. But why focus? For us, I think it’s about bandwidth – we needed a way to concentrate extra cognitive effort on particular challenges. Our brain is doing lots of wonderful things, whirring away outside our conscious mind – but occasionally we need to prioritise some cognitions over others. Machines, by contrast, aren’t necessarily embodied, and don’t face the same bandwidth constraints – they can parallel-process away, attending to multiple things at once.
3. Third, and related, is modelling. Machines can do this without any apparent consciousness. But for humans, the (re)combination of cognitive elements sometimes requires deliberation. We use consciousness to knit aspects of cognition together – so it serves as a ‘global workspace’ for recombining and integrating processes. Machines, without the bandwidth constraint, can do that recombination unconsciously.
Perhaps the central challenge for humans is modelling other minds, given our intense sociability. So Michael Graziano theorises consciousness as a (recursive) model of attention. We need a rich social model that we can fiddle around with to ‘see’ how things might play out. Our model, like all models, focuses on some elements and simplifies their interaction. That saves bandwidth. Lots is left out, but it’s still rich enough to be useful.
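If you like code, here’s a toy sketch of that workspace idea – entirely my own illustration in Python, not Graziano’s or anyone else’s actual model, with made-up specialists and random salience scores. Lots of processes run in parallel; only the most salient content wins the single workspace slot, and the winner is broadcast back to all of them.

```python
import random

# A toy 'global workspace': specialists run in parallel, but only the most
# salient candidate wins the limited-capacity workspace, and the winner is
# broadcast back to every specialist as shared context for the next cycle.
# (A hypothetical illustration only - not anyone's real model.)

class Specialist:
    def __init__(self, name):
        self.name = name
        self.context = None              # whatever was last broadcast

    def propose(self):
        # Each specialist offers some content, tagged with a salience score.
        salience = random.random()
        content = f"{self.name} output (given context: {self.context})"
        return salience, content

    def receive(self, broadcast):
        self.context = broadcast         # the broadcast becomes shared context

def workspace_cycle(specialists):
    # The bandwidth constraint: one slot. Highest salience wins it.
    _, winner = max(s.propose() for s in specialists)
    for s in specialists:
        s.receive(winner)                # global broadcast to all specialists
    return winner

modules = [Specialist(n) for n in ("vision", "language", "social model")]
for _ in range(3):
    print(workspace_cycle(modules))
```

The point of the sketch is only that the recombination is mechanical: nothing about the bottleneck-and-broadcast loop requires anything to be experienced.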
Consider a particular contrast between machines and humans that’s getting a lot of airtime at the moment – language models, like GPT-4. For humans, complex language seems to require consciousness – the ceaseless ‘chatter in the mind’, as the Zen populariser Alan Watts put it, which meditation tries to quieten. We can certainly recognise words unconsciously, but complex grammar seemingly requires conscious attention.
Evidently that’s not the only way of getting to complex language, as GPT-4 demonstrates. Machines use language to model the world, entirely unconsciously. Do they truly understand? Plenty of people say it’s just statistical word-matching – but there’s more to it than that, as I’ve argued elsewhere. For one thing, they do OK on theory-of-mind tests, and on some other causal reasoning tasks too.
4. Lastly, feelings. Machines can’t feel. For humans, by contrast, moods and emotions are integral to cognition and consciousness – they’re a vital part of our modelling, a way of highlighting what’s important, the vehicle through which our attention is focused. So Damasio aptly calls consciousness ‘the feeling of what happens’. Machines don’t need to focus (points 2 and 3) and so don’t need emotions for that task. That’s handy, because they don’t have emotions.
Perhaps some sort of ersatz emotion could be designed into an artificial neural network as a way of prioritising information flows. Perhaps we could also induce bandwidth constraints to get the machine to concentrate, and simplify. Sure, we could handicap it like that, even though we’d be taking away many of the key advantages of artificial intelligence. But however you label the information in the AI’s focused model, it still won’t somehow magic the ‘feeling of what happens’ into existence.
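For the concretely minded, here’s roughly what such a handicapped design might look like – a purely hypothetical Python sketch, with invented names, weights and numbers: an ‘affect’ score decides which inputs squeeze through an artificially narrow bottleneck for focused processing.

```python
import heapq

# Hypothetical sketch of an 'ersatz emotion': a salience score (a crude
# stand-in for affect) decides which inputs pass through an artificially
# narrow bottleneck for focused processing. Names, weights and numbers are
# all invented for illustration.

CAPACITY = 2   # the induced 'bandwidth' constraint: how many inputs get focus

def salience(signal):
    # Affect as arithmetic: novelty plus threat, arbitrarily weighted.
    return 0.7 * signal["novelty"] + 0.3 * signal["threat"]

def attend(signals, capacity=CAPACITY):
    # Only the most 'emotionally' salient signals receive focused processing;
    # the rest are handled in the background, or dropped.
    return heapq.nlargest(capacity, signals, key=salience)

incoming = [
    {"source": "routine email", "novelty": 0.2, "threat": 0.1},
    {"source": "loud bang",     "novelty": 0.9, "threat": 0.8},
    {"source": "sensor drift",  "novelty": 0.4, "threat": 0.6},
]

for signal in attend(incoming):
    print("focus on:", signal["source"])
```

It prioritises, and in that thin sense it ‘focuses’ – but whether you call the variable salience, threat or dread, nothing in there feels anything.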
Why? My answer is evolved biology. If you want artificial minds, you might need to make them from living material. But that will have to wait for another post, where I’ll get into awe and curiosity as the key to being human.
So there I was, the dinner guest from hell. Still, if you’re worried about self-aware AI, I hope that’s reassuring. It doesn’t mean AI won’t be superintelligent, or that ‘Artificial General Intelligence’ is impossible. Still less that it won’t be very dangerous. But it does suggest that such intelligences will be very different from ours.
Meanwhile, here’s Alan and Sigur Rós to quieten your mind: