I was looking forward to the new advanced voice capabilities teased by OpenAI at its big demo earlier this year. The ability to conduct a more realistic, natural conversation was exciting, but the most extraordinary development was the hint of some sort of emotional insight: the ability to read faces and tone of voice, and respond appropriately.
To me, this is a radical development. It’s not unexpected from a technical perspective: after all, emotion is just more data to be predicted by a transformer architecture. But it will be, imho, transformative, both of AI and of human society.
Everyone’s got their own interpretation of Artificial General Intelligence. Mine is of a machine that is adept at the real USP of human intelligence – our sociability. That’s what’s made us great – the ability to peer, however dimly, into other minds; to learn from them, to understand what they’re trying to achieve, to cooperate with them and scheme against them.
LLMs will go about this entirely differently. They don’t share our embodied, emotionally informed, social intelligence. But what matters is less the process and more the outcome. Can machines make useful deductions about our minds? I think they undoubtedly will, and sooner than many expect.
Transformers are rapidly developing along other, related lines too – their abilities to plan and to reason are coming along swiftly. Perhaps, after all the heated discussion, attention really is all you need. The naysayers in AI debates are, as ever, anchored in where AI was six months ago, at best, not where it will be in a year, still less a decade.
So, imagine my disappointment when I saw this tweet from OpenAI last night:
Why? It looks as though regulation might be responsible:
You can find the full list of prohibitions here – emotions are covered at para (f).
Now, plenty of people might be alarmed at Sam Altman’s vision for AI – of an enduring, deeply personal relationship with an AI agent that understands us very well, to the extent of anticipating our needs. It sounds dystopian. All that information, all those patterns in our highly personal data, going into a big ol’ hard drive over at OpenAI HQ. Perhaps the EU regulators were right to set out some red lines. Perhaps they have captured the authentic sentiment of many Europeans.
But if this line holds, the implications will be profound. I think we’re at an inflection point. There’s always been a deep connection between states, their technologies, and the sorts of lives they enable. Sam Altman’s technology alters what it means to be a sovereign individual. His is a world where our identities are constructed, in part, by our ongoing and deep interactions with machines – machines that create and curate our content; that both interpret and shape our wishes. Will that make for ‘better’ lives? Will the societies that embrace this technology be wealthier? How will that wealth be distributed? Will the citizens of that new realm be democrats in the way we understand the term now? How will they exercise their conscience freely? Will states and corporations be able to resist the temptation to know their citizens more deeply, to simulate their choices, to nudge their behaviours?
You understand why the EU technocrats might want to dig a moat against all this. But what about the flip side – the wealth, and the opportunities for leisure or self-actualisation, that this technology might unleash? Already the sclerotic EU is lagging America on all sorts of important indices. Choosing to preserve the status quo here will have huge consequences for its ageing and unproductive societies. We can certainly carry on with the technology of today, and watch as America and China transform their societies, in very different ways, by leaning into this new technology. But at minimum, we Europeans need a more inclusive debate about what sort of societies we want, and what sort of people we hope to become. Hundreds of pages of dry technocratic language, of Brussels business-as-usual, won’t cut it for much longer.