I’m contributing to the new weekly newsletter on technology in national security from the Wavell Room (wavellroom.com). I’m sharing writing with Tony King and Emma Salisbury — you can sign up for the newsletter on the site or at the link below, but I’ll repost (at least some of) mine here too.
This week it’s all about spying, and why GCHQ’s man has got AI wrong…
Edge of Defence #5 | AI Spy
Issue #5 | 15 August 2023 | Edited by Kenneth Payne
AI will do much more for intelligence work than crunching through big data, or drafting low-level reports. It's starting to look like today's large language models can read people too.
AI Spy
What use might intelligence agencies make of large language models like ChatGPT? According to The Telegraph last week, not much:
‘Chatbots such as ChatGPT are only good enough to replace “extremely junior” intelligence analysts’.
I think that's flat-out wrong. Here's why:
The Telegraph cited this recent report from the Turing Institute, co-authored with a bona fide spook, the chief data scientist at GCHQ. He argued that language models might make some contribution to rather bureaucratic processes, such as ‘auto-completing sentences, proofreading emails, and automating certain repetitive tasks’. All very dull and administrative. And given the tendency of such models to occasionally hallucinate nonsense, you’d certainly need to check their output.
This is far too conservative. A handful of recent papers suggests there’s something profound going on with language models. We might bracket these together as part of a new scholarly discipline – ‘machine psychology’. Together they suggest that facility with language can provide insights into human minds – something that’s surely of value for intelligence agencies. Here’s a flavour:
First, this pre-print study showed that GPT-4 was very good at understanding ‘false beliefs’ – the idea that people can have mistaken views on the basis of the partial evidence they’ve seen. That’s an important element in ‘theory of mind,’ the term psychologists use to describe how we intuitively figure out what other people might be thinking.
Next, an interesting study showed that language models can make pretty good guesses about the emotional states of people they read about in fictitious vignettes. Better, in fact, than many humans doing the same task. Again – more evidence of ‘theory of mind’ at work.
Then there’s this intriguing finding – if you prime a language model with anxiety-inducing words, you can shape its subsequent decision-making, a bit like you might expect with a human.
There’s a lot more work to be done here. Can you make a language model angry, and so more certain in its judgment? I bet you can. That’s what happens to humans – and definitely an experiment I'd like to see done. All very interesting.
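If you want a feel for what these ‘machine psychology’ experiments involve, here is a minimal sketch of probing a chat model with a classic Sally-Anne-style false-belief vignette, using the OpenAI Python client. The prompt wording, model name and pass/fail check are my own illustrative choices, not the protocols used in the papers above.

```python
# Minimal sketch: probing a chat model with a Sally-Anne-style false-belief
# vignette. The prompt, model name and scoring rule are illustrative
# assumptions, not the method of the studies cited above.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

VIGNETTE = (
    "Sally puts her marble in the basket and leaves the room. "
    "While she is away, Anne moves the marble from the basket to the box. "
    "Sally comes back. Where will Sally look for her marble first? "
    "Answer with a single word."
)

response = client.chat.completions.create(
    model="gpt-4",   # any capable chat model would do
    messages=[{"role": "user", "content": VIGNETTE}],
    temperature=0,   # keep answers stable so they are easy to score
)

answer = response.choices[0].message.content.strip().lower()
print("Model answered:", answer)

# A model that tracks Sally's false belief should say "basket", even though
# the marble is really in the box.
print("Passes the false-belief check:", "basket" in answer)
```

Run dozens of vignettes like this, with and without emotionally loaded preambles, and you have roughly the shape of the experiments described above.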
But what’s the takeaway for intelligence agencies?
Mainly that language models might be useful for much more than cobbling together unreliable reports. Some intelligence agencies, like GCHQ, are primarily interested in AI’s ability to plough through vast quantities of data, looking for meaningful patterns or correlations. That’s the sort of spade work that simply can’t be done by humans. We have a bit of insight into what’s involved from the Snowden revelations – Barton Gellman’s revelatory accounts in The Washington Post, and his outstanding book Dark Mirror, are a good summary. But this sort of AI is essentially mindless – just drawing on the advantages of computer memory and brute force processing.
Language models, by contrast, might be of interest to spooks looking for psychological insights. What are other people thinking? How might they be influenced, or perhaps even deceived? That’s the terrain of the other two agencies – especially the Secret Intelligence Service, whose boss has made no secret of his intention to leverage AI, even if he thinks that human intuition will remain beyond it.
Models like ChatGPT certainly don’t have a mind of their own. It’s not like they’ve achieved consciousness – whatever some insiders think. But they’re also clearly doing more than just crunching numbers at scale. Or, rather, that’s literally what they are doing, but in doing so, something else emerges: some sort of model of the world that is latent in our language. And after all, that makes perfect sense – we use language to reflect on the real world.
Where this all ends is as much your guess as mine. But language models are rapidly getting larger, and their reasoning abilities are improving generation by generation, especially those that also generate computer code. These capabilities, moreover, are about to get a huge boost. Any day now, we’ll move from text and pictures to realistic multimedia generation – video and audio. Expect more of this sort of thing, but entirely generated by machines, rather than voiced and scripted by humans, and able to interact naturally with you. Very soon we’ll live in a world with machines that can understand us better than ever before, and tailor their responses accordingly.
To me, that sounds handy for spymasters everywhere, and a long way from ‘automating certain repetitive tasks’.
The spying theme, meanwhile, continues below in my recommendations of the week.
IN BRIEF.
New DSTL guide on AI in intelligence
DSTL publishes the latest in its 'biscuit book' series - ‘Human centred ways of working with AI in intelligence analysis’:
Spy while you write...
Using AI to work out which key is being pressed from its sound alone – potentially compromising passwords.
Read more.
AI fighter pilots inch closer
A non-intelligence one, just to mix it up: The XQ-58 Valkyrie demonstrates combat manoeuvres:
Do you have tech or science material you want us to cover? Reach out through our contact form here.