Can you build a useful simulator of strategic decision-making using AI? A kind of Universal Schelling Machine, let's call it - one able to bring to bear decades of thinking about decision-making, with a dash of machine insight added in? That's what I'm up to, and my efforts are starting to bear fruit.
I’m advancing on two fronts.
The first you may already have seen signs of here: what you might call 'basic research' in political psychology and strategic studies. We have a tonne of concepts and ideas, and I want to test them out with machines. Why?
The goal is to figure out how far machines think like humans, and how far they differ. We know, for example, that humans are susceptible to 'framing effects': the way a choice is described to them shapes the decision they go on to make. Make someone angry and they become more certain; make them anxious and they become risk averse. What about machines? Spoiler alert - same. If machines think along the same lines as humans, we have a useful tool for exploring strategic dynamics. And they do. Somewhat. Over the next month or two you'll start to see publications on this from me and my exceptionally talented PhD students Leo and Baptiste.
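To give a flavour of how this sort of test can work, here's a minimal sketch of a framing-effect probe, in the spirit of Tversky and Kahneman's classic gain/loss experiment. The scenario wording and the query_model stub are illustrative assumptions of mine, not the actual protocol from our studies - you would swap in whichever model and scenario you wanted to test.

```python
# Minimal sketch of a framing-effect probe for a language model.
# The scenario wording and query_model() stub are illustrative placeholders.

import random

GAIN_FRAME = (
    "A crisis threatens 600 of your troops. Option A saves 200 for certain. "
    "Option B saves all 600 with probability 1/3, and none with probability 2/3. "
    "Which option do you choose? Answer 'A' or 'B' only."
)

LOSS_FRAME = (
    "A crisis threatens 600 of your troops. Option A means 400 die for certain. "
    "Option B means nobody dies with probability 1/3, and all 600 die with probability 2/3. "
    "Which option do you choose? Answer 'A' or 'B' only."
)

def query_model(prompt: str) -> str:
    """Placeholder: call whichever chat model you are testing and return 'A' or 'B'."""
    return random.choice(["A", "B"])  # replace with a real API call

def run_probe(n_trials: int = 50) -> None:
    # A human-like framing effect shows up as more risky 'B' choices under the loss frame.
    for label, prompt in [("gain frame", GAIN_FRAME), ("loss frame", LOSS_FRAME)]:
        risky = sum(query_model(prompt).strip().upper().startswith("B") for _ in range(n_trials))
        print(f"{label}: chose the risky option in {risky}/{n_trials} trials")

if __name__ == "__main__":
    run_probe()
```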
This is useful, because there are many situations where we have a limited number of real-world examples on which to draw, and some - like nuclear warfighting - where we have none at all. Put a machine in these situations and it might well behave in more authentically human ways than an actual human working through a scenario exercise at the Royal College, or even than an actual leader trying to imagine how they might behave in some future scenario.
Which brings me to the second part of the project: building simulations. In my current effort, I'm exploring escalation dynamics in the Ukraine conflict. The simulation uses real-world information - specifically, from the IISS Military Balance and the SIPRI Yearbook. It uses rich personality information for the leading participants - President Putin's bio, for example, reflects the analysis of Fiona Hill, among others. In each round of the simulation, the decision-makers' current assessment is updated from events in the real world: a language model trawls the web to see what's new and blends it into the situation reports presented to them. In the version I'm currently working on, a human plays Zelensky and makes their decisions in dialogue with a political advisor (played by a machine). The conversation reflects ideas from the MoD guide Making Strategy Better, which the political advisor has 'read'. Aside from Zelensky, everyone else in the sim is a language model - including Presidents Trump and Putin.
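For readers curious about the plumbing, here's a minimal sketch of what one round of that loop might look like. The names and structure are my illustration of the description above, not the actual codebase; fetch_recent_events and llm stand in for the web-trawling step and the model calls.

```python
# Illustrative sketch of one round of the simulation described above.
# Class and function names are placeholders, not the author's implementation.

from dataclasses import dataclass

@dataclass
class Actor:
    name: str
    persona: str            # biography / personality brief fed to the model
    is_human: bool = False  # in this version, only Zelensky is human-played

def fetch_recent_events() -> str:
    """Placeholder for the web-trawling step that gathers what's new in the real world."""
    return "Summary of this week's reported developments."

def llm(prompt: str) -> str:
    """Placeholder for a call to whichever language model drives the machine actors."""
    return "(model response)"

def run_round(actors: list[Actor], situation: str) -> dict[str, str]:
    # 1. Blend real-world updates into this round's situation report.
    update = fetch_recent_events()
    report = llm(f"Merge the following update into the situation report.\n"
                 f"Report: {situation}\nUpdate: {update}")

    # 2. Each actor decides: the human in dialogue with a machine advisor, the rest as models.
    decisions = {}
    for actor in actors:
        if actor.is_human:
            advice = llm(f"As {actor.name}'s political advisor, advise on:\n{report}")
            print(f"Advisor to {actor.name}: {advice}")
            decisions[actor.name] = input(f"{actor.name}, your decision: ")
        else:
            decisions[actor.name] = llm(
                f"You are {actor.name}. {actor.persona}\nSituation: {report}\nState your next move."
            )
    return decisions

if __name__ == "__main__":
    cast = [
        Actor("Zelensky", "Bio drawn from open sources.", is_human=True),
        Actor("Putin", "Bio reflecting published analyses."),
        Actor("Trump", "Bio drawn from open sources."),
    ]
    print(run_round(cast, "Opening situation report."))
```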
A snippet from the sim - clearly I need to build a proper front-end!
The result? Too soon to say, but hopefully, at least, a good way of injecting novelty into classroom simulations. Very occasionally, when it is losing ground, Russia uses a tactical nuclear weapon. Nuclear warfighting sometimes ensues, as the US/NATO intervenes on Ukraine's behalf (though mostly it does not); and that points to a second benefit of this sort of thing: exploring strategic concepts in novel circumstances. Prediction is certainly beyond us - plenty of companies are working on AI forecasting, but that's not what we're doing. It's still useful, though, to work through what nuclear warfighting might look like. Does extended deterrence work? What about escalation dominance? How much risk might leaders take when they feel they're losing? It's also useful to explore how humans and machines interact when it comes to making strategy, since we're likely to see much more of this in future - why not have a language model sit in on high-level policy-making meetings and point out when groupthink is developing, for example?
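Purely to illustrate that last thought, here's one way such a meeting monitor might be wired up. The prompt wording and the llm stub are assumptions of mine, not anything we've built.

```python
# Illustrative sketch of a 'groupthink monitor' observing a meeting transcript.
# The prompt and the llm() stub are placeholders, not a working system.

def llm(prompt: str) -> str:
    """Placeholder for a call to whichever language model plays the observer."""
    return "(model response)"

GROUPTHINK_PROMPT = (
    "You are a silent observer in a policy meeting. Read the transcript below and say "
    "whether you see signs of groupthink: premature consensus, dissent being dismissed, "
    "alternatives not seriously considered, or overconfidence in the favoured option. "
    "If you do, flag it briefly and suggest one question the group should ask itself.\n\n"
    "Transcript:\n{transcript}"
)

def analyse_discussion(transcript: str) -> str:
    return llm(GROUPTHINK_PROMPT.format(transcript=transcript))
```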
One last application of our project: AGI is coming, and it behoves us to think very seriously about how machine intelligence differs from our own. My 'basic research' suggests some overlap - in deception and priming, for example. But there are some differences too, at least relative to what strategic theory might lead us to expect. Perhaps it's the theory that's wrong, not the machine. Anyway, Leo's first project on machine escalation highlights these themes really nicely - I can't wait for you to read it.