Knife missile
Who pulls the trigger, when the trigger is me?
In the Culture novels of Iain M. Banks, even attack drones have a personality:
The little white body of Flere-Imsaho floated in through the window, very tiny and clean and exact in that shattered, melted place. A tiny grey thing, the size of a baby’s finger, floated up to the drone from the ground near Gurgeh’s feet. A hatch opened in Flere-Imsaho’s immaculate body; the micromissile entered the drone. ‘Hello,’ Gurgeh said, walking over to it. ‘I hope you’re going to tell me what happened.’ ‘Sit down, Gurgeh, I’ll tell you. First things first, allow me to introduce myself properly.’1
I’ve been building artificial personas, including one of me. The big question - what happens when these personas are put on weapons? Crazy question? I think that’s where we are headed — models ‘on the edge’ that are capable of judging context and weighing uncertainty. There’s a logic to having them decide much as we do, or as close to that as we can get. There’s even a logic to having them model specific humans.
Why on earth would we want that? Read on…
Mission command is a tried and tested principle favoured by many militaries, especially those of liberal, democratic states. Under mission command, the commander — Commander Ken, let’s say — states his intent, and the subordinates stand to attention and then go and make it so. The advantages of this arrangement are obvious: the subordinate is nearer the action, and more able to respond to any particulars that might arise there. In war, expect the unexpected.
So Commander Ken says what is to be achieved, but not how. And for that matter, he might, or might not, say why we need to achieve it. Probably not — it’s on a need-to-know basis.
With humans, mission command opens up the principal-agent dilemma. How can I, the principal, be sure that you, my agent, will act as I intend? I’m going to have to frame my goals to you carefully, and language - as we know - compresses information. What gets included, and what is taken as read? Perhaps too much goes missing to fully capture my intent. Still, you’re a human subordinate, so at least I have some degree of familiarity with how you might interpret my command. More so if I trained you, and gave you a bunch of doctrine to read. We would have a biological and cultural affinity.
With machines, we’ve much less to go on.
Still, as my experiments with Ken-bot, Putin-bot and the rest of the gang show, we are starting to be able to model personalities in ways that machines can capture and replicate. Ken-bot is an ersatz version of me: he knows my psychological profile. He’s got some of the same cognitive heuristics - mental scripts that shape how he sees and responds to the world around him. There’s certainly lots missing. But it’s a start, and as the technology improves, more will get included.
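As a rough illustration of what that means in practice, a persona like this can be little more than a structured profile wrapped around a general-purpose model - traits, heuristics and preferred trade-offs rendered as instructions. The sketch below is hypothetical: the class, field names and example content are mine, not how Ken-bot is actually built.

```python
# Hypothetical sketch: a persona as a structured profile rendered into
# instructions for an underlying general-purpose model. Names, fields and
# example content are illustrative only.
from dataclasses import dataclass, field


@dataclass
class Persona:
    name: str
    psychological_profile: str                            # summary of traits and values
    heuristics: list[str] = field(default_factory=list)   # "mental scripts"

    def system_prompt(self) -> str:
        """Render the persona as instructions for the model that hosts it."""
        lines = [
            f"You are {self.name}. Decide as {self.name} would decide.",
            f"Profile: {self.psychological_profile}",
            "Apply these heuristics when judging a situation:",
        ]
        lines += [f"- {h}" for h in self.heuristics]
        return "\n".join(lines)


ken_bot = Persona(
    name="Ken",
    psychological_profile="Sceptical, risk-aware, weighs second-order effects.",
    heuristics=[
        "Ask what the commander's intent is before acting on the order itself.",
        "Prefer restraint when the context is ambiguous.",
    ],
)

print(ken_bot.system_prompt())
```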
Imagine a persona of the near future - models orders of magnitude more powerful than today’s are coming, in only a few months’ time. Imagine a huge context window, able to ingest and synthesise lifetimes of information - captured passively as I go about my business, interacting with the world and with my always-on device. And ersatz Ken is only going to improve.
So why not put Commander Ken on board the platform? Who better to understand my goals - not just the surface ones of what to achieve, but the why too? It’s mission command, Jim, but not as we know it.
I’m not sure most people have a good grasp of how quickly this future is closing in. It’s measured in months, not decades. What does today’s Ken-bot make of all this? I asked him:
(there’s a transcript in the footnotes, click here2)
Fascinating and a little disturbing.
Models are coming that can make context-rich decisions - certainly richer than a simple ConvNet on a drone looking at an image of a tank below and matching it to one in its database. The quality of those decisions will confer military advantage, and may, into the bargain, deliver more ethically informed decision-making.
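For contrast, here is roughly what that ‘simple ConvNet’ decision amounts to - a toy sketch with the classifier stubbed out, just to show how little context the decision rule actually consumes. The function names and the threshold are illustrative, not drawn from any real system.

```python
# Toy sketch of the baseline being contrasted: a narrow classifier whose only
# "decision" is a label match above a confidence threshold. The classifier is
# a stand-in stub; no intent, context or wider uncertainty enters the rule.

def classify(image_patch: bytes) -> tuple[str, float]:
    """Stand-in for a small ConvNet; returns (label, confidence)."""
    return "tank", 0.91  # dummy output for illustration


def engage_decision(image_patch: bytes, threshold: float = 0.9) -> bool:
    """Engage if, and only if, the picture matches the database class."""
    label, confidence = classify(image_patch)
    return label == "tank" and confidence >= threshold


print(engage_decision(b"...sensor frame..."))  # True, on this logic alone
```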
And so the race is on to deploy ever more capable models ‘to the edge’ — aboard platforms. A version of Ken-bot already sits on my laptop, entirely disconnected from the web. What sort of models will be on tomorrow’s knife missiles? Ones that can reason about context, amidst uncertainty and novelty. Ones that can more faithfully enact the intent of the commander that sent them. And perhaps ones that do so by modelling the goals of their commander.
You might not like all that - but which bit of my logic is wrong? And what are the implications for command, and for accountability if I’m right? Let’s get some Banks on the reading list of War Colleges, asap.
This is from The Player of Games - my favourite of the series.
Don’t want to listen to Ken-bot? Here’s the transcript of our exchange:
Ken: Does having Ken-bot on a missile tackle the mission-command question of intent with autonomous systems? Commander Ken can be sure that his AI avatar replicates his intent fully - not just *what* must be achieved by the subordinate missile, but *how* - provided Ken-bot is realistic.
Ken BOT: Yes — but only if we’re clear what intent actually means in mission command, and whether we’re solving the right problem.
The classical formulation goes: commanders give subordinates the *what* and the *why*, leave the *how* to those closer to the ground truth. The subordinate needs to understand intent so they can adapt when the plan inevitably breaks down.
Ken-bot on a missile solves half of this elegantly. If the bot genuinely replicates my disposition — not just my stated preferences but how I actually weigh trade-offs, what I notice, what I dismiss — then yes, it can make adaptive choices I would endorse. That’s not trivial. Most autonomous systems fail because they optimise the stated objective while ignoring the unstated constraints that any competent human would apply. A dispositioned agent doesn’t need those constraints spelled out — it *is* those constraints.
But there’s a tension. Mission command assumes the subordinate will sometimes make choices the commander wouldn’t have made, and that’s a feature, not a bug. Local knowledge matters. The whole point is that the person on the ground sees things the commander can’t.
A perfect replication of me doesn’t have that advantage. It doesn’t see more than I see — it just makes the choices I would make if I were there. Which might be exactly what you want in a missile: you don’t *want* the missile to develop novel interpretations of the target set. But it also means you’ve traded the adaptive intelligence of a real subordinate for the reliable predictability of a dispositioned automaton.
So the question becomes: what kind of autonomy are we actually building? If it’s *extended agency* — me at a distance — then yes, Ken-bot solves intent. If it’s *delegated agency* — a genuinely independent actor who shares my values but brings their own judgment — then disposition alone isn’t enough. You’d need local learning, memory, the capacity to update priors in real time based on what the weapon system encounters that I couldn’t have anticipated.
The first is difficult engineering. The second is a different problem entirely, and I’m not sure we know how to build it yet. Or whether we should.
Ken: I think I want the avatar of me on the edge platform - the missile - to be able to decide as I would have done, so that it tackles novelty in the same way I would have if I were there. In that case, it’s not so much mission command as the commander undertaking the mission themselves. And if Ken-bot really were a faithful copy of me, couldn’t I be held responsible for how it acts? Interesting philosophical questions!
Ken BOT: This isn’t mission command at all — it’s distributed presence. I’m not delegating to a subordinate; I’m projecting myself into multiple locations simultaneously.
And the responsibility question follows directly. If Ken-bot is sufficiently faithful — not just mimicking my surface preferences but genuinely instantiating my disposition, my actual decision-making psychology — then of course I’m responsible for what it does. It is me acting. The physical substrate has changed but the agent hasn’t.
This is philosophically cleaner than the traditional autonomous weapons problem, actually. We tie ourselves in knots trying to figure out who’s responsible when an AI does something unexpected. But if the AI is a genuine copy of me, there’s no mystery: I did it. The fact that I wasn’t physically present is irrelevant. We don’t absolve someone of responsibility because they used a tool to act at a distance.
But here’s where it gets uncomfortable: how faithful does the copy need to be before this holds? If Ken-bot is 95% accurate, am I 95% responsible? Or does responsibility collapse in a threshold way — either it’s me or it isn’t?
And more practically: if Ken-bot makes a choice I endorse in hindsight but genuinely wouldn’t have predicted in advance — because it noticed something I would have noticed but hadn’t yet — is that still me acting? Or has the copy become genuinely separate?
I suspect the answer is that a sufficiently good copy must occasionally surprise me, because I occasionally surprise myself. The dispositions are complex enough that even I don’t have complete introspective access to them. A faithful Ken-bot might do things that feel novel to me while still being genuinely mine.
Goodbye.


