OpenAI, the company behind ChatGPT, is worried about the emergence of a ‘superintelligent’ AI - perhaps even an AGI, or Artificial General Intelligence, that will be far smarter than humans. So, this week they published a memo sketching out how that risk might be managed. They’re proposing governance modelled on the international regime for nuclear energy and its watchdog, the IAEA. This would regulate who does what research (perhaps by international treaty, it’s not clear) and then monitor compliance.
Alas, there are some big, I think insurmountable, problems with their proposal. Like what? Read on…
No one knows what AGI is, or how it might be developed. Will it spontaneously emerge when AI researchers don’t expect it? Will today’s deep learning techniques be enough to produce AGI, when they’re run at sufficient scale, on sufficiently powerful computers? Who knows?
Not OpenAI, anyway. Their regulatory regime delimits AGI efforts as being ‘above a certain capability (or resources like compute) threshold’. So, either output (anticipated capability) or input (computer power). There are significant problems with both possibilities. On outputs: What capabilities constitute AGI? How do you regulate in anticipation of those capabilities, if they are emergent? And on inputs: How much computer power is needed? How do you control who has that computer power, especially as computers continue to become more powerful? Today’s world-leading supercomputer is tomorrow’s laptop. What if the secret sauce for AGI isn’t raw compute, but the way in which it’s employed? Could you get to AGI with far less power than GPT-6? Who knows?!
Upshot: if you can’t specify what it is you’re regulating, you’ve no hope of regulating it.
Who is going to do the regulation? For nuclear energy, the action was clearly at the level of governments - both in terms of R&D and then of regulation. It took a lot of state coordination to get a nuclear programme off the ground, and governments everywhere resolved to keep a firm grip on things. When it came to regulation, it was state governments that signed up to the relevant treaties, and that acceded to monitoring by other states and by intergovernmental agencies. Despite some early conjecture about placing existing US nuclear weapons under international control, it quickly became apparent that the bomb was far too important to be outsourced to any wishy-washy world government, especially in the context of a deepening Cold War. So the US government retained control of the process from end-to-end, and the other early adopters did likewise.
That sort of tight government control will be much harder this time. OpenAI’s memo talks about ‘coordination among the leading development efforts’. In the US, that puts the weight of responsibility with the private sector and its vast information technology outfits: Google, Microsoft, Meta and, perhaps, Amazon. Call me a cynic, but I can’t see these intensely competitive, ambitious corporations and research scientists unilaterally limiting their efforts. Witness Google’s unholy scramble to get language models into its search engine after it became apparent that ChatGPT was wildly popular.
Other approaches are available. In China the state retains a tight grip on what its nominally private outfits do. In Japan there’s a blend of public and private endeavour, with the government’s Riken agency recently partnering with Fujitsu, maker of one of the world’s most powerful supercomputers, to develop foundation models. In the UK, where DeepMind joined forces with Google specifically to access its massive computer power, there’s now talk of a government-funded, and perhaps owned, foundation model.
Clearly there’s a possible role here for government regulation, whatever the blend of R&D. That’s true even if the action happens (contra the Manhattan Project) mostly in the private sector. But to do that regulation, governments must understand AGI (and since the corporations themselves don’t, what chance has Uncle Sam?). Moreover, they must be willing to deliberately hamstring themselves in the context of intense geopolitical competition. There’s no evidence of that.
And meanwhile, there’s no moat around AI research, as Google concluded, reflecting on the leak of Meta’s powerful LLaMA model. Large foundation models need big research efforts, but once they’re out in the wild, anyone can play. How do you regulate actors you don’t know, doing research on bespoke systems you’ve never heard of, in pursuit of capabilities you don’t understand? Tricky.
Back to the idea of international regulation. If only we could get all governments to agree to regulation along OpenAI’s lines… Fat chance.
When it came to atomic energy, the eventual, tightly-policed arms control regime was winners’ justice. Having developed the bomb, the US (and a few other states) moved rapidly to pull up the drawbridge, via the establishment of the IAEA and then the Nuclear Non-Proliferation Treaty. We can do research on nuclear weapon systems - you? Not so fast.
This time round the process is much more democratic. Sure, AI power is unevenly distributed geopolitically. But the international barriers to entry are far smaller (compute isn’t as scarce and trackable as enriched uranium; big data doesn’t need heavy industry; and you can smuggle your finished AGI about on a laptop). Those barriers are only going to keep falling as computer power increases.
Meanwhile, AI-control cheats will prosper. Even if you sign up to an IAEA-type regime, there’s a powerful incentive to ‘defect,’ in the jargon. Despite the bottlenecks in nuclear weapon development, and the potentially serious consequences of being rumbled while trying to develop the bomb clandestinely, plenty of states had a try - Iraq, Syria and Libya unsuccessfully; North Korea successfully. Some - India and Pakistan - never signed on. Others - Iran - are still trying. The lesson: it’s hard to develop nuclear weapons, you will almost inevitably be detected, and there will be consequences. For some states it’s worth it anyway.
What about AGI? If it’s really as transformative as OpenAI and others suggest, there’s a powerful incentive to cheat. And plentiful opportunity to do so. If you’re going to go for a deep learning technique, you won’t even need the most brilliant computer scientists - just wait for a promising lead to emerge from a competitor, and then seed your research effort with it. Proliferation is a (comparative) doddle.
—
So - can you stop research on AGI? No. Scientists are going to do science, no matter what. Some concerned scientists might turn away from AGI research, just as some turned away from nuclear physics. But plenty won’t, and governments are going to support them. If it can be done, it will be.
To be clear: OpenAI don’t want to stop research on AGI. For one thing, there are potentially huge benefits. It will be, to borrow their word, ‘astonishing’, with the power to transform human societies for the better. Rather than block that research, OpenAI just want to control it. The problem is that they’ve nominated themselves to do the monitoring. That’s noble, in a sense - they’re the current leaders in the field, and feel a sense of responsibility to humanity. I know and respect some of the authors of their memo. But in another sense it’s entitled, monopoly/hegemony-seeking behaviour. Not everyone will be enthused by their attempt to ring-fence AGI research.