To code, or not to code?
On the unlikely decline and fall of the maths nerd
New Year, new you? What about learning to code, or brushing up your maths skills for the brave new world of AI? The Prime Minister reckons this is the way. Mandatory maths until 18, and a crackdown on ‘nonsense’ degrees in favour of the sorts of science subjects that will be vital to succeed in the technology-intensive industries of the future.
Except, maybe not.
This week, IBM honcho Matt Candy argued provocatively that in the brave new AI-powered world to come, you might be better off with a liberal arts education than one in computer science. Supporting fire came from OpenAI researcher Logan Kilpatrick, who claimed that talking to a computer would soon be no different from talking to a human, and would require similar interpersonal skills. Maybe that media and communications degree isn’t so useless, after all. Hold the university applications!
I’m not sure. Both these gents have a dog in the fight. They work with language models, like OpenAI’s ChatGPT. Perhaps that makes them a bit blinkered. There’s much more to AI than chatting to chatbots, impressive though these are. Last month, for example, Google DeepMind published new work using deep learning techniques to discover more than 700 wholly new materials. Could the boffins at DeepMind have sweet-talked their supercomputer into writing the code for that itself, and saved themselves a lot of effort? Not yet.
But perhaps one day. Hints come from another new DeepMind product, one that specialises in autonomous coding. Their latest AlphaCode 2 software does pretty well in programming competitions against humans, autonomously generating software solutions to problems set by humans. Who needs human coders? Soon it might be enough to ask interesting questions and leave the machine to work on the answers.
Maths as a special sort of language
The question boils down to whether code or, at a deeper level, maths is just another language. And, if so, whether we can express mathematical concepts with sufficient precision in regular language, like English, for them to be translated into computer-speak by an AI. Even if we could, will we be able to pose interesting questions to our clever robot friends if we don’t understand maths ourselves?
I’m doubtful. We’ll need geeks for a long while yet, I reckon.
I’m of the school of thought that maths is ‘out there’, representing something real in the universe regardless of whether humans are around to play about with it. Maths isn’t invented, like our spoken languages. Of course, these languages also have some connection to reality – they’re not just gibberish; or at least, not always. But the connection is altogether flimsier than for maths. As the philosopher Ludwig Wittgenstein argued, prose language is better understood as a sort of ‘game’ – a social convention – than as a direct mapping from reality. Maths, by contrast, seems more deeply connected to the universe – discovered, not invented, by humans.
So, a brilliant non-fiction writer like Carlo Rovelli can render complex scientific ideas in prose – translating from the realm of maths to the realm of written language. I love his books about quantum mechanics and, God help me, for just a moment after reading them, I’m sure I understand what Oppenheimer and the rest were up to.
But the translation is imperfect. And it’s coming from the realm of physics to the realm of English prose, not the other way about. Wittgenstein hit the mark again with his profound comment: ‘Whereof one cannot speak, thereof one must be silent.’ There are concepts in maths that can’t be rendered into words. No amount of chatting up a machine will help us get to them. We are dealing with two separate realms. We won’t even know what to ask the machines unless we acquire STEM skills.
So, maybe it’s worth taking that coding course, after all?
There’s another large philosophical question here about whether human ingenuity is needed for scientific breakthroughs. Scientists combine technical nous with human creativity to produce their insights. Machines, by contrast, ingest vast amounts of data and optimise outputs on the basis of it. Not very creative.
But hold the presses! DeepMind (yet again) have just unveiled another AI – called FunSearch – which found solutions for some arcane problems in pure mathematics. Clearly, if you throw enough computing power at enough data, it can come up with something creative – i.e. original, useful, and surprising. I think machines will soon be making more scientific breakthroughs like this, cracking problems that have eluded humans. But for now, at least, we’ll still need humans to set the problems and interpret the results. That means STEM skills.
Of course, not everyone is going to be at the cutting edge of scientific research or the frontiers of pure maths. Interacting with machines will, for many of us, increasingly happen via written or spoken English. So Candy and Kilpatrick are onto something important, even if they overstate it – we should value the sorts of communication skills that are essential to our social relations. Perhaps that’s especially so as machines become ever more human-like. We’d do well to understand the differences as well as the similarities in communicating with them, versus with other humans. More on that shortly…