Christopher Nolan said Oppenheimer was a cautionary tale for AI. Was he right?


Vladimir Putin’s threats to use nuclear weapons have moved the so-called Doomsday Clock closer to midnight than it’s ever been, and they’ve brought the nightmare vision to a new generation.

The clock, maintained by the Bulletin of the Atomic Scientists since 1947, was moved forward by 10 seconds in January and now stands at 90 seconds to midnight, the metaphorical moment of global catastrophe.

Illustration: Dionne Gain

The newly released movie Oppenheimer, dramatising the invention of the nuclear bomb, is well-timed to stir new fear of an old threat. But the movie is about more than nukes.

According to its director, it’s an episode from the past to inform the present: “I’m telling the Oppenheimer story because I think it’s an important story, but also because it’s absolutely a cautionary tale,” Christopher Nolan tells London’s Financial Times.

Specifically, a caution about humanity’s latest breakthrough in how to destroy ourselves, according to the experts – artificial intelligence.

“The way tech companies transcend geographical boundaries, often in very aggressive ways, makes it very difficult to regulate [artificial intelligence] on a sovereign nation basis,” says Nolan.

Does Oppenheimer offer a glimpse into the future or the past? Credit: Universal Pictures

Time to panic? Much of the public discussion of AI is well on the way to panic already, with experts speculating about the percentage chance that AI will exterminate humanity. In gallows humour for nerds, they have a cute nickname for that dread prospect – p(doom).

One of the researchers behind ChatGPT, Paul Christiano, said in May that his p(doom) was 50 per cent. “If for some reason, God forbid, all these AI systems were trying to kill us, they would definitely kill us,” the researcher prognosticated helpfully.


Panic, however, is precisely the wrong response, according to an Oxford expert on secure technology, Ciaran Martin. “It’s really, really important to avoid panic over AI,” says Martin, the founding chief executive of Britain’s National Cyber Security Centre, set up in 2016 as part of GCHQ, the UK government’s electronic spying agency.

Why, exactly? “Firstly, AI has many potential benefits in areas I don’t fully understand, like diagnostic healthcare,” he tells me. “But secondly, when you’re looking to secure something, panicking people about it is a really bad idea.

“We know this from cybersecurity. In 2010, The Economist magazine ran a cover of a city skyline falling in flames, a la 9/11,” supposedly a vision of future cyberwar.

Cybersecurity expert Ciaran Martin: “Cybersecurity is about hacking code ... but it tends not to kill people.” Credit: James Alcock

“Cyberwar, the threat from the internet; that’s not the way cybersecurity works,” Martin explains. “Cybersecurity is about hacking code. Second-order effects can be highly disruptive, extremely costly and very intimidating; it does all sorts of harm, but it tends not to kill people.

“It’s almost impossible to detonate some sort of explosion that would bring down a skyscraper via cyberspace. Why does that matter? It matters in two ways. One is that it scares people and makes them feel powerless.”

Martin, a professor of management of public organisations at Oxford, also serves as an adviser to the Australian cybersecurity firm CyberCX and says he has to remind clients that they are not powerless; they do have agency: “We could talk about Ukraine, showing that when a defender really sets its mind to it, you know, all sorts of things are possible. So you don’t want to infantilise people in cybersecurity. And the second thing is that it may send you chasing after the wrong problem.”

He urges governments and companies instead to break down potential AI problems into specific parts and deal with each.

Joe Biden met with leaders from major artificial intelligence firms at the White House on Friday. Credit: Bloomberg

For instance, on Friday a group of US tech firms struck a deal with the White House to stamp a watermark on content produced by AI, an effort to keep deepfakes apart from reality.

US President Joe Biden said that it was a step towards responsible use of AI, a technology he described as “astounding”. The watermarking deal is voluntary. The signatories – Google, Amazon, Meta, Microsoft, OpenAI, Anthropic and Inflection AI – said they’d apply it to text, images, audio and video. But many other companies refused to sign. These include Midjourney, whose product was used to make fake images of Donald Trump being arrested.

Other countries are trying other ways to regulate AI. More sweepingly, the Chinese Communist Party last week issued guidelines for China’s AI developers that stipulate the use of “socialist values” in any new products. This is, in effect, a demand that all the AI programs permitted access to the Chinese internet must promote Beijing’s authoritarian world view.


But overarching all the detail is a fundamental strategic principle. The creation of the nuclear bomb is, as Christopher Nolan says, a cautionary tale of humanity’s capacity for self-destruction. Yet only two nuclear weapons have ever been used in anger, and none has been used in the 78 years since.

This is an instructive tale of the logic of mutually assured destruction, with one country’s atomic arsenal held in check by the threat posed by others’. Ciaran Martin says a similar principle applies to the online world, the logic of technological equilibrium.

“Can AI be used to scale malevolent software? Yes. But can the same techniques be used to scale the defence against that? Yes. And furthermore, can it be used to expand cybersecurity solutions? Absolutely.

“As long as our equilibrium holds – what can be done for attack can be done for defence – then we stay in a good place.” Of course, that equilibrium could be broken by the next revolution in computing, quantum. In truth, there’s always something to panic about if we choose to.


Martin says that we like to think that “because something can happen, it will happen”. Yet the Doomsday Clock has never struck midnight.

So “is there a robot that could be programmed to go and pour petrol in your toaster and then turn it on? Probably, but I think you’re more likely to face a targeted extortion or financial theft threat from organised cyber criminals.” In other words, p(rip-off) is more likely than p(doom).

Peter Hartcher is international editor.
