What is superintelligence? How AI could wipe out humanity – and why the boss of ChatGPT is doomsday prepping

A backlit owl egg almost ready to hatch (National Centre for Birds of Prey/ Screengrab)

In the ‘Unfinished Fable of the Sparrows’, a group of small birds come up with a plan to capture an owl egg and raise the chick as their servant. “How easy life would be,” they say, if the owl could work for them, and they could live a life of leisure. Despite warnings from members of their flock that they should first figure out how to tame an owl before they raise one, the sparrows devote all their efforts to capturing an egg.

This tale, as its title suggests, does not have an ending. Its author, the Swedish philosopher Nick Bostrom, deliberately left it open-ended, as he believes that humanity is currently in the egg-hunting phase when it comes to superhuman AI.

In his seminal work on artificial intelligence, titled Superintelligence: Paths, Dangers, Strategies, the Oxford University professor posits that AI may well destroy us if we are not sufficiently prepared. Superintelligence, which he describes as an artificial intelligence that “greatly exceeds the cognitive performance of humans in virtually all domains of interest”, may be a lot closer than many realise, with AI experts and leading industry figures warning that it may be just a few years away.

Last week, OpenAI boss Sam Altman, whose company created ChatGPT, echoed Professor Bostrom’s 2014 book by warning that the seemingly exponential progress of AI technology in recent years means that the imminent arrival of superintelligence is inevitable – and we need to start preparing for it before it’s too late.

On Tuesday, he was among other notable signatories of a statement warning that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

Mr Altman, whose company’s AI chatbot is the fastest growing app in history, has previously described Professor Bostrom’s book as “the best thing I’ve seen on this topic”. Just a year after reading it, Mr Altman co-founded OpenAI alongside other similarly worried tech leaders like Elon Musk and Ilya Sutskever in order to better understand and mitigate against the risks of advanced artificial intelligence.

Initially launched as a non-profit, OpenAI has since transformed into arguably the leading private AI firm – and potentially the closest to achieving superintelligence.

Mr Altman believes superintelligence has the potential to not only offer us a life of leisure by doing the majority of our labour, but also holds the key to curing diseases, eliminating suffering and transforming humanity into an interstellar species.

Any attempts to block its progress, he wrote this week, would be “unintuitively risky” and would require “something like a global surveillance regime” that would be virtually impossible to implement.

It is already difficult to understand what is going on inside the ‘mind’ of AI tools currently available, but once superintelligence is achieved, even its actions may become incomprehensible. It could make discoveries that we would be incapable of understanding, or take decisions that make no sense to us. The biological and evolutionary limitations of brains made of organic matter mean we may need some form of brain-computer interface in order to keep up.

Being unable to compete with AI in this new technological era, Professor Bostrom warns, could see humanity replaced as the dominant life form on Earth. The superintelligence may then see us as superfluous to its own goals. If this happens, and some form of AI has figured out how to hijack all the utilities and technology we rely upon – or even the nuclear weapons we possess – then it would not take long for AI to wipe us off the face of the planet.

A more benign, but similarly bleak, scenario is that the gulf in intelligence between us and the AI will mean it views us in the same way we view animals. In a 2015 conversation between Mr Musk and the scientist Neil deGrasse Tyson, the pair theorised that AI will treat us like a pet labrador. “They’ll domesticate us,” Dr Tyson said. “They’ll keep the docile humans and get rid of the violent ones.”

In an effort to prevent this outcome, Mr Musk has dedicated a portion of his immense fortune towards funding a brain chip startup called Neuralink. The device has already been tested on monkeys, allowing them to play video games with their minds, and the ultimate goal is to transform humans into a form of hybrid superintelligence. (Critics note that even if successful, the technology would similarly create a two-tiered society of the chipped and the chipless.)

Elon Musk claims Neuralink’s brain chip technology will give users ‘enhanced abilities' (Neuralink)

Since cutting ties with OpenAI, the tech billionaire has issued several warnings about the imminent emergence of superintelligence. In March, he joined more than 1,000 researchers in calling for a moratorium on the development of powerful AI systems for at least six months. That time should then be spent researching AI safety measures, they wrote in an open letter, in order to avert disaster.

Any such pause would only be impactful with an improbable consensus among the world’s leading AI companies, the majority of which are profit-seeking. And while OpenAI continues to spearhead the hunt for the owl’s egg, Mr Altman appears to have at least heeded the warnings of Professor Bostrom’s fable.

In a 2016 interview with the New Yorker, he revealed that he is a doomsday prepper – specifically for an AI-driven apocalypse. “I try not to think about it too much,” he said, revealing that he has “guns, gold, potassium iodide, antibiotics, batteries, water [and] gas masks” stashed away in a hideout in rural California. Not that any of that will be much use to the rest of us.