AI Could Cause Human Extinction, Experts Bluntly Declare

Experts warn that AI could cause human extinction. (Donald Iain Smith / Getty Images)
  • In recent months, many AI experts and executives have sounded the alarm on the dangers of advanced AI development.

  • The latest salvo is a new 22-word statement equating the global risk of AI to that of pandemics and nuclear war.

  • Not all experts are worried, with some saying AI is nowhere near advanced enough to snuff out human existence.


Many people are worried about the future of AI, including the minds involved in creating that very future. Executives and CEOs from AI companies, including OpenAI and DeepMind, have signed a simple yet haunting 22-word statement about the potential dangers of AI: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The statement is purposefully vague, offering no details on how mitigating this growing risk should become a priority, but it reflects a high level of concern about AI among the very people behind the burgeoning technology. Posted on the website of the Center for AI Safety, the statement is a “coming out” for AI experts worried about the technology’s future, according to the center’s executive director, Dan Hendrycks.

“There’s a very common misconception, even in the A.I. community, that there only are a handful of doomers [sic],” Hendrycks told The New York Times. “But, in fact, many people privately would express concerns about these things.”

The threats of AI aren’t necessarily that a super-advanced artificial life form will enslave and exterminate humanity like in The Terminator. The Center for AI Safety’s website details a more subtle view of our doom, including AI’s potential role in designing more effective chemical weapons, supercharging disinformation campaigns, and exacerbating societal inequality.

These worries are nothing new. Fears about AI are nearly as old as the technology itself and have played a large role in science fiction for nearly a century. Isaac Asimov introduced the Three Laws of Robotics back in 1942, and in Frank Herbert’s 1965 sci-fi novel Dune, AI is completely banned throughout the known universe in an attempt to save humanity.

In March, tech leaders, including Elon Musk (who has long warned of AI’s dangers) and other top AI researchers, called for a six-month moratorium on training AI systems “more powerful than GPT-4,” the latest version of OpenAI’s large multimodal model. In April, the Association for the Advancement of Artificial Intelligence released its own letter urging the development of ways to “ensure that society is able to reap the great promise of AI while managing its risks.” And just earlier this month, AI pioneer Geoffrey Hinton, known as the “godfather of AI,” similarly expressed fears about the growing technology, going so far as to say he regretted his life’s work.

Not all AI experts are joining in on the hand-wringing. A computer scientist at Princeton University told the BBC that “current AI is nowhere near capable enough for these risks to materialize. As a result, it’s distracted attention away from the near-term harms of AI,” such as its biased and exclusionary behavior. Regardless, both Microsoft and OpenAI have called on the U.S. government to regulate AI development, and a Senate Judiciary subcommittee held a hearing on AI oversight in mid-May.

The doom-and-gloom future of AI is still unwritten, and some tech executives and experts are trying to keep it that way.