
Sam Altman says he worries making ChatGPT was 'something really bad' given potential AI risks

OpenAI CEO Sam Altman. Jason Redmond / AFP via Getty Images
  • Sam Altman said he worried creating ChatGPT was "something really bad" given the risks AI posed.

  • The OpenAI CEO was speaking to Satyan Gajwani, the vice chairman of Times Internet, in New Delhi.

  • Altman is on a six-nation tour that includes Israel, Jordan, Qatar, the UAE, India, and South Korea.

OpenAI CEO Sam Altman says he loses sleep over the dangers of ChatGPT.

In a conversation during a recent trip to India, Altman said he was worried that he did "something really bad" by creating ChatGPT, which was released in November 2022 and sparked a surge of interest in artificial intelligence.

"What I lose the most sleep over is the hypothetical idea that we already have done something really bad by launching ChatGPT," Altman told Satyan Gajwani, the vice chairman of Times Internet, at an event on Wednesday organized by the Economic Times.

Altman said he was worried that "maybe there was something hard and complicated" that his team had missed when working on the chatbot.

Asked whether AI should be regulated similarly to atomic energy, Altman said there had to be a better system to audit the process.

"Let's have a system in place so that we can audit people who are doing it, license it, have safety tests before deployment," he said.

The risks are high

Numerous tech leaders and government officials have raised concerns about the pace of the development of AI platforms.

In March, a group of tech leaders, including Elon Musk and the Apple cofounder Steve Wozniak, wrote an open letter with the Future of Life Institute warning that powerful AI systems should be developed only once there was confidence that their effects would be positive and their risks manageable.

The letter called for a six-month pause in training AI systems more powerful than GPT-4.

Altman said the letter "lacked technical nuance about where we need the pause."

Earlier this month, Altman was among a group of more than 350 scientists and tech leaders who signed a statement expressing deep concern about the risks of AI.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the statement read.
