AI experts: Mitigating AI risks should be global priority

Yahoo Finance tech editor Dan Howley discusses why AI experts believe that mitigating the risks of AI should be a global priority.

Video Transcript

AKIKO FUJITA: Well, another day, another warning about the risks artificial intelligence could pose to society. Top experts releasing a statement today calling for more regulation. Yahoo Finance's Dan Howley has that story. Dan, we're talking about sort of the same names that have been leading the way like an OpenAI, like an Alphabet. But my question to you is, why this letter given that they're the ones that are building the software?

DAN HOWLEY: Yeah, it's interesting. This letter comes from the Center for AI Safety, and just to give you a rundown of some of the signatories: we have Sam Altman over at OpenAI, we have Kevin Scott, the CTO of Microsoft, we have the chief scientific officer at Microsoft, we have folks from Google -- people from kind of across the spectrum. You can see here everybody who's signing on to this.

And you know, this comes, you know, just months after we saw Future of Life Institute put out that call to put a six-month pause on the training of AI systems more powerful than OpenAI's GPT-4. And these kinds of pronouncements or letters that we're seeing will likely be more common as we move forward with generative AI. The reason could be multifaceted. It could be because of a general thought that this is truly some kind of danger, or it could be a call for more regulation.

Some folks are saying that larger firms like OpenAI calling for regulation would essentially allow governments to favor them, since they're the larger organizations. Smaller groups may have a harder time getting regulations in place or abiding by them. So there could be a number of reasons why we're starting to see these kinds of statements percolate to the surface, and this kind of pushback against AI from the very people who are putting it out there.

- And so Dan, just put all of this into perspective for us because, I think, a lot of people when they read some of these headlines, when they read some of these warnings, especially from the people who are behind the tech, they get a little bit nervous. So how real are some of these threats that we have heard about just in terms of AI, and what exactly that means for humanity going forward?

- Yeah. The Terminator is not going to come knocking on your door and steal your sunglasses, or whatever he does in the movie. What this means is a focus on what they themselves call an existential threat that could occur, right? I think the bigger problem, and one that isn't pointed out here as a threat to humanity, is the set of real problems that exist with AI today, and those include things like disinformation and misinformation.

I mean, we just saw a few days ago where the market reacted to a generative AI image of an explosion near the Pentagon -- completely fake, right? But that's an issue we need to talk about with generative AI. There's also the fact that the models used to run these AI systems are inherently biased, just because of the people who create them. It's something that we've seen in the past.

So those are the issues, I think, that are more important to discuss, because they're happening already, in the here and now, and should be addressed. You know, I can't say whether AI poses the same threat level as climate change or nuclear war -- I'm a reporter -- but I do think those are really the issues people should be focusing on, rather than these grandiose statements with all these famous people signing on. Maybe just make sure it's not able to put out fake photos, or put some regulations around images like the ones we saw over the past couple of weeks.

- Yeah, Dan, well, with tech the fear is always that regulation isn't keeping up with the speed of the advancement, but we turn to you as the expert on AI. Dan Howley, as always, thanks so much for that.