
Ampere CEO on fixing AI's energy problem, AI arms race

Ampere Computing CEO Renee James joins Yahoo Finance Live to discuss how her company is working to reduce artificial intelligence's energy consumption.

Video Transcript

[AUDIO LOGO]

- Could the AI arms race spark a new energy crisis? According to MIT, AI consumes more energy than traditional computing, with training a single model burning more electricity than 100 US homes use in a year. Joining us now is Renee James, Ampere Computing CEO.

Renee, thanks so much for being here. And just in full transparency up top, you guys make more energy-efficient chips. But let's talk, first of all, about the scope of the issue, right? Because data centers, even without this full AI capability, burn a lot of energy. Is the industry doing enough to address that?

- Thank you, first of all, for having me on to talk about something that, of course, is near and dear to Ampere because we were founded on the principle of building sustainable compute. Putting aside the demands of AI, 30% of the data centers in the world, which really power the cloud growth that we are all experiencing, are in the United States, and they already use 2% of all of our energy.

And so when we founded the company five years ago, we said, look, my team and I had been building microprocessors for a long time, and we felt it was time to build something that could deliver more performance with low power. And that's really the thesis of the company: high-efficiency, sustainable microprocessors to fuel the cloud growth, which is only going to be accelerated even further by the use of AI. That's what we're about.

- Certainly. And further to what your company is doing to make sure we advance artificial intelligence, to the extent that demand keeps coming forward, but in a more sustainable and environmentally conscious way: how many of the customers you speak with are prioritizing environmental or sustainable practices as they move toward AI, versus just looking for a better cost as they get into this next wave of technology?

- I think that's a great question. The good news is that the cost factors into sustainability because one of the great drivers of costs, of course, is your power bill. So the more sustainable the compute solution, the lower the overall total cost of ownership because it brings down, of course, power and cooling.

So our customers consider this to be quite important. It's one of the primary calculations they look at. Every single data center operator knows that the grid has told them it can't guarantee more power, or it's asking them to offload power, so it's a real-time situation. As they think about adding AI, which requires 20-plus times the power on some of these queries, these searches, if you will, they have to find even more sustainable solutions.

So we talk in our industry about a term that makes people who don't do this for a living go, what are these people talking about? It's called dense computing, which is getting better utilization out of the existing data center you've already built with the existing power you're already using. So if you use a more efficient microprocessor, more efficient per server rack, if you will, inside a data center, then you're more densely packed and you get better utilization.
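To make the dense-computing math concrete, here is a minimal back-of-envelope sketch in Python. None of these figures come from the interview; the rack power budget, server wattages, and core counts are illustrative assumptions chosen only to show the utilization arithmetic.

    # Dense computing, back of the envelope: how many CPU cores fit under
    # a fixed rack power budget? All figures are illustrative assumptions,
    # not Ampere or industry specifications.
    RACK_POWER_BUDGET_W = 12_000  # power the facility can supply per rack

    def cores_per_rack(watts_per_server: float, cores_per_server: int) -> int:
        """Cores deployable in one rack without exceeding its power budget."""
        servers = int(RACK_POWER_BUDGET_W // watts_per_server)
        return servers * cores_per_server

    # Hypothetical conventional server: 800 W for 64 cores.
    print(cores_per_rack(800, 64))   # 15 servers -> 960 cores per rack
    # Hypothetical efficiency-focused server: 500 W for 128 cores.
    print(cores_per_rack(500, 128))  # 24 servers -> 3,072 cores per rack

Under the same power envelope, the lower-power server in this sketch more than triples the cores per rack, which is the "better utilization" the term refers to.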

And that's really what the thesis of Ampere is about, but it is now playing more broadly into the ESG goals of our customers and, I believe, into what users are going to demand long term: the ability to sustain our growth.

- Now, Renee, you guys are a private company. So I don't know how much you can tell me. But what can you tell me about demand for the chips thus far? I mean, when we hear Nvidia talking about the exponential gains in demand we have seen, what are you guys seeing?

- We are private, and I will just say thank you for asking. As some of you know, we're in registration. We are excited for the market to open for new-growth companies at some point in the future. It's been good. I think we have a bright outlook on that.

Look, I've lived through a couple of different phases of compute, and there's always some big new use of computing that fuels each growth phase. Our current phase is cloud and AI in the cloud. When the PC first hit its stride, when I was new to the industry, it was media.

And in the beginning, we had to have special-purpose hardware, a bunch of new hardware, to be able to create videos, and the thing the PC could do was play them back very efficiently. So when I think about AI, I think about it the same way. You're going to have to build accelerated compute modules to be able to train the AI models; that's absolutely accurate, and that's something Nvidia does very well.

But there's growth in AI inference, which, if you think about it like media, is like playback. It's the thing that's in your browser. It's the thing in every one of your applications that assists the app. That should be done most efficiently on the regular processor in the computer.
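As a concrete illustration of that training-versus-playback split (this example is not from the interview), here is a minimal sketch of running inference for an already-trained model on a general-purpose CPU using ONNX Runtime; the model file name, input name, and input shape are hypothetical placeholders.

    # Minimal CPU-inference sketch with ONNX Runtime: the model is trained
    # elsewhere (typically on GPU accelerators); this playback-like step
    # runs on a regular processor. Model path, input name, and shape are
    # hypothetical placeholders for any exported ONNX model.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession(
        "model.onnx",                        # hypothetical pre-trained model
        providers=["CPUExecutionProvider"],  # pin execution to the CPU
    )

    x = np.random.rand(1, 3, 224, 224).astype(np.float32)     # dummy image batch
    outputs = session.run(None, {"input": x})  # "input" assumes the model's input name
    print(outputs[0].shape)

The point of the sketch is only that the serving step needs no accelerator: the same request path can run on the CPUs already deployed in the data center.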

And so from our perspective, we're building sustainable, low-power, super-high-performance computing. The history of our industry and of microprocessors has been to use power as a proxy for performance: higher power gives you higher performance. What we're pioneering is lower power and high performance, not the low-power, low-performance that has been the history of low power.

And that, we think, will allow data center operators and hyperscale cloud operators to efficiently deploy artificial intelligence into everything, into all their services. And that's really the exciting part about our company, and our thesis of sustainability has become even more relevant over the course of the last five years.

- And, Renee, just quickly here. How should people think about what your chips replace? In other words, it sounds like Nvidia is not a direct competitor. Who would be your direct competitors?

- Yeah, I mean, what we're really looking for is the growth of the cloud for data processing and for inference processing, and we think about the traditional CPU companies that have really lived in that space for the last 40 years or so, like Intel and AMD.

Nvidia is a great partner of ours, and they are doing accelerated compute and very exciting things, and I'm really-- I'm always happy for someone who's doing well because this business is so difficult. And certainly, Nvidia's been after it for 30 years, so it's exciting for them.

- Renee, even prior to the latest wave of focus around AI and chips and how they play a role within generative AI and other uses, there's still the larger geopolitical overhang around chips right now. How much of that is continuing to be a factor in where AI can have its next leg of growth?

- Yeah, I think for the traditional compute part of the business, that, thankfully, has not been a tremendous overhang for us. I do think that, long term, the predominance of AI, the non-training portion and a lot of the general-purpose use of it, will be able to be done on processors that are not currently subject to export constraints.

I don't know; there's a lot of discussion going on. There's a lot more to be discovered about what we should and shouldn't do. It's always difficult in the beginning of a new phase of computing; this always happens. We're not clear on which things we should hold tight and which we should deploy widely. And I think we're going through those discussions right now with the Commerce Department.

- Renee, really fascinating conversation here today, and we appreciate the time. Renee James, who is the Ampere Computing CEO. Thanks so much.

- Thank you.