Google's new visual recognition app can identify flowers you don't know

Want to know what flower you’re looking at? Google Lens can figure it out. Screenshot by Yahoo Finance

Google (GOOGL, GOOG) CEO Sundar Pichai announced a visual recognition product called Google Lens on Wednesday at the company's Google I/O developer conference.

The Lens feature, which runs on phones, looks at what the camera sees and provides information about the object in view. In a demonstration, Pichai showed the app correctly identifying a flower, entering a Wi-Fi router’s password and SSID from its sticker, and pulling up a restaurant’s Google rating and reviews, all by pointing the phone’s camera at each object. Google wants to pre-empt your googling.

Google Lens follows other visual recognition products put out recently by other tech companies. Amazon, for instance, has had a product recognition tool built into its shopping app, letting users see how much the company will undercut brick-and-mortar competitors on the same item. Samsung’s Bixby app can scan a business card and save the information as a contact, a capability more closely aligned with Google’s new features.


Google Lens can plug in Wi-Fi router info from the camera. Screenshot by Yahoo Finance

Powering all this is new hardware from Google: Tensor Processing Units, or TPUs, the chips behind Google’s AI training systems. Users will never see these “deep learning” systems directly, however, because Google’s cloud does the heavy lifting required for a computer to identify real-life objects through a camera.

As the HBO show “Silicon Valley” illustrated in a recent episode with its “food Shazam” app, getting a camera to identify real-life objects from a variety of angles, in different lighting, and with different phone cameras is quite the computational challenge. Google isn’t buying these processors from Nvidia (NVDA), however, but making its own, optimized for its software. (Nvidia was Yahoo Finance’s company of the year in 2016.)
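Google hasn’t published how Lens works internally, but systems of this kind generally reduce an image to a numeric feature vector (an “embedding”) and label it by comparing that vector against known examples. The sketch below, with entirely made-up vectors and labels, shows the comparison step in miniature; the hard part Google’s TPUs handle is learning embeddings that stay similar across angles, lighting, and cameras.

```python
import math

# Toy reference "database": each known object maps to a feature vector.
# Real systems learn these vectors with deep neural networks; these
# three-dimensional values are invented purely for illustration.
KNOWN = {
    "daisy":     [0.9, 0.1, 0.3],
    "sunflower": [0.8, 0.7, 0.2],
    "rose":      [0.2, 0.9, 0.6],
}

def cosine(a, b):
    # Cosine similarity: 1.0 means the vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def identify(embedding):
    # Return the label whose reference vector is most similar to the query.
    return max(KNOWN, key=lambda label: cosine(embedding, KNOWN[label]))

print(identify([0.85, 0.15, 0.25]))  # closest to "daisy"
```

A photo of the same flower from a new angle should land near the same spot in this vector space, which is what makes the lookup robust; training a network to guarantee that is the computationally expensive part.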


Google Lens can identify which restaurant you’re looking at, which you know anyway. Screenshot by Yahoo Finance

Tech companies have become increasingly obsessed with the camera, seeing it as a gateway between the virtual world and the real one. Snapchat (SNAP) calls itself a “camera company,” Facebook (FB) is doubling down on virtual reality through phones, and smartphone manufacturers are locked in an arms race over camera quality and features.

For recognizing unknown objects like a flower species, Google Lens looks like an extremely useful tool, a “Shazam” for the physical world. But pointing the phone at a restaurant for information raises the question of how far is too far: with GPS already on the phone and a compass showing your orientation, why would you even need to raise the phone to get the restaurant in frame? Still, the technology is impressive, and Google is showcasing an enormous amount of processing power that could prove very useful.

Ethan Wolff-Mann is a writer at Yahoo Finance focusing on consumer issues, tech, and personal finance. Follow him on Twitter @ewolffmann. Got a tip? Send it to tips@yahoo-inc.com.
