Hands on: Here's what it's like to use Google's magical new AI

Tech Critic
Yahoo Finance

Google (GOOG, GOOGL) is a massive company, with its fingers in a thousand pots. There’s Google Docs, Google Maps, Google Assistant, Google Home speakers, Google Photos, Gmail, Android, self-driving cars, and on and on.

At its annual Google I/O developer conference, then, the company has adopted a clever system for putting these projects on display, so that developers can try them out and ask questions: Yurts.

(If you’re not Mongolian, here’s a refresher: “Yurt: A traditional portable, round tent covered with skins or felt and used as a dwelling.”)

I wandered through them this week, and took a few notes on some of the most interesting demos.

The keynote speech at Google’s I/O conference drew 7,000 spectators.

Visual Assistant

Google’s voice-controlled Assistant is getting smarter (and more ubiquitous) all the time. This summer, companies like Lenovo, LG, and JBL will offer new Google Home devices with screens, much like the Amazon (AMZN) Echo Show and Echo Spot before them.

Therefore, Google’s been working on ways to show you the answers to your questions — on those screens, or your phone — instead of just speaking them. (Assistant “punts” you to your screen, in devspeak.) That’s especially handy when you ask for, say, a recipe (you get cooking videos), a YouTube video, or details about the Mona Lisa.

Lenovo’s Google Smart Display, coming in July, adds a screen to the smart-speaker concept.

At one of the yurts, a rep demonstrated a sample app: A Google Home speaker asks you to name three animals whose parts you want to combine (head, body, legs) — and then directs you to look at your phone, where it has created a cartoon of that hybrid critter.

Lookout

In the Accessibility yurt, a mockup of a kitchen and living room was the setting for demos of Lookout. This is a new app, coming soon, for people with sight impairments: you point your phone at anything (mug, bowl, cereal box, painting of a leopard, person), and the Assistant identifies it by name and position, so you know what you’re looking at. “Cheerios, 12 o’clock,” she’ll say. Your phone can hang in a sling around your neck, camera facing out, identifying everything it sees.

Microsoft’s (MSFT) Seeing AI app, for iPhone, does the same thing and is already available, but Google’s version has a few nice tweaks; for example, you can hold your hand over the lens briefly to shut off Lookout’s speaking, and then double-tap the phone to make it resume.

Smart Home Smarts

For months, reviews of the Amazon Echo and Google Home smart speakers inevitably concluded like this: “Unfortunately, Google’s devices fall down in the smart-home department. The Amazon can operate thousands more thermostats, door locks, lights, and other home devices than Google’s machine can.”

Google must have gotten really tired of hearing that. Nick Fox, VP of Google Assistant, says that since January, Google has increased compatibility with the world’s smart devices from 1,500 to 5,000 devices. “We want it to be in everything,” he told me. “We’re partnering with all the major manufacturers.” (Amazon, for its part, now says that Echo works with 12,000 smart home devices.)

That new breadth was on display in one of the yurts, where a rep speaking to his phone commanded lights to go on, music to start, an outdoor camera to turn on, and even misters to start spraying the houseplants.

TensorFlow on display

One of the most enlightening yurts was the TensorFlow tent. That’s Google’s machine-learning technology, which is open-source (free for anyone to download, use, and even adapt).

The exhibit I loved wasn’t anything fancy; it was a robotic toy car driving around and around a figure-8 track. But the guys who taught the car to self-drive aren’t Google employees or even professional engineers. They created this car kit (DonkeyCar.com) as a hobby, and made it available for anyone to set up themselves.

Machine learning takes many forms, but one of them is called behavioral cloning: You, the human, repeat some task over and over, and the machine learns how to do it by watching you. That’s how the DonkeyCars (and, in an infinitely more sophisticated way, self-driving cars) work. You drive them around a track for 20 minutes with a remote control—and after that, they can navigate the course themselves.

The figure-8 track had been set up by Google—the little cars had never seen it before arriving at the conference. But after 20 minutes of manual driving, the creators were able to turn off the remote control and let the car zoom around by itself.
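If you’re curious what “learning by watching” looks like under the hood, here’s a toy illustration in Python. This is not DonkeyCar’s actual code — the sensor features, numbers, and variable names are all made up — but it captures the core idea of behavioral cloning: record a human’s decisions, then fit a model that imitates them.

```python
import numpy as np

# Hypothetical sketch of behavioral cloning as plain supervised learning.
# Everything here is illustrative, not DonkeyCar's real pipeline.
rng = np.random.default_rng(0)

# "Demonstrations": 500 moments of human driving. Each row holds sensor
# features (say, lane offset and curvature readings); each label is the
# steering angle the human chose at that moment.
features = rng.normal(size=(500, 3))
true_policy = np.array([0.8, -0.5, 0.1])  # the human's (unknown) steering habit
steering = features @ true_policy + rng.normal(scale=0.01, size=500)

# "Learning by watching": fit a model that imitates the recorded choices.
learned_policy, *_ = np.linalg.lstsq(features, steering, rcond=None)

# The clone now steers on its own, closely matching the demonstrator.
print(np.allclose(learned_policy, true_policy, atol=0.01))  # → True
```

The real cars use a deep neural network over camera frames rather than a linear fit over three numbers, but the training loop is the same shape: twenty minutes of human driving produces the (features, steering) pairs, and the model does the rest.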

Waymo Getting Ready

Waymo, the self-driving car division of Alphabet (Google’s parent company), displayed two gleaming white self-driving vehicles — a Chrysler Pacifica minivan and a Jaguar I-Pace. Reps noted that Waymo is the only self-driving car company that currently has fully autonomous cars on U.S. roads with nobody — no safety driver, no engineer — behind the wheel.

When you summon a Waymo self-driving taxi, this Jaguar will come to pick you up.

In fact, the company’s Early Rider program, in Phoenix, has been operating a free self-driving taxi service for residents. It’s like Uber, except the car that picks you up has no driver in it. Waymo plans to launch a commercial self-driving taxi service later this year in limited areas.

Should be quite a ride!

David Pogue, tech columnist for Yahoo Finance, welcomes non-toxic comments in the comments below. On the web, he’s davidpogue.com. On Twitter, he’s @pogue. On email, he’s poguester@yahoo.com. You can sign up to get his stuff by email, here.  
