I’m standing in a kitchenette in a Google office in Mountain View, California, watching a robot at work. It’s staring at items on a counter: sparkling water, a bag of whole grain chips, an energy drink, a protein bar. After what seems like an eternity, it extends its arm, grabs the chips, rolls a few feet and drops the bag in front of a Google employee. “I’m ready,” it says.
This snack delivery, which the bot performed at a recent press conference, may not seem like a great feat of robotics, but it’s an example of Google’s progress in teaching robots how to be helpful: not by programming them to perform a set of well-defined tasks, but by giving them a broader understanding of what people might ask and how they should respond. That’s a much more demanding AI challenge than a smartphone assistant such as the Google Assistant, which responds to a limited, carefully worded set of commands.
The robot in question has a tubular white body, a gripping mechanism at the end of its single arm, and wheels. The cameras it has where we have eyes give it a certain anthropomorphism, but it seems designed primarily for practicality. It was made by Everyday Robots, part of Google’s parent company Alphabet; Google has teamed up with its robot-centric corporate sibling on the software side of the challenge of making robots useful. This research is still early and experimental; in addition to tasks such as finding and retrieving items, it involves training bots to play ping-pong and catch racquetballs.
And now Google is sharing news about its latest milestone in robotic software research, a new language model called PaLM-SayCan. (The “PaLM” stands for “Pathways Language Model.”) Developed in collaboration with Everyday Robots, this software gives the company’s robots a broader understanding of the world, enabling them to respond to human requests such as “Bring me a snack and something to wash it down with” and “I spilled my drink, can you help?” That requires the robot to understand spoken or typed statements, tease out the ultimate goal, break it down into steps, and accomplish them using whatever skills it has.
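At a high level, that planning loop pairs a language model’s sense of which skill would be useful next with the robot’s own estimate of which skills it can actually pull off right now. The toy sketch below illustrates the idea only; it is not Google’s code, and the skill names and scoring functions are invented placeholders standing in for the real language model and learned feasibility estimates.

```python
# Toy sketch of a SayCan-style planning loop (placeholder scoring, not Google's code).
SKILLS = ["find chips", "pick up chips", "bring to user", "done"]

def llm_usefulness(instruction, history, skill):
    # Placeholder: a real system would ask a large language model how likely
    # `skill` is to be a sensible next step toward fulfilling `instruction`.
    order = {"find chips": 3, "pick up chips": 2, "bring to user": 1, "done": 0}
    return 1.0 if order[skill] == 3 - len(history) else 0.1

def affordance(skill):
    # Placeholder: a real system would use a learned value function to estimate
    # the probability the robot can complete `skill` from its current state.
    return 1.0

def plan(instruction, max_steps=8):
    history = []
    for _ in range(max_steps):
        # Choose the skill with the best combined score: usefulness x feasibility.
        best = max(SKILLS,
                   key=lambda s: llm_usefulness(instruction, history, s) * affordance(s))
        if best == "done":
            break
        history.append(best)
    return history

print(plan("Bring me a snack"))  # ['find chips', 'pick up chips', 'bring to user']
```

The key design point is the multiplication: a step the language model loves is still rejected if the robot judges it infeasible in its current surroundings.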
According to Google, the current PaLM-SayCan research marks the first time robots have had access to a large-scale language model. Compared to previous software, the company says, PaLM-SayCan makes robots 14% better at planning tasks and 13% better at completing them successfully. Google has also seen a 26% improvement in robots’ ability to plan tasks that involve eight or more steps, such as responding to “I left out a soda, an apple, and water. Can you throw them away and then bring me a sponge to wipe the table?”
Not coming soon to a home near you
While Everyday Robots’ bots have done some useful work, such as sorting waste at Google offices, for some time now, the whole effort still revolves around learning how to teach the bots to teach themselves. In the demos I saw at the recent press conference, the robot performed its snack-retrieval tasks so slowly and methodically that you could practically see the wheels turning in its head as it worked through the task one step at a time. As for the ping-pong and racquetball research, Google doesn’t see a market for athletic robots; rather, these activities require both speed and precision, making them good proxies for all kinds of actions robots will need to learn to handle.
Google’s emphasis on research over ambitions to get something to market right away contrasts with the strategy of Amazon, which already sells Astro, a $999 home robot, on an invitation basis. As it stands, Astro does only a few things and is little more than an Alexa/security-camera gadget on wheels; when my colleague Jared Newman tried one at home, he struggled to find uses for it.
Google Research robotics leader Vincent Vanhoucke told me the company is not yet at the point of trying to develop a robot for commercial release. “Google is trying to be a company focused on providing access to information and helping people in their daily lives,” he says. “You could imagine a huge overlap between Google’s overarching mission and what we do in terms of more concrete goals. I think we’re really at the level where we’re providing opportunities and trying to understand what opportunities we can provide. It’s still a quest of ‘what are the things the robot can do? And can we broaden our imagination about what’s possible?’”
In other words, don’t assume you’ll be able to buy a Google robot anytime soon, but stay tuned.