Video of a sidewalk delivery robot breaching yellow caution tape and rolling through a Los Angeles crime scene went viral this week, garnering more than 650,000 views on Twitter and sparking debate over whether the technology is ready for prime time.
It turns out that the robot’s error, at least in this case, was caused by humans.
The video of the event was taken and posted to Twitter by William Gude, the owner of Film The Police LA, a police watchdog account in Los Angeles. Gude was near the scene of a suspected shooting at Hollywood High School around 10 a.m. when he captured the bot on video as it hovered at the street corner, looking confused, until someone lifted the tape, letting the bot continue on its way through the crime scene.
Uber spinout Serve Robotics told londonbusinessblog.com that the robot’s self-driving system did not decide to cross into the crime scene. That was the choice of a human operator who was remotely controlling the bot.
The company’s delivery robots have so-called Level 4 autonomy, which means that they can drive themselves under certain conditions without a human taking over. Serve has been testing its robots with Uber Eats in the area since May.
Serve Robotics has a policy that requires a human operator to remotely monitor and assist its bots at every intersection. The operator will also take remote control if a bot encounters an obstacle, such as a construction zone or a fallen tree, and cannot figure out how to navigate around it within 30 seconds.
In this case, the bot, which had just completed a delivery, approached the intersection and a human operator took over, in line with the company’s internal operating policy. The operator initially paused at the yellow caution tape. But when bystanders lifted the tape and apparently “waved it through,” the operator decided to proceed, Serve Robotics CEO Ali Kashani told londonbusinessblog.com.
“The robot would never have crossed (by itself),” Kashani said. “There’s just a lot of systems in place to make sure it never crosses until a human gives permission.”
The lapse in judgment here is that someone decided to keep crossing, he added.
Regardless of the reason, Kashani said it shouldn’t have happened. Serve has extracted data from the incident and is working on a new set of protocols for humans and AI to prevent this in the future, he added.
A few obvious steps are to make sure employees follow standard operating procedure (SOP), including proper training, and to develop new rules for what to do if a person tries to wave a robot through a barricade.
But Kashani said there are also ways to use software to prevent this from happening again.
Software can be used to help people make better decisions, or to avoid an area altogether, he said. For example, the company could work with local law enforcement to send real-time information about police incidents to the robots so they can route around those areas. Another option is to give the software the ability to identify law enforcement officers, then alert human decision makers and remind them of local laws.
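To make the first idea concrete, here is a minimal sketch of what such a guardrail could look like: a planned route is checked against a feed of incident zones, and any waypoint inside a zone is flagged before the robot proceeds. Everything here is hypothetical — the function names, the (lat, lon, radius) incident format, and the sample coordinates are illustrative assumptions, not anything from Serve Robotics or the LAPD.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))


def route_blocked(route, incidents):
    """Return the first waypoint inside any incident zone, or None.

    route:     list of (lat, lon) waypoints
    incidents: list of (lat, lon, radius_m) circles from a hypothetical
               police-incident feed
    """
    for lat, lon in route:
        for inc_lat, inc_lon, radius_m in incidents:
            if haversine_m(lat, lon, inc_lat, inc_lon) <= radius_m:
                return (lat, lon)
    return None


# Illustrative data: a 100 m incident zone and two candidate routes.
incidents = [(34.0983, -118.3267, 100.0)]
route_ok = [(34.1000, -118.3300), (34.1010, -118.3310)]  # skirts the zone
route_bad = [(34.0983, -118.3268)]                        # enters the zone

print(route_blocked(route_ok, incidents))   # None -> safe to proceed
print(route_blocked(route_bad, incidents))  # flagged waypoint for human review
```

In a real system the flagged waypoint would presumably trigger a reroute or escalate to a human operator rather than simply print, but the core check — route versus geofenced incident zones — is as simple as shown.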
These lessons will be critical as the robots progress and expand their operational domains.
“The funny thing is that the robot did the right thing; it stopped,” Kashani said. “So this really goes back to giving people enough context to make good decisions until we’re sure we don’t need people to make those decisions.”
Serve Robotics’ bots haven’t reached that point yet. However, Kashani told londonbusinessblog.com that the robots are becoming more independent and mostly operate on their own, with two exceptions: intersections and blockages of some kind.
The scenario that unfolded this week runs counter to how many people view AI, Kashani said.
“I think the story in general is that people are really great in edge cases and then AI makes mistakes, or maybe it’s not ready for the real world,” Kashani said. “Funnily enough, we learn the opposite, which is that we notice that people make a lot of mistakes and that we should rely more on AI.”