SAN FRANCISCO — When you picture the future of transportation, you probably see lanes of autonomous cars whisking humans hither and yon along the highway, with no carbon-based life forms taking the wheel.
That’s a nice fantasy, but it leaves out the in-between: the era of transportation when self-driving cars share the road with humans. Those self-driving cars need to not only understand what other autonomous vehicles might do, but also predict the entire spectrum of human driving behavior.
Today at Inman Connect San Francisco, Anca Dragan of the University of California at Berkeley took the stage to describe how her lab is shaping human-robot interaction, and what it takes to teach a robot to anticipate and respond to human activity.
The merging problem
Consider what happens when a self-driving car tries to merge on a highway.
What it does will depend on what the cars in the lane it’s merging into are doing. Will they slow down to let the car in, or will they drive more aggressively and speed up, blocking the merge?
One way to build a robot, Dragan explained, is with an “interaction as obstacle avoidance” philosophy. Under this approach, the self-driving car “sees” every other car on the road as a moving obstacle and does its best to stay out of the way.
However, this leads to a self-driving car that’s overly defensive, she added.
Most people will “actually hit the brakes, slow down and let the other car move in as opposed to insisting on moving forward” when someone tries to merge into their lane.
“Modeling people like moving obstacles is modeling them like they don’t react to whatever you’re doing,” she explained.
“That leads to cars that are physically safe, but they tend to be on the defensive side.” (If you read about the Google car that got stuck at a four-way stop, this is how that happens: when every other car keeps inching forward at the intersection, the autonomous car just won’t go.)
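For the technically minded, here is a minimal sketch of what that philosophy looks like in code. The constant-velocity forecast, the candidate merge speeds and the safety gap below are all simplified assumptions for illustration, not the lab’s actual planner; the point is structural. Because the human’s predicted trajectory never depends on the robot’s choice, the planner has no way to reason about being let in, and often its only “safe” answer is to hang back.

```python
# Minimal sketch of "interaction as obstacle avoidance" (illustrative
# assumptions, not the actual planner from Dragan's lab). The human car
# is forecast in isolation, then treated as a moving obstacle.

def predict_human(pos, speed, horizon):
    # Constant-velocity forecast: the human is assumed NOT to react
    # to anything the robot does.
    return [pos + speed * t for t in range(horizon)]

def plan_merge(robot_pos, merge_speeds, human_traj, safe_gap=10.0):
    # Try the fastest merge first; require a safe gap at every timestep.
    for v in sorted(merge_speeds, reverse=True):
        robot_traj = [robot_pos + v * t for t in range(len(human_traj))]
        if all(abs(r - h) >= safe_gap for r, h in zip(robot_traj, human_traj)):
            return v
    return None  # no option keeps the gap: hang back, don't merge

# A faster human is closing from 30 m back; every merge speed the robot
# can muster eventually violates the gap, so the planner refuses to go.
human = predict_human(pos=-30.0, speed=12.0, horizon=8)
print(plan_merge(robot_pos=0.0, merge_speeds=[4.0, 8.0], human_traj=human))  # None
```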
Another stance, “interaction as centralized collaboration,” tells the self-driving car to treat the other humans on the road as “collaborators.” But Dragan noted that this model can assume a bit too much about human behavior, and it still doesn’t address the fact that the robot’s behavior will affect how the people around it react.
The middle ground
Robot-human interaction “doesn’t have to be either-or,” Dragan said. There’s a middle ground “where we actually embrace the fact that, unlike the obstacle avoidance case, our robot will have effects on what other people do.”
She called this model an “under-actuated system for interaction.”
Consider this: Whether you plan to do it or not, you will influence what other people on the road are doing when you drive with them — and you should take that influence into account to be a safe driver.
Under this model, the robot considers people “as having their own utility function” and then chooses “the trajectory that approximately optimizes that utility function.”
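In code, that nested reasoning might look something like the toy sketch below. The numbers, the two-term utility and the discrete actions standing in for full trajectories are all hypothetical choices of mine, not Dragan’s formulation; what matters is the structure. The robot predicts the human’s best response to each of its own candidate actions, then picks the action whose predicted response works out best.

```python
# Toy sketch of the "underactuated" interaction model (hand-made
# numbers for illustration; not Dragan's actual formulation). The human
# is modeled as maximizing a utility that depends on the robot's action.

HUMAN_SPEEDS = [0.0, 5.0, 10.0]   # candidate human responses
ROBOT_NUDGES = [0.0, 0.5, 1.0]    # how far the robot edges into the lane

def human_utility(speed, robot_nudge, aggressiveness=0.3):
    # Hypothetical utility: humans like making progress, but dislike
    # keeping speed while another car is edging into their lane.
    progress = speed
    risk = speed * robot_nudge
    return aggressiveness * progress - (1 - aggressiveness) * risk

def human_response(robot_nudge):
    # The predicted human action: whatever maximizes THEIR utility,
    # given the robot's action. This reactivity is exactly what the
    # obstacle-avoidance model throws away.
    return max(HUMAN_SPEEDS, key=lambda s: human_utility(s, robot_nudge))

def robot_plan():
    # Score each robot action by the outcome it induces: the robot
    # wants to get into the lane while the human slows down.
    def merge_score(nudge):
        return nudge - 0.1 * human_response(nudge)
    return max(ROBOT_NUDGES, key=merge_score)

print(robot_plan())  # 1.0 -- the robot nudges in, predicting the human yields
```

The design difference from the obstacle-avoidance sketch above is that `human_response` takes the robot’s action as an input, so influencing the human becomes part of the plan rather than something to avoid.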
Giving the robot autonomy to act
She discussed machine learning as an example. Not every driver is an “average” driver, so her lab collected driving data to try to figure out what people are “optimizing” for in their drive. Is a given driver aggressive or safe, and how can you tell?
“When you drive next to a person and try to estimate their driving style — are they aggressive, or will they let me in when I try to merge? You won’t get a lot of information,” she said. When you’re just driving next to them, aggressive and defensive drivers will behave almost exactly the same.
But what happens when the robot starts slowly inching into the next lane?
Then you get clearer data: Defensive drivers brake to let the merger in, and aggressive drivers don’t — they might even accelerate. Either way, “the car can update its model and react,” Dragan said.
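One simple way to picture that update is as a Bayesian belief over driving style. The sketch below uses made-up probabilities purely for illustration, not the lab’s actual estimator: cruising side by side is uninformative because both styles look the same, while a reaction to the probe shifts the belief sharply.

```python
# Sketch of updating a belief about driving style after the robot
# inches into the lane (hypothetical probabilities for illustration).

# Prior belief before the probe.
belief = {"defensive": 0.5, "aggressive": 0.5}

# Assumed likelihood of each observed reaction given the style.
LIKELIHOOD = {
    "cruise_normally": {"defensive": 0.5, "aggressive": 0.5},
    "brake":           {"defensive": 0.8, "aggressive": 0.1},
    "accelerate":      {"defensive": 0.1, "aggressive": 0.7},
}

def update(belief, observation):
    # Bayes' rule: posterior is proportional to likelihood times prior.
    posterior = {style: LIKELIHOOD[observation][style] * p
                 for style, p in belief.items()}
    total = sum(posterior.values())
    return {style: p / total for style, p in posterior.items()}

print(update(belief, "cruise_normally"))  # still 50/50: passive driving tells you little
print(update(belief, "brake"))            # ~0.89 defensive: the probe paid off
```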
They’ve even tested driving behavior that would be considered odd or abnormal in a human — for example, inching backward at an intersection (if there are no cars behind the self-driving vehicle).
Dragan described this as “very polite behavior,” inviting the other cars at the intersection to go ahead and move through “instead of just sitting there and playing this ‘do I go, do you go’ kind of game.”
“This inching-backward behavior is super cool because it incentivizes people to go through, because they realize ‘this car is clearly not going, I should be going,’” she added.