Saturday, January 02, 2016

The Predator and the Google car

[From "Our Robots, Ourselves: Robotics and the Myths of Autonomy" by David A. Mindell .]


The Predator presents an image of an autonomous ethereal presence, loitering with murderous intent over the distant battlespace. The view from inside the ground-control trailer? It's a pig to drive.
"Predator remains prone to computer crashes and lockups during critical periods. Crews learn through difficult experience not to press certain key combinations, not to issue commands too quickly, not to confuse this button with the one next to it. It takes several steps to do simple things. It takes more than twenty keystrokes, for example, simply to turn on the aircraft's autopilot. "The conjecture among us pilots," observes one operator, "is [that] the engineers thought we were too stupid, and we would be idiots and be hitting buttons all the time and doing dumb things, so they tried to put a two-step process into everything that we do."

"Manuals are lengthy and unclear. Some important features are hidden in the system's code and documented nowhere, passed down through word of mouth between the operators in a kind of oral tradition of stories of workarounds to make the system perform. No small part of the operators' skill is simply making the system do things it was not designed to do."
Meanwhile, Google has much of the world believing that autonomous cars will take over in just a few years. But this is part utopian engineering optimism and part smoke and mirrors.
"Google has been testing self-driving cars on California roads since 2009, claiming hundreds of thousands of miles of accident-free highway driving. They travel routes mapped with great precision by Google's human-driven survey cars; the maps serve as virtual railway tracks for the cars (indeed, they are as yet unable to drive on roads without these detailed maps). The drives have included human safety drivers and software experts who can turn the autonomy on and off. "The idea was that the human drives onto the freeway, engages the system, [and] it takes them on the bulk of the trip—the boring part—and then they re-engage," said Google engineer Nathaniel Fairfield.
...
"Google's rhetoric around the project has the kind of Silicon Valley optimism that typically surrounds software systems. Roboticist Sebastian Thrun, lead engineer for the project, envisions a future of utopian autonomy "without traffic accidents or congestion."

"A number of critics of Google's approach have pointed out its limitations. Most of the work has been done in northern California or other western states. The Google car's successful driving tests in Nevada were run under tight constraints from the company for good weather and simple routes (the company also sought to avoid disclosure of details of the safety drivers' disengagements of the autonomous operation). The vehicle's algorithms had difficulty negotiating construction sites, requiring the safety driver to take control. Technology journalist Mark Harris has recently shown that becoming a safety driver for one of these cars can require weeks of training, suggesting the computer to human hand-offs remain complex and risky.

"In contrast to the wide-open West of car commercials and Google's trials, urban driving entails a great deal of social interaction, as we drive through a messy, complicated, and dynamic physical and social landscape. Google admits this problem is ten to one hundred times harder than driving on highways. Once again, autonomy within a human context proves much tougher than the abstracted technical problem.

"MIT's John Leonard, who helped develop some of the basic algorithms that driverless cars use for localization and mapping, points out how much driving depends on social interaction. My late friend Seth Teller, formerly an MIT roboticist, perceptively observed that urban driving consists of hundreds of "short-lived social contracts between people," as we scan the streets, make eye contact, let people in and wave "thank you." Computers are slowly getting better at assigning labels to the physical world and to different kinds of objects. Yet as Predator pilots can attest, techniques to similarly interpret human identities and intentions remain primitive.
...
"Only half joking, Leonard contends that driving in Boston can be considered operating in an extreme environment. He put a video camera on the dashboard of his car and is collecting examples of driving situations that are difficult for algorithms to handle: merging onto a busy road at rush hour; staying within road lines obscured by dust or snow; turning left across several lanes of traffic. In the snowy Boston winter of 2015, the three-dimensional landscape of urban driving would change overnight, as snow piles nine feet high narrowed the roads and altered traffic patterns.
...
"What have we learned from extreme environments that might shed light on possible futures for autonomous cars? We know that driverless cars will be susceptible to all of the problems that surround people's use of automation in the environments we have examined—system failures, variability of skills among users, problems of attention management, the degradation of manual skills, and rising automation bias as people come to rely on automated systems."
The classic failure mode: some unexpected human action or other unpredictable event occurs in the driving environment; the automation gives up, handing control to the disengaged and inattentive driver; the situation rapidly spirals to disaster; and the verdict, inevitably, is driver error.
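
A back-of-the-envelope simulation makes the timing problem concrete. Every number in it is an assumption, chosen only to show the shape of the argument:

    # Minimal hand-off simulation; all figures are invented. The point is
    # that the automation's time-to-failure after it gives up can be far
    # shorter than a disengaged driver's time-to-respond.
    import random

    TAKEOVER_BUDGET_S = 3.0  # assumed seconds before the situation is unrecoverable

    def reaction_time(attentive: bool) -> float:
        # Invented distributions: an attentive driver responds in about a
        # second; a driver lulled by hours of automation takes far longer.
        return random.gauss(1.0, 0.3) if attentive else random.gauss(6.0, 2.0)

    def crash_rate(trials: int, attentive: bool) -> float:
        crashes = sum(reaction_time(attentive) > TAKEOVER_BUDGET_S for _ in range(trials))
        return crashes / trials

    random.seed(0)
    print("attentive driver: ", crash_rate(10_000, attentive=True))
    print("disengaged driver:", crash_rate(10_000, attentive=False))

The better the automation, the rarer and more abrupt the hand-offs, and the less attentive the driver when one finally comes.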

So don't hold your breath.



David Mindell's book is, by the way, excellent.
