How Do We Secure Self-Driving Cars?

It was 9:58 p.m. on March 18th, 2018, on a road in Tempe, Arizona. The sky was particularly dark, and the road poorly lit. Elaine Herzberg, 49 years old, was crossing with a bicycle, carrying shopping bags. The dashcam footage tells the rest of the story.

Driverless technology is fast approaching. We as a society will have to come to a consensus about what level of security is sufficient before autonomous automobiles are allowed on our roads en masse. Right now, the instinct seems to be to distrust the machines until they’re made nearly flawless. Just days after Herzberg’s death, for example, Uber, the company behind the self-driving car that struck her, abandoned its entire Arizona-based project. Was this a reasonable reaction? It depends on what you value, and whom you ask.

What is the best way to approach the threats posed by self-driving cars? Interestingly enough, there is historical precedent for the very problem we’re facing.

At the turn of the 20th century, Europeans weren’t quite sure what to make of a radical new invention: horseless carriages. In urban areas, people who’d never before had to think twice about crossing the street, or about allowing children to play in the middle of the road, now had to adjust to a new reality.

The threat posed by the automobile was real. However, since it was such a new technology, most people didn’t have the tools to rationally consider how best to stay safe. There was, after all, no precedent to follow.

In the first years of driving in the U.K., the law stated that cars could not travel more than two miles per hour in urban areas, and four in rural areas. Not only that: a man was to walk sixty yards ahead of the automobile, waving a red flag to indicate that a machine was on its way. More than in idiosyncratic laws, though, it was in public perception that the automobile faced its most ardent pushback. From Brian Ladd’s Autophobia:

In 1904, a German motor journal deplored the press’s reference to “automobile accidents” even when the automobile was not the cause: “The noble horse, despite all its virtues still stupider than a motorist, remains untouchable, although it has been proved a hundred times that horses and horse-drawn wagons cause more accidents than automobiles.” A similar lament appeared in an Italian auto magazine in 1912: “Horses, trams, trains can collide, smash, kill half the world, and nobody cares. But if an automobile leaves a scratch on an urchin who dances in front of it, or on a drunken carter who is driving without a light,” then woe to the motorist.

The automobile was held to a much higher standard than any other means of transportation, not because it was less safe, but because it was less familiar.

When a self-driving Uber struck Elaine Herzberg on March 18th of last year, the headlines painted a grim picture. “Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam”. “Video shows Uber robot car in fatal accident did not try to avoid woman”. Those unfamiliar with the intricacies of self-driving technology were left to ask: were robot cars mowing down pedestrians without even trying to avoid them? If so, it’s certainly understandable that Uber should have had to close down its Arizona operation, reassess its technology, and maybe even face criminal charges.

The headlines unintentionally revealed something else, though: something deep-rooted in how we think about autonomous car accidents.

Self-driving is not a technological problem so much as it is a responsibility problem.

Even a relatively innocuous headline, “Self-driving Uber car hits, kills pedestrian in Tempe”, makes an insinuation: that the car did the hitting and the killing. That may be literally true, but in the case of Herzberg, it tells only part of the story.

A whole host of human errors, from the driver, the pedestrian, and the car’s engineers to Uber and the Arizona state government, contributed to that crash.

Ms. Herzberg was crossing a four-lane road in the black of night, outside of any pedestrian crosswalk. That was clearly a risky move. (In fairness, we would expect the car’s LiDAR system to have detected her, and the car to have braked accordingly.)

Rafaela Vasquez, the Uber employee sitting in the driver’s seat, didn’t see Ms. Herzberg until the moment the car hit her. She was looking down at a screen. She told investigators that she was “monitoring the self-driving interface and that while her personal and business phones were in the vehicle, neither were in use until after the crash.” Later reports indicate that she was, in fact, watching The Voice. (In fairness, the car itself was classified as “Level 3”: the class of driverless vehicle that, in theory, requires only occasional manual control from its handler.)

Interestingly enough, according to postmortem analysis, the car registered Ms. Herzberg a full six seconds before hitting her. In other words, it had plenty of time to stop. Why didn’t it? Counterintuitively, the car had been programmed not to emergency brake on its own. Uber engineers had disabled the auto brakes in order “to reduce the potential for erratic vehicle behavior.” At the same time, they had no system in place for alerting the driver to impending danger.
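To make that failure mode concrete, here is a minimal, hypothetical sketch of a perception-to-action loop in Python. Everything in it (the names, the flags, the six-second threshold) is an illustrative assumption, not Uber’s actual software; the point is simply how a timely detection can lead to no action at all when the emergency-braking branch is switched off and no operator alert is wired in.

```python
# Hypothetical illustration only -- not Uber's actual code or architecture.
# It sketches how a timely detection can still produce no action when
# emergency braking is disabled and no operator-alert system exists.

EMERGENCY_BRAKING_ENABLED = False  # disabled "to reduce the potential for erratic vehicle behavior"
OPERATOR_ALERTS_ENABLED = False    # no system for warning the safety driver existed

def control_loop(detection, time_to_impact_s):
    """Decide what to do once the perception stack reports an obstacle."""
    if detection is None:
        return "continue"            # nothing detected, keep driving

    if time_to_impact_s > 6.0:
        return "track"               # obstacle still far away, keep tracking it

    # The obstacle is close enough that braking is warranted.
    if EMERGENCY_BRAKING_ENABLED:
        return "emergency_brake"     # the safe branch, but it was switched off

    if OPERATOR_ALERTS_ENABLED:
        return "alert_operator"      # the fallback branch, but this didn't exist either

    return "rely_on_operator"        # detection made ~6 seconds out, yet nothing happens

print(control_loop(detection="pedestrian_with_bicycle", time_to_impact_s=6.0))
# -> rely_on_operator
```

Run with a detection six seconds from impact, the sketch returns “rely_on_operator”: the software saw the obstacle in time, but both branches that could have acted on that detection were turned off.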

Not long after the crash, The Guardian published emails revealing a more-than-comfortable relationship between Uber and Arizona Governor Doug Ducey. Together, they kept their joint venture largely under wraps and outside of any strict regulatory framework. (Perhaps, had the experiments been exposed to more light, the problems that led to the March 18th crash could have been stamped out before a vehicle was allowed on the road.)

Each of us can decide for ourselves how much responsibility each party in this story bears: the pedestrian crossing a four-lane road at night, the driver watching The Voice behind the wheel, the engineers who set her up for failure, or the powerful men who failed to exercise proper caution. No matter how you look at it, though, one thing is clear: Herzberg’s death was the result not of a technical failure, but of a failure in human responsibility.

How, then, does society address the problem of responsibility in self-driving? With so little precedent, it’s easy to take an emotional view: to banish a company for one mistake, or write off the technology entirely. The Tempe case, however, can provide a better roadmap.

All driverless cars are powered by software designed by human engineers. If those engineers work with complete transparency, and their programs are tightly scrutinized, then they’re far less liable to slip up.

Companies tend not to be friendly to regulation, but in cases where lives are on the line, having rules to follow benefits both sides: it forces car companies to uphold the highest safety standards, while relieving those companies of legal liability in cases of human error.

Self-driving cars are still unfamiliar to us, but they tend to be much safer than ordinary vehicles. Eventually we will move past the hysteria, much as our great-grandparents moved past theirs. When we do, the world will be better for it. One and a quarter million people die in car crashes every year, but few among us would prefer a world without cars. Self-driving cars will kill far fewer people, and they’ll be so ubiquitous one day that we won’t ever want to go back.

 

About the author: 
Nathaniel Nelson writes the internationally top-ranked “Malicious Life” podcast on iTunes, hosts programs on blockchain and SCADA security, and contributes to AI and emerging tech blogs.