Now That a Pedestrian Has Been Killed, Should We Hit the Brakes on Self-Driving Technology?

A woman was struck and killed by an Uber self-driving test vehicle Sunday. Could the rush to roll out the technology end up delaying it?

At 10 p.m. on Sunday, 49-year-old Elaine Herzberg became the first pedestrian to be killed by a self-driving car. What we know is that she was walking with her bicycle in Tempe, a suburb of Phoenix, Arizona, and that as she crossed Mill Avenue, a self-driving test vehicle struck her at a speed of around 40 mph. The SUV belonged to Uber, the ride-sharing giant, and it was in autonomous mode, meaning that while a human driver was behind the wheel, the car was driving itself.


Herzberg was taken to the hospital, where she died from her injuries.

Questions prompted by the rapid rise of automated vehicle technology in recent years have suddenly become a lot more real. Why didn’t the automated car avoid the pedestrian? Was there a sensor failure? A faulty algorithm? Why didn’t the human “safety driver” prevent the crash? Was the car at fault? Or, as seems to be implied by coverage that takes pains to point out that she was not on a crosswalk, was the pedestrian really to blame? Speaking to the San Francisco Chronicle, Tempe Chief of Police Sylvia Moir noted gravely that “It is dangerous to cross roadways in the evening hour [outside crosswalks] when well-illuminated, managed crosswalks are available.”

It’s tempting to speculate on what happened and why, but the particulars of the case will take time to emerge. Uber hasn’t yet detailed the incident, and it could be months before the National Transportation Safety Board completes its analysis. (The agency took 16 months to wrap up its investigation of the 2016 fatal crash of a Tesla that was operating under its partial automation feature.) Two things, however, are clear: One is that the safety of self-driving cars, which is hard to gauge given the tiny sample size and murky data, still needs proving. Another is that self-driving technology aside, there are many things we can already be doing to reduce fatalities on our roads.

Humans behind the wheel, after all, are profligate purveyors of death — more than 37,000 people were killed in crashes on American roads in 2016. By comparison, self-driving vehicle advocates contend that with millions of miles of testing conducted so far, robots have already proven safer than humans. In particular, Uber’s arch-nemesis in the self-driving field, Waymo, a subsidiary of Google’s parent company, Alphabet Inc., appears to have an impressive record. Last May, the Rocky Mountain Institute reported that Waymo had a rate of at-fault crashes ten times lower than that of Americans aged 60 to 69 — the safest demographic of human drivers in the country.


That’s reassuring at first glance, but it’s worth noting that the publicly available data doesn’t really support clear-cut comparisons between robots and humans. Waymo’s vehicles, for instance, have been tested only on particular roads, mainly in the sunny, less densely populated southwestern states, and only when monitored by trained human drivers who can take over if it looks like something might go wrong. Comparing their safety record under such conditions to that of vehicles of all types driven by humans in all conditions on all roads across the United States is an apples-to-oranges exercise, and any pronouncements are tenuous at best.

As for Uber, Sunday’s fatality clearly hurts its numbers. The company’s test fleet has logged over two million miles, a figure it hit in late December. One death over that distance extrapolates to a rate of close to 50 fatal crashes per 100 million miles driven, which, caveats about apples and oranges aside, stands poorly against the figure for human drivers in the U.S. of 1.18 deaths per 100 million miles.
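The extrapolation above is simple arithmetic, sketched here for transparency. (The single-fatality figure makes the result illustrative rather than statistically meaningful.)

```python
# Back-of-envelope extrapolation: one fatal crash over roughly two million
# test miles, scaled to the standard per-100-million-mile rate.
fatal_crashes = 1
test_miles = 2_000_000

rate_per_100m = fatal_crashes * 100_000_000 / test_miles
print(rate_per_100m)  # prints 50.0

# Compare against the 2016 U.S. figure for human drivers.
human_rate_per_100m = 1.18
print(round(rate_per_100m / human_rate_per_100m))  # prints 42
```

With one data point, the naive rate comes out roughly 42 times the human baseline — which is precisely why the article cautions that such comparisons are tenuous.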

It’s possible that authorities will find that the crash in Tempe was unavoidable: According to the San Francisco Chronicle, Moir commented that “it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway.” But the police also reported that the car was traveling at 38 mph in a 35 mph zone and made no attempt to brake.

However blame for the crash is doled out, technology enthusiasts will warn against getting cold feet. Elon Musk, for one, warned in 2016 that anything that slows adoption of automated vehicles is “killing people.” In other words, if we want to save lives, we can’t hit the brakes on testing and deployment — we need to accelerate. But this strategy could backfire. Even while human drivers spread daily carnage, every death and injury where a robot is behind the wheel will inevitably attract intense scrutiny. (When did you last see so much coverage of a pedestrian death?) Tragedies like the one in Tempe could erode public trust and fuel a backlash against the technology. Rushing to get the tech on the roads could ultimately slow its adoption.

But those who want to make our roads as safe as possible, as soon as possible, can take heart: We already have the means. We can encourage walking, cycling, and public transit use, and we can make road system safety the responsibility not just of its users, but of road system designers — one of the key tenets of an international road safety program called Vision Zero. The aim of this approach: complete elimination of fatalities and serious injuries on the roads.


In Sweden, the birthplace of Vision Zero, the per capita rate of road fatalities is roughly a quarter of that in the U.S. This is the result of intentional system design features aimed at tempering driver behavior and reducing the consequences of human error: low urban speed limits, ample pedestrian facilities, safety barriers to protect bicyclists, innovative highway design, and strict traffic rule enforcement, among other features. Dozens of cities across Europe, Canada, and the U.S. — including Tempe — have nominally endorsed these concepts, though in most cases so far, actions have been far weaker than the words.

Self-driving vehicles will almost certainly help to reduce road fatalities, too — but developers would do well to proceed slowly, to avoid public backlash. And then, while they are cautiously proving and improving the technology, the rest of us can embrace common-sense road-safety strategies with the same urgency and zeal that we’ve seen so far in the frantic race to develop self-driving technology.


Antonio Loro is an urban planner specializing in helping cities prepare for automated vehicles. He is based in Vancouver, Canada.

Top visual: Angelo Merendino/AFP/Getty
See What Others Are Saying


    No. You can only take technology so far, and then you have to test it in real-world conditions. That time is now. It is unfortunate that accidents and fatalities will happen, but they have already been happening. That is the issue: are we happy with 30,000-40,000 fatalities, or are we willing to make changes that will substantially reduce those numbers? That’s what autonomous vehicles bring to the table. There are two situations that we are all going through right now. First, AI is written by humans, who are fallible. Second, we are already in a transition period with autonomous and non-autonomous vehicles on the road. That transition period is problematic, but unavoidable. So accidents are still unavoidable, but are in the process of being reduced. When all transportation modes are connected and communicating, vehicles will know what other vehicles are doing, and so will bicycles and pedestrians. We will all “know,” so the probability of any accident will eventually be extremely low. That kind of future is worth moving toward.


    Makes me wonder how autonomous vehicles will operate in rural areas such as mine, where deer and other wildlife do not use well-lit crossings.


    The same type of people who make your computer “work,” your cell phone “work,” and a hundred other things that need constant attention are now in charge of making cars drive on their own. Good luck.


    More biased headlines. Kill means “cause the death of.” Would you say that the ground killed someone who jumped off a building? Or would you say that the person killed themselves? It is completely incorrect to say that the car killed the pedestrian when the car isn’t at fault.

