The Real Flaw in Tesla’s Autopilot

The very successes of the Autopilot system can lull the driver into a false sense of security.

Last Thursday, the electric car maker Tesla Motors disclosed that 40-year-old Joshua Brown had died in a crash in Florida while using the “Autopilot” system in one of its Model S cars. The feature allows the car to control its own steering and speed, but apparently neither the automated system nor Brown noticed the tractor-trailer crossing the road ahead. In a statement on its website, Tesla lamented the loss of life but also emphasized that it warns drivers to keep their hands on the steering wheel at all times and to stay vigilant while using the automated driving feature.


Whether such admonishments are practical is now a looming legal question, but the tragic incident sheds light on what might be a crucial flaw in the emerging ecosystem of partially self-driving vehicles. Features like Autopilot, it seems, might not adequately perform what is most critical in such a technology — more critical than even the driving itself: keeping the driver engaged and alert.

“[W]hen used in conjunction with driver oversight,” the company declared in its explanation of the incident, “the data is unequivocal that Autopilot reduces driver workload and results in a statistically significant improvement in safety when compared to purely manual driving.” The company also pointed out that this was the first known fatality in over 130 million miles of Autopilot driving; the feature has been available as an option for Model S owners since October 2015. By comparison, the Tesla statement noted, there is a fatality every 94 million miles among all vehicles in the U.S.

But those statistics are less reassuring than they might appear.

In the first place, if we want strong evidence that Autopilot improves safety, 130 million miles of fatality-free driving isn’t nearly enough. In an analysis published earlier this year, the RAND Corporation determined that “autonomous vehicles would have to be driven hundreds of millions of miles — and sometimes hundreds of billions of miles — to demonstrate their reliability in terms of fatalities and injuries.” Applied to the numbers provided by Tesla, that means we can have, at best, a 75 percent level of confidence that Teslas with Autopilot switched on will be involved in fatal crashes less frequently than the average human-driven vehicle.

If we want 99 percent confidence, we would need roughly 400 million miles of fatality-free driving. And to know that our findings aren’t just due to chance – if we want the conclusion to be statistically significant – we need vastly more. To demonstrate with just a 95 percent level of confidence that an automated feature results in a fatality rate 20 percent lower than the human-driver rate, the RAND analysis suggests you would need to look at 215 billion miles of driving.
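RAND’s full analysis is more involved, but a simple Poisson model of rare crashes (an assumption on my part, not necessarily RAND’s exact method) reproduces the rough magnitudes: if fatal crashes occurred at the human-average rate of one per 94 million miles, the chance of a 130-million-mile fatality-free stretch arising by luck alone would be about 25 percent, leaving roughly 75 percent confidence that the true rate is lower. The sketch below, which treats Autopilot’s first 130 million miles as fatality-free for the sake of the illustration, runs those numbers.

```python
import math

HUMAN_FATALITY_RATE = 1 / 94e6  # U.S. average cited above: one fatality per 94 million miles

def confidence_rate_is_lower(fatality_free_miles: float) -> float:
    """Confidence that the true fatality rate is below the human average,
    given a stretch of fatality-free driving, under a simple Poisson model."""
    # If the fleet actually matched the human rate, the chance of seeing zero
    # fatalities in N miles would be exp(-rate * N).
    p_zero_by_luck = math.exp(-HUMAN_FATALITY_RATE * fatality_free_miles)
    return 1 - p_zero_by_luck

def miles_needed(confidence: float) -> float:
    """Fatality-free miles needed to reach a given confidence level."""
    return -math.log(1 - confidence) / HUMAN_FATALITY_RATE

print(round(confidence_rate_is_lower(130e6), 2))  # ~0.75: the 75 percent figure above
print(round(miles_needed(0.99) / 1e6))            # ~433 million miles: roughly the 400 million cited
```

Demonstrating a relative improvement is harder still: to show that the fatality rate is, say, 20 percent lower than the human rate, you need to observe enough fatalities to pin down both rates precisely, which is what pushes the RAND estimate into the hundreds of billions of miles.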

All that math aside, the underlying idea is straightforward: What we know so far is that one fatality occurred after 130 million miles of Autopilot. We have no way of knowing that another fatality won’t come just 13 million miles later (or 200 million miles later, for that matter).

Secondly, Tesla cites the fatality rate “among all vehicles.” But newer, higher-end cars like the Model S tend to be safer, and Autopilot is designed to be used mainly in fair weather (its sensors have trouble with rain and snow), on well-maintained roads, and in less complex traffic. For a fair comparison, then, we would need to compare Teslas using Autopilot with relatively new cars being driven by humans in similarly ideal conditions.

Statistically speaking, comparing apples to apples this way might eliminate Autopilot’s apparent safety superiority entirely.

Most crucially, Autopilot isn’t a fully fledged self-driving technology that gives you free rein to surf the web or nap. Tesla warns drivers to pay attention at all times and to be ready to take over control of the vehicle, even after switching on Autopilot. That would seem to all but immunize Tesla against blame, but it’s not so straightforward – particularly given that humans are exquisitely bad at maintaining unwavering watchfulness over an automated system.

Decades of research have been devoted to studying such “human factors” limitations of automated technologies, from factory robotics to aircraft piloting. One early study, for example, found a clear “monitoring inefficiency” among human test subjects during a flight simulation exercise: participants were more likely to detect an automation failure in the first 10 minutes of a 30-minute session than in the last 10 minutes, suggesting that human vigilance deteriorates the longer an automated system performs without error.

And that’s the real challenge for Tesla’s Autopilot – and for partially automated driving in general.

Keeping an eye on a car that’s driving itself might sound simple enough. But after an automated vehicle covers miles and miles with nary a blip, drivers will inevitably begin to pay less attention. And then – with no warning – a situation that the system can’t handle arises, demanding instant action from a suddenly overwhelmed driver. Tesla may insist that drivers keep their hands on the wheel and their eyes peeled for trouble, but if the company’s technology performs as intended, it’s far more likely to convince drivers that it’s OK to let their guard down.

The very successes of the system lull the driver into a false sense of security.

While the details of the crash are still not fully known, it’s quite possible that this is what happened to Brown. In a video he posted to YouTube in April, he credited Autopilot with saving him from a side collision with a boom lift truck. “I actually wasn’t watching that direction,” Brown noted.

Since reliably monitoring an automated system is difficult for humans, it’s essential to compensate for that recognized weakness and give drivers the support they need to keep their minds on the task. It’s not enough to wave a warning finger at the driver and expect them to defy human nature by tenaciously maintaining vigilance over the system. The technology must be easy to use well and hard to use poorly: it must be designed in a way that unambiguously demands driver engagement and discourages distraction and overconfidence in the system. In other words, while it might be acceptable for the system to be good but somewhat failure-prone at actually driving the car, it is of paramount importance that the system not fail at keeping the driver engaged at all times.

It’s questionable whether Autopilot accomplishes this. The system does occasionally check whether the driver’s hands are on the wheel, providing both visual and audible alerts to remind the driver to keep them there. If the system detects that the driver is going hands-free, Autopilot will slow the car down.

But these measures might not be enough. Drivers can go several minutes without touching the wheel, for instance, and YouTube is full of examples of drivers misusing the technology in creative ways. In Brown’s case, it has been speculated that he was watching a DVD at the time of his fatal crash.

With improved sensors and algorithms, technologies like Autopilot will become capable of handling a wider range of situations on the road, and it’s likely that a future version of Autopilot will be able to avoid crashes like the one that killed Brown. (Tesla has said that Autopilot could not detect the white broadside of the tractor-trailer set against a brightly lit sky.) But until an automated system is so reliable that it has no need for constant human supervision, it will be imperative, above all, to design the system in a way that ensures the driver is engaged.

This might mean imposing more stringent requirements for the driver to keep their hands on the wheel, giving drivers more frequent and urgent alerts, or using eye-tracking or similar driver-monitoring technologies to ensure the driver is paying attention to the road. Adding such restrictions would no doubt make Autopilot less attractive – you’d be less inclined to use a feature that purports to drive the car in certain conditions but constantly nags you to put your hands back on the wheel. That cost in convenience, though, is probably worth the decreased risk to those using partially automated features and to those on the roads around them.
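To make that idea concrete, here is one hypothetical shape such measures could take – a rough sketch of an escalating engagement monitor, not a description of Tesla’s actual implementation. The time thresholds, the eye-tracking input, and the specific responses are all illustrative assumptions.

```python
from dataclasses import dataclass

# Illustrative thresholds (assumptions for this sketch, not Tesla's actual parameters)
HANDS_OFF_WARN_S = 15     # visual alert after 15 s without hands on the wheel
HANDS_OFF_CHIME_S = 30    # add an audible alert after 30 s
HANDS_OFF_SLOW_S = 45     # begin slowing the car after 45 s
EYES_OFF_ROAD_WARN_S = 3  # hypothetical eye-tracking: alert after 3 s looking away

@dataclass
class DriverState:
    hands_off_s: float  # seconds since hands were last detected on the wheel
    eyes_off_s: float   # seconds since gaze was last detected on the road

def engagement_action(state: DriverState) -> str:
    """Return the most urgent response warranted by the driver's current state."""
    if state.hands_off_s >= HANDS_OFF_SLOW_S:
        return "slow the car and prepare to hand back control"
    if state.hands_off_s >= HANDS_OFF_CHIME_S or state.eyes_off_s >= EYES_OFF_ROAD_WARN_S:
        return "audible alert"
    if state.hands_off_s >= HANDS_OFF_WARN_S:
        return "visual alert"
    return "no action"

# Example: hands off the wheel for 35 s, eyes on the road -> escalate to an audible alert
print(engagement_action(DriverState(hands_off_s=35, eyes_off_s=0)))
```

The point of such a design is not the particular numbers but the escalation: the system never lets inattention accumulate quietly, and it responds to lapses quickly enough that the driver cannot settle into treating supervision as optional.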

Finally, we need to do a better job of clearly explaining to drivers what the technologies can and cannot do. As it stands, drivers are getting mixed messages: They’re warned to stay watchful, but it’s easy nonetheless to get the impression that the technology is more advanced than it is. In that sense, the very name “Autopilot” might well be problematic.

When even the CEO’s wife posts a video to Instagram in which she waves her hands and mugs for the camera while using Autopilot, it’s easy to see how some consumers might decide the official warnings aren’t to be taken all that seriously. If the tragedy of Joshua Brown’s fatal crash can have a positive outcome, perhaps it will be that it helps to dispel the potentially dangerous naïveté around these technologies and replace it with a more sober understanding of their true capabilities and limitations.

With luck, it will also motivate Tesla and other automakers to design partial automation technologies that effectively perform the most critical underlying task: Keeping the driver engaged and alert.

Antonio Loro is an urban planner specializing in helping cities prepare for automated vehicles. He is based in Vancouver, Canada.