Autonomous Vehicles Can’t Distinguish a Pedestrian from Other Moving Objects. What Does That Mean for Liability?

The details of a fatal crash highlight the limitations of artificial intelligence, and the complexity of determining fault when machines are in charge.
By: Katie Dwyer | December 4, 2019

On March 18, 2018, just before 10 p.m., a woman was crossing a street in Arizona outside of a crosswalk, walking her bicycle beside her. It was dark, except for a headlight on the front of her bike. Just before she made it to the other side of the street, she was struck and killed by an Uber autonomous vehicle out on a test drive.

The incident has sparked discussion around the capabilities and limits of artificial intelligence, the need for more stringent safety protocols when testing autonomous vehicles, and the determination of liability when cars that are driving themselves are involved in a crash.

What Happened: Autonomous driving systems are supposed to be able to recognize other vehicles and objects on the road and make appropriate adjustments, such as slowing down or switching lanes to avoid contact. Uber’s test car did not swerve around the pedestrian because it couldn’t identify what it was seeing.

An NTSB investigation into the crash found that Uber’s self-driving system could recognize pedestrians in a crosswalk and predict that they would continue to move through that crosswalk. It did not, however, account for the possibility of jaywalking.

The bicycle headlight further confused the system. Was it seeing a pedestrian or a vehicle? Unable to compute in what direction or how quickly this object would move, the system could not determine how to react, so it didn’t.
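The failure chain described here – no stable classification, so no motion prediction, so no evasive maneuver – can be pictured with a short, simplified sketch. The labels, threshold and function names below are illustrative assumptions made for this article, not Uber ATG’s actual software; the point is only that a planner which waits for a confident classification and a predicted path before acting will do nothing when it gets neither.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    label: str         # e.g. "pedestrian", "vehicle", "other" -- illustrative labels
    confidence: float  # classifier confidence between 0.0 and 1.0

def predict_path(d: Detection) -> Optional[str]:
    """Commit to a motion prediction only for a confidently classified object.
    A hypothetical sketch, not the actual Uber ATG prediction logic."""
    if d.confidence < 0.8:   # label keeps flipping: pedestrian? vehicle? something else?
        return None          # no stable class, so no predicted trajectory
    if d.label == "pedestrian":
        return "walking within a crosswalk"   # built-in crosswalk assumption
    if d.label == "vehicle":
        return "traveling along the lane"
    return None

def plan_response(d: Detection) -> str:
    """Decide what the car should do about one detected object."""
    path = predict_path(d)
    if path is None:
        # With no predicted trajectory, the planner has nothing to react to,
        # so the car simply continues -- the gap the investigation describes.
        return "continue"
    return f"slow down or change lanes to avoid object {path}"
```

In this toy version, a person pushing a bicycle who is detected with low or oscillating confidence yields plan_response(...) == "continue", which mirrors the behavior the NTSB investigation describes.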

At the time of the crash, the car did have an operator behind the wheel – an employee of Uber Advanced Technologies Group tasked with monitoring the system while it completed a pre-mapped route. Investigators said she was streaming video on her phone when the crash occurred.

The Problem with AI: The system’s inability to identify a pedestrian in the street reveals artificial intelligence’s shortcomings.

The reality is that programming and training AI to respond to unique scenarios takes time and expertise. An autonomous vehicle could encounter thousands of potential obstacles or challenging situations on the road, many unpredictable.

Uber’s system was based on a set of common logical rules, including the assumption that pedestrians cross the street in safe places like crosswalks. But it didn’t factor in what humans know inherently – not every person or vehicle follows the rules.

Autonomous vehicle developers have called this a “glaring programming oversight.”
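One way to picture the kind of fix those developers are pointing to is a more defensive default: instead of requiring a confident classification before reacting, treat any unresolved object in or near the vehicle’s path as a potential hazard. The variant below builds on the sketch above and is purely hypothetical – it is not a description of how Uber later changed its system.

```python
def plan_response_defensive(d: Detection, in_travel_path: bool) -> str:
    """Hypothetical fail-safe variant of plan_response from the sketch above:
    an unclassified object in the travel path triggers braking instead of
    being ignored."""
    path = predict_path(d)
    if path is not None:
        return f"slow down or change lanes to avoid object {path}"
    if in_travel_path:
        return "brake: unidentified object ahead"  # err on the side of caution
    return "continue"
```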

Some in the industry have said there is a need for more collaboration between programmers and safety experts, and that the code should be reviewed by software, safety and mechanical engineering experts before being cleared for test drives.

The test drives themselves have also been called into question. There is no regulatory standard for how much off-road simulation should be conducted before self-driving vehicles are allowed on public streets.

Who’s at Fault: The NTSB has yet to make a final determination in this case, but the details of the investigation demonstrate how blurry the lines of liability become when autonomous systems are in charge.

According to the NTSB, Uber’s self-driving software was fully in charge of the car at the time of the crash, but the operator is still considered to be “a part of the system,” and is meant to take control in emergencies.

Thanks to an interior surveillance system, investigators could see that the operator spent 34% of her time behind the wheel looking at her phone, sometimes glancing down for as long as 23 seconds.

A toxicology report also revealed, however, that the pedestrian had a high concentration of methamphetamine in her blood when she was struck, which likely impaired her judgment and her perception of the oncoming car. Even if the operator had been alert, according to investigators, she would only have had two to four seconds to react.

So who is liable? The pedestrian who stepped into oncoming traffic? The operator who wasn’t paying attention? The system programmers who failed to fully account for the safety of jaywalkers? Other decision-makers at Uber ATG who allowed the car on public roads while knowing its limitations? Or does everyone share some blame?

As semi-autonomous vehicles become more common, more crashes are likely to occur. Multiple lines of insurance could be impacted, including the owner’s personal or commercial auto policy, the manufacturer’s product liability policy, and even software developers’ E&O policies.

For now, there is no consensus on how liability should be determined in these cases. That is a knot insurers, underwriters and attorneys may have to untangle as they go.

Katie Dwyer is a freelance editor and writer based out of Philadelphia. She can be reached at [email protected].
