Report Claims Uber’s Software Labeled Victim of Fatal Crash as ‘False Positive’ To Be Ignored
More news has emerged about the self-driving Uber crash, in which an autonomous vehicle operated by the ride-hailing company struck and killed a woman as she pushed her bike across the street in Tempe, Arizona. At the time, it wasn’t entirely clear why the self-driving Uber — which, according to LIDAR manufacturer Velodyne, should have been able to detect the woman — did not stop or attempt to avoid the collision.
Now, according to reporting by The Information, Uber has narrowed down the cause to the software’s higher reasoning functions. These functions are meant to mimic human decision-making by determining which detected objects require a reaction and what that reaction should be. A bike parked on the side of the road, for instance, merits no response and is dismissed as a “false positive.” This filtering keeps the car from slamming on the brakes for every single new object it detects.
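Purely as an illustration of the kind of filtering described above — this is a hypothetical sketch, not Uber’s actual code, and every label, threshold, and function name here is invented — the trade-off looks something like this: a planner that dismisses detections based on a confidence threshold or an “ignorable” category will also dismiss real hazards whenever the thresholds are tuned too aggressively.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # classifier output, e.g. "pedestrian", "plastic_bag"
    confidence: float   # classifier confidence in [0.0, 1.0]
    in_path: bool       # does the object intersect the vehicle's planned path?

# Hypothetical tuning knobs: object categories the planner ignores outright,
# and the confidence below which a detection is dismissed as sensor noise.
IGNORABLE_LABELS = {"plastic_bag", "exhaust_smoke"}
CONFIDENCE_THRESHOLD = 0.6

def should_react(d: Detection) -> bool:
    """Return True if the planner should brake or steer for this detection.

    A detection is treated as a "false positive" when it is outside the
    vehicle's path, carries an ignorable label, or falls below the
    confidence threshold -- the last two rules are exactly where an
    over-aggressive tuning can suppress a reaction to a real pedestrian.
    """
    if not d.in_path:
        return False
    if d.label in IGNORABLE_LABELS:
        return False
    if d.confidence < CONFIDENCE_THRESHOLD:
        return False
    return True
```

Note that a high-confidence pedestrian in the path triggers a reaction, while the same pedestrian classified at low confidence is silently ignored — the failure mode the reporting describes.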
Sadly, it appears that after detecting Elaine Herzberg in the road, the system decided her presence was a false positive.
Uber has declined to comment on these claims, instead issuing a statement saying that it is cooperating with the National Transportation Safety Board, can’t comment on the ongoing investigation, and is conducting a “top-to-bottom safety review” of its self-driving car program.
While unconfirmed, this finding is in line with other claims that emerged after the crash. A report from The New York Times found that Uber’s self-driving technology lagged far behind its competitors’, and Velodyne blamed the failure solely on Uber’s software.