Uber reportedly thinks its self-driving car killed someone because it ‘decided’ not to swerve

Uber has reportedly discovered the reason why one of the test cars in its self-driving fleet hit and killed a pedestrian earlier this year, according to The Information. While the company believes the car's sensor suite detected 49-year-old Elaine Herzberg as she crossed the road in front of the modified Volvo XC90 on March 18th, two sources tell the publication that the software was tuned in a way that "decided" it didn't need to take evasive action, and may have flagged the detection as a "false positive."

The reason a system would do this, according to the report, is that there are a number of situations where the computers driving an autonomous car can see something they think might be a human or another obstacle, and those possible "false positives" have to be filtered out. Uber reportedly set that threshold so low that the system saw a person crossing the street with a bicycle and determined that immediate evasive action wasn't necessary. While Uber had an operator, or "safety driver," in the car who was supposed to be able to take over in the event of a failure like this, the employee was seen looking down in the moments before the crash in footage released by the Tempe Police Department.
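To make the reported behavior concrete, here is a minimal, hypothetical sketch of confidence-threshold filtering in a perception pipeline. It is not Uber's code; the class, function, and threshold value are all invented for illustration. The point is simply that a filter meant to screen out false positives can, if tuned too aggressively, also discard real obstacles.

```python
# Hypothetical illustration of false-positive filtering in an object
# detection pipeline. Names and values are invented; this does not
# reflect Uber's actual software.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g. "pedestrian", "bicycle", "plastic_bag"
    confidence: float  # classifier confidence, from 0.0 to 1.0

# Detections below this confidence are treated as false positives and
# ignored. Tune this bar too aggressively and a real pedestrian with a
# middling confidence score gets filtered out as well.
MIN_CONFIDENCE_TO_REACT = 0.9  # invented value

def should_take_evasive_action(detections: list[Detection]) -> bool:
    """Return True if any detection is trusted enough to react to."""
    return any(d.confidence >= MIN_CONFIDENCE_TO_REACT for d in detections)

# A pedestrian detected with 0.7 confidence would be dismissed here:
print(should_take_evasive_action([Detection("pedestrian", 0.7)]))  # False
```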

All of Uber's self-driving efforts have been on hold since the crash, and the company is still working with the National Transportation Safety Board, which has yet to issue a preliminary report on the progress of its investigation. When reached for comment, a spokesperson for Uber issued to The Verge the same statement found in The Information's story:

We're actively cooperating with the NTSB in its investigation. Out of respect for that process and the trust we have built with the NTSB, we cannot comment on the specifics of the incident. In the meantime, we have initiated a top-to-bottom safety review of our self-driving vehicles program, and we have brought on former NTSB chair Christopher Hart to advise us on our overall safety culture. Our review is looking at everything from the safety of our system to our training processes for vehicle operators, and we hope to have more to say soon.

In the weeks since the crash, signs have emerged that Uber's self-driving program was potentially fraught with risk. For one, Uber had reduced the number of "safety drivers" in its test cars from two to one, according to a report from The New York Times, which explains why the driver in the car that killed Herzberg was alone.

Then, at the end of March, Reuters discovered that Uber had reduced the number of LIDAR sensors on its test cars. (LIDAR is considered by most to be a critical piece of hardware for autonomous driving.) All of this happened in an environment with little oversight from the Arizona government. Emails obtained by The Guardian in the weeks after the crash detailed a close relationship between Uber and Arizona Governor Doug Ducey that may have allowed the company's test cars to hit the road sooner than previously thought.

Many of Uber's competitors, and even some of its partners, have spoken out since the crash as the company searched for an answer to what went wrong. Nvidia, which supplies the GPUs that help power Uber's autonomous technology, distanced itself at the end of March, saying any failure must have been in Uber's software. Velodyne, which manufactures the LIDAR sensor Uber uses, said its technology should not have been affected by the nighttime conditions. And Intel's Mobileye division published a breakdown of how and why its own technology would have recognized Herzberg, though detection now doesn't appear to have been the problem, according to The Information's report.

Despite Herzberg's death, Uber CEO Dara Khosrowshahi, who The New York Times recently reported had considered shutting down the self-driving program when he joined the company last August, said in an April interview on the Today show that Uber is "absolutely committed to self-driving cars."
