Uber’s Fatal Flaws Require Transparency

2Q 2018 | IN-5081

The tragic, fatal accident involving an Uber self-driving car in Tempe, Arizona highlights the dark side of fierce competition: technologies still in development being tested in populated cities, a lack of centralized reporting, and missing data on the efficacy and status of those technologies.


Multiple Points of Failure

NEWS


A pedestrian was struck and killed by an Uber self-driving car operating in autonomous mode, despite massive investment and despite a base vehicle from an original equipment manufacturer (OEM), Volvo, with a long-standing focus on safety. The failures likely span hardware, software, and the human backup provided by the in-car “safety driver.” Uber suspended its autonomous operations in Tempe, as well as those in Toronto, San Francisco, and Pittsburgh.

Questions remain regarding the technology’s settings, defects, redundancies, and ethical programming, as well as driver training and monitoring. This was the second accident involving an Uber self-driving car in Tempe within the past year. The first came about a month after the pilot’s launch and was attributed to another vehicle’s failure to yield; it resulted in a short-term hold on the pilot.

Investigating and Reporting on the Technology

IMPACT


Where did the technology fail, who will know, and how will regulators and affected citizens get the data needed to make informed decisions on upcoming “robotic taxi” services from Uber, Waymo, and others in Arizona, Nevada, and beyond? Those data are needed to enable safer mobility-as-a-service (MaaS) and broader adoption of autonomous cars. Cadillac’s Super Cruise and Tesla’s Autopilot, which rely on different solution sets in the consumer segment, have been largely successful (no pedestrian deaths to date). Even the factory-fit technology in the Volvo XC90, the base vehicle for Uber’s fleet, includes automatic emergency braking (AEB). Waymo is also testing broadly in Arizona, and GM does not appear to have the same issues. Self-driving technology development also includes Aurora/NVIDIA, Intel/Mobileye, and numerous other ventures that could be tapped.

Technologies to investigate and report on include the vehicle’s camera sensors, lidar, and radar. Was the factory AEB deactivated to allow for autonomous testing? What redundant sensors were available and activated? The technologies assumed to be operational should identify objects, people, and animals beyond the driver’s line of sight. Additive technologies that could address these gaps include thermal night vision and driver-monitoring video solutions, such as those from Lytx and Omnitracs used in commercial driving applications.
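
To make the redundancy question concrete, the following is a minimal, hypothetical sketch in Python; the Detection fields, confidence threshold, and two-out-of-three voting rule are invented for illustration and do not represent Uber’s or any supplier’s actual logic. It shows how a braking decision can be gated both by sensor agreement and by whether AEB was left enabled at all.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """Hypothetical per-sensor output for a single perception frame."""
    sensor: str           # "camera", "lidar", or "radar"
    obstacle_ahead: bool  # did this sensor flag an object in the path?
    confidence: float     # sensor-reported confidence, 0.0 to 1.0

def should_emergency_brake(detections, aeb_enabled=True, min_votes=2):
    """Two-out-of-three redundancy check: command emergency braking only
    when enough independent sensors agree an obstacle is in the path.
    If AEB has been deactivated, as questioned in the Uber case, no
    braking command is ever issued, whatever the sensors report."""
    if not aeb_enabled:
        return False
    votes = sum(1 for d in detections
                if d.obstacle_ahead and d.confidence >= 0.5)
    return votes >= min_votes

# Example frame: lidar and radar see the pedestrian; the camera misses at night.
frame = [
    Detection("camera", False, 0.2),
    Detection("lidar", True, 0.9),
    Detection("radar", True, 0.7),
]
print(should_emergency_brake(frame))                     # True: two sensors agree
print(should_emergency_brake(frame, aeb_enabled=False))  # False: one flag defeats all sensors
```

The point of the sketch is that a deactivated AEB is a single point of failure no amount of sensor redundancy can overcome, which is exactly why investigators need the configuration data, not just the sensor logs.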

Private companies need to be held accountable, and that begins with reporting, rigorous simulation, closed-track testing, and approval before public use, not after accidents happen. Companies like Waymo, which plans to launch driverless taxi services later this year, are certainly watching. Toyota has put its testing operations on hold. California approved completely driverless vehicles beginning April 2, yet as of one week prior, no company had applied for a permit. The California Department of Motor Vehicles (DMV) has received 59 reports of accidents involving autonomous vehicles since 2014. The Nevada DMV does not appear to be changing its position.

Ready for Prime Time?

RECOMMENDATIONS


One important consideration should be greater simulation, or “virtual learning,” prior to testing vehicles on public roadways. One option is NVIDIA’s driving simulation platform, recently announced at the GPU Technology Conference (GTC). Additional testing of artificial intelligence, machine learning, and sensor advancements is needed across all platforms. Additive technologies such as vehicle-to-everything (V2X) communication could also help by proactively alerting cars to pedestrians. Cellular V2X on smartphones could enable another layer of safety on top of “last resort” advanced driver assistance system (ADAS) radar or image sensors.
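
As an illustration of that V2X layer, here is a minimal sketch with all values assumed: the idea of a pedestrian’s phone broadcasting its position is taken from the paragraph above, while the function, its parameters, and the stopping-distance figures are invented, using standard kinematics with an assumed reaction time and deceleration.

```python
import math

def v2x_pedestrian_warning(vehicle_pos, vehicle_speed_mps, ped_pos,
                           reaction_time_s=1.5, decel_mps2=6.0):
    """Return True if a V2X-reported pedestrian falls inside the vehicle's
    stopping envelope, allowing a warning or brake command before any
    onboard camera or radar has line of sight to the person."""
    distance = math.dist(vehicle_pos, ped_pos)
    stopping_distance = (vehicle_speed_mps * reaction_time_s
                         + vehicle_speed_mps ** 2 / (2 * decel_mps2))
    return distance <= stopping_distance

# A vehicle at ~40 mph (17.9 m/s) needs roughly 54 m to stop under these
# assumptions; a pedestrian beacon reported 50 m ahead triggers the alert.
print(v2x_pedestrian_warning((0.0, 0.0), 17.9, (50.0, 0.0)))  # True
print(v2x_pedestrian_warning((0.0, 0.0), 17.9, (80.0, 0.0)))  # False: outside the envelope
```

Unlike camera or radar detections, such a broadcast works around corners and in darkness, which is what makes it a complement to, rather than a replacement for, onboard ADAS sensors.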

There is understandable outrage that tech companies are permitted to let autonomous vehicles “learn on the road,” placing other drivers, passengers, and pedestrians at risk of injury or death. Moral considerations include ethical programming, sometimes referred to as the “trolley problem”: who is saved when a machine makes the decision?

The future of autonomous public testing needs greater clarity on the requirements and agreements between public and private entities. These agreements should include post-accident plans and a clear understanding of each system’s ability to handle human or machine failure. Details should specify focus areas and any off-limits areas (geo-fencing), and may include time-of-day or speed restrictions prior to full deployment, before citizens become part of a living lab. The U.S. federal government appears to be sticking to voluntary guidelines only, leaving states and cities to create a patchwork of agreements.
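
To show what such an operating envelope could look like in practice, the following is a minimal sketch of a policy check; the zone coordinates, hours, and speed cap are invented placeholders, not any jurisdiction’s actual terms.

```python
from datetime import time

# Assumed terms of a hypothetical public/private testing agreement.
ALLOWED_ZONE = {"min_lat": 33.40, "max_lat": 33.45,    # notional test area
                "min_lon": -111.95, "max_lon": -111.90}
ALLOWED_HOURS = (time(9, 0), time(16, 0))  # daylight-only testing window
SPEED_CAP_MPH = 35

def testing_permitted(lat, lon, local_time, speed_mph):
    """Return True only if the vehicle is inside the geo-fence, within
    the approved hours, and under the agreed speed cap."""
    in_zone = (ALLOWED_ZONE["min_lat"] <= lat <= ALLOWED_ZONE["max_lat"]
               and ALLOWED_ZONE["min_lon"] <= lon <= ALLOWED_ZONE["max_lon"])
    in_hours = ALLOWED_HOURS[0] <= local_time <= ALLOWED_HOURS[1]
    return in_zone and in_hours and speed_mph <= SPEED_CAP_MPH

print(testing_permitted(33.42, -111.93, time(10, 30), 30))  # True
print(testing_permitted(33.42, -111.93, time(22, 0), 30))   # False: outside approved hours
```

A machine-checkable envelope like this would also give regulators something concrete to audit after an incident: either the vehicle was operating within its agreed terms or it was not.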

Uber should be required to release the relevant data from the vehicle so that others can simulate the accident and guard against similar multi-point failures. The pressure to outdo the competition should not hide fatal flaws from the industry. Not all companies have the same track record; Waymo, for example, has logged extensive miles and has a solid safety history. Further regulation, as seen in Europe, and more stringent testing prior to public pilots need to be considered.
