Need for Formal Vehicle Safety Validation Forces AI to Take a Back Seat
The inherent nature of neural networks — their rule-less, black-box character and reliance on training data sets — makes it very hard to formally prove the safety of Artificial Intelligence (AI)-based driverless vehicle technology, however powerful these systems might be. Various forms of deep learning are applied to machine vision (especially for imaging sensors) and to path planning (reinforcement learning).
The 2016 fatality in Florida, in which a Tesla vehicle failed to recognize a truck crossing its path, was a first strong warning about the limitations of deep learning, prompting Tesla to take redundant and more deterministic radar sensor data into account.
As the automotive industry seeks to match, or at least come close to, aviation's safety record, with fatalities occurring only once every 10 billion kilometers (a safety record with ten 9s: 0.9999999999), it is very hard to validate this by merely ...
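The "ten 9s" figure follows directly from the fatality rate: one fatality per 10 billion (10^10) kilometers implies a per-kilometer survival probability of 1 − 10^-10. A minimal back-of-envelope sketch (the rate and variable names are illustrative, not from a specific validation standard):

```python
import math

# Assumed rate: one fatality per 10 billion kilometers driven.
fatality_rate = 1 / 10_000_000_000

# Per-kilometer probability of no fatality.
reliability = 1 - fatality_rate

# The number of leading 9s after the decimal point is -log10 of the rate.
nines = -math.log10(fatality_rate)

print(f"reliability per km: {reliability:.10f}")
print(f"leading nines: {nines:.0f}")
```

This is why brute-force road testing alone cannot validate such a system: demonstrating a 10^-10 failure rate with statistical confidence would require driving on the order of tens of billions of kilometers without a fatality.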