Reality Check for Automotive AI: Driverless Vehicles Will Require Deterministic Safety Frameworks

By Dominique Bonte | 4Q 2019 | IN-5650


Need for Formal Vehicle Safety Validation Forces AI to Take a Back Seat

NEWS


The inherent nature of neural networks, namely their rule-less, black box character and their reliance on training data sets, makes it very hard to formally prove the safety of Artificial Intelligence (AI)-based driverless vehicle technology, however powerful these systems might be. Various forms of deep learning are applied to machine vision, especially for imaging sensors, and to path planning (reinforcement learning).

The 2016 fatality in Florida involving a Tesla vehicle that failed to recognize a truck crossing its path was a first strong warning about the limitations of deep learning, prompting Tesla to take redundant and more deterministic radar sensor data into account.

As the automotive industry seeks to match, or at least come close to, aviation's safety record of roughly one fatality every 10 billion kilometers (a safety record with 10 nines: 0.9999999999), it is very hard to validate this level of safety by exhaustively testing a particular vehicle, whether physically or through simulation.
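
To put that target in perspective, a simple back-of-the-envelope calculation (a hypothetical sketch, assuming independent failures and the standard "rule of three" for a 95% confidence bound) shows how many failure-free test kilometers would be needed to statistically demonstrate aviation-class safety:

# Sketch: failure-free test kilometers needed to claim, with ~95% confidence,
# that the fatality rate is below one per 10 billion kilometers.
# Assumes independent, Poisson-distributed failures and uses the
# "rule of three": with zero events observed over n kilometers, the 95%
# upper confidence bound on the rate is roughly 3 / n.

TARGET_RATE = 1 / 10_000_000_000   # one fatality per 10 billion km

required_km = 3 / TARGET_RATE      # smallest n such that 3 / n <= TARGET_RATE

print(f"Failure-free kilometers required: {required_km:.1e}")
# -> 3.0e+10, i.e., roughly 30 billion kilometers of driving without a single
# fatality, which is why exhaustive physical or simulated testing alone
# cannot validate this level of safety.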

Additionally, and even more worryingly, if and when a fatality with a Level 4 or 5 driverless vehicle occurs, it will not be possible to explain what exactly went wrong and how it can be avoided in the future, which is a very thorny issue for an automotive industry haunted by liability cases. We simply don’t know how exactly a neural net reaches its decisions.

What is needed is a range of additional, non-AI-based technologies able to overrule deep neural networks: deterministically verifiable safety hardware and software focused on avoiding fatal collisions.
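
As a purely illustrative sketch of this override principle (the names, thresholds, and rule below are hypothetical and not drawn from any specific supplier's stack), a deterministic safety monitor can sit downstream of the neural planner and veto or clamp its output whenever a verifiable rule is violated:

from dataclasses import dataclass

@dataclass
class PlannedAction:
    target_speed_mps: float   # speed the neural planner wants to hold
    braking: bool             # whether the planner requests braking

def deterministic_safety_monitor(action: PlannedAction,
                                 gap_to_lead_vehicle_m: float,
                                 ego_speed_mps: float) -> PlannedAction:
    # Hypothetical, auditable rule: keep at least a two-second time gap to
    # the lead vehicle; if violated, command braking regardless of the
    # planner's output. The rule is deterministic and formally verifiable.
    min_safe_gap_m = 2.0 * ego_speed_mps
    if gap_to_lead_vehicle_m < min_safe_gap_m:
        return PlannedAction(target_speed_mps=min(action.target_speed_mps,
                                                  0.8 * ego_speed_mps),
                             braking=True)
    return action  # the AI decision passes the deterministic check unchanged

# Example: the planner wants to keep cruising at 30 m/s, but the gap is only 40 m.
safe = deterministic_safety_monitor(PlannedAction(30.0, False),
                                    gap_to_lead_vehicle_m=40.0,
                                    ego_speed_mps=30.0)
print(safe)   # braking=True: the deterministic fence overrules the planner

The value of such a design is that the overriding rule, unlike the neural network, can be inspected, tested exhaustively, and formally verified.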

Formal, Verifiable, and Deterministic Driverless Vehicle Safety Technologies and Approaches

IMPACT


A range of possible complementary, non-AI-based driverless vehicle technologies and approaches is listed below. Ideally, most or even all of these principles should find their way into Level 4 and 5 vehicles, preferably as part of well-established standards and industry-wide agreed practices:

  • Deterministic Safety Frameworks
    • Adding an algorithmic fence around the AI black box ("white boxing") that specifies safe distances and maneuvers
    • Akin to spatial separation practices in aviation
    • Examples include Mobileye’s Responsibility-Sensitive Safety (RSS), NVIDIA’s Safety Force Field (SFF) mathematical models, and Zenuity’s precautionary advisor and manager (see the illustrative sketch after this list)
  • Complementary Non-AI Deterministic Obstacle Detection Technologies
    • Vehicle-to-Everything (V2X) and cooperative mobility allowing non-Line-of-Sight (LOS) collective perception but also providing redundancy (second opinion) for imaging machine vision
    • V2X roadside infrastructure—somewhat akin to aviation’s automated Instrument Landing System (ILS)
    • Physics-based sensing via basic radar and Lidar, not requiring AI machine vision
  • Decomposition and Redundant Designs
    • Late Sensor Fusion as adopted by Mobileye
    • Avoiding a single, monolithic black box bottleneck
  • Remote Monitoring and Control of Driverless Vehicles
    • Akin to aviation’s air traffic control practices
  • Advanced Maintenance Processes, Safety Documentation, and Standards Including Over-the-Air (OTA) practices
  • Simulation-Based Training and Testing
  • AI Explanation Techniques, Including Training Data Audits
  • Formal Proof Based on Models and Analyzable Algorithms
  • Low-Speed, City-Based Fleet Deployments (No Mixed Environment)
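
To make the "algorithmic fence" of the first bullet concrete, the sketch below computes an RSS-style minimum safe longitudinal following distance. The structure follows Mobileye's published Responsibility-Sensitive Safety formulation; the parameter values are illustrative placeholders, not certified settings:

def rss_min_safe_distance(v_rear: float, v_front: float,
                          response_time_s: float = 0.5,
                          a_max_accel: float = 2.0,    # rear car's max acceleration (m/s^2)
                          a_min_brake: float = 4.0,    # rear car's guaranteed braking (m/s^2)
                          a_max_brake: float = 8.0) -> float:  # front car's max braking (m/s^2)
    # Minimum gap (in meters) such that the rear car can always stop in time,
    # even in the worst case where the front car brakes at full force while
    # the rear car first accelerates for the response time, then brakes.
    v_rear_after_response = v_rear + response_time_s * a_max_accel
    d = (v_rear * response_time_s
         + 0.5 * a_max_accel * response_time_s ** 2
         + v_rear_after_response ** 2 / (2 * a_min_brake)
         - v_front ** 2 / (2 * a_max_brake))
    return max(d, 0.0)   # a negative result simply means no gap is required

# Example: both vehicles traveling at 25 m/s (90 km/h).
print(f"Minimum safe gap: {rss_min_safe_distance(25.0, 25.0):.1f} m")

Unlike a trained network, such a closed-form rule can be reviewed, unit tested, and formally proven to avoid rear-end collisions under its stated assumptions.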

The main principles underlying some of the technologies and approaches described above include redundancy, decomposition, precautionary measures, and the foundational functional safety requirements outlined in the ISO 26262 standard.

Tapping into the Power of AI While Putting Safety Fences around It

RECOMMENDATIONS


What has been described above does not mean in any way that AI, and neural networks in particular, will not be important for driverless vehicles. It is a critical capability without which driverless vehicles simply cannot be developed. But what is becoming increasingly clear is that the safety of humans cannot be put entirely in the hands of a single neural network. To gain the trust of consumers and regulators alike, the automotive industry needs to be able to validate, verify, and prove the safety of driverless vehicles in a formal and deterministic way while continuing to harness the incredible power of AI. It is this balance between relying on AI on the one hand and putting traditional algorithmic checks and balances around it on the other that will determine the success and fate of driverless vehicles. It is a realization within the automotive industry that has extended commercialization timetables, but for good reason.

This is a strong reminder for suppliers not to overhype specific technologies as the single solution to solve problems or enable new paradigms. What holds for AI also applies to 5G and other next-generation technologies. No single technology will be able to power the complex, automated systems of the future. The reality of the end-user environments into which these technologies have to be deployed mandates a balanced, cautious, and responsible approach, whether in automotive, government, or any other vertical.
