
Autonomous cars are not trustworthy, but researchers are trying to change that - Part 2

The second part of the article, which discusses the challenges and the recent efforts to increase the reliability of machine-learning-based autonomous cars.

We saw in part 1 of this article that the main problem of AI-based autonomous vehicles is that ML, because it learns from training data, provides no guarantees of safe operation in such safety-critical environments. In this second part, we show how researchers are trying to develop safe ML algorithms in order to increase the reliability of these systems. Since Deep Neural Networks (DNNs) and Reinforcement Learning (RL) algorithms are the ML solutions most often applied in developing autonomous vehicles, researchers are focusing their efforts on them.

P.S.: Want to know how Deep Learning works? Here's a quick guide for everyone.

P.S.2: Want an introduction to Reinforcement Learning? Here's a quick video.

How are researchers overcoming these challenges?

A DNN can be inspected through its weights, which contain compressed information about the inputs. However, it is only practical to analyze the connections at the first layer; deeper layers are far more complex. Because it is difficult to explain the decisions of deep learning algorithms (and AI may be useless until it learns how to explain itself), researchers have looked at other ways to ensure that these decisions are safe, or to detect bad or uncertain decisions. To do this, researchers are trying different solutions in the Safe ML field.

Monitoring Deep Neural Networks

In order to develop safe neural networks, researchers from the Research Institute of the Free State of Bavaria proposed building a monitor after the standard training process that stores the neuron activation patterns of the deep neural network (DNN). At runtime, if the monitor contains no pattern similar to the one currently observed, it raises a warning that the decision is not supported by the training data, as illustrated in the figure from the paper Runtime monitoring neuron activation patterns.

Source: Cheng, Chih-Hong, Georg Nührenberg, and Hirotoshi Yasuoka. “Runtime monitoring neuron activation patterns.” 2019 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2019.
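To make the idea concrete, here is a minimal sketch in Python, assuming we can read out the activation values of one chosen layer for each sample. The paper stores the patterns compactly in binary decision diagrams (BDDs) and works with an on/off abstraction of the neurons; the plain Python set, the `ActivationMonitor` class name and the Hamming-distance tolerance below are illustrative stand-ins, not the authors' implementation.

```python
class ActivationMonitor:
    """Stores the on/off activation patterns of one DNN layer seen during
    training and flags unfamiliar patterns at runtime (illustrative sketch)."""

    def __init__(self, distance_threshold=0):
        self.patterns = set()                    # patterns seen on training data
        self.distance_threshold = distance_threshold

    @staticmethod
    def _binarize(activations):
        # On/off abstraction: a neuron counts as "active" if its output is > 0
        return tuple(int(a > 0) for a in activations)

    def record(self, activations):
        # Called once per training sample, after the network has been trained
        self.patterns.add(self._binarize(activations))

    def is_supported(self, activations):
        # True if some stored pattern is within the allowed Hamming distance
        pattern = self._binarize(activations)
        if pattern in self.patterns:
            return True
        return any(
            sum(p != q for p, q in zip(pattern, stored)) <= self.distance_threshold
            for stored in self.patterns
        )


# Usage idea: after training, call record() on the layer activations of every
# training sample; at runtime, warn whenever is_supported() returns False.
monitor = ActivationMonitor(distance_threshold=1)
monitor.record([0.0, 1.2, 0.0, 3.4])
print(monitor.is_supported([0.0, 0.9, 0.1, 2.0]))   # similar pattern -> True
print(monitor.is_supported([5.0, 0.0, 2.0, 0.0]))   # unfamiliar -> False
```

A pattern match does not prove the decision is correct; it only indicates that the network is operating on familiar ground, which is exactly the warning signal the monitor is meant to provide.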

Increasing robustness of Deep Neural Networks

While some researchers try to store the DNN's patterns, other researchers, from Stanford University, are trying to verify DNN properties by searching for counter-examples, temporarily relaxing the values of the activation functions. "In a neural network, the activation function is responsible for transforming the summed weighted input from the node into the activation of the node or output for that input" (more about activation functions can be read in this excellent post).

The activation function studied in this research is the rectified linear unit (ReLU). The idea is that, once the DNN model is built, their solver, called Reluplex, allows ReLU variables to temporarily violate their bounds as it iteratively searches for a feasible variable assignment. It also allows variables that are members of ReLU pairs to temporarily violate the ReLU semantics. As it iterates, Reluplex repeatedly picks variables that are either out of bounds or that violate a ReLU, and corrects them. Hence, DNN decisions can be proven robust against unexpected inputs or data not contained in the training set. More details in the paper Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks.
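Reluplex itself is a simplex-based SMT procedure and too involved to reproduce here, but the kind of query it decides is easy to illustrate: given a network with ReLU activations and a box of allowed inputs, is there an input that pushes the output past a safety bound? The sketch below answers such a query by naive random sampling instead of Reluplex's exact algorithm, over a tiny network whose weights are invented for the example.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical toy network: 2 inputs -> 3 ReLU units -> 1 output.
# All weights are made up for this example.
W1 = np.array([[ 1.0, -1.0],
               [ 0.5,  0.5],
               [-1.0,  1.0]])
b1 = np.array([0.0, -0.2, 0.1])
W2 = np.array([[1.0, -2.0, 0.5]])
b2 = np.array([0.3])

def net(x):
    return (W2 @ relu(W1 @ x + b1) + b2)[0]

def falsify(lo, hi, bound, samples=100_000, seed=0):
    """Search the input box [lo, hi]^2 for a counter-example to the
    property 'net(x) <= bound'. Returns a violating input or None."""
    rng = np.random.default_rng(seed)
    for _ in range(samples):
        x = rng.uniform(lo, hi, size=2)
        if net(x) > bound:
            return x
    return None

print("counter-example:", falsify(-1.0, 1.0, bound=2.0))
```

Note the asymmetry: sampling can only ever find counter-examples, while Reluplex also proves the property when no counter-example exists, which is what makes it a verification tool rather than a testing tool.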

Hybrid approach for analyzing DNNs

Since autonomous cars use a huge number of different inputs (control signals, sensors…), the input space to be searched can be intractably large. Besides, we saw that one of the main problems is the large feature space. In order to verify the entire autonomous system, researchers from the University of California, Berkeley, along with DARPA, developed a framework for the falsification of these kinds of systems.

The framework divides the search space for falsification into that of the ML component and that of the remainder of the system. The two projected search spaces are analyzed respectively by a temporal-logic falsifier (the "CPS Analyzer") and a machine-learning analyzer (the "ML Analyzer"), which search for a behavior of the system that violates a given property. In other words, the analyzer identifies inputs that "fool" the ML algorithm and could lead to catastrophic events. Thus, "the temporal logic falsifier and the ML analyzer together reduce the search space, providing an efficient approach to falsification".
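As a rough sketch of this compositional structure, consider a toy braking scenario: the car is safe if it can stop before an obstacle, and an ML detector may miss the obstacle. Everything below (the detector model, the braking parameters, the function names) is invented for illustration, and the real framework falsifies temporal-logic properties over simulation traces rather than this simplified geometry. What the sketch does share with the paper is the division of labor: a system-level analysis first isolates the scenarios where safety actually hinges on the ML component, and only those scenarios are handed to the ML analyzer.

```python
import numpy as np

# Toy stand-ins, invented for illustration:
# - the plant: a car that must stop before an obstacle,
# - the ML component: a detector whose success depends on a scalar
#   'appearance' feature of the scene (say, brightness).

def detector(appearance):
    # Hypothetical learned detector: misses obstacles in low light
    return appearance > 0.3

def stopping_distance(speed, detected):
    # Brake almost immediately if detected; otherwise a long reaction window
    reaction = 0.1 if detected else 1.5   # seconds
    decel = 8.0                           # m/s^2
    return speed * reaction + speed ** 2 / (2 * decel)

def cps_analyzer(n=1000, seed=0):
    # Step 1 ("CPS Analyzer"): keep only the (speed, distance) scenarios
    # that are safe if the obstacle is detected but unsafe if it is missed.
    # Safety hinges on the ML component exactly in this region.
    rng = np.random.default_rng(seed)
    risky = []
    for _ in range(n):
        speed = rng.uniform(5.0, 30.0)      # m/s
        distance = rng.uniform(5.0, 80.0)   # m
        if (stopping_distance(speed, True) <= distance
                and stopping_distance(speed, False) > distance):
            risky.append((speed, distance))
    return risky

def ml_analyzer(risky, n=50, seed=1):
    # Step 2 ("ML Analyzer"): search the ML input space, but only over
    # the risky scenarios isolated by the CPS analyzer. In this region a
    # missed detection is unsafe by construction, so any miss is a
    # concrete counter-example.
    rng = np.random.default_rng(seed)
    for speed, distance in risky:
        for appearance in rng.uniform(0.0, 1.0, n):
            if not detector(appearance):
                return speed, distance, appearance
    return None

print("counter-example:", ml_analyzer(cps_analyzer()))
```

Restricting the ML analysis to that risky region is what keeps the search tractable: misclassifications in scenarios that are safe regardless of the detector (or unsafe regardless of it) need not be explored.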

The system was tested in the Udacity-Unity self-driving simulator, and the autonomous car was able to correctly identify an unexpected object in the road, as shown in the video below, from the paper Compositional Falsification of Cyber-Physical Systems with Machine Learning Components.

Video: https://www.youtube.com/watch?v=Sa4oLGcHAhY

Automatic Emergency System. Source: Dreossi, Tommaso, Alexandre Donzé, and Sanjit A. Seshia. “Compositional falsification of cyber-physical systems with machine learning components.” NASA Formal Methods Symposium. Springer, Cham, 2017.

What's next?

We are still a long way from being able to drive a fully autonomous car, but the technology has advanced very fast in the last two years. Governments are more open to discussing the possibility of autonomous cars on the streets. For example, the Queensland government launched what it calls "the most advanced automated vehicle in Australia", which completed a six-kilometer trip around suburban streets without driver intervention.

Other large initiatives are being developed across the world to tackle the future of autonomous cars. For example, the Safer Autonomous Systems (SAS) project is a program that has received funding from the European Union's Horizon 2020 Framework Programme for Research and Innovation. The initiative started with 15 Marie Curie fellowships for research projects carried out with Europe's flagship companies, such as Bosch, Airbus and Jaguar Land Rover, together with leading European universities and institutions like KU Leuven, CNRS and the University of York.

Autonomous vehicles are the future, but how far away is that future? We don't know. However, this is an evolution that won't stop. Car makers such as Tesla, Audi, Jaguar, Renault, Mazda and Ford, and other companies like Uber, are already testing their autonomous cars.

About the author: Raul Sena Ferreira


Raul Sena Ferreira is an early-stage researcher on the ETN-SAS project. His research applies computer science and artificial intelligence techniques to ensure the safety of decisions made by AI-based autonomous vehicles. This research is conducted at LAAS-CNRS, France.

Raul studied Computer Science at the Universidade Federal Rural do Rio de Janeiro and earned his master's degree in Systems Engineering and Computing at the Universidade Federal do Rio de Janeiro. There, he carried out research in the field of Adaptive Machine Learning, studying how Machine Learning classifiers can evolve in dynamic environments with minimal human intervention.