
Autonomous cars are not trustworthy, but researchers are trying to change that — Part 1

OK, first things first. The intention of this article is not just to present the challenges of autonomous vehicles, but also to show the important and recent efforts of researchers in overcoming them. Since these cars are being developed with machine learning (ML) techniques, the content here focuses on the technical aspects of ML. The article is therefore presented in two parts:
1) Current ML challenges for autonomous cars.
2) Current solutions developed by the scientific community.


So, who is this article for?

  • Students (with an ML background) who want to know more about this prominent field.
  • ML engineers who want to develop solutions to overcome the current autonomous vehicles challenges.

If you don’t have an ML background but are eager to learn, the links throughout this post provide complementary information that will help you follow along.

The world of autonomous vehicles and its levels of autonomy

The use of artificial intelligence (AI) solutions, especially ML algorithms, is becoming common in critical autonomous tasks such as advanced driver assistance systems (ADAS). For example, Jaguar has an ADAS named INCONTROL, which offers parking and driving features such as lane keep assist, autonomous emergency braking, traffic sign recognition, and so on.

Even though researchers and engineers have achieved important advances in autonomous tasks, the final goal is to reach fully autonomous driving. The National Highway Traffic Safety Administration (NHTSA) and the Society of Automotive Engineers (SAE) have each published a classification system for automated vehicles. According to the SAE, there are six levels:

  • Level 0: the driver is responsible for all driving (the vehicle may provide warnings).
  • Level 1: the driver performs all driving tasks but can take advantage of assistance features such as cruise control, lane keeping, and parking assistance.
  • Level 2: the vehicle can handle some driving tasks, but the driver must monitor the automated system and detect when to take control.
  • Level 3: under limited conditions, the driver can focus on tasks other than driving, but must be able to take over when notified by the vehicle.
  • Level 4: the scenarios in which the automated vehicle can safely operate are expanded, but the driver must still determine when it is safe to hand over control.
  • Level 5: no human intervention is needed (just start the car and provide a destination).

Despite the industry's desire to produce autonomous cars at Level 4 or 5, we are still far from this dream. Recently, Tesla announced a new custom chip designed to enable full self-driving capabilities. However, Tesla vehicles are not considered fully autonomous (Level 4 or 5); instead, they are Level 2.

Currently, we are still stuck at Level 3. The reason is that even with the constant advances in deep learning and reinforcement learning, machine learning (ML) algorithms provide no guarantees of safe operation in safety-critical environments.

ML: no guarantees in safety-critical environments

This is the biggest issue of the current technology (and will remain so over the next decade): ML provides no guarantees of safe operation in safety-critical environments because it learns from training data. Even with modern data augmentation techniques, current solutions have only a partial view of the environment, and it is practically impossible for a dataset to cover every situation that can occur on a public road. Moreover, ML models make probabilistic predictions based on this data, so even a model with 99% accuracy can lead to catastrophic consequences in the rare cases it gets wrong (black swan theory).
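To make this concrete, here is a minimal sketch (a toy example of my own, not code from any real driving stack) showing how a classifier with near-perfect test accuracy can still produce a confident, essentially arbitrary prediction for an input far outside its training distribution:

```python
# A classifier can reach ~99% accuracy on data similar to what it has seen,
# yet still make confident predictions on inputs nothing like its training
# distribution -- the "unseen road situation" problem in miniature.

import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Two well-separated clusters stand in for "situations covered by the dataset".
X, y = make_blobs(n_samples=2000, centers=[(-2, 0), (2, 0)],
                  cluster_std=0.5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print(f"Test accuracy: {clf.score(X_test, y_test):.3f}")   # roughly 0.99+

# An out-of-distribution input: nothing like it exists in the training data,
# but the model still returns a near-certain probability for one class.
ood_point = np.array([[40.0, 40.0]])
print("OOD predicted probabilities:", clf.predict_proba(ood_point))
```

In a car, that out-of-distribution input could be a road situation the dataset never covered, and the model gives no warning that its confident answer is meaningless.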

There are other important and challenging problems such as distributional shift, adversarial inputs, unsafe exploration, interpretability, and so on. An interesting example was shown by OpenAI. They trained a reinforcement learning (RL) algorithm on the game CoastRunners, in which the objective is to finish a boat race quickly and (preferably) ahead of the other players, while earning a higher score by hitting targets along the route.

 

RL agent discovers an unintended strategy for achieving a higher score. Source: DeepMind blog; OpenAI blog.

The agent achieves a score on average 20 percent higher than that achieved by human players, but it does so through unexpected and undesirable behavior: it loops endlessly through a set of targets instead of finishing the race. Thus, we can see that reinforcement learning algorithms can break in counter-intuitive ways, which could lead to catastrophic events in real environments.
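The failure mode generalizes beyond this particular game. Below is a tiny sketch (my own illustration, not OpenAI's environment or code) of how a score that rewards hitting respawning targets can make "maximize the reward" diverge from "finish the race":

```python
# Toy race track: a line of cells. A target that respawns gives +1 point each
# time the boat passes over it; crossing the finish line gives +10 and ends
# the episode. The numbers are arbitrary, chosen only to expose the problem.

TRACK_LENGTH = 20
TARGET_CELL = 5          # a scoring target that reappears after every hit
FINISH_BONUS = 10
EPISODE_STEPS = 100      # the race has a fixed time budget

def run(policy):
    """Simulate one episode and return (total_score, finished)."""
    pos, score = 0, 0
    for _ in range(EPISODE_STEPS):
        pos = max(0, pos + policy(pos))      # policy returns -1 (back) or +1 (forward)
        if pos == TARGET_CELL:
            score += 1                        # target respawns, so it can be farmed
        if pos >= TRACK_LENGTH:
            return score + FINISH_BONUS, True
    return score, False

# Intended behavior: drive straight to the finish line.
racer = lambda pos: +1

# Reward-hacking behavior: oscillate around the target and never finish.
farmer = lambda pos: +1 if pos < TARGET_CELL else -1

print("racer :", run(racer))    # finishes the race, but with a modest score
print("farmer:", run(farmer))   # never finishes, yet earns a much higher score
```

An RL agent trained purely on this score would converge to the "farmer" behavior, because the reward we wrote down is not the objective we actually wanted.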

These challenges are very important and are being studied by experienced companies such as DeepMind. Traditional methodologies such as formal methods (FM) rely on formulating mathematical models of the system's properties; these models are then subjected to model checking and theorem provers to guarantee that certain properties are met. There is some recent research applying formal methods to less complex autonomous systems, such as an automatic gearbox trained with RL. However, since such a strategy must consider all possible system behaviors during design and validation, applying FM to autonomous vehicles is very difficult. Instead, researchers are trying to improve the reliability of ML algorithms themselves.
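To give an idea of what that looks like in its simplest form, here is a sketch of explicit-state model checking: exhaustively enumerating every reachable state of a small, fully specified controller and asserting a safety property in each one. The gearbox controller and the property below are hypothetical illustrations of mine, not taken from the cited research.

```python
# Explicit-state model checking in miniature: enumerate every reachable state
# of a tiny, fully specified gearbox controller and check a safety property.

GEARS = ("reverse", "neutral", "drive")
SPEEDS = range(0, 4)              # discretized speed levels

def transitions(gear, speed):
    """All possible next states of the toy controller."""
    next_states = set()
    for request in GEARS:
        # The controller refuses to engage reverse while the car is moving.
        new_gear = gear if (request == "reverse" and speed > 0) else request
        for accel in (-1, 0, +1):
            new_speed = min(max(speed + accel, 0), max(SPEEDS))
            if new_gear == "reverse":
                new_speed = min(new_speed, 1)   # reverse gear is speed-limited
            next_states.add((new_gear, new_speed))
    return next_states

def safety_property(gear, speed):
    """Property to verify: reverse gear is never engaged at speed > 1."""
    return not (gear == "reverse" and speed > 1)

# Exhaustive exploration of the reachable state space.
frontier = {("neutral", 0)}
visited = set()
while frontier:
    state = frontier.pop()
    visited.add(state)
    assert safety_property(*state), f"violation in state {state}"
    frontier |= transitions(*state) - visited

print(f"Checked {len(visited)} reachable states; the property holds in all of them.")
```

This exhaustive enumeration is feasible only because the state space is tiny and the dynamics are fully specified; neither assumption holds for a learned perception stack processing camera images, which is why FM does not transfer easily to autonomous vehicles.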

How? That will be explained in Part 2. See you soon!

 

About the author: Raul Sena Ferreira


Raul Sena Ferreira is an early-stage researcher for the ETN-SAS project. His research focuses on computer science and artificial intelligence techniques to ensure safe decisions in AI-based autonomous vehicles. This research is conducted at LAAS-CNRS, France.

Raul studied Computer Science at the Universidade Federal Rural do Rio de Janeiro and earned his master's degree in Systems Engineering and Computing at the Universidade Federal do Rio de Janeiro. There, he carried out research in the field of adaptive machine learning, studying how machine learning classifiers can evolve in dynamic environments with minimal human intervention.