INTRODUCTION
The coming of autonomous systems doesn’t just mean self-driving cars. Advances in artificial intelligence will soon mean that we have drones that can deliver medicines, crewless ships that can navigate safely through busy sea lanes, and all kinds of robots, from warehouse assistants to search-and-rescue robots, down to machines that can disassemble complex devices like smartphones in order to recycle the critical raw materials they contain.
As long as these autonomous systems stay out of sight, or out of reach, they are readily accepted by people. The rapid and powerful movements of assembly-line robots can be a little ominous, but while these machines are at a distance or inside protective cages we are at ease. However, in the near future we’ll be interacting with “cobots” – robots intended to assist humans in a shared workspace. For this to happen smoothly we need to ensure that the cobots will never accidentally harm us. This question of safety when interacting with humans is paramount. No one worries about a factory full of autonomous machines that are assembling cars. But if these cars are self-driving, then the question of their safety is raised immediately. People lack trust in autonomous machines and are much less prepared to tolerate a mistake made by one. So even though the widespread introduction of autonomous vehicles would almost eliminate the more than 20,000 deaths on European roads each year, it will not happen until we can provide the assurance that these systems will be safe and perform as intended. And this is true for just about every autonomous system that brings humans and automated machines into contact.
Until now, safety assurance has been integrated into the design process, based on safety standards and demonstrating compliance during the system’s test phases. However, existing standards were developed primarily for human-in-the-loop systems, where a human can step in and take over at any time, and for behaviour that is based on pre-defined responses to particular situations; they do not extend to autonomous systems. What’s more, current assurance approaches generally assume that once the system is deployed, it will not learn or evolve. On the one hand, advances in machine learning mean that autonomous systems can be given the potential to learn from their mistakes, and from the mistakes of all the systems they are connected to, making them potentially far safer than previous generations. On the other, machine learning means more uncertainty about how the system will decide to react to a particular circumstance in the future, making safety assurance a hard task, one that can only be accomplished by a highly skilled, interdisciplinary workforce.
Are you ready yet to take a seat on an autonomously controlled airplane? If you hesitate to say “yes”, then you are tacitly acknowledging the need for a training and research programme such as the Safer Autonomous Systems ITN.
TRUST MATTERS
If deployed tomorrow, existing self-driving cars would have far fewer accidents than cars driven by humans. But this doesn’t mean that people are ready to hand over the steering wheel. We tolerate many thousands of deaths on the road every year, but the first fatal crash involving two fully autonomous vehicles will be headline news all over the world. And then what? Will there be a public outcry? Will gangs come with pitchforks to smash the machines? Will self-driving cars go the way of airships after the Hindenburg disaster? Autonomous vehicles, indeed all autonomous systems, need to be made safe enough that people trust them. The destination, therefore, is clear; the route, however, is a difficult one. The Safer Autonomous Systems ITN project is designed to get us to our destination, safely.
OBJECTIVES
The main objective of the Safer Autonomous Systems (SAS) project is to identify ways to establish people’s trust in autonomous systems by making these systems demonstrably safer. In order to achieve this objective we have identified three challenges to be addressed by the early-stage researchers (ESRs) in their 15 individual research projects. Although simply stated, this objective, and the interdisciplinary expertise its realization demands, is complex enough that a large training network – involving some of Europe’s flagship companies, such as Bosch, Airbus and Jaguar Land Rover, together with leading European universities, like KU Leuven and the University of York – is the best way to tackle these challenges, which are briefly described as follows:
- Increased autonomy, by definition, means a significant reduction of the time during which a human is involved in the system’s decision making, thereby reducing the residual control afforded to humans. Studies have shown that it may take minutes for a non-actively involved human operator (e.g. a passenger in a self-driving car) to take over control in an emergency. Moreover, simply bringing a self-driving car to a full stop on a busy highway by removing its power (so-called fail-stop behaviour) is not a safe action. In contrast, an autonomous system should be fail-operational (perhaps with reduced functionality) under all circumstances, monitor its own safety and make its own decision about a sensible and safe reaction (see the fail-operational sketch after this list). The challenge therefore is to design autonomous systems in such a way that they remain safe under all conditions, even in the case of component failures.
- Testing is the most intuitive way to reveal unsafe behaviour. However, autonomous systems must operate in a near-infinite range of situations, so when we test them we must systematically determine which range and diversity of situations should be simulated and tested. We need to test them on roads, in the rain, and with people in the way. We need to test them when they’re in intermittent supervisor contact and when they’ve got an unbalanced wheel. And we need to test them in all the possible combinations of those cases (see the scenario-coverage sketch after this list). Testing autonomous systems in the field is clearly too costly and too time-consuming, and might even be harmful to the system or its environment. Hence, virtual model-based testing is the only viable option. However, breakthrough solutions are required to guarantee the rigour of our virtual testing and to optimize its overall coverage.
- More autonomy is possible only through new technologies, e.g. machine learning, for which no accepted safety-assurance strategies currently exist. Legacy experience, as well as established standards and regulations, is lacking. Implicitly or explicitly, current safety-assurance practices and safety standards assume that the behaviour of the system is known at the design stage and can be assessed for its safety prior to system deployment. As autonomous systems might learn and evolve over time, this is no longer possible. Meeting the current safety standards is therefore either impossible for autonomous systems or insufficient to assure safety throughout the lifetime of the system.
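To make the first challenge concrete, here is a minimal Python sketch of a safety monitor that selects a fail-operational response instead of a blunt fail-stop. It is purely our illustration: the component names, modes and selection rules are hypothetical, not part of any SAS design.

    from enum import Enum

    class Mode(Enum):
        NOMINAL = "full autonomy"
        DEGRADED = "reduced functionality"  # e.g. lower speed, lane-keeping only
        MINIMAL_RISK = "controlled stop"    # steer to the roadside, then stop

    def select_mode(perception_ok: bool, planner_ok: bool, actuation_ok: bool) -> Mode:
        """Pick the least restrictive mode that the remaining healthy
        components can still support, rather than cutting power (fail-stop)."""
        if perception_ok and planner_ok and actuation_ok:
            return Mode.NOMINAL
        if perception_ok and actuation_ok:
            # Planner fault: fall back to a simpler, pre-verified controller.
            return Mode.DEGRADED
        # Too few healthy components: execute a minimal-risk manoeuvre,
        # never an uncontrolled halt in a live traffic lane.
        return Mode.MINIMAL_RISK

    print(select_mode(perception_ok=True, planner_ok=False, actuation_ok=True))
    # Mode.DEGRADED: the vehicle keeps operating, safely, with reduced functionality

The point of the sketch is the ordering: the system falls back only as far as its remaining health forces it to, and even its last resort is a controlled manoeuvre rather than a loss of power.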
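The combinatorial explosion behind the second challenge can be shown in a few lines of Python. The scenario dimensions below are toy examples of our own; a real catalogue would contain far more parameters and values, which is exactly why exhaustive field testing cannot scale.

    from itertools import product

    # Hypothetical scenario dimensions for a self-driving car (toy examples).
    ROAD = ["highway", "urban", "rural"]
    WEATHER = ["dry", "rain", "snow"]
    PEDESTRIANS = ["none", "one nearby", "crowd"]
    SUPERVISION = ["continuous", "intermittent", "lost"]
    FAULT = ["none", "unbalanced wheel", "camera dropout"]

    # Every combination of these five small dimensions is a distinct test case.
    scenarios = list(product(ROAD, WEATHER, PEDESTRIANS, SUPERVISION, FAULT))
    print(len(scenarios))  # 3**5 = 243 cases from just five 3-valued parameters

Each extra parameter multiplies the number of cases, so the real question is not how to run them all but how to select and simulate the subset that maximizes coverage.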
To achieve the main Scientific and Technical (S&T) objective of trust in autonomous systems by overcoming these three challenges, we have defined three sub-objectives that will be the aims of the project’s three research Work Packages (WPs):
- Objective 1: To integrate guaranteed safe behaviour directly into the architecture/design of the autonomous system (WP1).
- Objective 2: To prove by model-based safety-analysis techniques that the behaviour of an autonomous system remains safe under all possible conditions (WP2); a toy sketch of such exhaustive analysis follows this list.
- Objective 3: To develop safety-assurance strategies that combine the architectural/design measures with the gathered evidence, allowing us to trust an autonomous system that is very likely to be learning and evolving (WP3).
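As a flavour of the model-based safety analysis behind Objective 2, the toy Python sketch below exhaustively explores every reachable state of a small finite model and checks a safety invariant, returning a counterexample if one is reachable. The model and the invariant are invented for illustration; they are not a SAS technique or deliverable.

    from collections import deque

    def check_invariant(initial, successors, safe):
        """Breadth-first exploration of all reachable states of a finite
        model; returns a reachable unsafe state (a counterexample) or
        None if the safety invariant holds everywhere."""
        seen, frontier = {initial}, deque([initial])
        while frontier:
            state = frontier.popleft()
            if not safe(state):
                return state
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return None

    # Toy model: state = (speed, distance to obstacle); at each step the
    # vehicle accelerates or brakes by 1 while closing the gap by its speed.
    def successors(state):
        speed, dist = state
        new_dist = max(dist - speed, 0)
        return {(min(speed + 1, 3), new_dist), (max(speed - 1, 0), new_dist)}

    counterexample = check_invariant(
        (0, 5), successors, lambda s: not (s[0] > 0 and s[1] == 0))
    print(counterexample)  # e.g. (3, 0): the model can reach the obstacle while moving

Because the exploration is exhaustive, a result of None is a proof over the model, not just a test; the hard part, and the subject of WP2, is building models that are both tractable and faithful to the real system.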
OVERVIEW OF THE RESEARCH PROGRAMME
The SAS project combines intensive training with doctoral research. It is this combination that makes SAS fit so well in the context of a European Training Network. Our aim is not just to train the ESRs to become good researchers; we also want to train them to think differently, to give them a different mindset, to get them to understand the complexity of safety and how autonomous systems place very different demands on us. The well-structured training programme will involve the 15 ESRs in the development of trustworthy autonomous systems with a focus on safety, offering top doctoral candidates from all over the world the opportunity to work in an international, multidisciplinary group of leading research institutions and industrial partners involved in system-safety engineering, dependability engineering, fault-tolerant and failsafe hardware/software design, model-based safety analysis, safety-assurance case development, cyber-security, as well as legal and ethical aspects. The already well-established collaborations between the institutions involved will ensure that the network runs smoothly, while strengthening the interactions and the exchange of academic and non-academic resources. SAS aims to actively research the development of safer autonomous systems at multinationals like Bosch and medium-sized companies like MIRA and RH Marine, but also to stimulate the development of new safety designs, modelling and assurance techniques by involving the ESRs in SMEs and, potentially, their own start-ups.
Each of the 15 ESRs will be working towards a PhD degree, supported by a carefully chosen supervisory team that maximizes both scientific excellence and interdisciplinary, intersectoral collaboration. Indeed, the supervisory team of each ESR comprises two academic supervisors, chosen for their scientific expertise and wide-ranging supervision experience, and one non-academic supervisor, chosen to ensure that the research always remains relevant to real-world safety issues. Moreover, each supervisory team covers at least two countries and in many cases even three. This triple-supervisor approach, from which all 15 ESRs will benefit, provides them with an optimum blend of support and feedback, giving them the opportunity to gain experience and to see any problem from both the industrial and academic perspectives. In addition, we believe that SAS’s ambitious S&T objectives will be met through our strategy of placing 7 industry-driven application case studies at the centre of the training and research programme. These case studies comprise a self-driving vehicle, an autonomous vessel, a driverless tractor, a farming robot, a conversational clinical bot, an autonomous oil-drilling platform, and a pilotless plane.
The SAS Beneficiaries are 3 high-technology companies, Bosch (DE), MIRA (UK) and RH Marine (NL), 2 non-university research institutes, LAAS (FR) and FHG (DE), and 2 universities, KU Leuven (BE) and UoY (UK). The consortium is completed by 10 Partner Organisations that include 9 companies and 1 university. This gives SAS some of the most relevant players in European industry together with key academic institutions, guaranteeing not only an exciting interdisciplinary, intersectoral research-and-training programme, but also a head start for bringing about trust in autonomous systems.
RESEARCH METHODOLOGY AND APPROACH
The SAS project is based on 6 Work Packages (WPs), three of which are S&T WPs (WP1–3), one for training (WP4), one for Exploitation, Dissemination and Communication (WP5) and one for Management (WP6). The training and research activities start at month 7, with the first 6 months being devoted to recruiting top candidates for the ESR positions.
The S&T WPs are organized along 3 research tracks covering the 3 main steps in the safety-assurance process:
- building safety and dynamic risk mitigation into the system by design,
- gathering evidence that the behaviour of the system will actually be safe, and
- combining these into a clear strategy that allows us to put our trust in the system.