
The Techniques for Assurance Case Evidence Generation

To close the development loop of an assurance case (AC), the final task is to generate evidence that supports the claims. The techniques for evidence generation vary with the system development solution adopted.

The claims need to be supported by concrete evidence. With a view to automating the AC process, formal methods (FM), which rest on a formal foundation, can be incorporated into the model-based assurance case process and are a natural choice for this purpose.

Verifying claim satisfaction in an assurance case means showing that a property holds; the result appears as a solution in terms of Goal Structuring Notation (GSN) or as evidence in terms of Claims Arguments Evidence (CAE). The verification techniques to be exploited vary with the nature of the claimed properties. Safety properties may require safety analyses such as Failure Mode and Effects Analysis (FMEA) [1], or the testing and/or simulation of safety-related functional requirements. Reliability properties call for quantitative analysis based on reliability theory, and so on.
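To make the quantitative reliability case concrete, here is a standard textbook illustration (added for exposition, not drawn from the cited references): for a series system of $n$ independent components with constant failure rates $\lambda_i$, the system reliability over an operating time $t$ is

$$R(t) = \prod_{i=1}^{n} e^{-\lambda_i t} = \exp\Big(-t \sum_{i=1}^{n} \lambda_i\Big),$$

so a claim such as "the probability of failure per operating hour is below $10^{-9}$" can be supported by evidence that $\sum_i \lambda_i \le 10^{-9}\ \mathrm{h}^{-1}$.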

Evidence supporting claim satisfaction may be drawn from three main sources: service experience from previous usage, system verification and validation, and compliance with standards [2]. Depending on the nature of the standards with which compliance is claimed, the evidence of system verification and validation may overlap with the evidence of compliance with standards.

AC evidence can be a subset of the evidence used for system validation and verification, since the properties claimed in an AC all concern the system. A good way to understand the possible techniques for evidence generation is therefore to examine the methods used for system validation and verification from the viewpoint of automatic evidence generation.

Different ways to categorize system validation and verification methods exist [3][4][5], and they share essentially the same principle. According to [5], the validation and verification methods for a system include inspection and review, analysis, testing, and similarity/service experience.

Inspection and review consist in demonstrating an adequate consensus that the product complies with its requirements. Generally, a checklist or similar aid is applied manually, and engineering judgement is required. Inspection shows that the system or item meets established physical implementation and workmanship requirements. Design reviews show how the system or item is expected to perform in normal and abnormal conditions. Test reviews establish the applicability of the test cases to the system or item requirements [5].

Analysis consists in using “mathematical modelling and analytical techniques to predict the suitability of design to stakeholder expectations based on calculated data or data derived from lower system structure end product verifications” [3]. Analysis provides repeatable results. Analysis methods include, but are not limited to, modelling and coverage analysis:

  • Modelling of complex, deterministic systems may be entirely computational or a combination of computation and test. The model may or may not have a formal semantics.
  • Coverage analysis consists in determining the degree to which the requirements are addressed throughout the development and verification activities. It is typically implemented using some form of traceability, combined with testing or simulation [5]; a minimal sketch follows this list.
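To illustrate traceability-based coverage analysis, the following Python sketch (using hypothetical requirement and activity identifiers) computes the coverage ratio and lists the requirements for which no verification evidence exists:

```python
# Minimal requirements-coverage sketch: map each verification activity
# (test, review, analysis) to the requirement IDs it addresses, then
# report the coverage ratio and any requirements left without evidence.
# All identifiers below are invented for illustration.

requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

# Traceability links: activity -> requirements it provides evidence for.
trace_links = {
    "TEST-01": {"REQ-1", "REQ-2"},
    "REVIEW-03": {"REQ-2"},
    "ANALYSIS-07": {"REQ-3"},
}

covered = set().union(*trace_links.values())
uncovered = requirements - covered

print(f"coverage: {len(covered & requirements) / len(requirements):.0%}")  # 75%
print(f"uncovered requirements: {sorted(uncovered)}")  # ['REQ-4']
```

A report like this, regenerated on every build, is one way such coverage evidence can be produced automatically.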

Testing consists in using the end product to obtain the detailed data needed to verify performance, or to provide sufficient information to verify performance through further analysis [3][6].

Similarity/service experience consists in making use of evidence of satisfactory service experience, based on the similarity of the application [5].

From the perspective of automatic generation of AC evidence, it is desirable that the system models referenced by the AC are verified automatically, i.e., “they can be used in conjunction with methods such as model checking or simulation, to obtain evidence that the controller and the self-adaptive system meet their requirements” [20]. On this basis, analysis can be automated to a certain extent. In addition, from the perspective of AC argumentation, model verification, including simulation and formal verification, is of particular interest.

Because simulation is applied directly to actual designs, whose state spaces are huge or infinite, it is impossible or impractical to check all executions systematically. Simulation is therefore not exhaustive with respect to a specific property or requirement, unlike formal methods [7], and is better suited to demonstrating system functionality than to verifying safety-related requirements.
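The non-exhaustiveness of simulation is easy to see on a toy transition system (an invented example, not drawn from the cited works): random runs sample only a tiny fraction of the behaviours, so a reachable error state can go unnoticed.

```python
import random

# Toy system: the state is a counter 0..N; at each step the system
# either resets to 0 or increments. The "error" is reaching state N,
# which requires N consecutive increment choices (probability 2**-N).
N = 30

def step(s: int) -> int:
    return 0 if random.random() < 0.5 else min(s + 1, N)

def simulate(runs: int = 1000, depth: int = 100) -> bool:
    """Random simulation: True iff some run reaches the error state."""
    for _ in range(runs):
        s = 0
        for _ in range(depth):
            s = step(s)
            if s == N:
                return True
    return False

# 1000 random runs will almost surely miss the error state, even though
# it is reachable; the exhaustive search sketched below finds it at once.
print(simulate())  # almost always False
```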

In contrast, formal verification does not currently scale well to large designs. However, it provides value that other techniques cannot match: it can achieve greater behavioural coverage and prove the absence of some classes of errors [8]. From the safety point of view, it is important to assure the absence of violations of critical safety requirements.

Formal verification includes model checking and theorem proving.

Model checking is an automatic technique for verifying finite-state reactive systems. A search procedure is used to determine automatically whether the specification is satisfied. The model checker either terminates with the answer “true”, meaning that the model satisfies the specification, or provides a counterexample, which is typically important in finding the errors [9]. As the number of state variables in the system increases, the size of the system state space grows exponentially [10], so model checking techniques based on explicit state enumeration can only handle relatively small examples. This phenomenon is called the “state explosion problem”, and it is the main disadvantage of model checking [11]. To exploit model checking techniques, the system states have to be constrained to a limited size. Popular model checking tools include Uppaal [12], SPIN [13], PRISM [14], NuSMV [15], ProB [16], FDR [17], etc.
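A minimal explicit-state model checker for the toy system above is just a breadth-first search over the reachable states, returning a counterexample path whenever the error state is reachable. This is only a sketch of explicit state enumeration, not how the tools just cited are implemented:

```python
from collections import deque

N = 30  # the same toy counter system as in the simulation sketch

def successors(s: int) -> set:
    """Nondeterministic transitions: reset to 0 or increment."""
    return {0, min(s + 1, N)}

def check_unreachable(init: int, bad: int):
    """Explicit-state BFS: None if `bad` is unreachable from `init`,
    otherwise a shortest counterexample trace from `init` to `bad`."""
    parent = {init: None}
    queue = deque([init])
    while queue:
        s = queue.popleft()
        if s == bad:
            trace = []
            while s is not None:  # walk parents back to init
                trace.append(s)
                s = parent[s]
            return trace[::-1]
        for t in successors(s):
            if t not in parent:  # enumerate each state exactly once
                parent[t] = s
                queue.append(t)
    return None

# Exhaustive enumeration finds the trace that random simulation missed.
# With k Boolean state variables the state set can reach 2**k states,
# which is exactly the state explosion problem described above.
print(check_unreachable(0, N))  # -> [0, 1, 2, ..., 30]
```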

A theorem proving program mechanizes a proof system. A calculus, or proof system, for a logic consists of a set of axioms and a set of inference rules. Automatic theorem proving (ATP) typically provides automatic reasoning in first-order logic. Interactive theorem proving (ITP) typically uses higher-order logic and requires user interaction, such as providing invariants or applying proof tactics, as well as an understanding of the logical representation of the program. Common tools include Isabelle [18], KeYmaera [19], HOL4, Coq, etc.
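For flavour, here is a tiny interactive proof in Lean 4 (an illustrative example, not tied to any of the tools cited): the user chooses the proof strategy (induction) and the rewriting steps, while the kernel merely checks that each step is a valid inference.

```lean
-- Interactive proof that 0 + n = n, by induction on n.
-- `Nat.add_succ : n + Nat.succ m = Nat.succ (n + m)` is a library lemma;
-- `ih` is the induction hypothesis used in the user's tactic script.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```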

Due to the variety of computational paradigms and levels of abstraction [21], integrated formal methods, which embrace different types of FMs, are needed [22] for providing evidence in an AC.

The argumentation of an AC exploits all available methods to generate evidence. FM is a key source of assurance for critical properties (e.g. safety and security), and integrated formal methods can remedy the disadvantages of individual methods. Moreover, FM can be incorporated into the model-based assurance case process thanks to its formal foundation.

References

[1] J. A. Mcdermid, “Support for safety cases and safety arguments using SAM,” Reliab. Eng. Syst. Saf., vol. 43, pp. 111–127, 1994.
[2] EUROCONTROL, “Safety Case Development Manual,” 2006. [Online]. Available: https://usermanual.wiki/Document/EUROCONTROL20Safetycasedevelopmentmanual.687924513/html.
[3] National Aeronautics and Space Administration, “NASA System Engineering Handbook Revision 2,” 2016.
[4] INCOSE, Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities, 4th ed. Hoboken, NJ, USA: John Wiley and Sons, Inc., 2015.
[5] SAE International, “ARP4754A: Guidelines for Development of Civil Aircraft and Systems,” 2010.
[6] E. Ahmad, B. R. Larson, S. C. Barrett, N. Zhan, and Y. Dong, “Hybrid annex: An AADL extension for continuous behavior and cyber-physical interaction modeling,” HILT 2014 – Proc. ACM Conf. High Integr. Lang. Technol., pp. 29–38, 2014.
[7] A. Sen and V. K. Garg, “Formal verification of simulation traces using computation slicing,” IEEE Trans. Comput., vol. 56, no. 4, pp. 511–527, 2007.
[8] D. L. Dill, “What’s between simulation and formal verification?,” in Proceedings 1998 Design and Automation Conference. 35th DAC.(Cat. No. 98CH36175), 1998, pp. 328–329.
[9] O. Grumberg, E. M. Clarke, and D. A. Peled, “Model checking.” MIT press Cambridge, 1999.
[10] E. M. Clarke, W. Klieber, M. Nováček, and P. Zuliani, “Model checking and the state explosion problem,” Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol. 7682 LNCS, no. 2005, pp. 1–30, 2012.
[11] E. Clarke, O. Grumberg, S. Jha, Y. Lu, and H. Veith, “Progress on the State Explosion Problem in Model Checking,” pp. 176–194, 2001.
[12] K. G. Larsen, P. Pettersson, and W. Yi, “UPPAAL in a nutshell,” Int. J. Softw. tools Technol. Transf., vol. 1, no. 1–2, pp. 134–152, 1997.
[13] G. J. Holzmann, The SPIN model checker: Primer and reference manual, vol. 1003. Addison-Wesley Reading, 2004.
[14] R. Calinescu, D. Weyns, S. Gerasimou, and I. Habli, “Architecting Trustworthy Self-adaptive Systems.”
[15] A. Cimatti, E. Clarke, F. Giunchiglia, and M. Roveri, “NuSMV: a new symbolic model checker,” Int. J. Softw. Tools Technol. Transf., vol. 2, no. 4, pp. 410–425, 2000.
[16] M. Leuschel and M. Butler, “ProB: A model checker for B,” in International Symposium of Formal Methods Europe, 2003, pp. 855–874.
[17] T. Gibson-Robinson, P. Armstrong, A. Boulgakov, and A. W. Roscoe, “FDR3: a parallel refinement checker for CSP,” Int. J. Softw. Tools Technol. Transf., vol. 18, no. 2, pp. 149–167, 2016.
[18] L. C. Paulson, Isabelle: A generic theorem prover, vol. 828. Springer Science & Business Media, 1994.
[19] A. Platzer and J.-D. Quesel, “KeYmaera: A hybrid theorem prover for hybrid systems (system description),” in International Joint Conference on Automated Reasoning, 2008, pp. 171–178.
[20] R. Calinescu, D. Weyns, S. Gerasimou, M. U. Iftikhar, I. Habli, and T. Kelly, “Engineering trustworthy self-adaptive software with dynamic assurance cases,” IEEE Trans. Softw. Eng., vol. 44, no. 11, pp. 1039–1069, Nov. 2018.
[21] C. A. R. Hoare and H. Jifeng, Unifying theories of programming, vol. 14. Prentice Hall Englewood Cliffs, 1998.
[22] R. F. Paige, “A meta-method for formal method integration,” Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol. 1313, pp. 473–494, 1997.


About the Author: Fang Yan


Fang received her bachelor’s degree in Electronic and Information Engineering from the Civil Aviation University of China and completed her master’s degree at the École Nationale Supérieure d’Ingénieurs de Constructions Aéronautiques. After university, she worked as a lecturer and gained seven years of part-time experience in aviation certification, covering system safety and software safety. Her interest in safety methodology for autonomous systems led her to continue in academia and into this PhD.