Reasoning about Software Safety Integrity and Assurance
If you have a question about this talk, please contact Stephen Clark.
With increasing amounts of software being used within safety-critical applications, there is growing concern as to how designers and regulators can justify that this software is sufficiently safe for use. At the system level, it is reasonable and sensible to talk in terms of risk mitigation, and to establish arguments that the probability of occurrence of identified risks is acceptably low. Whilst it is not difficult to cascade these risk-based requirements to software, it becomes extremely difficult to reason about software system failure probabilistically (for all but trivial examples). Instead, qualitative arguments and evidence (concerning the satisfaction of specific software safety properties and requirements) are typically offered up. These can be test-based arguments, or analytic (e.g. proof-based) arguments. However, these arguments (even when deductive reasoning is employed) cannot be established with absolute certainty. There remains epistemic uncertainty surrounding such approaches: Has the software (and its interface with the real world) been modelled adequately? Can the abstractions used be justified? Are the tools used in the process qualified? This talk will examine the problems of exchanging safety arguments concerning real-world risk (associated with aleatoric uncertainty) for issues of confidence in software safety arguments (associated with epistemic uncertainty). We'll present these concerns in the context of structured (but informal) argumentation approaches used within software safety justifications, and the guidance that we have developed for safety-critical industries as part of the Software Systems Engineering Initiative (www.ssei.org.uk).
Dr Tim Kelly is a Senior Lecturer within the Department of Computer Science at the University of York. He is Academic Theme Leader for Dependability within the Ministry of Defence funded Software Systems Engineering Initiative, and was Deputy Director of the Rolls-Royce Systems and Software Engineering University Technology Centre. His research interests include safety case management, software safety analysis and justification, software architecture safety, certification of adaptive and learning systems, and the dependability of "Systems of Systems". He has supervised a number of research projects in these areas with funding and support from the European Union, EPSRC, Airbus, the Railway Safety and Standards Board, Rolls-Royce, BAE Systems, and the Ministry of Defence. Dr Kelly has published over 140 papers on safety-critical systems development and assurance issues.
This talk is part of the Computer Laboratory Wednesday Seminars series.