Automating Safety Argument Change Impact Analysis for Machine Learning Components

2022 IEEE 27th Pacific Rim International Symposium on Dependable Computing (PRDC), 2022

Cited by 1 | Viewed 7
Abstract
The need to make sense of complex input data within a vast variety of unpredictable scenarios has been a key driver for the use of machine learning (ML), for example in Automated Driving Systems (ADS). Such systems are usually safety-critical and therefore need to be safety assured. In order to consider the results of the safety assurance activities, including uncovering previously unknown hazardous scenarios, a continuous approach to arguing safety is required, whilst iteratively improving ML-specific safety-relevant properties, such as robustness and prediction certainty. Such a continuous safety life cycle will only be practical with an efficient and effective approach to analyzing the impact of system changes on the safety case. In this paper, we propose a semi-automated approach for accurately identifying the impact of changes on safety arguments. We focus on arguments that reason about the sufficiency of the data used for the development of ML components. The approach qualitatively and quantitatively analyses the impact of changes in the input space of the considered ML component on other artifacts created during the execution of the safety life cycle, such as datasets and performance requirements, and makes recommendations to safety engineers for handling the identified impact. We implement the proposed approach in a model-based safety engineering environment called FASTEN, and we demonstrate its application for an ML-based pedestrian detection component of an ADS.
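The change impact analysis described above can be pictured as propagating a change along traceability links between safety life-cycle artifacts. The following is a minimal illustrative sketch, not the paper's implementation: the artifact names and the dependency structure are hypothetical, and the paper's approach additionally performs qualitative and quantitative analysis at each step.

```python
from collections import deque

# Hypothetical traceability links: artifact -> artifacts derived from it.
# Real safety cases would model these links explicitly, e.g. in FASTEN.
DEPENDS_ON = {
    "ODD": ["input_space_spec"],
    "input_space_spec": ["training_dataset", "performance_requirements"],
    "training_dataset": ["ml_model"],
    "performance_requirements": ["safety_argument"],
    "ml_model": ["safety_argument"],
}

def impacted_artifacts(changed: str) -> list[str]:
    """Breadth-first propagation of a change along traceability links."""
    impacted, seen = [], set()
    queue = deque(DEPENDS_ON.get(changed, []))
    while queue:
        artifact = queue.popleft()
        if artifact in seen:
            continue
        seen.add(artifact)
        impacted.append(artifact)
        queue.extend(DEPENDS_ON.get(artifact, []))
    return impacted

# A change to the ODD (e.g. a newly identified pedestrian scenario) flags
# every downstream artifact for review by the safety engineer.
print(impacted_artifacts("ODD"))
```

In this toy model a change in the input space (ODD) reaches the safety argument only via the intermediate artifacts, which is what makes recommendations to engineers possible: each flagged artifact corresponds to a concrete review or regeneration task.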
Keywords
Safety Cases, Machine Learning (ML), Operational Design Domain (ODD), Change Impact Analysis (CIA)