Software Error Classification and Dependency Assessment for NASA Missions

Leila Meshkat, Daniel Chang, Ying Shi

2024 Annual Reliability and Maintainability Symposium (RAMS), 2024

Abstract
Software anomalies are typically captured in databases, classified, and traced until they are resolved. These databases then remain part of the mission's history for reference as necessary. The classifications used for software anomalies are not always orthogonal, and it is not trivial for developers and software assurance personnel to map the anomaly in question to its related classification. Perhaps more significantly, the classification scheme and the attributes included in the standardized forms do not clarify whether or not the error in question, and its corresponding fix, affect other areas of the system. Our research indicates that a few keywords can help the users of these systems identify each of these factors with a reasonable level of confidence. Of course, since the information is captured in natural language and is highly specialized, these keywords may differ from one mission to another and from one center to another. However, tools that enable the team to develop their own keywords, or to modify keywords seeded into the system, could help classify anomalies correctly and better expose the dependencies between them. The first step is, of course, a clear, concise, and descriptive classification scheme. In this paper, we describe our efforts toward developing a standardized classification scheme, based on our examination of these schemes across several NASA centers as well as the literature. We also define the concept for a user-assisted analysis module based on keywords extracted from studying a subset of the problem failure reports for the Mars Perseverance Rover mission.
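As a rough illustration of the kind of user-assisted keyword matching the abstract describes, the following Python sketch classifies anomaly report text against team-editable seed keywords. The category names and seed terms here are hypothetical placeholders, not keywords taken from the paper or from any NASA mission data.

```python
# Minimal sketch (assumption): keyword-seeded classification of anomaly report
# text. Teams could edit the seed dictionary to match mission- or
# center-specific vocabulary, as the paper suggests.
from dataclasses import dataclass, field


@dataclass
class KeywordClassifier:
    # Mapping from classification label to its seed keywords.
    # Labels and keywords below are illustrative only.
    seeds: dict[str, set[str]] = field(default_factory=lambda: {
        "timing": {"deadline", "latency", "watchdog"},
        "interface": {"handshake", "message format", "telemetry packet"},
        "logic": {"off-by-one", "wrong branch", "incorrect state"},
    })

    def classify(self, report_text: str) -> dict[str, list[str]]:
        """Return, for each category, the seed keywords found in the report."""
        text = report_text.lower()
        hits = {cat: [kw for kw in kws if kw in text]
                for cat, kws in self.seeds.items()}
        # Keep only categories with at least one matching keyword.
        return {cat: found for cat, found in hits.items() if found}


if __name__ == "__main__":
    clf = KeywordClassifier()
    example = "Watchdog reset triggered after telemetry packet arrived late."
    print(clf.classify(example))
    # -> {'timing': ['watchdog'], 'interface': ['telemetry packet']}
```

In practice, a module like this would likely surface candidate categories to the analyst rather than assign them automatically, consistent with the paper's emphasis on user-assisted analysis.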
Keywords
Software Errors, Risk, Reliability, Empirical Analysis