Paper Title
Application of Orthogonal Defect Classification for Software Reliability Analysis
Paper Authors
Paper Abstract
The modernization of existing and new nuclear power plants with digital instrumentation and control systems (DI&C) is a recent and highly trending topic. However, both the United States (U.S.) Nuclear Regulatory Commission (NRC) and the industry lack a strong consensus on best-estimate reliability methodologies. In this work, we develop an approach called Orthogonal-defect Classification for Assessing Software Reliability (ORCAS) to quantify the probabilities of various software failure modes in a DI&C system. The method utilizes accepted industry methodologies for quality assurance that are verified by experimental evidence. In essence, the approach combines a semantic failure classification model with a reliability growth model to predict the probability of failure modes of a software system. A case study was conducted on a representative I&C platform (ChibiOS) running smart sensor acquisition software developed by Virginia Commonwealth University (VCU). The testing and evidence collection guidance in ORCAS was applied, and defects were uncovered in the software. Qualitative evidence, such as modified condition/decision coverage, was used to gauge the completeness and trustworthiness of the assessment, while quantitative evidence was used to determine the software failure probabilities. The reliability of the software was then estimated and compared to existing operational data of the sensor device. It is demonstrated that by using ORCAS, a semantic reasoning framework can be developed to justify whether the software is reliable (or unreliable) while still leveraging the strengths of existing methods.
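To illustrate the reliability growth modeling step mentioned in the abstract, the following is a minimal sketch that fits a Goel-Okumoto non-homogeneous Poisson process model to cumulative defect-discovery counts and derives a residual-defect estimate and failure intensity. The choice of the Goel-Okumoto form, the crude grid-search fitting, and all data values here are illustrative assumptions, not the paper's actual model, fitting procedure, or case-study data.

```python
import math

def goel_okumoto(t, a, b):
    # Expected cumulative defects found by test time t: mu(t) = a * (1 - e^(-b*t)),
    # where a = total (eventual) defect count and b = per-defect detection rate.
    return a * (1.0 - math.exp(-b * t))

def fit_grid(times, counts):
    # Crude least-squares grid search over (a, b); illustrative only --
    # a real analysis would use maximum-likelihood estimation.
    n_found = counts[-1]
    best = None
    for a in (n_found * (1.0 + k / 20.0) for k in range(1, 41)):   # a in (n, 3n]
        for b in (j / 200.0 for j in range(1, 201)):               # b in (0, 1]
            sse = sum((goel_okumoto(t, a, b) - c) ** 2
                      for t, c in zip(times, counts))
            if best is None or sse < best[0]:
                best = (sse, a, b)
    return best[1], best[2]

# Hypothetical cumulative defects found by the end of each test week.
times  = [1, 2, 3, 4, 5, 6, 7, 8]
counts = [5, 9, 12, 14, 15, 16, 16, 17]

a, b = fit_grid(times, counts)
residual  = a - counts[-1]                  # expected defects still latent
intensity = a * b * math.exp(-b * times[-1])  # current failure rate lambda(t)
print(f"a = {a:.1f} total defects, {residual:.1f} residual, "
      f"intensity = {intensity:.4f} failures/week")
```

The fitted intensity is the kind of quantitative evidence the abstract describes: it converts observed defect-discovery history into an estimated probability of failure per unit of test time, which can then be compared against operational data.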