Paper title
A taxonomic system for failure cause analysis of open source AI incidents
Paper authors
Paper abstract
While certain industrial sectors (e.g., aviation) have a long history of mandatory incident reporting complete with analytical findings, the practice of artificial intelligence (AI) safety benefits from no such mandate and thus analyses must be performed on publicly known ``open source'' AI incidents. Although the exact causes of AI incidents are seldom known by outsiders, this work demonstrates how to apply expert knowledge on the population of incidents in the AI Incident Database (AIID) to infer the potential and likely technical causative factors that contribute to reported failures and harms. We present early work on a taxonomic system that covers a cascade of interrelated incident factors, from system goals (nearly always known) to methods / technologies (knowable in many cases) and technical failure causes (subject to expert analysis) of the implicated systems. We pair this ontology structure with a comprehensive classification workflow that leverages expert knowledge and community feedback, resulting in taxonomic annotations grounded by incident data and human expertise.
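To make the cascade concrete, the following is a minimal sketch, in Python, of what one taxonomic annotation along the described cascade (system goals, methods/technologies, technical failure causes) might look like. The class name, field names, and example values are hypothetical illustrations and are not the actual AIID schema or the authors' published taxonomy.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class IncidentAnnotation:
    """Hypothetical annotation following the goals -> methods -> failure-causes cascade."""
    incident_id: int
    goals: List[str] = field(default_factory=list)           # nearly always known from reports
    methods: List[str] = field(default_factory=list)         # knowable in many cases
    failure_causes: List[str] = field(default_factory=list)  # inferred via expert analysis
    cause_confidence: str = "potential"                       # e.g., "known", "likely", "potential"

# Example annotation with invented values, for illustration only.
example = IncidentAnnotation(
    incident_id=101,
    goals=["Content recommendation"],
    methods=["Supervised learning", "Collaborative filtering"],
    failure_causes=["Distributional shift", "Misspecified objective"],
    cause_confidence="likely",
)
```

In this sketch, the `cause_confidence` field reflects the abstract's distinction between factors that are known, knowable, and only inferable by experts; the real workflow would additionally record reviewer identity and community feedback, which are omitted here.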