Paper Title
Transcending XAI Algorithm Boundaries through End-User-Inspired Design
Paper Authors
Paper Abstract
The boundaries of existing explainable artificial intelligence (XAI) algorithms are confined to problems grounded in technical users' demand for explainability. This research paradigm disproportionately ignores the larger group of non-technical end users, who have a much higher demand for AI explanations for diverse explanation goals, such as making safer and better decisions and improving their predicted outcomes. The lack of explainability-focused functional support for end users may hinder the safe and accountable use of AI in high-stakes domains such as healthcare, criminal justice, finance, and autonomous driving systems. Building on prior human factors analyses of end users' requirements for XAI, we identify and model four novel XAI technical problems covering the full spectrum from the design to the evaluation of XAI algorithms: edge-case-based reasoning, customizable counterfactual explanations, collapsible decision trees, and a verifiability metric for evaluating XAI utility. Based on these newly identified research problems, we also discuss open problems in the technical development of user-centered XAI to inspire future research. Our work bridges human-centered XAI with the technical XAI community and calls for a new research paradigm on the technical development of user-centered XAI for the responsible use of AI in critical tasks.
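To make one of the four problems above concrete, the following is a minimal, purely illustrative sketch of a customizable counterfactual explanation: the user restricts the counterfactual search to the features they consider mutable, and the algorithm looks for the smallest such perturbation that flips the model's prediction. The helper name `find_counterfactual`, the per-feature step sizes, and the brute-force grid search are assumptions made for illustration only and do not reproduce the paper's actual formulation.

```python
# Illustrative sketch only: a toy "customizable counterfactual" search.
# All names and design choices here are hypothetical, not the paper's method.
import numpy as np
from itertools import product
from sklearn.linear_model import LogisticRegression

def find_counterfactual(model, x, mutable, steps, max_delta=3):
    """Search over user-selected (mutable) features for the smallest
    L1-norm perturbation of x that flips the model's prediction."""
    base = model.predict(x.reshape(1, -1))[0]
    best, best_cost = None, np.inf
    # Enumerate small integer multiples of each mutable feature's step size.
    deltas = range(-max_delta, max_delta + 1)
    for combo in product(deltas, repeat=len(mutable)):
        x_cf = x.copy()
        for f, k in zip(mutable, combo):
            x_cf[f] += k * steps[f]
        cost = np.linalg.norm(x_cf - x, ord=1)
        if cost < best_cost and model.predict(x_cf.reshape(1, -1))[0] != base:
            best, best_cost = x_cf, cost
    return best  # None if no counterfactual exists within the search box

# Toy usage: 3 features, and the user allows only features 0 and 2 to change.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + X[:, 2] > 0).astype(int)
model = LogisticRegression().fit(X, y)
x = X[0]
cf = find_counterfactual(model, x, mutable=[0, 2], steps={0: 0.5, 2: 0.5})
print("original:", x, "->", model.predict(x.reshape(1, -1))[0])
if cf is not None:
    print("counterfactual:", cf, "->", model.predict(cf.reshape(1, -1))[0])
```

The customization is the `mutable` argument: by fixing the remaining features, the user rules out counterfactuals that would recommend changes they cannot or will not act on, which is one way to read the end-user requirement the abstract describes.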