Paper Title

The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies

Authors

Markus, Aniek F., Kors, Jan A., Rijnbeek, Peter R.

Abstract

Artificial intelligence (AI) has huge potential to improve the health and well-being of people, but adoption in clinical practice is still limited. Lack of transparency is identified as one of the main barriers to implementation, as clinicians should be confident the AI system can be trusted. Explainable AI has the potential to overcome this issue and can be a step towards trustworthy AI. In this paper, we review the recent literature to provide guidance to researchers and practitioners on the design of explainable AI systems for the health-care domain and contribute to the formalization of the field of explainable AI. We argue that the reason to demand explainability determines what should be explained, as this determines the relative importance of the properties of explainability (i.e. interpretability and fidelity). Based on this, we propose a framework to guide the choice between classes of explainable AI methods (explainable modelling versus post-hoc explanation; model-based, attribution-based, or example-based explanations; global and local explanations). Furthermore, we find that quantitative evaluation metrics, which are important for objective standardized evaluation, are still lacking for some properties (e.g. clarity) and types of explanations (e.g. example-based methods). We conclude that explainable modelling can contribute to trustworthy AI, but the benefits of explainability still need to be proven in practice, and complementary measures might be needed to create trustworthy AI in health care (e.g. reporting data quality, performing extensive (external) validation, and regulation).
