Title
Explainable Deep Learning: A Field Guide for the Uninitiated
Authors
Abstract
Deep neural networks (DNNs) have become a proven and indispensable machine learning tool. Because DNNs are black-box models, it remains difficult to diagnose which aspects of a model's input drive its decisions. In countless real-world domains, from legislation and law enforcement to healthcare, such diagnosis is essential to ensure that DNN decisions are driven by aspects appropriate to the context of use. The development of methods and studies enabling the explanation of a DNN's decisions has thus blossomed into an active, broad area of research. A practitioner wanting to study explainable deep learning may be intimidated by the plethora of orthogonal directions the field has taken. This complexity is further exacerbated by competing definitions of what it means "to explain" the actions of a DNN and how to evaluate an approach's "ability to explain". This article offers a field guide to the space of explainable deep learning aimed at those uninitiated in the field. The field guide: i) introduces three simple dimensions defining the space of foundational methods that contribute to explainable deep learning, ii) discusses the evaluation of model explanations, iii) places explainability in the context of other related deep learning research areas, and iv) finally elaborates on user-oriented explanation design and potential future directions in explainable deep learning. We hope the guide serves as an easy-to-digest starting point for those just embarking on research in this field.