Paper Title
Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications
Authors
Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Christopher J. Anders, Klaus-Robert Müller
Abstract
With the broad and highly successful use of machine learning in industry and the sciences, there has been a growing demand for Explainable AI. Interpretability and explanation methods for gaining a better understanding of the problem-solving abilities and strategies of nonlinear machine learning models, in particular deep neural networks, are therefore receiving increasing attention. In this work we aim to (1) provide a timely overview of this active emerging field, with a focus on 'post-hoc' explanations, and explain its theoretical foundations, (2) put interpretability algorithms to the test from both a theoretical and a comparative evaluation perspective using extensive simulations, (3) outline best-practice aspects, i.e., how to best integrate interpretation methods into the standard usage of machine learning, and (4) demonstrate the successful usage of Explainable AI in a representative selection of application scenarios. Finally, we discuss challenges and possible future directions of this exciting foundational field of machine learning.