Paper Title

On Interpretability of Artificial Neural Networks: A Survey

Paper Authors

Fenglei Fan, Jinjun Xiong, Mengzhou Li, Ge Wang

Abstract

Deep learning as represented by the artificial deep neural networks (DNNs) has achieved great success in many important areas that deal with text, images, videos, graphs, and so on. However, the black-box nature of DNNs has become one of the primary obstacles for their wide acceptance in mission-critical applications such as medical diagnosis and therapy. Due to the huge potential of deep learning, interpreting neural networks has recently attracted much research attention. In this paper, based on our comprehensive taxonomy, we systematically review recent studies in understanding the mechanism of neural networks, describe applications of interpretability especially in medicine, and discuss future directions of interpretability research, such as in relation to fuzzy logic and brain science.
