Paper Title

Interpreting Deep Learning Model Using Rule-based Method

Authors

Xiaojian Wang, Jingyuan Wang, Ke Tang

Abstract

Deep learning models are favored in many research and industry areas and have reached accuracy that approximates or even surpasses human level. However, they have long been regarded by researchers as black-box models because of their complicated nonlinear properties. In this paper, we propose a multi-level decision framework to provide a comprehensive interpretation of deep neural network models. In this framework, a multi-level decision structure (MLD) is first constructed by fitting a decision tree for each neuron and aggregating the trees together; the MLD can approximate the behavior of the target neural network with high efficiency and high fidelity. For local explanation of individual samples, two algorithms are proposed on top of the MLD structure: a forward decision generation algorithm that produces sample decisions, and a backward rule induction algorithm that recursively extracts the rule mapping for a sample. For global explanation, frequency-based and out-of-bag-based methods are proposed to extract the features that are important to the network's decisions. Furthermore, experiments on the MNIST and National Free Pre-Pregnancy Check-up (NFPC) datasets demonstrate the effectiveness and interpretability of the MLD framework. In the evaluation, both functionally-grounded and human-grounded methods are used to ensure credibility.
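To make the per-neuron surrogate idea from the abstract concrete, the following is a minimal illustrative sketch, not the authors' MLD implementation: it approximates one toy neuron's activation with a depth-1 regression stump (the simplest decision tree) and measures fidelity as R² against the neuron's output. The neuron weights, data, and `fit_stump` helper are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "neuron": ReLU of a fixed linear combination of 3 input features.
# (Illustrative stand-in for one hidden unit of a trained network.)
w = np.array([1.5, -2.0, 0.5])
def neuron(X):
    return np.maximum(X @ w, 0.0)

X = rng.normal(size=(500, 3))
y = neuron(X)

def fit_stump(X, y):
    """Exhaustively pick the (feature, threshold) split minimizing SSE."""
    best = None
    for j in range(X.shape[1]):
        order = np.argsort(X[:, j])
        xs, ys = X[order, j], y[order]
        for i in range(1, len(xs)):
            if xs[i] == xs[i - 1]:
                continue
            thr = 0.5 * (xs[i] + xs[i - 1])
            left, right = ys[:i], ys[i:]
            sse = ((left - left.mean()) ** 2).sum() \
                + ((right - right.mean()) ** 2).sum()
            if best is None or sse < best[0]:
                best = (sse, j, thr, left.mean(), right.mean())
    return best[1:]  # feature index, threshold, left mean, right mean

j, thr, left_mean, right_mean = fit_stump(X, y)

def stump_predict(X):
    # The surrogate: a single interpretable rule on one feature.
    return np.where(X[:, j] <= thr, left_mean, right_mean)

# Fidelity of the surrogate to the neuron, measured as R^2.
ss_res = ((y - stump_predict(X)) ** 2).sum()
ss_tot = ((y - y.mean()) ** 2).sum()
r2 = 1.0 - ss_res / ss_tot
print(f"split on x{j} at {thr:.2f}, surrogate R^2 = {r2:.2f}")
```

A full MLD-style structure would fit a deeper tree per neuron and aggregate the trees across layers; the stump here is just the smallest case that shows the fit-then-measure-fidelity loop.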
