Paper Title
Using Decision Tree as Local Interpretable Model in Autoencoder-based LIME
Paper Authors
Paper Abstract
Nowadays, deep neural networks are used in many domains because of their high accuracy. However, they are considered "black boxes", meaning that they are not explainable to humans. On the other hand, in domains such as medicine, economics, and self-driving cars, users want the model to be interpretable so they can decide whether to trust its results. In this work, we present a modified version of an autoencoder-based approach for local interpretability called ALIME. ALIME itself is inspired by a well-known method called Local Interpretable Model-agnostic Explanations (LIME). LIME generates an explanation for a single instance by generating new data around that instance and training a local linear interpretable model. ALIME uses an autoencoder to weight the new data around the sample. Nevertheless, ALIME, just like LIME, uses a linear model as the interpretable model to be trained locally. This work proposes a new approach that uses a decision tree instead of the linear model as the interpretable model. We evaluate the proposed model with respect to stability, local fidelity, and interpretability on different datasets. Compared to ALIME, the experiments show significant results on stability and local fidelity and improved results on interpretability.
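
To make the pipeline described in the abstract concrete, below is a minimal sketch in Python with scikit-learn: perturb the instance, weight the perturbations by their distance to the instance in an encoder's latent space (the ALIME-style weighting), and fit a shallow decision tree as the local interpretable model. This is not the authors' implementation; the function name explain_with_tree, the Gaussian perturbation scheme, the median-distance kernel width, and the use of PCA as a stand-in for a trained autoencoder's encoder are all illustrative assumptions.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeRegressor, export_text


def explain_with_tree(black_box_predict, encoder_transform, x, feature_scale=1.0,
                      n_samples=500, max_depth=3, kernel_width=None, seed=0):
    """Fit a shallow decision tree to a black-box model around instance x.

    Perturbations are weighted by their distance to x in the encoder's latent
    space (ALIME-style weighting); the local surrogate is a decision tree
    rather than the linear model used by LIME/ALIME.
    """
    rng = np.random.default_rng(seed)
    # 1. Generate new data around the instance by Gaussian perturbation.
    Z = x + feature_scale * rng.standard_normal((n_samples, x.shape[0]))
    # 2. Query the black box for its predictions on the perturbed samples.
    y = black_box_predict(Z)
    # 3. Weight each sample by its latent-space distance to the instance.
    d = np.linalg.norm(encoder_transform(Z) - encoder_transform(x[None, :]), axis=1)
    if kernel_width is None:
        # Heuristic assumption: median distance keeps weights non-degenerate.
        kernel_width = np.median(d) + 1e-12
    w = np.exp(-(d ** 2) / (kernel_width ** 2))
    # 4. Fit a shallow decision tree as the local interpretable model.
    tree = DecisionTreeRegressor(max_depth=max_depth, random_state=seed)
    tree.fit(Z, y, sample_weight=w)
    return tree


# Usage example: explain one prediction of a random forest on a toy dataset.
data = load_breast_cancer()
X, y = data.data, data.target
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
# PCA is only a linear stand-in for a trained autoencoder's encoder here.
encoder = PCA(n_components=5).fit(X)

surrogate = explain_with_tree(
    black_box_predict=lambda Z: black_box.predict_proba(Z)[:, 1],
    encoder_transform=encoder.transform,
    x=X[0],
    feature_scale=X.std(axis=0),
)
print(export_text(surrogate, feature_names=list(data.feature_names)))

The printed tree is the rule-based local explanation for the chosen instance; in the paper's setting, the PCA stand-in would be replaced by the encoder of an autoencoder trained on the training data, as in ALIME.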