Paper Title
ADVISE: ADaptive Feature Relevance and VISual Explanations for Convolutional Neural Networks
Paper Authors
Paper Abstract
To equip Convolutional Neural Networks (CNNs) with explainability, it is essential to interpret how opaque models make specific decisions, understand what causes their errors, improve the architecture design, and identify unethical biases in the classifiers. This paper introduces ADVISE, a new explainability method that quantifies and leverages the relevance of each unit of the feature map to provide better visual explanations. To this end, we propose using adaptive-bandwidth kernel density estimation to assign a relevance score to each unit of the feature map with respect to the predicted class. We also propose an evaluation protocol to quantitatively assess the visual explainability of CNN models. We extensively evaluate our idea on the image classification task using AlexNet, VGG16, ResNet50, and Xception pretrained on ImageNet. We compare ADVISE with state-of-the-art visual explainability methods and show that the proposed method outperforms competing approaches in quantifying feature relevance and visual explainability while maintaining competitive time complexity. Our experiments further show that ADVISE fulfils the sensitivity and implementation independence axioms and passes the sanity checks. The implementation is available for reproducibility at https://github.com/dehshibi/ADVISE.
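The abstract's key technical step is scoring each feature-map unit with an adaptive-bandwidth kernel density estimate. The paper's exact estimator and class-conditional scoring rule are in the linked repository; the sketch below is only a minimal illustration of the general idea, with several assumed details that do not come from the abstract: an Abramson-style sample-point adaptive KDE, a Silverman rule-of-thumb pilot bandwidth, and a toy negative-entropy relevance score per channel.

```python
import numpy as np

def adaptive_kde(samples, grid, alpha=0.5):
    """Abramson-style sample-point adaptive KDE (illustrative only).

    A fixed-bandwidth pilot estimate sets local bandwidths h_i that
    shrink in dense regions and grow in sparse ones.
    """
    n = samples.size
    # Silverman's rule-of-thumb pilot bandwidth (an assumed choice).
    h0 = 1.06 * samples.std(ddof=1) * n ** (-1 / 5)
    # Pilot density at each sample point (Gaussian kernels, self-term included).
    sq = ((samples[:, None] - samples[None, :]) / h0) ** 2
    pilot = np.exp(-0.5 * sq).mean(axis=1) / (h0 * np.sqrt(2 * np.pi))
    g_mean = np.exp(np.log(pilot).mean())        # geometric mean of the pilot
    h_i = h0 * (pilot / g_mean) ** (-alpha)      # local (adaptive) bandwidths
    # Variable-bandwidth mixture evaluated on the grid:
    # f(x) = (1/n) * sum_i K((x - x_i) / h_i) / h_i
    diffs = (grid[:, None] - samples[None, :]) / h_i[None, :]
    return (np.exp(-0.5 * diffs ** 2) / (h_i * np.sqrt(2 * np.pi))).mean(axis=1)

def channel_relevance(feature_map, grid_size=128):
    """Toy per-channel relevance: negative entropy of the adaptive KDE of each
    channel's activations, so sharply peaked channels score high.
    `feature_map` has shape (C, H, W). This scoring rule is a stand-in, not
    the class-conditional score ADVISE actually uses."""
    scores = np.empty(feature_map.shape[0])
    for c, channel in enumerate(feature_map):
        acts = channel.ravel()
        grid = np.linspace(acts.min() - 1e-3, acts.max() + 1e-3, grid_size)
        dx = grid[1] - grid[0]
        dens = adaptive_kde(acts, grid) + 1e-12
        dens /= dens.sum() * dx                        # renormalise on the grid
        scores[c] = (dens * np.log(dens)).sum() * dx   # = -H(density)
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fmap = rng.standard_normal((8, 14, 14))   # stand-in for a conv feature map
    print(channel_relevance(fmap).round(3))
```

The exponent `alpha=0.5` is the classical Abramson sensitivity parameter; larger values make the bandwidth adapt more aggressively to the pilot density.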