Paper Title

Explaining Convolutional Neural Networks through Attribution-Based Input Sampling and Block-Wise Feature Aggregation

Paper Authors

Sam Sattarzadeh, Mahesh Sudhakar, Anthony Lem, Shervin Mehryar, K. N. Plataniotis, Jongseong Jang, Hyunwoo Kim, Yeonjeong Jeong, Sangmin Lee, Kyunghoon Bae

Paper Abstract

As an emerging field in Machine Learning, Explainable AI (XAI) has been offering remarkable performance in interpreting the decisions made by Convolutional Neural Networks (CNNs). To achieve visual explanations for CNNs, methods based on class activation mapping and randomized input sampling have gained great popularity. However, the attribution methods based on these techniques provide low-resolution and blurry explanation maps that limit their explanation power. To circumvent this issue, visualization based on various layers is sought. In this work, we collect visualization maps from multiple layers of the model based on an attribution-based input sampling technique and aggregate them to reach a fine-grained and complete explanation. We also propose a layer selection strategy that applies to the whole family of CNN-based models, based on which our extraction framework is applied to visualize the last layers of each convolutional block of the model. Moreover, we perform an empirical analysis of the efficacy of the derived lower-level information in enhancing the represented attributions. Comprehensive experiments conducted on shallow and deep models trained on natural and industrial datasets, using both ground-truth-based and model-truth-based evaluation metrics, validate our proposed algorithm by meeting or outperforming the state-of-the-art methods in terms of explanation ability and visual quality, and demonstrate that our method is stable regardless of the size of the objects or instances to be explained.

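The abstract outlines the core recipe: select the last layer of each convolutional block, obtain an attribution map for each selected layer, and fuse the block-wise maps into one fine-grained explanation. The sketch below is not the authors' implementation; it assumes a torchvision ResNet-50, uses a simple gradient-weighted activation map per block as a stand-in for the paper's attribution-based input sampling, and averages the upsampled block-wise maps instead of the paper's cascaded fusion.

```python
# Minimal sketch (not the authors' code): hook the last layer of each
# convolutional block, build one saliency map per block, and aggregate.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

# Layer selection: the last layer of each conv block. For torchvision's
# ResNet-50 these are layer1..layer4; other CNN families use different names.
target_layers = [model.layer1, model.layer2, model.layer3, model.layer4]

features, grads = {}, {}

def save_features(name):
    def hook(_module, _inputs, output):
        features[name] = output
        # Capture the gradient of this block's output during backward.
        output.register_hook(lambda g: grads.__setitem__(name, g))
    return hook

for i, layer in enumerate(target_layers):
    layer.register_forward_hook(save_features(f"block{i}"))

def explain(x, class_idx=None):
    """Return a fused HxW saliency map for a (1, 3, H, W) image tensor."""
    logits = model(x)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()

    maps = []
    for name, fmap in features.items():
        # Gradient-weighted channel average (Grad-CAM-style surrogate for the
        # attribution-based input sampling described in the abstract).
        weights = grads[name].mean(dim=(2, 3), keepdim=True)
        cam = F.relu((weights * fmap).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear",
                            align_corners=False)
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
        maps.append(cam)

    # Block-wise aggregation: a plain average of the upsampled maps; the paper
    # uses a more elaborate cascaded fusion of the block-wise maps.
    return torch.stack(maps).mean(dim=0).squeeze().detach()
```

Calling `explain(x)` on a normalized input image returns an HxW map that can be overlaid on the image; recovering the full method would require replacing the gradient weighting with the paper's attribution-based input sampling and the averaging with its fusion module.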