Paper Title

Adaptive Occlusion Sensitivity Analysis for Visually Explaining Video Recognition Networks

Paper Authors

Tomoki Uchiyama, Naoya Sogi, Satoshi Iizuka, Koichiro Niinuma, Kazuhiro Fukui

Abstract

This paper proposes a method for visually explaining the decision-making process of video recognition networks with a temporal extension of occlusion sensitivity analysis, called Adaptive Occlusion Sensitivity Analysis (AOSA). The key idea is to occlude a specific volume of data with a 3D mask in the input 3D temporal-spatial data space and then measure the resulting change in the output score. An occluded volume that produces a larger change is regarded as a more critical element for classification. However, while occlusion sensitivity analysis is commonly used to analyze single-image classification, applying this idea to video classification is not straightforward, as a simple fixed cuboid cannot handle complicated motions. To solve this issue, we adaptively set the shape of the 3D occlusion mask with reference to the motion. Our flexible mask adaptation is performed by considering the temporal continuity and spatial co-occurrence of the optical flows extracted from the input video data. We further propose a novel method to reduce the computational cost of the proposed method using a first-order approximation of the output score with respect to the input video. We demonstrate the effectiveness of our method through extensive comparisons with conventional methods in terms of the deletion/insertion metric and the pointing metric on the UCF101 dataset and the Kinetics-400 and Kinetics-700 datasets.
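The core occlude-and-measure loop described in the abstract can be illustrated with a minimal sketch of the plain (fixed-cuboid) baseline that AOSA extends. Everything here is an assumption for illustration: the video is a NumPy array of shape `(T, H, W)`, `score_fn` is a stand-in for the target class score of a video recognition network, and the mask is a fixed cuboid rather than the paper's flow-adaptive mask.

```python
import numpy as np

def occlusion_sensitivity_3d(video, score_fn, mask_size=(4, 8, 8),
                             stride=(4, 8, 8), fill=0.0):
    """Sketch of fixed-cuboid occlusion sensitivity on a (T, H, W) video.

    For each cuboid position, occlude that volume with `fill`, re-score the
    video, and record the score drop over the occluded voxels. A larger drop
    marks a more critical volume for the classification. AOSA replaces this
    fixed cuboid with a mask shaped adaptively from optical flow.
    """
    base = score_fn(video)               # score of the unoccluded video
    T, H, W = video.shape
    dt, dh, dw = mask_size
    st, sh, sw = stride
    heat = np.zeros(video.shape, dtype=float)
    for t in range(0, T, st):
        for y in range(0, H, sh):
            for x in range(0, W, sw):
                occluded = video.copy()
                occluded[t:t+dt, y:y+dh, x:x+dw] = fill
                # larger drop in score -> more critical volume
                heat[t:t+dt, y:y+dh, x:x+dw] = base - score_fn(occluded)
    return heat
```

With a toy `score_fn` that only looks at one spatio-temporal region, the heat map is positive exactly over that region, showing how the score drop localizes the evidence. The paper's first-order approximation further avoids re-running the network for every mask position.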
