Paper Title
Mining Relations among Cross-Frame Affinities for Video Semantic Segmentation
Paper Authors
Paper Abstract
The essence of video semantic segmentation (VSS) is how to leverage temporal information for prediction. Previous efforts are mainly devoted to developing new techniques for computing cross-frame affinities, such as optical flow and attention. Instead, this paper contributes from a different angle by mining relations among cross-frame affinities, upon which better temporal information aggregation can be achieved. We explore relations among affinities in two aspects: single-scale intrinsic correlations and multi-scale relations. Inspired by traditional feature processing, we propose Single-scale Affinity Refinement (SAR) and Multi-scale Affinity Aggregation (MAA). To make MAA feasible to execute, we propose a Selective Token Masking (STM) strategy that selects a consistent subset of reference tokens shared across scales when computing affinities, which also improves the efficiency of our method. Finally, the cross-frame affinities strengthened by SAR and MAA are adopted to adaptively aggregate temporal information. Our experiments demonstrate that the proposed method performs favorably against state-of-the-art VSS methods. The code is publicly available at https://github.com/GuoleiSun/VSS-MRCFA.
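To make the affinity-then-aggregate pipeline concrete, below is a minimal PyTorch sketch of two ingredients the abstract names: computing a cross-frame affinity between query-frame and reference-frame tokens, and selecting a reduced reference-token subset before that computation, in the spirit of STM. Everything here is illustrative: the function names (`select_reference_tokens`, `cross_frame_affinity`), the norm-based selection score, and the tensor shapes are assumptions, not the authors' implementation, which additionally includes the SAR and MAA modules omitted here.

```python
import torch

def select_reference_tokens(ref_feats: torch.Tensor, k: int):
    """Keep the k reference tokens with the largest L2 norm.

    The norm-based score is a stand-in for whatever criterion the
    paper's STM strategy actually uses; the point is that the same
    reduced token subset can be reused when affinities are computed
    at several scales.
    """
    scores = ref_feats.norm(dim=1)        # (N_ref,)
    idx = scores.topk(k).indices          # (k,)
    return ref_feats[idx], idx

def cross_frame_affinity(query_feats: torch.Tensor, ref_feats: torch.Tensor):
    """Scaled dot-product affinity from query tokens to reference tokens."""
    d = query_feats.shape[1]
    logits = query_feats @ ref_feats.t() / d ** 0.5   # (N_q, k)
    return logits.softmax(dim=1)          # each row sums to 1

# Toy shapes: 64 query tokens, 256 reference tokens, 32 channels.
torch.manual_seed(0)
query = torch.randn(64, 32)
reference = torch.randn(256, 32)

ref_subset, kept_idx = select_reference_tokens(reference, k=128)
affinity = cross_frame_affinity(query, ref_subset)    # (64, 128)

# Temporal aggregation: each query token becomes an affinity-weighted
# mixture of the selected reference tokens.
aggregated = affinity @ ref_subset                    # (64, 32)
print(aggregated.shape)                               # torch.Size([64, 32])
```

Because the same k reference tokens would be kept at every scale, the per-scale affinity maps all share the same reference dimension, which is what would allow them to be refined (SAR) and fused across scales (MAA) before the final aggregation step.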