Title
Dual Contrastive Learning for Spatio-temporal Representation
Authors
Abstract
Contrastive learning has shown promising potential in self-supervised spatio-temporal representation learning. Most works naively sample different clips to construct positive and negative pairs. However, we observe that this formulation biases the model toward the background scene. The underlying reasons are twofold. First, the scene difference is usually more noticeable and easier to discriminate than the motion difference. Second, clips sampled from the same video often share similar backgrounds but contain distinct motions; simply regarding them as positive pairs steers the model toward the static background rather than the motion pattern. To tackle this challenge, this paper presents a novel dual contrastive formulation. Concretely, we decouple the input RGB video sequence into two complementary modes: static scene and dynamic motion. Then, the original RGB features are pulled closer to the static features and the aligned dynamic features, respectively. In this way, the static scene and the dynamic motion are simultaneously encoded into a compact RGB representation. We further conduct feature space decoupling via activation maps to distill static- and dynamic-related features. We term our method \textbf{D}ual \textbf{C}ontrastive \textbf{L}earning for spatio-temporal \textbf{R}epresentation (DCLR). Extensive experiments demonstrate that DCLR learns effective spatio-temporal representations and obtains state-of-the-art or comparable performance on the UCF-101, HMDB-51, and Diving-48 datasets.
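The dual contrastive objective described above can be sketched as two InfoNCE terms sharing one RGB anchor: one pulls the anchor toward its static-scene feature, the other toward its dynamic-motion feature. The following NumPy sketch is illustrative only, assuming pre-extracted, L2-normalized feature vectors; the function names, temperature value, and the simple sum of the two losses are assumptions, not the paper's exact implementation.

```python
import numpy as np

def normalize(v):
    """L2-normalize feature vectors along the last axis."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def info_nce(anchor, positive, negatives, tau=0.07):
    """InfoNCE loss for a single anchor vector.

    anchor:    (d,)   L2-normalized query feature
    positive:  (d,)   L2-normalized positive feature
    negatives: (n, d) L2-normalized negative features
    """
    pos = np.exp(anchor @ positive / tau)
    neg = np.exp(anchor @ negatives.T / tau).sum()
    return -np.log(pos / (pos + neg))

def dual_contrastive_loss(rgb, static, dynamic,
                          neg_static, neg_dynamic, tau=0.07):
    """Dual contrastive sketch: the RGB anchor is aligned with both its
    static-scene feature and its dynamic-motion feature, so both modes
    are encoded into the RGB representation."""
    l_static = info_nce(rgb, static, neg_static, tau)
    l_dynamic = info_nce(rgb, dynamic, neg_dynamic, tau)
    return l_static + l_dynamic
```

Negatives for each term would come from the static/dynamic features of other videos in the batch; a matched positive yields a lower loss than a mismatched one, which is what drives the anchor toward both complementary modes.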