Paper Title
Learn2Augment: Learning to Composite Videos for Data Augmentation in Action Recognition
Paper Authors
Paper Abstract
We address the problem of data augmentation for video action recognition. Standard augmentation strategies in video are hand-designed and sample the space of possible augmented data points either at random, without knowing which augmented points will be better, or through heuristics. We propose to learn what makes a good video for action recognition and to select only high-quality samples for augmentation. In particular, we choose video compositing of a foreground and a background video as the data augmentation process, which results in diverse and realistic new samples. We learn which pairs of videos to augment without having to actually composite them. This reduces the space of possible augmentations, which has two advantages: it saves computational cost and increases the accuracy of the final trained classifier, as the augmented pairs are of higher quality than average. We present experimental results on the entire spectrum of training settings: few-shot, semi-supervised, and fully supervised. We observe consistent improvements across all of them over prior work and baselines on Kinetics, UCF101, and HMDB51, and achieve a new state of the art in settings with limited data. We see improvements of up to 8.6% in the semi-supervised setting.
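To make the select-then-composite idea from the abstract concrete, below is a minimal, purely illustrative Python sketch. The names `score_pair`, `composite`, and `select_and_augment` are hypothetical and stand in for the paper's learned pair-selection model and compositing pipeline; the key point is that all candidate (foreground, background) pairs are scored cheaply first, and only the top-scoring pairs are actually composited.

```python
import random


def score_pair(foreground_id, background_id):
    # Placeholder for the learned predictor that estimates, without
    # compositing, how useful a (foreground, background) pair would be
    # for training the action classifier. Here: a deterministic dummy score.
    random.seed(hash((foreground_id, background_id)) % (2**32))
    return random.random()


def composite(foreground_id, background_id):
    # Placeholder for the expensive compositing step (e.g., segmenting the
    # actor from the foreground clip and pasting it onto the background clip).
    return f"composite({foreground_id}, {background_id})"


def select_and_augment(foreground_clips, background_clips, top_k):
    """Score all candidate pairs cheaply, then composite only the top-k.

    This mirrors the abstract's claim: restricting augmentation to
    high-quality pairs saves compute and yields better training samples.
    """
    candidates = [(f, b) for f in foreground_clips for b in background_clips]
    ranked = sorted(candidates, key=lambda pair: score_pair(*pair), reverse=True)
    return [composite(f, b) for f, b in ranked[:top_k]]


if __name__ == "__main__":
    fg = ["fg_clip_0", "fg_clip_1", "fg_clip_2"]
    bg = ["bg_clip_0", "bg_clip_1"]
    for sample in select_and_augment(fg, bg, top_k=2):
        print(sample)
```

This is a sketch of the control flow only; the actual method learns the scoring function and performs real video compositing rather than returning string placeholders.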