Paper Title

Source-free Video Domain Adaptation by Learning Temporal Consistency for Action Recognition

Paper Authors

Yuecong Xu, Jianfei Yang, Haozhi Cao, Keyu Wu, Min Wu, Zhenghua Chen

Paper Abstract

Video-based Unsupervised Domain Adaptation (VUDA) methods improve the robustness of video models, enabling them to be applied to action recognition tasks across different environments. However, these methods require constant access to source data during the adaptation process. In many real-world applications, subjects and scenes in the source video domain should be irrelevant to those in the target video domain, and with the increasing emphasis on data privacy, methods that require source data access raise serious privacy concerns. To address this concern, a more practical domain adaptation scenario is formulated as Source-Free Video-based Domain Adaptation (SFVDA). Though a few Source-Free Domain Adaptation (SFDA) methods exist for image data, they yield degraded performance in SFVDA due to the multi-modal nature of videos and the existence of additional temporal features. In this paper, we propose a novel Attentive Temporal Consistent Network (ATCoN) to address SFVDA by learning temporal consistency, guaranteed by two novel consistency objectives, namely feature consistency and source prediction consistency, performed across local temporal features. ATCoN further constructs effective overall temporal features by attending to local temporal features based on prediction confidence. Empirical results demonstrate the state-of-the-art performance of ATCoN across various cross-domain action recognition benchmarks.
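
To make the abstract's two consistency objectives and the confidence-based attention concrete, below is a minimal PyTorch-style sketch. It is an illustrative reading of the abstract, not the paper's actual implementation: the function name `atcon_losses`, the MSE instantiation of feature consistency, the KL instantiation of source prediction consistency, and the softmax-over-confidence attention are all assumptions; only the overall structure (two consistency losses over local temporal features, plus a confidence-weighted overall feature) follows the abstract.

```python
import torch
import torch.nn.functional as F

def atcon_losses(local_feats, source_classifier):
    """Hypothetical sketch of the two consistency objectives described in
    the abstract, computed across K local temporal features per clip.

    local_feats:       tensor of shape (B, K, D), K local temporal
                       features of dimension D for each of B clips.
    source_classifier: frozen classifier head from the source model,
                       mapping D-dim features to C class logits.
    """
    B, K, D = local_feats.shape

    # Feature consistency: encourage each local temporal feature to stay
    # close to the clip-level mean feature (one plausible instantiation).
    mean_feat = local_feats.mean(dim=1, keepdim=True)                 # (B, 1, D)
    feat_consistency = F.mse_loss(local_feats,
                                  mean_feat.expand_as(local_feats))

    # Source prediction consistency: predictions from the frozen source
    # classifier should agree across local temporal features, here
    # measured as KL divergence to the mean prediction.
    logits = source_classifier(local_feats.reshape(B * K, D))         # (B*K, C)
    probs = F.softmax(logits, dim=-1).reshape(B, K, -1)               # (B, K, C)
    mean_prob = probs.mean(dim=1, keepdim=True)                       # (B, 1, C)
    pred_consistency = F.kl_div(probs.log(),
                                mean_prob.expand_as(probs),
                                reduction="batchmean")

    # Confidence-based attention: weight local features by prediction
    # confidence (max class probability) to build the overall feature.
    confidence = probs.max(dim=-1).values                             # (B, K)
    attn = F.softmax(confidence, dim=1).unsqueeze(-1)                 # (B, K, 1)
    overall_feat = (attn * local_feats).sum(dim=1)                    # (B, D)

    return feat_consistency, pred_consistency, overall_feat
```

In a full adaptation loop these two losses would presumably be minimized on unlabeled target videos only, with the source classifier kept frozen, consistent with the source-free setting in which source data is never accessed during adaptation.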
