Paper Title

Domain Generalized Person Re-Identification via Cross-Domain Episodic Learning

Authors

Ci-Siang Lin, Yuan-Chia Cheng, Yu-Chiang Frank Wang

Abstract

Aiming at recognizing images of the same person across distinct camera views, person re-identification (re-ID) has been an active research topic in computer vision. Most existing re-ID works require collecting a large amount of labeled image data from the scenes of interest. When the data to be recognized differ from the source-domain training data, a number of domain adaptation approaches have been proposed. Nevertheless, one still needs to collect labeled or unlabeled target-domain data during training. In this paper, we tackle an even more challenging and practical setting, domain generalized (DG) person re-ID. That is, while a number of labeled source-domain datasets are available, we do not have access to any target-domain training data. In order to learn domain-invariant features without knowing the target domain of interest, we present an episodic learning scheme which advances meta-learning strategies to exploit the observed source-domain labeled data. The learned features would exhibit sufficient domain-invariant properties while not overfitting the source-domain data or ID labels. Our experiments on four benchmark datasets confirm the superiority of our method over state-of-the-art approaches.
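
The abstract describes an episodic scheme that borrows from meta-learning: in each training episode, the labeled source domains are split so that one of them plays the role of an unseen target domain. As a rough illustration only, the sketch below implements a generic MLDG/MAML-style episode in PyTorch; the names (episodic_loss, source_batches, inner_lr, meta_weight) and the toy model are hypothetical and do not reproduce the authors' exact method.

```python
# Illustrative episodic meta-learning step for domain-generalized re-ID.
# Assumes PyTorch >= 2.0 (torch.func.functional_call) and at least two
# labeled source domains per episode. Hypothetical sketch, not the paper's code.
import random
import torch
import torch.nn.functional as F
from torch.func import functional_call

def episodic_loss(model, source_batches, inner_lr=0.01, meta_weight=1.0):
    """source_batches: list of (images, id_labels) tuples, one per labeled source domain."""
    order = list(range(len(source_batches)))
    random.shuffle(order)
    meta_train, meta_test = order[:-1], order[-1]

    # 1) ID-classification loss on the meta-train source domains.
    train_loss = sum(
        F.cross_entropy(model(source_batches[d][0]), source_batches[d][1])
        for d in meta_train
    )

    # 2) Virtual inner update of the parameters (graph kept for meta-gradients).
    params = dict(model.named_parameters())
    grads = torch.autograd.grad(train_loss, list(params.values()), create_graph=True)
    adapted = {name: p - inner_lr * g for (name, p), g in zip(params.items(), grads)}

    # 3) Meta-test loss: the adapted model must also classify IDs on the held-out
    #    source domain, which simulates encountering an unseen domain at test time.
    images, labels = source_batches[meta_test]
    test_loss = F.cross_entropy(functional_call(model, adapted, (images,)), labels)

    return train_loss + meta_weight * test_loss
```

A toy usage example with random tensors standing in for three source-domain batches:

```python
# Toy model and fake data, for illustration only.
model = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(3 * 64 * 32, 128),
                            torch.nn.ReLU(),
                            torch.nn.Linear(128, 16))
opt = torch.optim.SGD(model.parameters(), lr=0.05)
batches = [(torch.randn(8, 3, 64, 32), torch.randint(0, 16, (8,))) for _ in range(3)]
loss = episodic_loss(model, batches)
opt.zero_grad()
loss.backward()
opt.step()
```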
