Paper Title

A High-Accuracy Unsupervised Person Re-identification Method Using Auxiliary Information Mined from Datasets

Paper Authors

Teng, Hehan; He, Tao; Guo, Yuchen; Ding, Guiguang

Paper Abstract

Supervised person re-identification methods rely heavily on high-quality cross-camera training labels. This significantly hinders the deployment of re-ID models in real-world applications. Unsupervised person re-ID methods can reduce the cost of data annotation, but their performance is still far below that of supervised ones. In this paper, we make full use of the auxiliary information mined from the datasets for multi-modal feature learning, including camera information, temporal information, and spatial information. By analyzing the style bias of cameras, the characteristics of pedestrians' motion trajectories, and the positions of the camera network, this paper designs three modules to exploit the auxiliary information: Time-Overlapping Constraint (TOC), Spatio-Temporal Similarity (STS), and Same-Camera Penalty (SCP). Auxiliary information can improve model performance and inference accuracy by constructing association constraints or by being fused with visual features. In addition, this paper proposes three effective training tricks, including Restricted Label Smoothing Cross Entropy Loss (RLSCE), Weight Adaptive Triplet Loss (WATL), and Dynamic Training Iterations (DTI). These tricks achieve mAPs of 72.4% and 81.1% on MARS and DukeMTMC-VideoReID, respectively. Combined with the auxiliary-information-exploiting modules, our method achieves an mAP of 89.9% on DukeMTMC, where TOC, STS, and SCP all contribute considerable performance improvements. The method proposed in this paper outperforms most existing unsupervised re-ID methods and narrows the gap between unsupervised and supervised re-ID methods. Our code is available at https://github.com/tenghehan/AuxUSLReID.
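The abstract only names the modules, so the following is a minimal sketch (not the authors' implementation) of how auxiliary spatio-temporal similarity and a same-camera penalty might be fused with visual similarity at ranking time. The function names `fuse_similarity` and `same_camera_penalty`, the fusion weight `lam`, and the `penalty` value are all hypothetical; the paper's exact formulations may differ.

```python
import numpy as np

def fuse_similarity(visual_sim, st_sim, lam=0.5):
    """Fuse appearance similarity with an auxiliary spatio-temporal score.

    visual_sim: (num_query, num_gallery) cosine similarity of visual features.
    st_sim:     (num_query, num_gallery) spatio-temporal similarity derived from
                camera positions and frame timestamps, assumed to lie in [0, 1].
    lam:        hypothetical fusion weight between the two modalities.
    """
    return (1.0 - lam) * visual_sim + lam * st_sim

def same_camera_penalty(sim, query_cams, gallery_cams, penalty=0.1):
    """Penalize gallery samples captured by the same camera as the query.

    Same-camera pairs share camera style bias, so their similarity is reduced
    by a fixed (hypothetical) penalty before ranking.
    """
    sim = sim.copy()
    same_cam = query_cams[:, None] == gallery_cams[None, :]  # boolean mask
    sim[same_cam] -= penalty
    return sim
```

In this sketch the auxiliary scores only re-rank the visual matching results; as the abstract notes, the auxiliary information can also be used to construct association constraints during training rather than only at inference.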
