Paper Title

Pose-guided Visible Part Matching for Occluded Person ReID

Paper Authors

Shang Gao, Jingya Wang, Huchuan Lu, Zimo Liu

Paper Abstract

Occluded person re-identification is a challenging task, as appearance varies substantially with various obstacles, especially in crowd scenarios. To address this issue, we propose a Pose-guided Visible Part Matching (PVPM) method that jointly learns discriminative features with pose-guided attention and self-mines part visibility in an end-to-end framework. Specifically, the proposed PVPM includes two key components: 1) a pose-guided attention (PGA) method for part feature pooling that exploits more discriminative local features; 2) a pose-guided visibility predictor (PVP) that estimates whether a part is occluded. As there are no ground-truth training annotations for occluded parts, we exploit the characteristic of part correspondence in positive pairs and self-mine the correspondence scores via graph matching. The generated correspondence scores are then used as pseudo-labels for the visibility predictor (PVP). Experimental results on three occluded benchmarks show that the proposed method achieves performance competitive with state-of-the-art methods. The source code is available at https://github.com/hh23333/PVPM
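
To make the two components concrete, below is a minimal PyTorch sketch of how pose-guided attention pooling and a per-part visibility predictor could be wired together, and how the visibility scores could weight a part-level matching distance. The class and function names (`PVPMHead`, `part_distance`), layer sizes, part/joint counts, and tensor shapes are all illustrative assumptions, not the released implementation; the graph-matching pseudo-label generation used to train the visibility predictor is omitted.

```python
# Minimal sketch (assumptions, not the authors' code) of the two PVPM
# components named in the abstract: pose-guided attention (PGA) part
# pooling and a pose-guided visibility predictor (PVP).
import torch
import torch.nn as nn
import torch.nn.functional as F


class PVPMHead(nn.Module):
    """Pools part features with pose-guided attention (PGA) and predicts
    a per-part visibility score (PVP)."""

    def __init__(self, feat_dim=2048, num_joints=18, num_parts=6):
        super().__init__()
        # PGA: map pose heatmaps to one spatial attention map per part.
        self.pose_to_attn = nn.Conv2d(num_joints, num_parts, kernel_size=1)
        # PVP: score each pooled part feature as visible vs. occluded.
        self.visibility = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, feat_map, pose_heatmaps):
        # feat_map: (B, C, H, W) backbone features.
        # pose_heatmaps: (B, J, H, W) keypoint heatmaps from a pose estimator.
        attn = self.pose_to_attn(pose_heatmaps)            # (B, P, H, W)
        B, P = attn.shape[:2]
        attn = F.softmax(attn.view(B, P, -1), dim=-1)      # normalize per part
        feat = feat_map.flatten(2)                         # (B, C, H*W)
        # Attention-weighted pooling: one feature vector per part.
        parts = torch.einsum('bpn,bcn->bpc', attn, feat)   # (B, P, C)
        vis = torch.sigmoid(self.visibility(parts)).squeeze(-1)  # (B, P)
        return parts, vis


def part_distance(parts_a, vis_a, parts_b, vis_b, eps=1e-6):
    """Visibility-weighted part distance: parts predicted occluded in
    either image contribute little to the final matching score."""
    d = 1.0 - F.cosine_similarity(parts_a, parts_b, dim=-1)  # (B, P)
    w = vis_a * vis_b                                        # joint visibility
    return (w * d).sum(-1) / (w.sum(-1) + eps)


if __name__ == "__main__":
    head = PVPMHead()
    feats = torch.randn(2, 2048, 24, 8)   # fake backbone feature maps
    pose = torch.rand(2, 18, 24, 8)       # fake keypoint heatmaps
    parts, vis = head(feats, pose)
    print(part_distance(parts[:1], vis[:1], parts[1:], vis[1:]))
```

In this sketch, the pseudo-labels described in the abstract would supervise the `visibility` head: correspondence scores self-mined via graph matching over positive pairs stand in for the missing occlusion annotations.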
