Paper Title
Exploiting Sample Uncertainty for Domain Adaptive Person Re-Identification
Paper Authors
Paper Abstract
Many unsupervised domain adaptive (UDA) person re-identification (ReID) approaches combine clustering-based pseudo-label prediction with feature fine-tuning. However, because of the domain gap, the pseudo-labels are not always reliable and some labels are noisy/incorrect, which misleads feature representation learning and degrades performance. In this paper, we propose to estimate and exploit the credibility of the pseudo-label assigned to each sample, alleviating the influence of noisy labels by suppressing the contribution of noisy samples. We build our baseline framework using the mean teacher method together with an additional contrastive loss. We observe that a sample with a wrong pseudo-label obtained through clustering generally shows weaker consistency between the outputs of the mean teacher model and the student model. Based on this finding, we propose to exploit this uncertainty (measured by the consistency level) to evaluate the reliability of a sample's pseudo-label and to re-weight its contribution within various ReID losses, including the per-sample identity (ID) classification loss, the triplet loss, and the contrastive loss. Our uncertainty-guided optimization brings significant improvement and achieves state-of-the-art performance on benchmark datasets.
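The core idea of the abstract — measuring per-sample uncertainty as the inconsistency between the mean teacher's and the student's predictions, then using it to down-weight likely-noisy samples in the ID classification loss — can be sketched as follows. This is a minimal illustration in NumPy, not the paper's exact formulation: the `uncertainty_weights` and `weighted_id_loss` functions, the use of KL divergence as the consistency measure, and the `exp(-KL)` weighting are assumptions made for the sketch.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax over class logits."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def uncertainty_weights(student_logits, teacher_logits):
    """Per-sample credibility from teacher/student consistency.

    The KL divergence between the mean-teacher and student class
    distributions acts as the uncertainty: samples whose two predictions
    disagree (likely carrying a wrong pseudo-label from clustering) get a
    small weight via w = exp(-KL). Illustrative choice, not the paper's
    exact measure.
    """
    p_t = softmax(teacher_logits)
    p_s = softmax(student_logits)
    kl = (p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))).sum(axis=-1)
    return np.exp(-kl)

def weighted_id_loss(student_logits, pseudo_labels, weights):
    """Uncertainty-weighted ID classification (cross-entropy) loss."""
    p_s = softmax(student_logits)
    nll = -np.log(p_s[np.arange(len(pseudo_labels)), pseudo_labels] + 1e-12)
    return (weights * nll).sum() / weights.sum()

# Sample 0: teacher and student agree (consistent -> high credibility).
# Sample 1: teacher and student disagree (inconsistent -> down-weighted).
student = np.array([[5.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
teacher = np.array([[5.0, 0.0, 0.0], [0.0, 5.0, 0.0]])
w = uncertainty_weights(student, teacher)
loss = weighted_id_loss(student, np.array([0, 0]), w)
```

The same per-sample weights would analogously re-scale the triplet and contrastive loss terms mentioned in the abstract.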