Paper Title

Adaptive Deep Metric Embeddings for Person Re-Identification under Occlusions

Paper Authors

Wanxiang Yang, Yan Yan, Si Chen

Paper Abstract

Person re-identification (ReID) under occlusions is a challenging problem in video surveillance. Most existing person ReID methods take advantage of local features to deal with occlusions. However, these methods usually extract features from the local regions of an image independently, without considering the relationships among different local regions. In this paper, we propose a novel person ReID method, which learns the spatial dependencies between local regions and extracts a discriminative feature representation of the pedestrian image based on Long Short-Term Memory (LSTM), thereby dealing with the problem of occlusions. In particular, we propose a novel loss (termed the adaptive nearest neighbor loss) based on classification uncertainty, which effectively reduces intra-class variations while enlarging inter-class differences within the adaptive neighborhood of each sample. The proposed loss enables the deep neural network to adaptively learn discriminative metric embeddings, which significantly improve the generalization capability of recognizing unseen person identities. Extensive comparative evaluations on challenging person ReID datasets demonstrate the significantly improved performance of the proposed method compared with several state-of-the-art methods.
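
As a rough illustration of the two ideas summarized in the abstract, the PyTorch sketch below splits a pedestrian crop into horizontal stripes, lets an LSTM model the spatial dependencies between stripe features, and applies a simplified neighborhood-based margin loss that pulls same-identity embeddings together and pushes different identities apart within each sample's nearest neighbors. This is not the authors' released code; the backbone, layer sizes, neighborhood size `k`, and margin are all assumptions made for illustration, and the uncertainty-driven adaptation of the neighborhood described in the paper is not modeled here.

```python
# Minimal sketch (all architectural choices are assumptions, not the authors' code):
# stripe features of a pedestrian crop are fed through an LSTM so each stripe
# representation depends on its spatial neighbors, and a simplified
# neighborhood-based margin loss shapes the embedding space.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StripeLSTMEmbedder(nn.Module):
    """Embed a pedestrian image by modeling dependencies between horizontal stripes."""

    def __init__(self, num_stripes=6, feat_dim=64, hidden_dim=128, embed_dim=128):
        super().__init__()
        # Stand-in for a CNN backbone producing one feature cell per horizontal stripe.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((num_stripes, 1)),   # (B, feat_dim, num_stripes, 1)
        )
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, embed_dim)

    def forward(self, images):
        # images: (B, 3, H, W) pedestrian crops
        feats = self.backbone(images).squeeze(-1).transpose(1, 2)  # (B, num_stripes, feat_dim)
        seq, _ = self.lstm(feats)                                  # stripe-to-stripe dependencies
        embedding = self.head(seq[:, -1])                          # summary over all stripes
        return F.normalize(embedding, dim=1)


def neighborhood_margin_loss(embeddings, labels, k=4, margin=0.3):
    """Simplified neighborhood loss: within each sample's k nearest neighbors,
    pull same-identity embeddings together and push different identities apart."""
    dist = torch.cdist(embeddings, embeddings)                        # (B, B) pairwise distances
    dist = dist + torch.eye(dist.size(0), device=dist.device) * 1e6  # exclude self-matches
    knn_dist, knn_idx = dist.topk(k, dim=1, largest=False)            # each sample's neighborhood
    same_id = labels[knn_idx] == labels.unsqueeze(1)                  # (B, k) positive mask
    pos = (knn_dist * same_id).sum() / same_id.sum().clamp(min=1)
    neg = (F.relu(margin - knn_dist) * ~same_id).sum() / (~same_id).sum().clamp(min=1)
    return pos + neg


if __name__ == "__main__":
    model = StripeLSTMEmbedder()
    images = torch.randn(8, 3, 256, 128)   # toy batch of pedestrian crops
    labels = torch.randint(0, 4, (8,))     # toy identity labels
    loss = neighborhood_margin_loss(model(images), labels)
    loss.backward()
    print(float(loss))
```

The sketch only conveys the general mechanism: LSTM-based aggregation of local-region features and a metric loss restricted to each sample's neighborhood. The paper's actual adaptive nearest neighbor loss further adapts that neighborhood according to classification uncertainty, which requires the full method described in the paper.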
