Paper Title
Hybrid-Attention Guided Network with Multiple Resolution Features for Person Re-Identification
Paper Authors
Paper Abstract
Extracting effective and discriminative features is very important for addressing the challenging person re-identification (re-ID) task. Prevailing deep convolutional neural networks (CNNs) usually use high-level features to identify pedestrians. However, essential spatial information residing in low-level features, such as shape, texture, and color, is lost when learning high-level features, due to the extensive padding and pooling operations in the training stage. In addition, most existing person re-ID methods rely on hand-crafted bounding boxes in which images are precisely aligned. This is unrealistic in practical applications, since the object detection algorithms used in deployment often produce inaccurate bounding boxes, which inevitably degrades the performance of existing algorithms. To address these problems, we put forward a novel person re-ID model that fuses high- and low-level embeddings to reduce the information loss incurred in learning high-level features. We then divide the fused embedding into several parts and reconnect them to obtain the global feature and more significant local features, so as to alleviate the effect of inaccurate bounding boxes. In addition, we introduce spatial and channel attention mechanisms into our model, which aim to mine more discriminative features related to the target. Finally, we reconstruct the feature extractor to ensure that our model can obtain richer and more robust features. Extensive experiments demonstrate the superiority of our approach over existing approaches. Our code is available at https://github.com/libraflower/MutipleFeature-for-PRID.
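
To make the pipeline described above concrete, below is a minimal PyTorch sketch of the three ingredients the abstract names: fusing a low-level and a high-level feature map, applying channel and spatial attention, and splitting the result into a global feature plus horizontal part features. All module names, the addition-based fusion, and the four-part split are illustrative assumptions on my part, not the authors' exact architecture; the actual implementation lives in the linked repository.

# Minimal sketch of hybrid-attention feature fusion for re-ID.
# Assumed design choices (not from the paper): 1x1 projections, additive
# fusion, SE-style channel attention, CBAM-style spatial attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed variant)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))  # global average pool -> channel weights
        return x * w.view(b, c, 1, 1)

class SpatialAttention(nn.Module):
    """CBAM-style spatial attention over pooled channel statistics."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        mask = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * mask

class HybridAttentionFusion(nn.Module):
    """Fuse low- and high-level maps, attend, and split into parts."""
    def __init__(self, low_channels: int, high_channels: int,
                 out_channels: int = 512, num_parts: int = 4):
        super().__init__()
        self.num_parts = num_parts
        # 1x1 convs project both feature maps to a common channel width.
        self.low_proj = nn.Conv2d(low_channels, out_channels, 1)
        self.high_proj = nn.Conv2d(high_channels, out_channels, 1)
        self.channel_att = ChannelAttention(out_channels)
        self.spatial_att = SpatialAttention()

    def forward(self, low: torch.Tensor, high: torch.Tensor):
        # Upsample the coarse high-level map to the low-level resolution,
        # then fuse by element-wise addition (one plausible choice).
        high = F.interpolate(self.high_proj(high), size=low.shape[2:],
                             mode="bilinear", align_corners=False)
        fused = self.low_proj(low) + high
        fused = self.spatial_att(self.channel_att(fused))
        # Global feature plus horizontal stripe features, as in part-based re-ID.
        global_feat = F.adaptive_avg_pool2d(fused, 1).flatten(1)
        parts = F.adaptive_avg_pool2d(fused, (self.num_parts, 1))
        part_feats = [parts[:, :, i, 0] for i in range(self.num_parts)]
        return global_feat, part_feats

if __name__ == "__main__":
    # Example shapes: a ResNet-50 backbone's layer2 (512 ch) and layer4 (2048 ch).
    low = torch.randn(2, 512, 32, 16)
    high = torch.randn(2, 2048, 8, 4)
    g, p = HybridAttentionFusion(512, 2048)(low, high)
    print(g.shape, [t.shape for t in p])  # (2, 512) and four (2, 512) parts

At inference time, one common convention (again an assumption here) is to concatenate the global feature with the part features into a single descriptor and match identities by cosine distance between descriptors.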