Title
An Attention-Based Deep Learning Model for Multiple Pedestrian Attributes Recognition
Authors
Abstract
The automatic characterization of pedestrians in surveillance footage is a tough challenge, particularly when the data is extremely diverse, with cluttered backgrounds and subjects captured from varying distances, under multiple poses, and with partial occlusion. Having observed that the state-of-the-art performance is still unsatisfactory, this paper provides a novel solution to the problem, with two-fold contributions: 1) considering the strong semantic correlation between the different full-body attributes, we propose a multi-task deep model that uses an element-wise multiplication layer to extract more comprehensive feature representations. In practice, this layer serves as a filter that removes irrelevant background features, and is particularly important for handling complex, cluttered data; and 2) we introduce a weighted-sum term into the loss function that not only relativizes the contribution of each task (i.e., each kind of attribute) but is also crucial for performance improvement in multiple-attribute inference settings. Our experiments were performed on two well-known datasets (RAP and PETA) and point to the superiority of the proposed method with respect to the state-of-the-art. The code is available at https://github.com/Ehsan-Yaghoubi/MAN-PAR-.
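The two contributions described above can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the shapes, the random mask, and the task names/weights are all assumed for illustration. It shows (a) an element-wise multiplication of a feature map with a spatial attention mask, broadcast over channels to suppress background locations, and (b) a weighted sum of per-task losses:

```python
import numpy as np

# Hypothetical shapes for a backbone feature map (names are illustrative).
H, W, C = 4, 4, 8
rng = np.random.default_rng(0)
features = rng.standard_normal((H, W, C))

# A spatial attention mask in [0, 1]; background positions get low weight.
# In the paper's model this mask would be learned, not random.
mask = rng.uniform(size=(H, W, 1))

# Element-wise multiplication layer: the (H, W, 1) mask broadcasts over
# the C channels, attenuating features at irrelevant background locations.
filtered = features * mask

# Weighted-sum multi-task loss: per-task (per-attribute-group) losses are
# combined with weights that relativize each task's contribution.
task_losses = {"gender": 0.7, "age": 1.2, "clothing": 0.9}   # assumed values
task_weights = {"gender": 0.5, "age": 0.3, "clothing": 0.2}  # assumed values
total_loss = sum(task_weights[t] * task_losses[t] for t in task_losses)
print(round(total_loss, 2))  # 0.5*0.7 + 0.3*1.2 + 0.2*0.9 = 0.89
```

Because the mask values lie in [0, 1], the multiplication can only attenuate features, never amplify them, which matches the abstract's description of the layer as a filter over cluttered backgrounds.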