Paper Title
Motion Matters: A Novel Motion Modeling For Cross-View Gait Feature Learning
Paper Authors
Abstract
As a unique biometric that can be perceived at a distance, gait has broad applications in person authentication, social security, and beyond. Existing gait recognition methods suffer from changes in viewpoint and clothing, and barely consider extracting diverse motion features, a fundamental characteristic of gait, from gait sequences. This paper proposes a novel motion modeling method to extract discriminative and robust representations. Specifically, we first extract motion features from the encoded motion sequences in the shallow layers. We then continuously enhance the motion features in the deep layers. This motion modeling approach is independent of mainstream work on building network architectures; as a result, it can be applied to any backbone to improve gait recognition performance. In this paper, we combine motion modeling with a commonly used backbone (GaitGL), yielding GaitGL-M, to illustrate the approach. Extensive experimental results on two commonly used cross-view gait datasets demonstrate the superior performance of GaitGL-M over existing state-of-the-art methods.
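The abstract does not specify how the shallow-layer motion extraction and deep-layer enhancement are implemented. As a purely illustrative sketch (not the paper's actual method), the general idea of deriving motion cues from a silhouette sequence and using them to reweight appearance features could look like this; the function names, the temporal-difference operator, and the attention-style fusion are all assumptions introduced here.

```python
import numpy as np

def motion_features(seq):
    """Hypothetical shallow-layer motion encoding: absolute temporal
    differences between consecutive frames of a gait silhouette sequence.
    seq: (T, H, W) array with values in [0, 1].
    Returns a (T-1, H, W) array of frame-to-frame motion maps."""
    return np.abs(np.diff(seq, axis=0))

def enhance(appearance, motion):
    """Hypothetical deep-layer enhancement: reweight an appearance
    feature map by an attention map derived from motion energy."""
    # Motion energy per spatial location, averaged over time.
    energy = motion.mean(axis=0)
    # Emphasize locations that move; keep static regions unchanged.
    attn = 1.0 + energy / (energy.max() + 1e-8)
    return appearance * attn

# Toy usage: a 4-frame sequence with a blob shifting right each frame.
T, H, W = 4, 8, 8
seq = np.zeros((T, H, W))
for t in range(T):
    seq[t, 2:5, t:t + 3] = 1.0

m = motion_features(seq)   # (3, 8, 8) motion maps
app = seq.mean(axis=0)     # crude stand-in for an appearance feature map
out = enhance(app, m)      # (8, 8) motion-enhanced features
```

In a real backbone such as GaitGL, both stages would operate on learned convolutional feature maps rather than raw silhouettes; this sketch only illustrates the "extract motion in shallow layers, enhance it in deep layers" pipeline described in the abstract.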