Paper Title

Motion Prediction via Joint Dependency Modeling in Phase Space

Paper Authors

Pengxiang Su, Zhenguang Liu, Shuang Wu, Lei Zhu, Yifang Yin, Xuanjing Shen

Paper Abstract

Motion prediction is a classic problem in computer vision that aims to forecast future motion given an observed pose sequence. Various deep learning models have been proposed and achieve state-of-the-art performance on motion prediction. However, existing methods typically focus on modeling temporal dynamics in the pose space. Unfortunately, the complicated and high-dimensional nature of human motion brings inherent challenges to capturing dynamic context. Therefore, we move away from the conventional pose-based representation and present a novel approach that employs a phase-space trajectory representation of individual joints. Moreover, current methods tend to consider only the dependencies between physically connected joints. In this paper, we introduce a novel convolutional neural model that effectively leverages explicit prior knowledge of motion anatomy while simultaneously capturing both the spatial and temporal information of joint trajectory dynamics. We then propose a global optimization module that learns the implicit relationships between individual joint features. Empirically, our method is evaluated on large-scale 3D human motion benchmark datasets (i.e., Human3.6M and CMU MoCap). The results demonstrate that our method sets a new state of the art on these benchmarks. Our code will be available at https://github.com/Pose-Group/TEID.
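
To make the phase-space representation concrete, below is a minimal sketch in Python (NumPy) of one way to lift an observed pose sequence into per-joint phase-space trajectories by pairing each joint's position with its velocity. This is an illustrative assumption, not the authors' implementation; the array shapes, the finite-difference velocity estimate, and the function name to_phase_space are all hypothetical.

import numpy as np

def to_phase_space(poses):
    # poses: (T, J, 3) array of T frames, J joints, 3D coordinates.
    # Estimate velocity with finite differences (central differences
    # in the interior, one-sided at the sequence boundaries).
    velocity = np.gradient(poses, axis=0)
    # Concatenate position and velocity so each joint traces a
    # trajectory in a 6D phase space: result has shape (T, J, 6).
    return np.concatenate([poses, velocity], axis=-1)

# Example: 50 observed frames of a 22-joint skeleton (random stand-in data).
poses = np.random.randn(50, 22, 3)
phase = to_phase_space(poses)
print(phase.shape)  # (50, 22, 6)

Under such a representation, the convolutional model and the global optimization module described in the abstract would operate on per-joint trajectories rather than on whole-body poses.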
