Paper Title
Contact-aware Human Motion Forecasting
Paper Authors
Paper Abstract
In this paper, we tackle the task of scene-aware 3D human motion forecasting, which consists of predicting future human poses given a 3D scene and a past human motion. A key challenge of this task is to ensure consistency between the human and the scene, accounting for human-scene interactions. Previous attempts to do so model such interactions only implicitly, and thus tend to produce artifacts such as "ghost motion" because of the lack of explicit constraints between the local poses and the global motion. Here, by contrast, we propose to explicitly model the human-scene contacts. To this end, we introduce distance-based contact maps that capture the contact relationships between every joint and every 3D scene point at each time instant. We then develop a two-stage pipeline that first predicts the future contact maps from the past ones and the scene point cloud, and then forecasts the future human poses by conditioning them on the predicted contact maps. During training, we explicitly encourage consistency between the global motion and the local poses via a prior defined using the contact maps and future poses. Our approach outperforms the state-of-the-art human motion forecasting and human synthesis methods on both synthetic and real datasets. Our code is available at https://github.com/wei-mao-2019/ContAwareMotionPred.
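The distance-based contact map described above assigns a value to every (joint, scene point) pair based on their Euclidean distance. A minimal sketch of such a map is shown below; the Gaussian decay and the `sigma` bandwidth are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def contact_map(joints, points, sigma=0.1):
    """Compute a soft, distance-based contact map.

    joints: (J, 3) array of 3D joint positions at one time instant.
    points: (P, 3) array of 3D scene points.
    Returns a (J, P) array in [0, 1], where values near 1 indicate
    a joint is in (near-)contact with a scene point.
    Note: the Gaussian kernel here is an illustrative choice; the
    paper's exact distance-to-contact mapping may differ.
    """
    # Pairwise Euclidean distances between every joint and every scene point.
    d = np.linalg.norm(joints[:, None, :] - points[None, :, :], axis=-1)  # (J, P)
    # Convert distances to soft contact values that decay smoothly to zero.
    return np.exp(-d**2 / (2.0 * sigma**2))
```

In the two-stage pipeline, maps like these (computed from past poses and the scene point cloud) would be the quantity the first stage forecasts, and the second stage conditions pose prediction on them.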