Paper Title

Differentiable Dynamics for Articulated 3d Human Motion Reconstruction

Paper Authors

Erik Gärtner, Mykhaylo Andriluka, Erwin Coumans, Cristian Sminchisescu

Paper Abstract

We introduce DiffPhy, a differentiable physics-based model for articulated 3d human motion reconstruction from video. Applications of physics-based reasoning in human motion analysis have so far been limited, both by the complexity of constructing adequate physical models of articulated human motion, and by the formidable challenges of performing stable and efficient inference with physics in the loop. We jointly address such modeling and inference challenges by proposing an approach that combines a physically plausible body representation with anatomical joint limits, a differentiable physics simulator, and optimization techniques that ensure good performance and robustness to suboptimal local optima. In contrast to several recent methods, our approach readily supports full-body contact including interactions with objects in the scene. Most importantly, our model connects end-to-end with images, thus supporting direct gradient-based physics optimization by means of image-based loss functions. We validate the model by demonstrating that it can accurately reconstruct physically plausible 3d human motion from monocular video, both on public benchmarks with available 3d ground-truth, and on videos from the internet.
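
To make the abstract's central claim concrete (that the model connects end-to-end with images, so image-based losses can drive gradient-based physics optimization), here is a minimal sketch in JAX. It is not the authors' DiffPhy implementation: the articulated body is replaced by a point mass, contact and joint limits are omitted, and every name (simulate_step, project_to_image, reprojection_loss) is a hypothetical stand-in.

```python
# Minimal sketch of differentiating an image-based loss through a physics
# rollout. NOT the authors' implementation; all names are hypothetical.
import jax
import jax.numpy as jnp

def simulate_step(state, torques, dt=1.0 / 30.0):
    # Toy differentiable dynamics: a point mass stands in for the
    # articulated body; the real model integrates rigid-body dynamics
    # with anatomical joint limits and full-body contact.
    pos, vel = state
    vel = vel + dt * torques          # unit mass; gravity/contact omitted
    pos = pos + dt * vel
    return (pos, vel)

def project_to_image(pos, focal=1000.0):
    # Hypothetical pinhole projection of a 3d point to 2d pixels.
    return focal * pos[:2] / pos[2]

def reprojection_loss(controls, init_state, detected_2d):
    # Image-based loss: squared distance between the projected simulated
    # state and 2d keypoint detections, accumulated over the rollout.
    state, loss = init_state, 0.0
    for t in range(detected_2d.shape[0]):
        state = simulate_step(state, controls[t])
        loss = loss + jnp.sum((project_to_image(state[0]) - detected_2d[t]) ** 2)
    return loss

# Differentiate the whole rollout end-to-end and descend on the controls.
T = 30
init_state = (jnp.array([0.0, 1.0, 3.0]), jnp.zeros(3))
detected_2d = jnp.zeros((T, 2))       # placeholder 2d keypoint track
controls = jnp.zeros((T, 3))
grad_fn = jax.jit(jax.grad(reprojection_loss))
for _ in range(100):
    controls = controls - 1e-6 * grad_fn(controls, init_state, detected_2d)
```

The point of the sketch is the last few lines: because the simulator is differentiable, gradients of a purely image-based loss flow back through the entire rollout to the physical controls, which is the "direct gradient-based physics optimization" the abstract describes.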
