Paper Title

Deep NRSfM++: Towards Unsupervised 2D-3D Lifting in the Wild

Paper Authors

Wang, Chaoyang; Lin, Chen-Hsuan; Lucey, Simon

Paper Abstract

The recovery of 3D shape and pose from 2D landmarks stemming from a large ensemble of images can be viewed as a non-rigid structure from motion (NRSfM) problem. Classical NRSfM approaches, however, are problematic as they rely on heuristic priors on the 3D structure (e.g. low rank) that do not scale well to large datasets. Learning-based methods are showing the potential to reconstruct a much broader set of 3D structures than classical methods -- dramatically expanding the importance of NRSfM to atemporal unsupervised 2D to 3D lifting. Hitherto, these learning approaches have not been able to effectively model perspective cameras or handle missing/occluded points -- limiting their applicability to in-the-wild datasets. In this paper, we present a generalized strategy for improving learning-based NRSfM methods to tackle the above issues. Our approach, Deep NRSfM++, achieves state-of-the-art performance across numerous large-scale benchmarks, outperforming both classical and learning-based 2D-3D lifting methods.
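
For context on the camera-model and occlusion issues the abstract raises, the sketch below illustrates, in Python, the kind of reprojection residual that 2D-3D lifting methods typically minimize: a weak-perspective projection (common in earlier learning-based NRSfM work) versus a full perspective projection, with missing or occluded landmarks masked out of the error. This is an illustrative sketch only, not the paper's implementation; all function names (project_weak_perspective, project_perspective, masked_reprojection_error) and the visibility-mask convention are hypothetical.

```python
import numpy as np

def project_weak_perspective(X, R, s, t):
    """Weak-perspective projection: scaled orthographic projection plus 2D translation.
    X: (P, 3) 3D points, R: (3, 3) rotation, s: scalar scale, t: (2,) translation."""
    return s * (X @ R.T)[:, :2] + t

def project_perspective(X, R, t, f):
    """Pinhole perspective projection with focal length f.
    t: (3,) translation placing the object in front of the camera (t[2] > 0)."""
    Xc = X @ R.T + t                   # points in camera coordinates
    return f * Xc[:, :2] / Xc[:, 2:3]  # divide by depth

def masked_reprojection_error(W, W_hat, visibility):
    """Mean 2D error over visible landmarks only; missing/occluded points are masked out.
    W, W_hat: (P, 2) observed and predicted 2D landmarks, visibility: (P,) boolean mask."""
    diff = np.linalg.norm(W - W_hat, axis=1)
    return diff[visibility].mean()
```

Masking the residual in this way is one common way to keep missing points from corrupting the objective; how Deep NRSfM++ actually models perspective cameras and occlusion is described in the full paper.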
