Paper Title
Multi-View Consistency Loss for Improved Single-Image 3D Reconstruction of Clothed People
Paper Authors
Paper Abstract
We present a novel method to improve the accuracy of 3D reconstruction of clothed human shape from a single image. Recent work has introduced volumetric, implicit and model-based shape learning frameworks for reconstructing objects and people from one or more images. However, the accuracy and completeness of reconstruction of clothed people is limited due to the large variation in shape resulting from clothing, hair, body size, pose and camera viewpoint. This paper introduces two advances to overcome this limitation: firstly, a new synthetic dataset of realistic clothed people, 3DVH; and secondly, a novel multiple-view loss function for training monocular volumetric shape estimation, which is demonstrated to significantly improve generalisation and reconstruction accuracy. The 3DVH dataset of realistic clothed 3D human models, rendered with diverse natural backgrounds, is demonstrated to allow transfer to reconstruction from real images of people. Comprehensive comparative performance evaluation on both synthetic and real images of people demonstrates that the proposed method significantly outperforms previous state-of-the-art learning-based single-image 3D human shape estimation approaches, achieving significant improvements in reconstruction accuracy, completeness, and quality. An ablation study shows that this is due to both the proposed multiple-view training and the new 3DVH dataset. The code and the dataset can be found at the project website: https://akincaliskan3d.github.io/MV3DH/.
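To make the idea of a multiple-view training loss concrete, the following is a minimal, hypothetical sketch of one plausible form of such a loss for volumetric occupancy prediction. The function name, tensor shapes, and the reconstruction-plus-consistency decomposition are illustrative assumptions, not the paper's actual formulation.

import torch
import torch.nn.functional as F

def multi_view_consistency_loss(pred_volumes, gt_volume):
    """Hypothetical multi-view loss sketch (not the paper's exact loss).

    pred_volumes: (V, D, H, W) occupancy logits predicted independently
                  from V views of the same person, assumed already
                  transformed into a shared canonical voxel grid.
    gt_volume:    (D, H, W) float ground-truth occupancy in {0, 1}.
    """
    gt = gt_volume.unsqueeze(0).expand_as(pred_volumes)
    # Per-view reconstruction term: each view's prediction must explain
    # the full ground-truth shape, including unseen (occluded) regions.
    recon = F.binary_cross_entropy_with_logits(pred_volumes, gt)
    # Cross-view consistency term: predictions made from different views
    # of the same person should agree once aligned to the shared grid.
    probs = torch.sigmoid(pred_volumes)
    mean_prob = probs.mean(dim=0, keepdim=True)
    consistency = F.mse_loss(probs, mean_prob.expand_as(probs))
    return recon + consistency

In such a training setup, the per-view predictions would come from the same single-image network applied with shared weights to each rendered view of a synthetic subject, so that at test time a single image suffices.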