Paper Title
SHARP: Shape-Aware Reconstruction of People in Loose Clothing
Paper Authors
Paper Abstract
Recent advancements in deep learning have enabled 3D human body reconstruction from a monocular image, which has broad applications in multiple domains. In this paper, we propose SHARP (SHape Aware Reconstruction of People in loose clothing), a novel end-to-end trainable network that accurately recovers the 3D geometry and appearance of humans in loose clothing from a monocular image. SHARP uses a sparse and efficient fusion strategy to combine a parametric body prior with a non-parametric 2D representation of clothed humans. The parametric body prior enforces geometric consistency on the body shape and pose, while the non-parametric representation models loose clothing and handles self-occlusions. We also leverage the sparsity of the non-parametric representation for faster training of our network while using losses on 2D maps. Another key contribution is 3DHumans, our new life-like dataset of 3D human body scans with rich geometric and textural details. We evaluate SHARP on 3DHumans and other publicly available datasets and demonstrate superior qualitative and quantitative performance over existing state-of-the-art methods.