Paper Title

ARCH: Animatable Reconstruction of Clothed Humans

Paper Authors

Zeng Huang, Yuanlu Xu, Christoph Lassner, Hao Li, Tony Tung

Paper Abstract

In this paper, we propose ARCH (Animatable Reconstruction of Clothed Humans), a novel end-to-end framework for accurate reconstruction of animation-ready 3D clothed humans from a monocular image. Existing approaches to digitize 3D humans struggle to handle pose variations and recover details. Also, they do not produce models that are animation ready. In contrast, ARCH is a learned pose-aware model that produces detailed 3D rigged full-body human avatars from a single unconstrained RGB image. A Semantic Space and a Semantic Deformation Field are created using a parametric 3D body estimator. They allow the transformation of 2D/3D clothed humans into a canonical space, reducing ambiguities in geometry caused by pose variations and occlusions in training data. Detailed surface geometry and appearance are learned using an implicit function representation with spatial local features. Furthermore, we propose additional per-pixel supervision on the 3D reconstruction using opacity-aware differentiable rendering. Our experiments indicate that ARCH increases the fidelity of the reconstructed humans. We obtain more than 50% lower reconstruction errors for standard metrics compared to state-of-the-art methods on public datasets. We also show numerous qualitative examples of animated, high-quality reconstructed avatars unseen in the literature so far.
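
To make the "implicit function representation with spatial local features" idea concrete, here is a minimal illustrative sketch: an occupancy function is queried at 3D points in the canonical space, conditioned on a per-point local feature vector, and the surface is its 0.5 level set. The tiny random-weight MLP, the feature dimension, and the `occupancy` helper below are all hypothetical stand-ins; ARCH's actual network architecture, feature extraction, and training are not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 8   # hypothetical size of the spatial local feature
HIDDEN = 16    # hypothetical hidden width

# Random weights stand in for a trained network (illustration only).
W1 = rng.standard_normal((3 + FEAT_DIM, HIDDEN))
W2 = rng.standard_normal((HIDDEN, 1))

def occupancy(points, local_feats):
    """Predict occupancy in (0, 1) for N canonical-space query points.

    points:      (N, 3) query locations in the canonical space
    local_feats: (N, FEAT_DIM) per-point local features (assumed given)
    """
    x = np.concatenate([points, local_feats], axis=1)
    h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2)))   # sigmoid occupancy

pts = rng.standard_normal((4, 3))
feats = rng.standard_normal((4, FEAT_DIM))
occ = occupancy(pts, feats)
print(occ.shape)  # (4, 1); the reconstructed surface is the 0.5 level set
```

Evaluating such a function on a dense 3D grid and extracting the 0.5 isosurface (e.g. via marching cubes) yields a mesh, which is how implicit representations are typically converted to geometry.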
