Title

Learning an Animatable Detailed 3D Face Model from In-The-Wild Images

Authors

Yao Feng, Haiwen Feng, Michael J. Black, Timo Bolkart

Abstract

While current monocular 3D face reconstruction methods can recover fine geometric details, they suffer several limitations. Some methods produce faces that cannot be realistically animated because they do not model how wrinkles vary with expression. Other methods are trained on high-quality face scans and do not generalize well to in-the-wild images. We present the first approach that regresses 3D face shape and animatable details that are specific to an individual but change with expression. Our model, DECA (Detailed Expression Capture and Animation), is trained to robustly produce a UV displacement map from a low-dimensional latent representation that consists of person-specific detail parameters and generic expression parameters, while a regressor is trained to predict detail, shape, albedo, expression, pose and illumination parameters from a single image. To enable this, we introduce a novel detail-consistency loss that disentangles person-specific details from expression-dependent wrinkles. This disentanglement allows us to synthesize realistic person-specific wrinkles by controlling expression parameters while keeping person-specific details unchanged. DECA is learned from in-the-wild images with no paired 3D supervision and achieves state-of-the-art shape reconstruction accuracy on two benchmarks. Qualitative results on in-the-wild data demonstrate DECA's robustness and its ability to disentangle identity- and expression-dependent details enabling animation of reconstructed faces. The model and code are publicly available at https://deca.is.tue.mpg.de.
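The detail-consistency loss described above rests on one assumption: the detail code is person-specific, so rendering one image of a subject with the detail code taken from a *different* image of the same subject should reproduce the same wrinkles. A minimal NumPy sketch of that idea follows; `displacement_map` is a toy stand-in for DECA's actual detail decoder, and the parameter shapes are illustrative, not the model's.

```python
import numpy as np

def displacement_map(detail_code, expr_params):
    """Toy 'decoder': maps a detail code and expression parameters to a
    (flattened) UV displacement map. Placeholder for DECA's detail decoder."""
    return np.outer(detail_code, expr_params)

def detail_consistency_loss(delta_i, delta_j, expr_i):
    """Compare displacements for image i rendered with its own detail code
    versus the detail code of another image j of the same person. If detail
    codes are truly person-specific, the two maps should match."""
    d_own  = displacement_map(delta_i, expr_i)
    d_swap = displacement_map(delta_j, expr_i)
    return float(np.mean((d_own - d_swap) ** 2))
```

In training, minimizing this swap-based penalty pushes expression-dependent variation out of the detail code and into the expression parameters, which is what makes the reconstructed details animatable.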
