Paper Title
Video-driven Neural Physically-based Facial Asset for Production
Paper Authors
Paper Abstract
Production-level workflows for producing convincing 3D dynamic human faces have long relied on an assortment of labor-intensive tools for geometry and texture generation, motion capture and rigging, and expression synthesis. Recent neural approaches automate individual components, but the corresponding latent representations cannot provide artists with the explicit controls offered by conventional tools. In this paper, we present a new learning-based, video-driven approach for generating dynamic facial geometries with high-quality physically-based assets. For data collection, we construct a hybrid multiview-photometric capture stage, coupled with ultra-fast video cameras, to obtain raw 3D facial assets. We then model the facial expression, geometry, and physically-based textures using separate VAEs, where we impose a global MLP-based expression mapping across the latent spaces of the respective networks to preserve the characteristics of each attribute. We also model delta information as wrinkle maps for the physically-based textures, achieving high-quality 4K dynamic textures. We demonstrate our approach in high-fidelity performer-specific facial capture and cross-identity facial motion retargeting. In addition, our multi-VAE-based neural asset, together with fast adaptation schemes, can be deployed to handle in-the-wild videos. Moreover, we demonstrate the utility of our explicit facial disentanglement strategy by providing various promising physically-based editing results with high realism. Comprehensive experiments show that our technique provides higher accuracy and visual fidelity than previous video-driven facial reconstruction and animation methods.
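To make the described architecture concrete, below is a minimal PyTorch sketch of the multi-VAE layout the abstract outlines: separate VAEs for expression, geometry, and texture, with a global MLP mapping the expression latent into the other latent spaces. All class names (`VAE`, `ExpressionMapper`), layer sizes, and input dimensions are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: separate per-attribute VAEs plus a global MLP
# that maps the expression latent into the geometry and texture latent
# spaces, as described (at a high level) in the abstract.
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Per-attribute VAE: encoder outputs (mu, logvar); decoder reconstructs."""
    def __init__(self, in_dim: int, latent_dim: int, hidden: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 2 * latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, in_dim))

    def encode(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        return mu, logvar

    def reparameterize(self, mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def decode(self, z):
        return self.decoder(z)

class ExpressionMapper(nn.Module):
    """Global MLP mapping the expression latent to the geometry and texture
    latents, so each attribute keeps its own disentangled latent space."""
    def __init__(self, expr_dim: int, geo_dim: int, tex_dim: int, hidden: int = 256):
        super().__init__()
        self.to_geo = nn.Sequential(nn.Linear(expr_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, geo_dim))
        self.to_tex = nn.Sequential(nn.Linear(expr_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, tex_dim))

    def forward(self, z_expr):
        return self.to_geo(z_expr), self.to_tex(z_expr)

# Toy forward pass over flattened feature vectors; real inputs would be
# expression parameters, mesh vertices, and texture maps (dims are made up).
expr_vae, geo_vae, tex_vae = VAE(64, 16), VAE(3000, 32), VAE(4096, 32)
mapper = ExpressionMapper(expr_dim=16, geo_dim=32, tex_dim=32)

x_expr = torch.randn(8, 64)                 # batch of expression features
mu, logvar = expr_vae.encode(x_expr)
z_expr = expr_vae.reparameterize(mu, logvar)
z_geo, z_tex = mapper(z_expr)               # drive other decoders from expression
geometry = geo_vae.decode(z_geo)            # dynamic facial geometry
wrinkle_delta = tex_vae.decode(z_tex)       # delta (wrinkle) map, to be added
                                            # onto a neutral base texture
```

In this reading, the texture VAE decodes the delta (wrinkle) information mentioned in the abstract rather than the full texture, which is one plausible way to realize "modeling delta information as wrinkle maps"; the actual decomposition and training losses are not specified here.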