Paper Title
FACEGAN: Facial Attribute Controllable rEenactment GAN
Paper Authors
Paper Abstract
Face reenactment is a popular facial animation method in which the person's identity is taken from a source image and the facial motion from a driving image. Recent works have demonstrated high-quality results by combining facial landmark-based motion representations with generative adversarial networks. These models perform best if the source and driving images depict the same person, or if the facial structures are otherwise very similar. However, if the identities differ, the driving facial structure leaks into the output, distorting the reenactment result. We propose a novel Facial Attribute Controllable rEenactment GAN (FACEGAN), which transfers facial motion from the driving face via an Action Unit (AU) representation. Unlike facial landmarks, AUs are independent of the facial structure, which prevents identity leakage. Moreover, AUs provide a human-interpretable way to control the reenactment. FACEGAN processes the background and face regions separately for optimized output quality. Extensive quantitative and qualitative comparisons show a clear improvement over the state of the art in the single-source reenactment task. The results are best illustrated in the reenactment video provided in the supplementary material. The source code will be made available upon publication of the paper.
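The key distinction the abstract draws — AUs encode muscle activations while landmarks encode point geometry — can be sketched with a toy conditioning signal. This is a minimal illustration, not FACEGAN's actual architecture: the AU count, intensity scale, and identity-feature dimension below are assumptions (the 0–5 intensity range follows common AU detectors such as OpenFace).

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_AUS = 17       # hypothetical AU count (OpenFace-style detectors estimate 17)
ID_FEAT_DIM = 128  # hypothetical size of a source identity embedding

def au_motion_code(au_intensities):
    """Normalize raw AU intensities (commonly scaled 0..5) into a [0, 1] code.
    The code carries only expression (muscle activations), no face geometry."""
    au = np.asarray(au_intensities, dtype=np.float32)
    return np.clip(au / 5.0, 0.0, 1.0)

def generator_conditioning(source_id_feat, driving_aus):
    """Concatenate source identity features with the driving AU code: the
    driving face's landmark geometry never enters the generator input,
    which is why AU conditioning avoids the identity-leak problem."""
    return np.concatenate([source_id_feat, au_motion_code(driving_aus)])

source_feat = rng.standard_normal(ID_FEAT_DIM).astype(np.float32)
driving_aus = rng.uniform(0.0, 5.0, NUM_AUS)
cond = generator_conditioning(source_feat, driving_aus)
print(cond.shape)  # (145,) — 128 identity dims + 17 AU dims
```

A landmark-based conditioning signal would instead flatten the driving face's (x, y) points, so any structural mismatch between source and driving faces would be baked into the generator input.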