Paper Title

FaR-GAN for One-Shot Face Reenactment

Paper Authors

Hanxiang Hao, Sriram Baireddy, Amy R. Reibman, Edward J. Delp

Paper Abstract

Animating a static face image with target facial expressions and movements is important in the area of image editing and movie production. This face reenactment process is challenging due to the complex geometry and movement of human faces. Previous work usually requires a large set of images from the same person to model the appearance. In this paper, we present a one-shot face reenactment model, FaR-GAN, that takes only one face image of any given source identity and a target expression as input, and then produces a face image of the same source identity but with the target expression. The proposed method makes no assumptions about the source identity, facial expression, head pose, or even image background. We evaluate our method on the VoxCeleb1 dataset and show that our method is able to generate a higher quality face image than the compared methods.
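The abstract describes a one-shot interface: the generator takes a single source face image plus a target-expression representation, with no per-identity fine-tuning, and outputs the source identity wearing the target expression. The stub below sketches only that input/output contract; every name, type, and shape here is an illustrative assumption, not the authors' actual FaR-GAN implementation.

```python
from dataclasses import dataclass


@dataclass
class FaceImage:
    """Placeholder for an H x W x 3 face image (assumed representation)."""
    height: int
    width: int
    channels: int = 3


@dataclass
class Expression:
    """Placeholder for a target-expression encoding, e.g. landmark
    heatmaps; the exact conditioning signal is an assumption here."""
    num_landmarks: int


def reenact(source: FaceImage, target: Expression) -> FaceImage:
    """One-shot reenactment contract: a single source image suffices,
    with no assumptions about identity, pose, or background. A real
    model would run a conditional generator here; this hypothetical
    stub only preserves the output image geometry."""
    return FaceImage(source.height, source.width, source.channels)


# Example call: one 256x256 source image, one 68-landmark target expression.
out = reenact(FaceImage(256, 256), Expression(num_landmarks=68))
print(out.height, out.width)  # → 256 256
```

The point of the sketch is the arity: unlike few-shot methods that need many images of the same person to model appearance, the function above consumes exactly one source image per identity.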