Paper Title
FSGANv2: Improved Subject Agnostic Face Swapping and Reenactment
Paper Authors
Paper Abstract
We present Face Swapping GAN (FSGAN) for face swapping and reenactment. Unlike previous work, we offer a subject-agnostic swapping scheme that can be applied to pairs of faces without requiring training on those faces. We derive a novel iterative deep learning-based approach for face reenactment that adjusts for significant pose and expression variations and can be applied to a single image or a video sequence. For video sequences, we introduce continuous interpolation of the face views based on reenactment, Delaunay triangulation, and barycentric coordinates. Occluded face regions are handled by a face completion network. Finally, we use a face blending network to seamlessly blend the two faces while preserving the target skin color and lighting conditions. This network uses a novel Poisson blending loss that combines Poisson optimization with a perceptual loss. We compare our approach to existing state-of-the-art systems and show that our results are both qualitatively and quantitatively superior. This work extends the FSGAN method, proposed in an earlier conference version of our work, with additional experiments and results.
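To make the view-interpolation step concrete, below is a minimal sketch of barycentric interpolation over a Delaunay triangulation of source-view poses. It assumes a hypothetical 2D pose parameterization (e.g. yaw/pitch angles); the function name barycentric_view_weights and the pose representation are illustrative assumptions, not the paper's actual implementation.

    import numpy as np
    from scipy.spatial import Delaunay

    def barycentric_view_weights(view_poses, target_pose):
        # view_poses: (N, 2) array of per-view poses, e.g. (yaw, pitch) angles
        # (assumed parameterization); target_pose: (2,) pose of the target frame.
        view_poses = np.asarray(view_poses, dtype=np.float64)
        target_pose = np.asarray(target_pose, dtype=np.float64)
        tri = Delaunay(view_poses)
        simplex = int(tri.find_simplex(target_pose))
        if simplex == -1:
            raise ValueError("target pose lies outside the convex hull of the source views")
        vertex_ids = tri.simplices[simplex]  # indices of the 3 enclosing source views
        # tri.transform[simplex] is the affine map to barycentric coordinates:
        # the first two weights come from the transform, the third from the
        # constraint that barycentric weights sum to 1.
        T = tri.transform[simplex]
        b = T[:2] @ (target_pose - T[2])
        weights = np.append(b, 1.0 - b.sum())
        return vertex_ids, weights

Under this reading, each of the three selected source views would be reenacted toward the target pose and the results blended pixel-wise with these weights, so the output varies continuously as the target pose moves across the triangulation.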