Paper Title

Migrating Face Swap to Mobile Devices: A Lightweight Framework and a Supervised Training Solution

Authors

Haiming Yu, Hao Zhu, Xiangju Lu, Junhui Liu

Abstract

Existing face swap methods rely heavily on large-scale networks for adequate capacity to generate visually plausible results, which inhibits their application on resource-constrained platforms. In this work, we propose MobileFSGAN, a novel lightweight GAN for face swap that can run on mobile devices with far fewer parameters while achieving competitive performance. A lightweight encoder-decoder structure is designed especially for image synthesis tasks; it is only 10.2 MB and runs on mobile devices at real-time speed. To tackle the instability of training such a small network, we construct the FSTriplets dataset using facial attribute editing techniques. FSTriplets provides source-target-result training triplets, yielding pixel-level labels and thus, for the first time, making the training process supervised. We also design multi-scale gradient losses for efficient back-propagation, resulting in faster and better convergence. Experimental results show that our model achieves performance comparable to state-of-the-art methods while significantly reducing the number of network parameters. The code and dataset have been released.
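
The abstract does not give the exact form of the multi-scale gradient losses or of the pixel-level supervision enabled by FSTriplets, so the sketch below is only a rough illustration of how such a loss is commonly written in PyTorch: generator outputs at several resolutions are compared, pixel by pixel, against downsampled copies of the ground-truth result image from a source-target-result triplet. The function name multi_scale_l1_loss, the bilinear downsampling, and the plain L1 terms are illustrative assumptions, not the paper's actual definition.

```python
import torch
import torch.nn.functional as F

def multi_scale_l1_loss(outputs, target, weights=None):
    """Pixel-level supervision at several scales (illustrative sketch).

    outputs: list of generator outputs [B, C, H_i, W_i], coarse to fine.
    target:  ground-truth swapped face [B, C, H, W] at the finest scale
             (the "result" image of a source-target-result triplet).
    """
    if weights is None:
        weights = [1.0] * len(outputs)
    total = 0.0
    for w, out in zip(weights, outputs):
        # Downsample the ground truth to this output's resolution and
        # penalise the per-pixel difference at that scale.
        tgt = F.interpolate(target, size=out.shape[-2:],
                            mode="bilinear", align_corners=False)
        total = total + w * F.l1_loss(out, tgt)
    return total

# Toy usage: three hypothetical generator outputs at 64, 128 and 256 px.
if __name__ == "__main__":
    gt = torch.rand(2, 3, 256, 256)
    outs = [torch.rand(2, 3, s, s) for s in (64, 128, 256)]
    print(multi_scale_l1_loss(outs, gt).item())
```

In this kind of setup, supervising intermediate resolutions lets gradients reach the early decoder layers directly, which is one plausible reading of the faster and better convergence the authors report; the released code should be consulted for the actual loss terms.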
