Paper Title

SLGAN: Style- and Latent-guided Generative Adversarial Network for Desirable Makeup Transfer and Removal

Paper Authors

Daichi Horita, Kiyoharu Aizawa

Paper Abstract

There are five features to consider when using generative adversarial networks to apply makeup to photos of the human face. These features include (1) facial components, (2) interactive color adjustments, (3) makeup variations, (4) robustness to poses and expressions, and (5) the use of multiple reference images. Several related works have been proposed, mainly using generative adversarial networks (GANs). Unfortunately, none of them addresses all five features simultaneously. This paper closes the gap with an innovative style- and latent-guided GAN (SLGAN). We provide a novel perceptual makeup loss and a style-invariant decoder that can transfer makeup styles based on histogram matching, avoiding the identity-shift problem. In our experiments, we show that our SLGAN is better than or comparable to state-of-the-art methods. Furthermore, we show that our proposal can interpolate facial makeup images to determine the unique features, compare existing methods, and help users find desirable makeup configurations.
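The abstract names two concrete mechanisms: transferring makeup color statistics via histogram matching, and interpolating between makeup styles. The paper's full formulation is not reproduced here, so below is a minimal NumPy sketch of per-region histogram matching as it is commonly used in makeup-transfer losses to build a pseudo ground truth per facial component (e.g., lips or eye shadow); the function names, the two-mask interface, and the independent per-channel RGB treatment are illustrative assumptions, not the authors' code.

```python
import numpy as np

def match_histograms(source, reference):
    """Remap the values in `source` so their empirical CDF matches that of
    `reference`. Both inputs are 1-D arrays holding one color channel of a
    masked facial region (e.g., the lips)."""
    src_vals, src_idx, src_counts = np.unique(
        source, return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference, return_counts=True)
    # Empirical CDFs of the source and reference channels.
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source quantile, look up the reference value at that quantile.
    matched = np.interp(src_cdf, ref_cdf, ref_vals)
    return matched[src_idx]

def makeup_target(source_rgb, reference_rgb, src_mask, ref_mask):
    """Build a pseudo ground truth for one facial component: the source
    image, but with the reference's color distribution inside the
    component's region. Masks are boolean arrays from a face parser."""
    target = source_rgb.astype(np.float64).copy()
    for c in range(3):  # match each RGB channel independently
        target[..., c][src_mask] = match_histograms(
            source_rgb[..., c][src_mask],
            reference_rgb[..., c][ref_mask])
    return target
```

As for the interpolation feature, in most style-guided GANs it reduces to linearly blending two style or latent codes, s = (1 - a) * s1 + a * s2 for a in [0, 1], and decoding the blend; this is presumably how SLGAN sweeps between makeup configurations, though the abstract alone does not specify the exact scheme.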
