Paper Title


Self-Conditioned Generative Adversarial Networks for Image Editing

Authors

Yunzhe Liu, Rinon Gal, Amit H. Bermano, Baoquan Chen, Daniel Cohen-Or

Abstract


Generative Adversarial Networks (GANs) are susceptible to bias, learned from either the unbalanced data, or through mode collapse. The networks focus on the core of the data distribution, leaving the tails - or the edges of the distribution - behind. We argue that this bias is responsible not only for fairness concerns, but that it plays a key role in the collapse of latent-traversal editing methods when deviating away from the distribution's core. Building on this observation, we outline a method for mitigating generative bias through a self-conditioning process, where distances in the latent-space of a pre-trained generator are used to provide initial labels for the data. By fine-tuning the generator on a re-sampled distribution drawn from these self-labeled data, we force the generator to better contend with rare semantic attributes and enable more realistic generation of these properties. We compare our models to a wide range of latent editing methods, and show that by alleviating the bias they achieve finer semantic control and better identity preservation through a wider range of transformations. Our code and models will be available at https://github.com/yzliu567/sc-gan
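The self-conditioning pipeline the abstract describes (use latent-space distances to self-label data, then fine-tune on a re-sampled distribution that over-weights rare labels) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the distance-to-mean binning, the function names, and the toy data are all assumptions introduced here; the actual method operates on a pre-trained generator's latent space.

```python
import math
import random

def self_label(latents, num_bins=4):
    """Assign each latent code a label from its distance to the latent mean.

    Illustrative stand-in for the paper's latent-space self-labeling:
    codes far from the mean approximate the distribution's tails.
    """
    dim = len(latents[0])
    mean = [sum(v[i] for v in latents) / len(latents) for i in range(dim)]
    dists = [math.dist(v, mean) for v in latents]
    lo, hi = min(dists), max(dists)
    span = (hi - lo) or 1.0
    # Bin by normalized distance: bin 0 = distribution core, last bin = tail.
    return [min(int((d - lo) / span * num_bins), num_bins - 1) for d in dists]

def resample_weights(labels):
    """Inverse-frequency sampling weights, so rare (tail) bins are drawn
    about as often as common (core) bins during fine-tuning."""
    counts = {}
    for c in labels:
        counts[c] = counts.get(c, 0) + 1
    return [1.0 / counts[c] for c in labels]

# Toy usage: a dense "core" cluster plus a few outliers standing in for
# rare semantic attributes.
random.seed(0)
core = [[random.gauss(0.0, 0.1) for _ in range(8)] for _ in range(95)]
tail = [[random.gauss(3.0, 0.1) for _ in range(8)] for _ in range(5)]
latents = core + tail
labels = self_label(latents)
weights = resample_weights(labels)
# random.choices(latents, weights=weights, k=...) would then feed the
# generator's fine-tuning loop with a tail-balanced sample.
```

The design point this illustrates is the re-sampling step: because weights are inversely proportional to bin frequency, the fine-tuned generator sees tail examples far more often than their raw frequency would allow, forcing it to model rare attributes rather than collapse to the core.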
