Paper Title
Augmentation-Aware Self-Supervision for Data-Efficient GAN Training
Authors
Abstract
Training generative adversarial networks (GANs) with limited data is challenging because the discriminator is prone to overfitting. Previously proposed differentiable augmentation improves the data efficiency of GAN training. However, the augmentation implicitly introduces undesired invariance to augmentation into the discriminator, since it ignores the change of semantics in the label space caused by data transformations; this may limit the representation learning ability of the discriminator and ultimately affect the generative modeling performance of the generator. To mitigate the negative impact of this invariance while inheriting the benefits of data augmentation, we propose a novel augmentation-aware self-supervised discriminator that predicts the augmentation parameter of the augmented data. In particular, the prediction targets of real data and generated data are required to be distinct, since the two differ during training. We further encourage the generator to learn adversarially from the self-supervised discriminator by generating augmentation-predictable real rather than fake data. This formulation connects the learning objective of the generator to the arithmetic-harmonic mean divergence under certain assumptions. We compare our method with state-of-the-art (SOTA) methods using the class-conditional BigGAN and unconditional StyleGAN2 architectures on data-limited CIFAR-10, CIFAR-100, FFHQ, and LSUN-Cat, and on five low-shot datasets. Experimental results demonstrate significant improvements of our method over SOTA methods in training data-efficient GANs.
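To illustrate the core idea of the abstract, the following is a minimal, hypothetical sketch of augmentation-aware self-supervision: an augmentation is applied to a sample, its parameter is recorded, and an auxiliary prediction is penalized against different targets for real and generated data. The horizontal-shift augmentation, the MSE loss form, and the `fake_target` value are all illustrative assumptions, not the paper's actual policy or loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(x, max_shift=4):
    """Apply a random horizontal shift and return (augmented, parameter).
    The shift amount stands in for the augmentation parameter that the
    discriminator is asked to predict (illustrative choice; the paper's
    augmentation policy covers more transformations)."""
    s = int(rng.integers(-max_shift, max_shift + 1))
    return np.roll(x, s, axis=-1), s

def aug_prediction_loss(pred_param, true_param, is_real, fake_target=0.0):
    """Augmentation-aware self-supervision loss (hypothetical MSE form).
    Real and generated samples get different prediction targets,
    mirroring the abstract's requirement that the two be distinguished;
    `fake_target` is an assumed placeholder target for generated data."""
    target = true_param if is_real else fake_target
    return (pred_param - target) ** 2

x = rng.standard_normal((3, 8, 8))        # toy "image": channels x H x W
x_aug, s = augment(x)
# A perfect prediction of the true parameter on real data incurs no loss.
loss_real = aug_prediction_loss(pred_param=float(s), true_param=float(s), is_real=True)
```

In a full training loop, this auxiliary loss would be added to the adversarial losses of both networks, with the generator rewarded when its outputs are treated like augmentation-predictable real data.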