Paper Title
Improved Input Reprogramming for GAN Conditioning
Paper Authors
Paper Abstract
We study the GAN conditioning problem, whose goal is to convert a pretrained unconditional GAN into a conditional GAN using labeled data. We first identify and analyze three approaches to this problem -- conditional GAN training from scratch, fine-tuning, and input reprogramming. Our analysis reveals that when the amount of labeled data is small, input reprogramming performs the best. Motivated by real-world scenarios with scarce labeled data, we focus on the input reprogramming approach and carefully analyze the existing algorithm. After identifying a few critical issues of the previous input reprogramming approach, we propose a new algorithm called InRep+. Our algorithm InRep+ addresses the existing issues with the novel uses of invertible neural networks and Positive-Unlabeled (PU) learning. Via extensive experiments, we show that InRep+ outperforms all existing methods, particularly when label information is scarce, noisy, and/or imbalanced. For instance, for the task of conditioning a CIFAR10 GAN with 1% labeled data, InRep+ achieves an average Intra-FID of 76.24, whereas the second-best method achieves 114.51.