Paper Title


Distilling Representations from GAN Generator via Squeeze and Span

Paper Authors

Yu Yang, Xiaotian Cheng, Chang Liu, Hakan Bilen, Xiangyang Ji

Abstract


In recent years, generative adversarial networks (GANs) have been an actively studied topic and shown to successfully produce high-quality realistic images in various domains. The controllable synthesis ability of GAN generators suggests that they maintain informative, disentangled, and explainable image representations, but leveraging and transferring their representations to downstream tasks is largely unexplored. In this paper, we propose to distill knowledge from GAN generators by squeezing and spanning their representations. We squeeze the generator features into representations that are invariant to semantic-preserving transformations through a network before they are distilled into the student network. We span the distilled representation of the synthetic domain to the real domain by also using real training data, to remedy the mode collapse of GANs and boost the student network's performance in the real domain. Experiments justify the efficacy of our method and reveal its great significance in self-supervised representation learning. Code is available at https://github.com/yangyu12/squeeze-and-span.
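To make the abstract's two-part objective concrete, the following is a minimal toy sketch (not the paper's actual implementation) of a squeeze-and-span loss. All networks are stand-in linear/tanh maps, the augmentation is additive noise, and the loss weighting is assumed; it only illustrates the structure: a "squeeze" loss distilling squeezed generator features on synthetic samples, plus a "span" consistency loss on real data.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse(a, b):
    """Mean squared error between two feature batches."""
    return float(np.mean((a - b) ** 2))

# --- Hypothetical stand-in networks (real method uses deep networks) ---
def generator(z, W1, W2):
    """Toy GAN generator: returns a synthetic 'image' and an internal feature."""
    h = np.tanh(z @ W1)        # internal generator feature
    x = h @ W2                 # synthetic sample G(z)
    return x, h

def squeeze(h, W_s):
    """'Squeeze' module: project generator features into a compact target."""
    return h @ W_s

def student(x, W_t):
    """Student network being distilled."""
    return x @ W_t

# Toy dimensions and random parameters (all assumed for illustration)
d_z, d_h, d_r, d_x = 8, 16, 4, 16
W1 = rng.normal(size=(d_z, d_h))
W2 = rng.normal(size=(d_h, d_x))
W_s = rng.normal(size=(d_h, d_r))
W_t = rng.normal(size=(d_x, d_r))

# Squeeze: distill squeezed generator features on synthetic samples
z = rng.normal(size=(32, d_z))
fake, h = generator(z, W1, W2)
squeeze_loss = mse(student(fake, W_t), squeeze(h, W_s))

# Span: extend the representation to the real domain via a consistency
# term between two augmented views of real training data (toy noise aug.)
real = rng.normal(size=(32, d_x))
view1 = real + 0.01 * rng.normal(size=real.shape)
view2 = real + 0.01 * rng.normal(size=real.shape)
span_loss = mse(student(view1, W_t), student(view2, W_t))

# Combined objective; the unit weight is an assumption, not from the paper
total_loss = squeeze_loss + 1.0 * span_loss
```

In the actual method the squeeze module enforces invariance to semantic-preserving transformations and the student is trained jointly on both terms; this sketch only shows how synthetic and real batches each contribute one loss term.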
