Paper Title

Adversarial Code Learning for Image Generation

Authors

Jiangbo Yuan, Bing Wu, Wanying Ding, Qing Ping, Zhendong Yu

Abstract

We introduce the "adversarial code learning" (ACL) module, which improves overall image generation performance across several types of deep models. Instead of performing posterior distribution modeling in the pixel space of the generator, ACL aims to jointly learn a latent code with another image encoder/inference net, taking prior noise as its input. We conduct the learning in an adversarial process that bears a close resemblance to the original GAN, but shifts the learning from image space to the prior and latent code spaces. ACL is a portable module that brings more flexibility and possibilities to generative model design. First, it allows non-generative models such as autoencoders and standard classification models to be converted into decent generative models. Second, it enhances the performance of existing GANs by generating meaningful codes and images from any part of the prior. We have incorporated the ACL module into the aforementioned frameworks and performed experiments on synthetic, MNIST, CIFAR-10, and CelebA datasets. Our models achieve significant improvements, demonstrating the generality of the approach for image generation tasks.
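The core idea described above, shifting adversarial training from image space to the latent code space, can be sketched roughly as follows. This is a minimal illustrative sketch in PyTorch under assumptions of our own (the tiny linear networks, dimensions, and names such as `prior_mapper` and `code_disc` are hypothetical, not the authors' implementation): an encoder produces "real" codes from images, a mapper turns prior noise into "fake" codes, and a discriminator operating only on codes drives the two distributions together.

```python
# Hedged sketch of adversarial code learning (ACL): a GAN-style game played
# in latent code space rather than pixel space. All module names, sizes, and
# hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
LATENT_DIM, IMG_DIM, BATCH = 16, 64, 8

# Image encoder / inference net: images -> "real" latent codes.
encoder = nn.Sequential(nn.Linear(IMG_DIM, 32), nn.ReLU(), nn.Linear(32, LATENT_DIM))
# Mapper from prior noise -> "fake" latent codes (the generator of this game).
prior_mapper = nn.Sequential(nn.Linear(LATENT_DIM, 32), nn.ReLU(), nn.Linear(32, LATENT_DIM))
# Discriminator that sees only codes, never pixels.
code_disc = nn.Sequential(nn.Linear(LATENT_DIM, 32), nn.ReLU(), nn.Linear(32, 1))

bce = nn.BCEWithLogitsLoss()
opt_d = torch.optim.Adam(code_disc.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(prior_mapper.parameters(), lr=1e-3)

images = torch.randn(BATCH, IMG_DIM)    # stand-in for a batch of real images
noise = torch.randn(BATCH, LATENT_DIM)  # samples from the prior

# Discriminator step: encoder codes are labeled real, mapped noise is fake.
real_code = encoder(images).detach()
fake_code = prior_mapper(noise).detach()
d_loss = bce(code_disc(real_code), torch.ones(BATCH, 1)) + \
         bce(code_disc(fake_code), torch.zeros(BATCH, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: the prior mapper tries to make its codes pass as real.
g_loss = bce(code_disc(prior_mapper(noise)), torch.ones(BATCH, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```

In a full model, the learned codes would additionally feed a decoder that reconstructs or generates images; the sketch only shows the code-space adversarial step that the abstract describes.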
