Paper Title

Crossing-Domain Generative Adversarial Networks for Unsupervised Multi-Domain Image-to-Image Translation

Paper Authors

Xuewen Yang, Dongliang Xie, Xin Wang

Paper Abstract


State-of-the-art techniques in Generative Adversarial Networks (GANs) have shown remarkable success in image-to-image translation from peer domain X to domain Y using paired image data. However, obtaining abundant paired data is a non-trivial and expensive process in the majority of applications. When there is a need to translate images across n domains, if the training is performed between every two domains, the complexity of the training will increase quadratically. Moreover, training with data from only two domains at a time cannot benefit from data of other domains, which prevents the extraction of more useful features and hinders the progress of this research area. In this work, we propose a general framework for unsupervised image-to-image translation across multiple domains, which can translate images from domain X to any domain without requiring direct training between the two domains involved in the translation. A byproduct of the framework is a reduction in computing time and computing resources, since it needs less time than training the domains in pairs as is done in state-of-the-art works. Our proposed framework consists of a pair of encoders along with a pair of GANs which learn high-level features across different domains to generate diverse and realistic samples. Our framework shows competitive results on many image-to-image tasks compared with state-of-the-art techniques.
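
The abstract only describes the framework at a high level: encoders map an image from any source domain into a shared representation, and a GAN generator decodes that representation into a requested target domain, so n domains share one model instead of requiring roughly n(n-1)/2 pairwise-trained translators. Below is a minimal, hypothetical PyTorch sketch of such an encoder/generator/discriminator arrangement. The layer sizes, the domain-embedding conditioning, and all module names are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the authors' code) of a multi-domain translation setup:
# a shared encoder produces a domain-agnostic latent code, and a single generator
# conditioned on a target-domain index decodes it, avoiding one model per domain pair.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps an image to a domain-agnostic latent feature map."""
    def __init__(self, in_ch=3, latent_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, latent_ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Decodes a latent code into an image of the requested target domain."""
    def __init__(self, latent_ch=64, out_ch=3, n_domains=4):
        super().__init__()
        # Hypothetical conditioning: a learned embedding per target domain.
        self.embed = nn.Embedding(n_domains, latent_ch)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_ch, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, out_ch, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, z, domain):
        d = self.embed(domain)[:, :, None, None]  # broadcast domain code over space
        return self.net(z + d)

class Discriminator(nn.Module):
    """Scores realism of generated images; shared across domains."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

# Translate a batch from an arbitrary source domain to domain index 2 without a
# pairwise-trained model for that specific (source, target) pair.
enc, gen, dis = Encoder(), Generator(), Discriminator()
x = torch.randn(8, 3, 64, 64)                   # stand-in batch of source-domain images
target = torch.full((8,), 2, dtype=torch.long)  # requested target domain index
fake = gen(enc(x), target)
realism = dis(fake)                             # adversarial signal used during training
```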
