Paper Title
Self-Supervised Sketch-to-Image Synthesis
Paper Authors
Paper Abstract
Imagining a colored, realistic image from an arbitrarily drawn sketch is one of the human capabilities that we are eager for machines to mimic. Unlike previous methods that either require sketch-image pairs or utilize low-quality detected edges as sketches, we study the exemplar-based sketch-to-image (S2I) synthesis task in a self-supervised learning manner, eliminating the necessity of paired sketch data. To this end, we first propose an unsupervised method to efficiently synthesize line sketches for general RGB-only datasets. With the synthetic paired data, we then present a self-supervised Auto-Encoder (AE) to decouple the content/style features from sketches and RGB images, and to synthesize images that are both content-faithful to the sketches and style-consistent with the RGB images. While prior works employ either a cycle-consistency loss or dedicated attentional modules to enforce content/style fidelity, we show the AE's superior performance with pure self-supervision. To further improve the synthesis quality at high resolution, we also leverage an adversarial network to refine the details of the synthetic images. Extensive experiments at 1024×1024 resolution demonstrate the state-of-the-art performance of the proposed model on the CelebA-HQ and Wiki-Art datasets. Moreover, with the proposed sketch generator, the model shows promising performance on style mixing and style transfer, which require synthesized images to be both style-consistent and semantically meaningful. Our code is available at https://github.com/odegeasslbc/Self-Supervised-Sketch-to-Image-Synthesis-PyTorch, and an online demo is available at https://create.playform.io/my-projects?mode=sketch.
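The self-supervised pipeline the abstract describes can be sketched in miniature: a sketch generator turns an RGB image into a synthetic line sketch, a content encoder reads the sketch, a style encoder reads the RGB exemplar, and a decoder combines both codes, with the original RGB image serving as its own reconstruction target. The toy below uses plain linear maps and vectors in place of convolutional networks and images; every function and dimension here is an illustrative assumption, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only; the paper operates on 1024x1024 images).
D_IMG, D_CONTENT, D_STYLE = 16, 4, 4

# Linear stand-ins for the networks.
W_content = rng.normal(size=(D_CONTENT, D_IMG))        # content encoder (reads the sketch)
W_style = rng.normal(size=(D_STYLE, D_IMG))            # style encoder (reads the RGB exemplar)
W_dec = rng.normal(size=(D_IMG, D_CONTENT + D_STYLE))  # decoder

def sketch_generator(rgb):
    """Stand-in for the unsupervised line-sketch synthesizer:
    here just an element-wise sign, keeping rough structure only."""
    return np.sign(rgb)

def autoencode(sketch, rgb_exemplar):
    """Decouple content (from the sketch) and style (from the RGB
    exemplar), then decode the concatenated codes into an image."""
    content = W_content @ sketch
    style = W_style @ rgb_exemplar
    return W_dec @ np.concatenate([content, style])

# Self-supervised step: the sketch is synthesized from the RGB image
# itself, so that same RGB image doubles as the reconstruction target.
rgb = rng.normal(size=D_IMG)
recon = autoencode(sketch_generator(rgb), rgb)
loss = float(np.mean((recon - rgb) ** 2))  # reconstruction loss to minimize
```

At test time, the same `autoencode` call can instead pair a hand-drawn sketch with any RGB exemplar, which is what makes the exemplar-based style mixing and style transfer described above possible.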