Paper Title
Conditional Image Generation and Manipulation for User-Specified Content
Paper Authors
Abstract
In recent years, Generative Adversarial Networks (GANs) have improved steadily towards generating increasingly impressive real-world images. It is useful to steer the image generation process for purposes such as content creation. This can be done by conditioning the model on additional information. However, when conditioning on additional information, there still exists a large set of images that agree with a particular conditioning. This makes it unlikely that the generated image is exactly as envisioned by a user, which is problematic for practical content creation scenarios such as generating facial composites or stock photos. To solve this problem, we propose a single pipeline for text-to-image generation and manipulation. In the first part of our pipeline we introduce textStyleGAN, a model that is conditioned on text. In the second part of our pipeline we make use of the pre-trained weights of textStyleGAN to perform semantic facial image manipulation. The approach works by finding semantic directions in latent space. We show that this method can be used to manipulate facial images for a wide range of attributes. Finally, we introduce the CelebTD-HQ dataset, an extension to CelebA-HQ, consisting of faces and corresponding textual descriptions.
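The abstract's second stage, manipulation via "semantic directions in latent space", can be illustrated with a minimal sketch. The snippet below is not the paper's actual method: it assumes access to a set of latent codes with binary attribute labels, estimates a direction as the normalized difference of class means (one common heuristic; a linear classifier's normal vector is another), and edits a latent code by moving it along that direction before decoding with the generator. The function names `find_semantic_direction` and `manipulate` are hypothetical.

```python
import numpy as np

def find_semantic_direction(latents, labels):
    """Estimate a semantic direction in latent space as the normalized
    difference between mean latent codes of the two attribute classes.
    A simplified stand-in for the paper's procedure, which may instead
    use e.g. the normal of a linear attribute classifier."""
    pos_mean = latents[labels == 1].mean(axis=0)  # codes with the attribute
    neg_mean = latents[labels == 0].mean(axis=0)  # codes without it
    direction = pos_mean - neg_mean
    return direction / np.linalg.norm(direction)

def manipulate(latent, direction, strength):
    """Shift a latent code along the semantic direction; the edited code
    would then be passed through the pre-trained generator to render
    the manipulated image (generator call omitted here)."""
    return latent + strength * direction

# Toy usage with synthetic latents: class 1 is shifted along dimension 0,
# so the recovered direction should point mostly along that axis.
rng = np.random.default_rng(0)
neg = rng.standard_normal((200, 8))
pos = rng.standard_normal((200, 8)) + np.array([3.0] + [0.0] * 7)
latents = np.vstack([neg, pos])
labels = np.array([0] * 200 + [1] * 200)

d = find_semantic_direction(latents, labels)
edited = manipulate(np.zeros(8), d, strength=2.0)
```

In practice the strength parameter controls the intensity of the attribute change, and directions found this way can be entangled with correlated attributes, which is why such methods often add projection or conditioning steps.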