Paper Title
GAN Inversion for Image Editing via Unsupervised Domain Adaptation
Paper Authors
Paper Abstract
Existing GAN inversion methods work brilliantly in reconstructing high-quality (HQ) images but struggle with the more common low-quality (LQ) inputs encountered in practical applications. To address this issue, we propose Unsupervised Domain Adaptation (UDA) in the inversion process, namely UDA-Inversion, for effective inversion and editing of both HQ and LQ images. Regarding unpaired HQ images as the source domain and LQ images as the unlabeled target domain, we introduce a theoretical guarantee: the loss in the target domain is upper-bounded by the loss in the source domain plus a novel discrepancy function that measures the difference between the two domains. It follows that we only need to minimize this upper bound to obtain accurate latent codes for both HQ and LQ images. Thus, constructive representations of HQ images can be spontaneously learned and transformed into LQ images without supervision. UDA-Inversion achieves a better PSNR of 22.14 on the FFHQ dataset and performs comparably to supervised methods.
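As a reading aid, the guarantee stated in the abstract can be sketched in the following form. This is a minimal illustration with assumed notation (E for the inversion encoder, L_S and L_T for the source- and target-domain losses, d for the discrepancy function); the abstract does not give the paper's exact formulation.

% Sketch of the stated bound (assumed notation, not the paper's exact formula):
% E: inversion encoder; D_S: HQ source domain; D_T: LQ target domain.
\[
  \mathcal{L}_{T}(E) \;\le\; \mathcal{L}_{S}(E) + d\bigl(\mathcal{D}_{S}, \mathcal{D}_{T}\bigr)
\]
% Minimizing the right-hand side (source loss plus domain discrepancy)
% bounds the unsupervised target-domain loss from above.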