Paper Title
HyperDomainNet: Universal Domain Adaptation for Generative Adversarial Networks
Paper Authors
Paper Abstract
The domain adaptation framework of GANs has achieved great progress in recent years as a main successful approach to training contemporary GANs in the case of very limited training data. In this work, we significantly improve this framework by proposing an extremely compact parameter space for fine-tuning the generator. We introduce a novel domain-modulation technique that allows optimizing only a 6-thousand-dimensional vector instead of the 30 million weights of StyleGAN2 to adapt to a target domain. We apply this parameterization to state-of-the-art domain adaptation methods and show that it has almost the same expressiveness as the full parameter space. Additionally, we propose a new regularization loss that considerably enhances the diversity of the fine-tuned generator. Inspired by the reduction in the size of the optimized parameter space, we consider the problem of multi-domain adaptation of GANs, i.e., the setting where the same model can adapt to several domains depending on the input query. We propose HyperDomainNet, a hypernetwork that predicts our parameterization given the target domain. We empirically confirm that it can successfully learn a number of domains at once and may even generalize to unseen domains. Source code can be found at https://github.com/MACderRu/HyperDomainNet
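The core idea described above can be illustrated with a minimal sketch: instead of fine-tuning all generator weights, a single trainable per-channel domain vector rescales the style-modulated convolution weights, so adapting to a new domain only means optimizing that vector. All names and shapes below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

out_ch, in_ch, k = 8, 4, 3
w = rng.normal(size=(out_ch, in_ch, k, k))  # frozen pretrained conv weights
s = rng.normal(size=in_ch)                  # style vector (per input channel)
d = np.ones(in_ch)                          # domain vector: the only trainable parameter

# Standard StyleGAN2-style modulation: scale each input channel by s.
w_mod = w * s[None, :, None, None]

# Domain modulation: additionally rescale the same channels by d.
# Concatenated over all generator layers, such vectors form the compact
# (~6-thousand-dimensional) parameter space mentioned in the abstract.
w_dom = w_mod * d[None, :, None, None]

# With d = 1 the generator is unchanged, so optimization starts from
# the pretrained model and only d moves during adaptation.
print(np.allclose(w_dom, w_mod))  # True
print(w_dom.shape)                # (8, 4, 3, 3)
```

In the multi-domain setting, a hypernetwork (HyperDomainNet) would predict the vector `d` from an embedding of the target domain rather than optimizing it directly.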