Paper Title
Regularized Training of Intermediate Layers for Generative Models for Inverse Problems
Paper Authors
Paper Abstract
Generative Adversarial Networks (GANs) have been shown to be powerful and flexible priors when solving inverse problems. One challenge of using them is overcoming representation error, the fundamental limitation of the network in representing any particular signal. Recently, multiple proposed inversion algorithms reduce representation error by optimizing over intermediate layer representations. These methods are typically applied to generative models that were trained agnostic of the downstream inversion algorithm. In our work, we introduce a principle that if a generative model is intended for inversion using an algorithm based on optimization of intermediate layers, it should be trained in a way that regularizes those intermediate layers. We instantiate this principle for two notable recent inversion algorithms: Intermediate Layer Optimization and the Multi-Code GAN prior. For both of these inversion algorithms, we introduce a new regularized GAN training algorithm and demonstrate that the learned generative model results in lower reconstruction errors across a wide range of undersampling ratios when solving compressed sensing, inpainting, and super-resolution problems.
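To make the abstract's core idea concrete, below is a minimal sketch of intermediate-layer inversion for compressed sensing: a generator is split into two halves, a latent code is first optimized as usual, and then the intermediate representation itself is re-optimized with a penalty that keeps it near the range of the first half. This is an illustrative toy example, not the paper's implementation; the generator architecture, names (g1, g2, h_anchor), penalty form, and penalty_weight are all assumptions introduced here for demonstration.

```python
# Toy sketch of intermediate-layer optimization for compressed sensing.
# All architecture choices and hyperparameters are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

latent_dim, hidden_dim, signal_dim, num_measurements = 32, 128, 256, 64

# A small generator split into two halves, G = g2(g1(.)), so that the
# intermediate representation g1(z) is exposed for optimization.
g1 = nn.Sequential(nn.Linear(latent_dim, hidden_dim), nn.ReLU())
g2 = nn.Sequential(nn.Linear(hidden_dim, signal_dim))

# Random Gaussian sensing matrix and an in-range target signal (for demo only).
A = torch.randn(num_measurements, signal_dim) / num_measurements ** 0.5
x_true = g2(g1(torch.randn(latent_dim))).detach()
y = A @ x_true  # compressed measurements

# Stage 1: standard latent-space inversion, min_z ||A G(z) - y||^2.
z = torch.randn(latent_dim, requires_grad=True)
opt_z = torch.optim.Adam([z], lr=1e-2)
for _ in range(500):
    opt_z.zero_grad()
    loss = ((A @ g2(g1(z)) - y) ** 2).sum()
    loss.backward()
    opt_z.step()

# Stage 2: re-optimize the intermediate representation h, initialized at
# g1(z). The quadratic penalty keeps h close to the range of g1; this is
# exactly the kind of deviation that regularized training of the
# intermediate layers is meant to make well-behaved.
h = g1(z).detach().clone().requires_grad_(True)
h_anchor = h.detach().clone()
penalty_weight = 0.1  # illustrative value
opt_h = torch.optim.Adam([h], lr=1e-2)
for _ in range(500):
    opt_h.zero_grad()
    data_fit = ((A @ g2(h) - y) ** 2).sum()
    reg = penalty_weight * ((h - h_anchor) ** 2).sum()
    (data_fit + reg).backward()
    opt_h.step()

x_hat = g2(h).detach()
print("relative reconstruction error:", ((x_hat - x_true).norm() / x_true.norm()).item())
```

The paper's principle concerns the training side of this picture: if inversion will optimize over h, the GAN should be trained with a regularizer on those same intermediate layers, so that small deviations from the range of g1 remain meaningful at inversion time.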