Paper Title
Pixel-wise Dense Detector for Image Inpainting
Paper Authors
Abstract
Recent GAN-based image inpainting approaches adopt an averaging strategy to discriminate the generated image and output a scalar, which inevitably loses the position information of visual artifacts. Moreover, the adversarial loss and the reconstruction loss (e.g., l1 loss) are combined with tradeoff weights, which are difficult to tune. In this paper, we propose a novel detection-based generative framework for image inpainting, which adopts the min-max strategy in an adversarial process. The generator follows an encoder-decoder architecture to fill the missing regions, and the detector, trained with weakly supervised learning, localizes the positions of artifacts in a pixel-wise manner. This position information makes the generator pay attention to the artifacts and further refine these regions. More importantly, we explicitly insert the output of the detector into the reconstruction loss with a weighting criterion, which balances the weights of the adversarial loss and the reconstruction loss automatically rather than manually. Experiments on multiple public datasets show the superior performance of the proposed framework. The source code is available at https://github.com/Evergrow/GDN_Inpainting.
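The core idea of inserting the detector's pixel-wise output into the reconstruction loss can be sketched as follows. This is a minimal illustration, not the authors' implementation: the exact weighting criterion (here, `1 + artifact_map`) and the function name `weighted_reconstruction_loss` are assumptions for exposition.

```python
import numpy as np

def weighted_reconstruction_loss(pred, target, artifact_map):
    """Pixel-wise weighted l1 reconstruction loss.

    pred, target : arrays of the same shape (generated and ground-truth images).
    artifact_map : detector output in [0, 1] per pixel, where larger values
                   mean the pixel is more likely a visual artifact.
    """
    # Hypothetical weighting criterion: pixels flagged as artifacts receive a
    # larger reconstruction weight, so the balance between the reconstruction
    # and adversarial terms adapts per pixel instead of relying on a single
    # hand-tuned global tradeoff weight.
    weights = 1.0 + artifact_map
    return float(np.mean(weights * np.abs(pred - target)))
```

With a uniform zero artifact map this reduces to the plain l1 loss; regions the detector flags contribute proportionally more, steering the generator's attention toward them.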