Title
Projected Distribution Loss for Image Enhancement
Authors
Abstract
Features obtained from object recognition CNNs have been widely used for measuring perceptual similarities between images. Such differentiable metrics can be used as perceptual learning losses to train image enhancement models. However, the choice of the distance function between input and target features may have a consequential impact on the performance of the trained model. While using the norm of the difference between extracted features leads to limited hallucination of details, measuring the distance between distributions of features may generate more textures, yet also more unrealistic details and artifacts. In this paper, we demonstrate that aggregating 1D-Wasserstein distances between CNN activations is more reliable than existing approaches, and that it can significantly improve the perceptual performance of enhancement models. More explicitly, we show that in imaging applications such as denoising, super-resolution, demosaicing, deblurring, and JPEG artifact removal, the proposed learning loss outperforms current state-of-the-art reference-based perceptual losses. This means that the proposed learning loss can be plugged into different imaging frameworks and produce perceptually realistic results.
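The core idea described above — comparing the *distributions* of CNN activations rather than the activations themselves — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function names and the choice of flattening each channel into one 1D sample are assumptions for illustration. The key fact used is that in 1D, the Wasserstein-2 distance between two empirical distributions reduces to the L2 distance between their sorted samples.

```python
import numpy as np

def sliced_wasserstein_1d(feats_x, feats_y):
    """Illustrative 1D-Wasserstein distance between two sets of CNN
    activations from the same layer, each of shape (channels, n).
    Sorting each channel gives its empirical quantile function; the
    1D Wasserstein-2 distance is the L2 distance between them."""
    sx = np.sort(feats_x, axis=1)
    sy = np.sort(feats_y, axis=1)
    return float(np.sqrt(np.mean((sx - sy) ** 2)))

def projected_distribution_loss(layers_x, layers_y):
    """Aggregate the per-layer 1D-Wasserstein distances
    (hypothetical aggregation: a plain sum over layers)."""
    return sum(sliced_wasserstein_1d(fx, fy)
               for fx, fy in zip(layers_x, layers_y))
```

Note the distribution-level behavior this buys: reordering the values within a channel leaves the loss unchanged (the sorted samples are identical), whereas a plain feature-norm loss would penalize such a reordering. This invariance is what allows richer texture synthesis than per-pixel feature matching.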