Paper Title

Noise2Kernel: Adaptive Self-Supervised Blind Denoising using a Dilated Convolutional Kernel Architecture

Authors

Kanggeun Lee, Won-Ki Jeong

Abstract


With the advent of recent advances in unsupervised learning, efficient training of a deep network for image denoising without pairs of noisy and clean images has become feasible. However, most current unsupervised denoising methods are built on the assumption of zero-mean noise under the signal-independent condition. This assumption causes blind denoising techniques to suffer brightness shifting problems on images that are greatly corrupted by extreme noise such as salt-and-pepper noise. Moreover, most blind denoising methods require a random masking scheme for training to ensure the invariance of the denoising process. In this paper, we propose a dilated convolutional network that satisfies an invariant property, allowing efficient kernel-based training without random masking. We also propose an adaptive self-supervision loss to circumvent the requirement of zero-mean constraint, which is specifically effective in removing salt-and-pepper or hybrid noise where a prior knowledge of noise statistics is not readily available. We demonstrate the efficacy of the proposed method by comparing it with state-of-the-art denoising methods using various examples.
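The invariant property mentioned in the abstract is the blind-spot condition: each output pixel must be computed without reading the input pixel at the same location, which is what lets the network train directly on noisy images without a random masking scheme. A minimal NumPy sketch of this idea, using a dilated kernel whose center tap is fixed to zero (a hypothetical `donut_conv2d` helper for illustration, not the authors' implementation):

```python
import numpy as np

def donut_conv2d(img, kernel, dilation=1):
    """Same-size 2D convolution with a dilated kernel whose center tap is zero.

    Because the center weight is zero, the output at each location never
    depends on the input pixel at that same location -- the blind-spot
    (invariance) property that allows masking-free self-supervised training.
    """
    kh, kw = kernel.shape
    pad = dilation * (kh // 2)
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            di, dj = i * dilation, j * dilation
            out += kernel[i, j] * padded[di:di + img.shape[0],
                                         dj:dj + img.shape[1]]
    return out

rng = np.random.default_rng(0)
img = rng.random((16, 16))
k = rng.random((3, 3))
k[1, 1] = 0.0  # zero center tap -> blind spot

out1 = donut_conv2d(img, k, dilation=2)
img2 = img.copy()
img2[8, 8] = 1e6  # heavily perturb one interior pixel (e.g. a salt impulse)
out2 = donut_conv2d(img2, k, dilation=2)
# out2[8, 8] equals out1[8, 8]: the output at that pixel never saw its own
# (corrupted) input value, while neighboring outputs do change.
```

Stacking such center-masked dilated layers preserves the blind spot across depth, which is the structural trick that replaces random-masking schedules used by earlier blind-denoising methods.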
