Paper Title

Learning to Deblur using Light Field Generated and Real Defocus Images

Authors

Lingyan Ruan, Bin Chen, Jizhou Li, Miuling Lam

Abstract

Defocus deblurring is a challenging task due to the spatially varying nature of defocus blur. While deep learning approaches show great promise in solving image restoration problems, defocus deblurring demands accurate training data consisting of all-in-focus and defocused image pairs, which is difficult to collect. Naive two-shot capturing cannot achieve pixel-wise correspondence between the defocused and all-in-focus image pairs. Synthetic aperture of light fields is suggested to be a more reliable way to generate accurate image pairs. However, the defocus blur generated from light field data is different from that of images captured with a traditional digital camera. In this paper, we propose a novel deep defocus deblurring network that leverages the strengths and overcomes the shortcomings of light fields. We first train the network on a light field-generated dataset for its highly accurate image correspondence. Then, we fine-tune the network using a feature loss on another dataset collected by the two-shot method to alleviate the differences between the defocus blur present in the two domains. This strategy proves to be highly effective, achieving state-of-the-art performance both quantitatively and qualitatively on multiple test sets. Extensive ablation studies have been conducted to analyze the effect of each network module on the final performance.
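The abstract outlines a two-stage training strategy: pre-train on light-field-generated pairs, which offer pixel-accurate correspondence, then fine-tune with a feature loss on real two-shot pairs whose alignment is imperfect. Below is a minimal PyTorch sketch of that strategy. The network `net`, the data loaders `lf_loader`/`real_loader`, the VGG-16 perceptual loss, and the learning rates are illustrative assumptions; the paper's actual architecture and loss configuration are not specified in the abstract.

```python
import torch
import torch.nn as nn
from torchvision import models


class FeatureLoss(nn.Module):
    """Perceptual ("feature") loss computed on frozen VGG-16 activations.

    Using VGG-16 up to relu3_3 is an assumption for illustration; the abstract
    only states that a feature loss is used during fine-tuning. Input
    normalization to ImageNet statistics is omitted for brevity.
    """

    def __init__(self, layer_idx=16):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:layer_idx]
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg.eval()
        self.criterion = nn.L1Loss()

    def forward(self, pred, target):
        return self.criterion(self.vgg(pred), self.vgg(target))


def train_two_stage(net, lf_loader, real_loader, device="cuda"):
    """Stage 1: pixel-wise L1 on light-field-generated pairs.
    Stage 2: feature-loss fine-tuning on two-shot captured pairs."""
    net = net.to(device)
    pixel_loss = nn.L1Loss()
    feat_loss = FeatureLoss().to(device)

    # Stage 1: pre-training on light-field-generated data, exploiting its
    # highly accurate defocus/all-in-focus correspondence.
    opt = torch.optim.Adam(net.parameters(), lr=1e-4)
    for defocused, sharp in lf_loader:
        defocused, sharp = defocused.to(device), sharp.to(device)
        loss = pixel_loss(net(defocused), sharp)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Stage 2: fine-tuning on real two-shot pairs with a feature loss, which
    # tolerates misalignment and narrows the domain gap in defocus blur.
    opt = torch.optim.Adam(net.parameters(), lr=1e-5)
    for defocused, sharp in real_loader:
        defocused, sharp = defocused.to(device), sharp.to(device)
        loss = feat_loss(net(defocused), sharp)
        opt.zero_grad()
        loss.backward()
        opt.step()

    return net
```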
