Paper Title
Harnessing Multi-View Perspective of Light Fields for Low-Light Imaging
Paper Authors
Paper Abstract
Light Field (LF) imaging offers unique advantages such as post-capture refocusing and depth estimation, but low-light conditions limit these capabilities. To restore low-light LFs, we must harness the geometric cues present in the different LF views, which is not possible with single-frame low-light enhancement techniques. We therefore propose a deep neural network for Low-Light Light Field (L3F) restoration, which we refer to as L3Fnet. The proposed L3Fnet not only performs the necessary visual enhancement of each LF view but also preserves the epipolar geometry across views. We achieve this by adopting a two-stage architecture for L3Fnet. Stage-I looks at all the LF views to encode the LF geometry. This encoded information is then used in Stage-II to reconstruct each LF view. To facilitate learning-based techniques for low-light LF imaging, we collected a comprehensive LF dataset of various scenes. For each scene, we captured four LFs: one with near-optimal exposure and ISO settings, and the others under low-light conditions ranging from moderate to extreme. The effectiveness of the proposed L3Fnet is supported by both visual and numerical comparisons on this dataset. To further analyze the performance of low-light reconstruction methods, we also propose an L3F-wild dataset that contains LFs captured late at night with almost zero lux values. No ground truth is available in this dataset. To perform well on the L3F-wild dataset, any method must adapt to the light level of the captured scene. To do this, we propose a novel pre-processing block that makes L3Fnet robust to various degrees of low-light conditions. Lastly, we show that L3Fnet can also be used for low-light enhancement of single-frame images, despite being engineered for LF data. We do so by converting the single-frame DSLR image into a form suitable for L3Fnet, which we call a pseudo-LF.
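The two-stage data flow described in the abstract can be sketched at the shape level as follows. This is a toy NumPy illustration of the idea only, not the authors' network: the mean over views stands in for the learned Stage-I geometry encoder, a fixed gain stands in for the learned Stage-II enhancement, and the `make_pseudo_lf` tiling is a hypothetical construction of a pseudo-LF from a single frame (the paper's exact mapping may differ).

```python
import numpy as np

def stage1_encode(lf_views):
    """Stage-I: encode geometry by looking at all sub-aperture views jointly.
    lf_views: (V, H, W, C) array of V low-light views.
    A mean over views stands in for the learned encoder (assumption)."""
    return lf_views.mean(axis=0)  # (H, W, C) shared encoding

def stage2_restore(view, encoding, gain=2.0):
    """Stage-II: reconstruct one view conditioned on the shared encoding.
    A fixed brightness gain stands in for the learned enhancement (assumption)."""
    return np.clip(gain * (view + encoding) / 2.0, 0.0, 1.0)

def l3f_restore(lf_views):
    """Restore every view of a low-light LF with the two-stage scheme."""
    enc = stage1_encode(lf_views)
    return np.stack([stage2_restore(v, enc) for v in lf_views])

def make_pseudo_lf(img, grid=7):
    """Hypothetical pseudo-LF: tile a single frame into a grid x grid set of
    sub-images treated as views, so a single DSLR image fits the LF pipeline."""
    H, W, _ = img.shape
    h, w = H // grid, W // grid
    views = [img[i * h:(i + 1) * h, j * w:(j + 1) * w]
             for i in range(grid) for j in range(grid)]
    return np.stack(views)  # (grid * grid, h, w, C)

rng = np.random.default_rng(0)
dark_lf = 0.05 * rng.random((49, 32, 32, 3))  # 7x7 views, very dark
restored = l3f_restore(dark_lf)               # same layout, brightened
pseudo = make_pseudo_lf(0.05 * rng.random((70, 70, 3)))  # (49, 10, 10, 3)
```

The key point the sketch mirrors is the split of responsibilities: geometry is encoded once from all views together, while enhancement is applied per view, so cross-view (epipolar) structure is shared rather than recomputed independently.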