Paper Title
A Generative Model for Generic Light Field Reconstruction
Paper Authors
Paper Abstract
Recently, deep generative models have achieved impressive progress in modeling the distribution of training data. In this work, we present the first generative model for 4D light field patches, using variational autoencoders to capture their data distribution. We develop a generative model conditioned on the central view of the light field and incorporate it as a prior in an energy minimization framework to address diverse light field reconstruction tasks. While purely learning-based approaches do achieve excellent results on each instance of such a problem, their applicability is limited to the specific observation model they have been trained on. In contrast, our trained light field generative model can be incorporated as a prior into any model-based optimization approach, and therefore extends to diverse reconstruction tasks including light field view synthesis, spatial-angular super-resolution, and reconstruction from coded projections. Our proposed method demonstrates strong reconstruction quality, approaching that of end-to-end trained networks while outperforming traditional model-based approaches on both synthetic and real scenes. Furthermore, we show that our approach enables reliable light field recovery despite distortions in the input.
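To illustrate the core idea of using a generative model as a prior inside an energy minimization framework, the sketch below reconstructs a signal from an underdetermined observation by optimizing over the latent code of a decoder. This is a minimal NumPy toy, not the paper's method: the random linear map `G` is a hypothetical stand-in for the trained (conditional) VAE decoder, and `A` is a generic observation operator (e.g. a coded projection); all names and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the trained VAE decoder G: latent z -> light field patch x.
# In the paper this would be a conditional VAE decoder; a fixed linear map keeps
# the sketch self-contained and runnable.
latent_dim, patch_dim, obs_dim = 8, 32, 12
G = rng.standard_normal((patch_dim, latent_dim)) / np.sqrt(latent_dim)

# Generic observation model A (e.g. view subsampling or a coded projection).
# Note obs_dim < patch_dim: direct inversion of y = A x is underdetermined.
A = rng.standard_normal((obs_dim, patch_dim)) / np.sqrt(patch_dim)

# Synthesize a ground-truth patch from the prior and observe it.
z_true = rng.standard_normal(latent_dim)
x_true = G @ z_true
y = A @ x_true

# Reconstruction: minimize the energy
#   E(z) = ||A G(z) - y||^2 + lam * ||z||^2
# by gradient descent over the latent code, so the decoder acts as the prior.
lam, step = 1e-4, 0.05
z = np.zeros(latent_dim)
for _ in range(5000):
    residual = A @ (G @ z) - y
    grad = G.T @ (A.T @ residual) + lam * z
    z -= step * grad

x_hat = G @ z
rel_residual = np.linalg.norm(A @ x_hat - y) / np.linalg.norm(y)
print(rel_residual)  # relative data-fit residual, well below 1
```

Because the search is restricted to the decoder's range, the same trained prior can be reused with any observation operator `A` simply by swapping it in the data-fit term, which is what makes the approach extend across reconstruction tasks without retraining.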