Paper Title

Learning A Locally Unified 3D Point Cloud for View Synthesis

Paper Authors

Meng You, Mantang Guo, Xianqiang Lyu, Hui Liu, Junhui Hou

Paper Abstract

In this paper, we explore the problem of 3D point cloud representation-based view synthesis from a set of sparse source views. To tackle this challenging problem, we propose a new deep learning-based view synthesis paradigm that learns a locally unified 3D point cloud from the source views. Specifically, we first construct sub-point clouds by projecting the source views into 3D space based on their depth maps. Then, we learn the locally unified 3D point cloud by adaptively fusing points within local neighborhoods defined on the union of the sub-point clouds. In addition, we propose a 3D geometry-guided image restoration module to fill holes and recover the high-frequency details of the rendered novel views. Experimental results on three benchmark datasets demonstrate that, compared with state-of-the-art view synthesis methods, our method improves the average PSNR by more than 4 dB while preserving more accurate visual details.
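For intuition about the first step, below is a minimal sketch of how a source view can be back-projected into a sub-point cloud under a standard pinhole camera model. The function name, argument conventions, and the NumPy implementation are illustrative assumptions on our part, not the paper's actual code; the learned fusion and geometry-guided restoration modules are not reproduced here.

```python
import numpy as np

def unproject_view(depth, rgb, K, cam_to_world):
    """Back-project one source view into a 3D sub-point cloud (hypothetical helper).

    depth        : (H, W) estimated depth map of the source view
    rgb          : (H, W, 3) source view colors
    K            : (3, 3) camera intrinsic matrix
    cam_to_world : (4, 4) camera-to-world extrinsic matrix
    Returns (H*W, 3) world-space points and their (H*W, 3) colors.
    """
    H, W = depth.shape
    # Homogeneous pixel coordinates [u, v, 1] for every pixel.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(np.float64)

    # Lift to camera space: X_cam = depth * K^{-1} [u, v, 1]^T.
    pts_cam = (pix @ np.linalg.inv(K).T) * depth.reshape(-1, 1)

    # Transform to world space so all sub-point clouds share one coordinate frame.
    pts_hom = np.concatenate([pts_cam, np.ones((H * W, 1))], axis=1)
    pts_world = (pts_hom @ cam_to_world.T)[:, :3]
    return pts_world, rgb.reshape(-1, 3)

# The union of per-view sub-point clouds is the input to the paper's learned
# fusion step, which adaptively merges points within local neighborhoods:
# clouds = [unproject_view(d, c, K, T) for d, c, T in zip(depths, colors, poses)]
# all_points = np.concatenate([p for p, _ in clouds], axis=0)
```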
