Paper Title
PVSeRF: Joint Pixel-, Voxel- and Surface-Aligned Radiance Field for Single-Image Novel View Synthesis
Paper Authors
Paper Abstract
We present PVSeRF, a learning framework that reconstructs neural radiance fields from a single-view RGB image for novel view synthesis. Previous solutions, such as pixelNeRF, rely only on pixel-aligned features and suffer from feature ambiguity issues. As a result, they struggle with the disentanglement of geometry and appearance, leading to implausible geometries and blurry results. To address this challenge, we propose to incorporate explicit geometry reasoning and combine it with pixel-aligned features for radiance field prediction. Specifically, in addition to pixel-aligned features, we further condition the radiance field on i) voxel-aligned features learned from a coarse volumetric grid and ii) fine surface-aligned features extracted from a regressed point cloud. We show that the introduction of such geometry-aware features helps to achieve a better disentanglement between appearance and geometry, i.e., recovering more accurate geometries and synthesizing higher-quality images of novel views. Extensive experiments against state-of-the-art methods on ShapeNet benchmarks demonstrate the superiority of our approach for single-image novel view synthesis.
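The abstract describes conditioning a radiance field on three kinds of features queried at each 3D sample point: pixel-aligned (projected into the image feature map), voxel-aligned (looked up in a coarse feature volume), and surface-aligned (gathered from a regressed point cloud). The sketch below illustrates that conditioning pipeline in NumPy under simplifying assumptions: all function names and tensor shapes are hypothetical, and nearest-neighbor lookups stand in for the bilinear/trilinear interpolation a real implementation would use.

```python
import numpy as np

def pixel_aligned(feat_map, K, xyz):
    """Project 3D points through intrinsics K and sample the image
    feature map at the nearest pixel (a stand-in for bilinear sampling)."""
    uvw = (K @ xyz.T).T                       # (N, 3) homogeneous pixels
    uv = uvw[:, :2] / uvw[:, 2:3]             # perspective divide
    u = np.clip(uv[:, 0].round().astype(int), 0, feat_map.shape[2] - 1)
    v = np.clip(uv[:, 1].round().astype(int), 0, feat_map.shape[1] - 1)
    return feat_map[:, v, u].T                # (N, C_pix)

def voxel_aligned(feat_vol, xyz, lo=-1.0, hi=1.0):
    """Look up each point in a coarse feature volume over the cube
    [lo, hi]^3 (nearest voxel; the paper implies trilinear interpolation)."""
    D = feat_vol.shape[1]
    idx = ((xyz - lo) / (hi - lo) * (D - 1)).round().astype(int)
    idx = np.clip(idx, 0, D - 1)
    return feat_vol[:, idx[:, 0], idx[:, 1], idx[:, 2]].T  # (N, C_vox)

def surface_aligned(points, point_feats, xyz, k=3):
    """Average the features of the k nearest regressed surface points."""
    d = np.linalg.norm(xyz[:, None, :] - points[None, :, :], axis=-1)
    nn = np.argsort(d, axis=1)[:, :k]         # (N, k) nearest indices
    return point_feats[nn].mean(axis=1)       # (N, C_surf)

# Toy demo with random features; the concatenated vector is what would
# condition the radiance-field MLP alongside position and view direction.
rng = np.random.default_rng(0)
xyz = rng.uniform(-0.5, 0.5, (5, 3))
xyz[:, 2] += 1.5                              # keep points in front of camera
K = np.array([[8., 0., 4.], [0., 8., 4.], [0., 0., 1.]])
fmap = rng.normal(size=(4, 8, 8))             # (C_pix, H, W)
fvol = rng.normal(size=(6, 4, 4, 4))          # (C_vox, D, D, D)
pts = rng.uniform(-1, 1, (20, 3))             # regressed point cloud
pfeat = rng.normal(size=(20, 8))              # (M, C_surf) point features
cond = np.concatenate([pixel_aligned(fmap, K, xyz),
                       voxel_aligned(fvol, xyz),
                       surface_aligned(pts, pfeat, xyz)], axis=-1)
print(cond.shape)                             # (5, 18): 4 + 6 + 8 channels
```

The key design point the abstract argues for is visible here: the pixel-aligned branch alone is ambiguous along each camera ray (all samples on a ray project to the same pixel), whereas the voxel and surface branches vary with 3D position and so inject explicit geometric signal.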