Paper Title


S$^3$-NeRF: Neural Reflectance Field from Shading and Shadow under a Single Viewpoint

Paper Authors

Wenqi Yang, Guanying Chen, Chaofeng Chen, Zhenfang Chen, Kwan-Yee K. Wong

Paper Abstract


In this paper, we address the "dual problem" of multi-view scene reconstruction in which we utilize single-view images captured under different point lights to learn a neural scene representation. Different from existing single-view methods which can only recover a 2.5D scene representation (i.e., a normal / depth map for the visible surface), our method learns a neural reflectance field to represent the 3D geometry and BRDFs of a scene. Instead of relying on multi-view photo-consistency, our method exploits two information-rich monocular cues, namely shading and shadow, to infer scene geometry. Experiments on multiple challenging datasets show that our method is capable of recovering 3D geometry, including both visible and invisible parts, of a scene from single-view images. Thanks to the neural reflectance field representation, our method is robust to depth discontinuities. It supports applications like novel-view synthesis and relighting. Our code and model can be found at https://ywq.github.io/s3nerf.
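To make the two monocular cues concrete, here is a toy, self-contained sketch of how shading and a cast-shadow test jointly constrain geometry under a single viewpoint and a movable point light. An analytic sphere stands in for the learned neural reflectance field, and all names (`density`, `render_pixel`, the Lambertian albedo) are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

# Toy analytic stand-in for the learned reflectance field: a Lambertian
# sphere. In the paper this would be an MLP queried at 3D points.
SPHERE_C = np.array([0.0, 0.0, 2.0])
SPHERE_R = 0.5
ALBEDO = np.array([0.8, 0.4, 0.2])  # hypothetical constant BRDF

def density(x):
    """Occupancy-like density: 1 inside the sphere, 0 outside."""
    return 1.0 if np.linalg.norm(x - SPHERE_C) < SPHERE_R else 0.0

def normal(x):
    """Outward surface normal of the sphere at (or near) point x."""
    n = x - SPHERE_C
    return n / np.linalg.norm(n)

def march(origin, direction, t_max=5.0, n_steps=256):
    """Naive ray march: return the first occupied point, or None."""
    for t in np.linspace(0.0, t_max, n_steps):
        p = origin + t * direction
        if density(p) > 0.5:
            return p
    return None

def render_pixel(cam_origin, ray_dir, light_pos):
    """Shade the first hit under a point light, with a hard shadow test."""
    p = march(cam_origin, ray_dir)
    if p is None:
        return np.zeros(3)  # ray misses the scene
    n = normal(p)
    to_light = light_pos - p
    dist = np.linalg.norm(to_light)
    wl = to_light / dist
    # Shadow ray: step off the surface (offset must exceed the march
    # step size) and march toward the light to test for occlusion.
    blocked = march(p + 0.05 * n, wl, t_max=dist) is not None
    shading = 0.0 if blocked else max(np.dot(n, wl), 0.0)
    # Lambertian shading with inverse-square point-light falloff.
    return ALBEDO * shading / (dist ** 2)
```

Moving the light changes which pixels are shaded versus shadowed, and in the paper those per-light intensity changes are what supervise the field: shading constrains the visible surface's normals, while cast shadows reveal occluders, including geometry invisible from the camera.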
