Paper Title

NeRD: Neural Reflectance Decomposition from Image Collections

Paper Authors

Mark Boss, Raphael Braun, Varun Jampani, Jonathan T. Barron, Ce Liu, Hendrik P. A. Lensch

Paper Abstract

Decomposing a scene into its shape, reflectance, and illumination is a challenging but important problem in computer vision and graphics. This problem is inherently more challenging when the illumination is not a single light source under laboratory conditions but is instead an unconstrained environmental illumination. Though recent work has shown that implicit representations can be used to model the radiance field of an object, most of these techniques only enable view synthesis and not relighting. Additionally, evaluating these radiance fields is resource- and time-intensive. We propose a neural reflectance decomposition (NeRD) technique that uses physically-based rendering to decompose the scene into spatially varying BRDF material properties. In contrast to existing techniques, our input images can be captured under different illumination conditions. In addition, we propose techniques to convert the learned reflectance volume into a relightable textured mesh enabling fast real-time rendering with novel illuminations. We demonstrate the potential of the proposed approach with experiments on both synthetic and real datasets, where we are able to obtain high-quality relightable 3D assets from image collections. The datasets and code are available on the project page: https://markboss.me/publication/2021-nerd/
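
As a rough illustration of the kind of physically-based shading that such a BRDF decomposition feeds into (this is not NeRD's actual implementation; the function and parameter names below are hypothetical), the sketch shades a single surface point with a Lambertian BRDF under a few directional lights standing in for an environment illumination.

```python
# Illustrative sketch only: shading one surface point from spatially varying
# BRDF parameters (here just a diffuse albedo) under an approximate
# environment illumination. Names and signatures are hypothetical.
import numpy as np

def lambertian_shade(base_color, normal, light_dirs, light_colors):
    """Shade one point with a Lambertian BRDF under directional lights.

    base_color:   (3,) diffuse albedo in [0, 1]
    normal:       (3,) unit surface normal
    light_dirs:   (L, 3) unit directions toward each light
    light_colors: (L, 3) RGB radiance of each light
    """
    cos_theta = np.clip(light_dirs @ normal, 0.0, None)       # (L,) clamped n·l
    brdf = base_color / np.pi                                  # Lambertian BRDF
    return (light_colors * cos_theta[:, None]).sum(0) * brdf   # (3,) outgoing RGB

# Example: a reddish surface lit by two lights approximating an environment.
normal = np.array([0.0, 0.0, 1.0])
base_color = np.array([0.8, 0.3, 0.2])
light_dirs = np.array([[0.0, 0.0, 1.0], [0.7, 0.0, 0.7]])
light_dirs /= np.linalg.norm(light_dirs, axis=1, keepdims=True)
light_colors = np.array([[1.0, 1.0, 1.0], [0.4, 0.4, 0.5]])
print(lambertian_shade(base_color, normal, light_dirs, light_colors))
```

Because the shading model is an explicit, differentiable function of the material parameters, the same idea extends to full spatially varying BRDFs and learned illumination, which is what allows a decomposition like NeRD to be relit under novel lighting.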
