Paper Title
Points2NeRF: Generating Neural Radiance Fields from 3D point cloud
Paper Authors
Paper Abstract
Contemporary registration devices for 3D visual information, such as LIDARs and various depth cameras, capture data as 3D point clouds. Such clouds, in turn, are challenging to process due to their size and complexity. Existing methods address this problem by fitting a mesh to the point cloud and rendering the mesh instead. This approach, however, reduces the fidelity of the resulting visualization and discards the color information of the objects, which is crucial in computer graphics applications. In this work, we propose to mitigate this challenge by representing 3D objects as Neural Radiance Fields (NeRFs). We leverage the hypernetwork paradigm and train a model that takes a 3D point cloud with associated color values and returns the weights of a NeRF network that reconstructs the 3D object from input 2D images. Our method provides an efficient 3D object representation and offers several advantages over existing approaches, including the ability to condition NeRFs and improved generalization beyond the objects seen during training. We confirm the latter in our empirical evaluation.
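To make the hypernetwork paradigm above concrete, here is a minimal sketch, not the authors' implementation: a PointNet-style encoder pools features over a colored point cloud and emits a flat vector that is reshaped into the weights of a small NeRF-style MLP mapping 3D coordinates to color and density. All module names, layer sizes, and the two-layer target network are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HyperNeRF(nn.Module):
    """Hypothetical sketch: point cloud in, per-object NeRF weights out."""

    def __init__(self, nerf_hidden=64):
        super().__init__()
        # Target NeRF MLP: (x, y, z) -> (r, g, b, sigma). Two layers only,
        # an assumption made to keep the sketch small.
        self.shapes = [(nerf_hidden, 3), (nerf_hidden,),
                       (4, nerf_hidden), (4,)]
        n_params = sum(torch.Size(s).numel() for s in self.shapes)
        # Hypernetwork: per-point MLP pooled over the cloud (a PointNet-style
        # stand-in encoder), then a head emitting the target NeRF's weights.
        self.point_mlp = nn.Sequential(nn.Linear(6, 128), nn.ReLU(),
                                       nn.Linear(128, 256), nn.ReLU())
        self.head = nn.Linear(256, n_params)

    def forward(self, cloud):
        # cloud: (N, 6) points, each row xyz + rgb.
        feat = self.point_mlp(cloud).max(dim=0).values  # permutation-invariant pool
        flat = self.head(feat)
        # Slice the flat vector into the target network's weight tensors.
        params, i = [], 0
        for s in self.shapes:
            n = torch.Size(s).numel()
            params.append(flat[i:i + n].view(s))
            i += n
        return params

    @staticmethod
    def query(params, xyz):
        # Evaluate the generated NeRF weights at query coordinates xyz: (M, 3).
        w1, b1, w2, b2 = params
        h = torch.relu(xyz @ w1.T + b1)
        out = h @ w2.T + b2
        rgb, sigma = torch.sigmoid(out[:, :3]), torch.relu(out[:, 3])
        return rgb, sigma

# Usage: one colored cloud conditions one NeRF, queried at arbitrary points.
model = HyperNeRF()
cloud = torch.rand(2048, 6)                 # dummy xyz+rgb point cloud
nerf_weights = model(cloud)
rgb, sigma = HyperNeRF.query(nerf_weights, torch.rand(16, 3))
```

Because the weights are produced by a shared encoder rather than optimized per object, the same forward pass conditions a NeRF on any input cloud, which is the source of the conditioning and generalization advantages claimed in the abstract.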