Paper Title
Differentiable Point-Based Radiance Fields for Efficient View Synthesis
Paper Authors
Paper Abstract
We propose a differentiable rendering algorithm for efficient novel view synthesis. By departing from volume-based representations in favor of a learned point representation, we improve on existing methods by more than an order of magnitude in memory and runtime, both in training and inference. The method begins with a uniformly sampled random point cloud and learns per-point position and view-dependent appearance, using a differentiable splat-based renderer to evolve the model to match a set of input images. Our method is up to 300x faster than NeRF in both training and inference, with only a marginal sacrifice in quality, while using less than 10 MB of memory for a static scene. For dynamic scenes, our method trains two orders of magnitude faster than STNeRF and renders at near-interactive rates, while maintaining high image quality and temporal coherence even without imposing any temporal-coherency regularizers.
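To make the core idea in the abstract concrete, here is a minimal PyTorch sketch of the kind of optimization loop it describes: a random point cloud whose per-point positions and colors are refined by gradient descent through a differentiable Gaussian-splat renderer. This is not the authors' implementation; the pinhole camera, the isotropic splat kernel, the image size, and all hyperparameters are assumptions, and view-dependent appearance is simplified to a plain per-point RGB.

```python
import torch

H, W, N = 64, 64, 512            # image size and point count (assumed)
FOCAL = 80.0                     # assumed pinhole focal length, in pixels

# Learnable state: 3D positions and raw RGB logits, initialized at random,
# mirroring the "uniformly sampled random point cloud" starting point.
pos = torch.randn(N, 3, requires_grad=True)
rgb = torch.randn(N, 3, requires_grad=True)

def render(pos, rgb, sigma=1.5):
    """Splat every point as an isotropic 2D Gaussian and blend per pixel."""
    z = pos[:, 2] + 4.0                       # shift points in front of camera
    u = FOCAL * pos[:, 0] / z + W / 2         # pinhole projection to pixel x
    v = FOCAL * pos[:, 1] / z + H / 2         # pinhole projection to pixel y
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32),
                            indexing="ij")
    # Squared distance from every splat center to every pixel: (N, H, W).
    d2 = (xs[None] - u[:, None, None]) ** 2 + (ys[None] - v[:, None, None]) ** 2
    w = torch.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian splat weights
    colors = torch.sigmoid(rgb)[:, None, None, :]            # (N, 1, 1, 3)
    # Normalized weighted blend of point colors at each pixel: (H, W, 3).
    return (w[..., None] * colors).sum(0) / (w.sum(0)[..., None] + 1e-8)

target = torch.rand(H, W, 3)                  # stand-in for one input photo
opt = torch.optim.Adam([pos, rgb], lr=1e-2)
for step in range(200):
    opt.zero_grad()
    loss = ((render(pos, rgb) - target) ** 2).mean()
    loss.backward()                           # gradients reach points via splats
    opt.step()
```

In the paper's setting the loop would iterate over many posed input views, each with its own camera projection, and appearance would additionally depend on view direction. The sketch only illustrates the mechanism the abstract relies on: because splatting is differentiable end to end, per-point positions and colors receive gradients directly from the image reconstruction loss, with no volumetric ray marching involved.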