Paper Title
NeRFocus: Neural Radiance Field for 3D Synthetic Defocus
Paper Authors
Abstract
Neural radiance fields (NeRF) bring a new wave of 3D interactive experiences. However, as an important part of immersive experiences, defocus effects have not been fully explored within NeRF. Some recent NeRF-based methods generate 3D defocus effects in a post-process fashion by utilizing multiplane technology; still, they are either time-consuming or memory-consuming. This paper proposes a novel thin-lens-imaging-based NeRF framework, dubbed NeRFocus, that can directly render various 3D defocus effects. Unlike a pinhole, a thin lens refracts the rays of a scene point, so its image on the sensor plane is spread into a circle of confusion (CoC). A direct solution that samples enough rays to approximate this process is computationally expensive. Instead, we propose to invert the thin-lens imaging process to explicitly model the beam path for each point on the sensor plane, generalize this paradigm to the beam path of each pixel, and then use frustum-based volume rendering to render each pixel's beam path. We further design an efficient probabilistic training (p-training) strategy that vastly simplifies the training process. Extensive experiments demonstrate that NeRFocus can achieve various 3D defocus effects with adjustable camera pose, focus distance, and aperture size. Existing NeRF can be regarded as a special case of NeRFocus: setting the aperture size to zero renders large depth-of-field images. Despite these merits, NeRFocus does not sacrifice NeRF's original performance (e.g., training and inference time, parameter consumption, rendering quality), which implies its great potential for broader application and further improvement. Code and video are available at https://github.com/wyhuai/NeRFocus.
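For intuition, the sketch below illustrates the two imaging models the abstract contrasts: the standard thin-lens circle-of-confusion formula that governs how aperture size and focus distance control blur (with aperture set to zero reducing to the pinhole model of vanilla NeRF), and a brute-force aperture-sampling ray generator of the kind the abstract calls computationally expensive. This is a minimal NumPy sketch under assumed names and conventions (`coc_diameter`, `sample_aperture_rays`, camera-space rays pointing toward positive z); it is not the authors' frustum-based renderer or p-training procedure.

```python
import numpy as np

def coc_diameter(depth, focus_dist, focal_len, aperture):
    """Circle-of-confusion diameter on the sensor for a scene point at `depth`,
    for a thin lens of focal length `focal_len` and aperture diameter `aperture`
    focused at `focus_dist` (all in the same length unit, depths > focal_len).
    With aperture == 0 this reduces to the pinhole model (no blur), matching the
    claim that vanilla NeRF is the zero-aperture special case."""
    return aperture * focal_len * np.abs(depth - focus_dist) / (depth * (focus_dist - focal_len))

def sample_aperture_rays(pixel_dir, focus_dist, aperture, cam_to_world, n_samples=64, rng=None):
    """Brute-force defocus: jitter ray origins over the lens disk and refocus every
    ray at the in-focus plane; averaging the colors rendered along these rays
    approximates the CoC. This is the 'sample enough rays' baseline the abstract
    describes as computationally expensive (per-pixel cost grows by `n_samples`)."""
    rng = np.random.default_rng() if rng is None else rng
    # Camera-space point on the in-focus plane hit by the central (pinhole) ray.
    focus_point = pixel_dir * (focus_dist / pixel_dir[2])
    # Uniform samples on the lens disk (camera-space xy plane at z = 0).
    radius = (aperture / 2.0) * np.sqrt(rng.uniform(size=n_samples))
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n_samples)
    origins = np.stack([radius * np.cos(theta), radius * np.sin(theta), np.zeros(n_samples)], axis=-1)
    dirs = focus_point[None, :] - origins
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # Map to world space with the camera-to-world pose (3x3 rotation R, translation t).
    R, t = cam_to_world[:3, :3], cam_to_world[:3, 3]
    return origins @ R.T + t, dirs @ R.T
```

With, for example, `pixel_dir = np.array([0.1, 0.0, 1.0])`, `focus_dist = 4.0`, and `aperture = 0.2`, all sampled rays pass through the same in-focus point, so geometry at the focus distance stays sharp while nearer or farther points blur; NeRFocus avoids this per-pixel ray multiplication by modeling each pixel's whole beam path and rendering it in a single frustum-based volume-rendering pass.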