Paper Title
Space-time Neural Irradiance Fields for Free-Viewpoint Video
Paper Authors
Abstract
We present a method that learns a spatiotemporal neural irradiance field for dynamic scenes from a single video. Our learned representation enables free-viewpoint rendering of the input video. Our method builds upon recent advances in implicit representations. Learning a spatiotemporal irradiance field from a single video poses significant challenges because the video contains only one observation of the scene at any point in time. The 3D geometry of a scene can be legitimately represented in numerous ways since varying geometry (motion) can be explained with varying appearance and vice versa. We address this ambiguity by constraining the time-varying geometry of our dynamic scene representation using the scene depth estimated from video depth estimation methods, aggregating contents from individual frames into a single global representation. We provide an extensive quantitative evaluation and demonstrate compelling free-viewpoint rendering results.
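The core idea in the abstract — constraining the time-varying geometry with depth estimated from the video — can be sketched as a depth-supervision loss on a volume-rendered radiance field: the expected ray-termination depth from the rendering weights is pushed toward the depth predicted by a video depth estimator. The sketch below is a minimal NumPy illustration under assumed names (`render_weights`, `expected_depth`, `depth_loss` are mine, not the authors'); the paper's actual loss formulation and network details may differ.

```python
import numpy as np

def render_weights(sigma, t_vals):
    """Standard volume-rendering weights w_i = T_i * (1 - exp(-sigma_i * delta_i)),
    where T_i is the accumulated transmittance up to sample i."""
    # Distances between adjacent samples; the last interval is effectively infinite.
    delta = np.diff(t_vals, append=t_vals[-1] + 1e10)
    alpha = 1.0 - np.exp(-sigma * delta)          # opacity of each interval
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance T_i
    return trans * alpha

def expected_depth(sigma, t_vals):
    """Expected ray-termination depth under the rendering weights."""
    w = render_weights(sigma, t_vals)
    return np.sum(w * t_vals)

def depth_loss(sigma, t_vals, estimated_depth):
    """Penalize disagreement between the rendered depth and the depth
    predicted by a video depth estimation method (the constraint in the abstract)."""
    return (expected_depth(sigma, t_vals) - estimated_depth) ** 2
```

For a ray whose density `sigma` concentrates at one sample, `expected_depth` recovers that sample's depth, and the loss is zero exactly when the rendered geometry agrees with the estimated depth, which is how the constraint disambiguates geometry from appearance changes over time.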