Paper Title

SVS: Adversarial refinement for sparse novel view synthesis

Paper Authors

Violeta Menéndez González, Andrew Gilbert, Graeme Phillipson, Stephen Jolly, Simon Hadfield

Paper Abstract

This paper proposes Sparse View Synthesis. This is a view synthesis problem where the number of reference views is limited, and the baseline between the target and reference views is significant. Under these conditions, current radiance field methods fail catastrophically due to inescapable artifacts such as 3D floating blobs, blurring and structural duplication, whenever the number of reference views is limited, or the target view diverges significantly from the reference views. Advances in network architecture and loss regularisation are unable to satisfactorily remove these artifacts. The occlusions within the scene ensure that the true contents of these regions are simply not available to the model. In this work, we instead focus on hallucinating plausible scene contents within such regions. To this end we unify radiance field models with adversarial learning and perceptual losses. The resulting system provides up to 60% improvement in perceptual accuracy compared to current state-of-the-art radiance field models on this problem.
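
The abstract describes unifying radiance field rendering with adversarial learning and perceptual losses. Below is a minimal, hypothetical PyTorch sketch of what such a combined training objective could look like; the module names (`discriminator`, `vgg_features`) and loss weights (`lambda_adv`, `lambda_perc`) are illustrative assumptions and do not reflect the paper's actual implementation details.

```python
import torch
import torch.nn as nn

class AdversarialRefinementLoss(nn.Module):
    """Hypothetical combined objective: photometric + perceptual + adversarial.

    `discriminator` and `vgg_features` are assumed to be externally supplied
    nn.Modules (a critic over rendered views and a frozen pretrained feature
    extractor, respectively); they are placeholders, not the authors' models.
    """

    def __init__(self, discriminator, vgg_features, lambda_adv=0.1, lambda_perc=1.0):
        super().__init__()
        self.discriminator = discriminator
        self.vgg_features = vgg_features
        self.lambda_adv = lambda_adv
        self.lambda_perc = lambda_perc
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, rendered, target):
        # Standard radiance-field photometric reconstruction term.
        recon = torch.mean((rendered - target) ** 2)

        # Perceptual term: distance in a pretrained feature space.
        perc = torch.mean((self.vgg_features(rendered) - self.vgg_features(target)) ** 2)

        # Adversarial term: push the renderer to produce views the critic
        # judges realistic, so occluded regions are hallucinated plausibly.
        logits = self.discriminator(rendered)
        adv = self.bce(logits, torch.ones_like(logits))

        return recon + self.lambda_perc * perc + self.lambda_adv * adv
```

In this kind of setup the discriminator would be trained in alternation on real reference views versus rendered ones, while the generator (the radiance field) minimises the combined loss above.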
