Paper Title
Semantic View Synthesis
Paper Authors
Paper Abstract
We tackle a new problem of semantic view synthesis -- generating free-viewpoint rendering of a synthesized scene using a semantic label map as input. We build upon recent advances in semantic image synthesis and view synthesis for handling photographic image content generation and view extrapolation. Direct application of existing image/view synthesis methods, however, results in severe ghosting/blurry artifacts. To address the drawbacks, we propose a two-step approach. First, we focus on synthesizing the color and depth of the visible surface of the 3D scene. We then use the synthesized color and depth to impose explicit constraints on the multiple-plane image (MPI) representation prediction process. Our method produces sharp contents at the original view and geometrically consistent renderings across novel viewpoints. The experiments on numerous indoor and outdoor images show favorable results against several strong baselines and validate the effectiveness of our approach.
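To make the output representation of the second step concrete, below is a minimal sketch of how a multiple-plane image (MPI) can be composited into an image and an expected depth map at the reference viewpoint. The function name, array shapes, and plane ordering are illustrative assumptions rather than the authors' implementation; rendering a novel viewpoint would additionally warp each plane by the homography induced by its depth before compositing.

```python
import numpy as np

def composite_mpi(rgba_planes, depths):
    """Back-to-front 'over' compositing of a multi-plane image (MPI).

    rgba_planes: (D, H, W, 4) array of RGBA planes, ordered from the
                 nearest plane (index 0) to the farthest (index D-1).
    depths:      (D,) array with the depth of each plane.

    Returns the composited RGB image and the expected depth map at the
    reference viewpoint. (Novel-view rendering would first warp each
    plane with its depth-induced planar homography; omitted here.)
    """
    rgb = np.zeros(rgba_planes.shape[1:3] + (3,), dtype=np.float32)
    depth = np.zeros(rgba_planes.shape[1:3], dtype=np.float32)
    # Iterate from the farthest plane to the nearest one.
    for plane, d in zip(rgba_planes[::-1], depths[::-1]):
        color, alpha = plane[..., :3], plane[..., 3:]
        rgb = color * alpha + rgb * (1.0 - alpha)              # "over" operator
        depth = d * alpha[..., 0] + depth * (1.0 - alpha[..., 0])
    return rgb, depth

# Toy usage: 8 fronto-parallel planes of a 64x64 image, near to far.
planes = np.random.rand(8, 64, 64, 4).astype(np.float32)
plane_depths = np.linspace(1.0, 10.0, num=8, dtype=np.float32)
image, depth_map = composite_mpi(planes, plane_depths)
```

In this sketch, the synthesized color and depth from the first step would constrain which planes receive appearance and opacity during MPI prediction, so that the composited reference view reproduces the sharp synthesized content.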