Paper Title


Self-improving Multiplane-to-layer Images for Novel View Synthesis

Authors

Pavel Solovev, Taras Khakhulin, Denis Korzhenkov

Abstract


We present a new method for lightweight novel-view synthesis that generalizes to arbitrary forward-facing scenes. Recent approaches are computationally expensive, require per-scene optimization, or produce memory-expensive representations. We start by representing the scene with a set of fronto-parallel semitransparent planes and afterward convert them to deformable layers in an end-to-end manner. Additionally, we employ a feed-forward refinement procedure that corrects the estimated representation by aggregating information from input views. Our method does not require fine-tuning when a new scene is processed and can handle an arbitrary number of views without restrictions. Experimental results show that our approach surpasses recent models in terms of common metrics and human evaluation, with a noticeable advantage in inference speed and in the compactness of the inferred layered geometry; see https://samsunglabs.github.io/MLI
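The abstract's starting point, a set of fronto-parallel semitransparent planes, is rendered with the standard back-to-front "over" compositing used for multiplane images. Below is a minimal NumPy sketch of that compositing step only; the array names and shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def composite_planes(colors, alphas):
    """Back-to-front 'over' compositing of fronto-parallel
    semitransparent planes (standard multiplane-image rendering).

    colors: (D, H, W, 3) RGB per plane, ordered back to front
    alphas: (D, H, W, 1) opacity per plane, values in [0, 1]
    """
    out = np.zeros_like(colors[0])
    for rgb, a in zip(colors, alphas):
        # Each nearer plane partially occludes what is behind it.
        out = rgb * a + out * (1.0 - a)
    return out

# Tiny example: two 1x1 planes, an opaque red plane behind
# a half-transparent blue plane.
colors = np.array([[[[1.0, 0.0, 0.0]]],
                   [[[0.0, 0.0, 1.0]]]])
alphas = np.array([[[[1.0]]],
                   [[[0.5]]]])
print(composite_planes(colors, alphas))  # -> [[[0.5 0.  0.5]]]
```

The end-to-end conversion of these planes into deformable layers and the feed-forward refinement described in the abstract operate on top of a representation rendered this way.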
