Paper Title

Learned Multi-View Texture Super-Resolution

Paper Authors

Audrey Richard, Ian Cherabier, Martin R. Oswald, Vagia Tsiminaki, Marc Pollefeys, Konrad Schindler

Paper Abstract

We present a super-resolution method capable of creating a high-resolution texture map for a virtual 3D object from a set of lower-resolution images of that object. Our architecture unifies the concepts of (i) multi-view super-resolution based on the redundancy of overlapping views and (ii) single-view super-resolution based on a learned prior of high-resolution (HR) image structure. The principle of multi-view super-resolution is to invert the image formation process and recover the latent HR texture from multiple lower-resolution projections. We map that inverse problem into a block of suitably designed neural network layers, and combine it with a standard encoder-decoder network for learned single-image super-resolution. Wiring the image formation model into the network avoids having to learn perspective mapping from textures to images, and elegantly handles a varying number of input views. Experiments demonstrate that the combination of multi-view observations and learned prior yields improved texture maps.
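
The abstract describes the core architectural idea: a fixed, differentiable image-formation model (texture-to-view projection plus downsampling) is wired into the network as a multi-view data-fidelity block, and a learned encoder-decoder supplies the single-image prior. Below is a minimal conceptual sketch of that idea in PyTorch, not the authors' implementation; the grid_sample-based warp, the unrolled gradient steps, the layer sizes, and all class and parameter names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FormationModel(nn.Module):
    """Fixed (non-learned) forward operator: warp the HR texture into one
    camera view with a precomputed sampling grid, then blur/downsample."""

    def __init__(self, scale=4):
        super().__init__()
        self.scale = scale

    def forward(self, texture, grid):
        # texture: (B, C, H, W) current HR texture estimate
        # grid:    (B, h*scale, w*scale, 2) texture coords seen by this view
        projected = F.grid_sample(texture, grid, align_corners=False)
        return F.avg_pool2d(projected, self.scale)  # crude blur + decimation


class TextureSuperResNet(nn.Module):
    """Unrolled multi-view data-fidelity steps on the texture, interleaved
    with a small encoder-decoder acting as the learned single-image prior."""

    def __init__(self, channels=3, iters=3, scale=4):
        super().__init__()
        self.formation = FormationModel(scale)
        self.step = nn.Parameter(torch.tensor(0.5))  # learned step size
        self.iters = iters
        self.prior = nn.Sequential(                  # toy encoder-decoder
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, texture_init, lr_views, grids):
        # lr_views / grids are lists with one entry per input view, so a
        # varying number of views is handled by simply summing the data term.
        tex = texture_init
        if not tex.requires_grad:
            tex = tex.detach().requires_grad_(True)
        for _ in range(self.iters):
            data_loss = sum(
                0.5 * ((self.formation(tex, g) - img) ** 2).sum()
                for img, g in zip(lr_views, grids))
            # Backproject residuals through the differentiable formation model.
            grad = torch.autograd.grad(data_loss, tex, create_graph=True)[0]
            tex = tex - self.step * grad   # multi-view data-fidelity step
            tex = tex + self.prior(tex)    # learned residual prior
        return tex
```

Because the data term is a plain sum over views, the same module accepts any number of input images, which mirrors the abstract's point that wiring the formation model into the network "elegantly handles a varying number of input views."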
