Paper Title

Texture Generation Using Dual-Domain Feature Flow with Multi-View Hallucinations

Authors

Seunggyu Chang, Jungchan Cho, Songhwai Oh

Abstract

We propose a dual-domain generative model that estimates a texture map from a single image for colorizing a 3D human model. A single image is insufficient for estimating a texture map, as it reveals only one facet of a 3D object. To provide sufficient information for estimating a complete texture map, the proposed model simultaneously generates multi-view hallucinations in the image domain and an estimated texture map in the texture domain. During generation, each domain generator exchanges features with the other through a flow-based local attention mechanism. In this manner, the proposed model can estimate a texture map using abundant multi-view image features, from which the multi-view hallucinations are generated. As a result, the estimated texture map contains consistent colors and patterns over the entire region. Experiments show the superiority of our model at estimating a directly renderable texture map, which is applicable to 3D animation rendering. Furthermore, our model also improves the overall generation quality in the image domain for pose and viewpoint transfer tasks.
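The core idea of the cross-domain feature exchange can be pictured as warping one generator's feature map into the other domain's layout along a dense flow field, then blending the warped features in locally. The following is a minimal illustrative sketch of that idea, not the authors' implementation: the function names, the nearest-neighbor sampling, and the scalar/array blending mask are all assumptions made for clarity.

```python
import numpy as np

def flow_warp(feat, flow):
    """Warp a feature map by a dense flow field (nearest-neighbor sampling).

    feat: (C, H, W) source features; flow: (2, H, W) per-pixel offsets (dx, dy).
    Each output location (y, x) samples feat at (y + dy, x + dx), clamped to
    the feature map's bounds.
    """
    C, H, W = feat.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_x = np.clip(np.rint(xs + flow[0]).astype(int), 0, W - 1)
    src_y = np.clip(np.rint(ys + flow[1]).astype(int), 0, H - 1)
    return feat[:, src_y, src_x]

def exchange_features(feat_a, feat_b, flow_b_to_a, mask):
    """Blend domain-A features with domain-B features warped into A's layout.

    mask weights how much warped cross-domain information replaces the local
    features at each position (1.0 = fully use the other domain's features).
    """
    warped_b = flow_warp(feat_b, flow_b_to_a)
    return mask * warped_b + (1.0 - mask) * feat_a
```

In the paper's setting, such an exchange would run in both directions, so the texture-domain generator sees warped multi-view image features and vice versa; in practice the flow field and blending mask would be predicted by learned layers rather than given.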
