Paper Title

2D GANs Meet Unsupervised Single-view 3D Reconstruction

Authors

Feng Liu, Xiaoming Liu

Abstract

Recent research has shown that controllable image generation based on pre-trained GANs can benefit a wide range of computer vision tasks. However, less attention has been devoted to 3D vision tasks. In light of this, we propose a novel image-conditioned neural implicit field, which can leverage 2D supervision from GAN-generated multi-view images and perform single-view reconstruction of generic objects. First, a novel offline StyleGAN-based generator is presented to generate plausible pseudo images with full control over the viewpoint. Then, we propose to utilize a neural implicit function, along with a differentiable renderer, to learn 3D geometry from pseudo images with object masks and rough pose initializations. To further detect unreliable supervision, we introduce a novel uncertainty module to predict uncertainty maps, which remedy the negative effect of uncertain regions in pseudo images, leading to better reconstruction performance. The effectiveness of our approach is demonstrated through superior single-view 3D reconstruction results on generic objects.
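
The abstract does not spell out the loss, but the role of the predicted uncertainty maps can be illustrated with a minimal sketch. The PyTorch-style example below shows one common way to down-weight unreliable regions of the GAN-generated pseudo images with a per-pixel uncertainty term (in the spirit of heteroscedastic-uncertainty weighting); the function and tensor names are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch (assumption): uncertainty-weighted photometric loss for
# supervising a neural implicit field with GAN-generated pseudo images.
# The exact formulation in the paper may differ; all names are illustrative.
import torch

def uncertainty_weighted_loss(rendered_rgb: torch.Tensor,
                              pseudo_rgb: torch.Tensor,
                              uncertainty: torch.Tensor,
                              object_mask: torch.Tensor,
                              eps: float = 1e-6) -> torch.Tensor:
    """
    rendered_rgb : (B, 3, H, W) image from the differentiable renderer
    pseudo_rgb   : (B, 3, H, W) GAN-generated pseudo image (2D supervision)
    uncertainty  : (B, 1, H, W) predicted per-pixel uncertainty (> 0)
    object_mask  : (B, 1, H, W) binary object mask
    """
    # Photometric residual between the rendered view and the pseudo image.
    residual = (rendered_rgb - pseudo_rgb).abs().mean(dim=1, keepdim=True)
    # Down-weight pixels the uncertainty module marks as unreliable; the
    # log term discourages trivially predicting large uncertainty everywhere.
    per_pixel = residual / (uncertainty + eps) + torch.log(uncertainty + eps)
    # Only supervise pixels inside the object mask.
    return (per_pixel * object_mask).sum() / (object_mask.sum() + eps)
```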
