Paper Title

FvOR: Robust Joint Shape and Pose Optimization for Few-view Object Reconstruction

Paper Authors

Zhenpei Yang, Zhile Ren, Miguel Angel Bautista, Zaiwei Zhang, Qi Shan, Qixing Huang

Paper Abstract

Reconstructing an accurate 3D object model from a few image observations remains a challenging problem in computer vision. State-of-the-art approaches typically assume accurate camera poses as input, which can be difficult to obtain in realistic settings. In this paper, we present FvOR, a learning-based object reconstruction method that predicts accurate 3D models given a few images with noisy input poses. The core of our approach is a fast and robust multi-view reconstruction algorithm to jointly refine 3D geometry and camera pose estimation using learnable neural network modules. We provide a thorough benchmark of state-of-the-art approaches for this problem on ShapeNet. Our approach achieves best-in-class results. It is also two orders of magnitude faster than the recent optimization-based approach IDR. Our code is released at https://github.com/zhenpeiyang/FvOR/.
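The abstract describes alternately refining 3D geometry and camera poses under a shared objective. The sketch below is a minimal, hypothetical illustration of such a joint shape-and-pose optimization loop in PyTorch; it is not the FvOR implementation, and the latent shape code, the 6-DoF pose parameterization, and the `consistency_loss` placeholder are assumptions made purely for illustration.

```python
# Hypothetical sketch of alternating shape/pose refinement -- NOT the authors' FvOR code.
import torch

n_views = 3
shape_code = torch.zeros(64, requires_grad=True)   # shared latent code for the object shape
poses = 0.05 * torch.randn(n_views, 6)             # noisy initial poses (axis-angle + translation)
poses.requires_grad_(True)

def consistency_loss(shape_code, poses):
    # Placeholder for a differentiable loss that compares the shape, projected with the
    # current pose estimates, against the observed images; a quadratic stand-in here.
    return (shape_code ** 2).sum() + (poses ** 2).sum()

shape_opt = torch.optim.Adam([shape_code], lr=1e-2)
pose_opt = torch.optim.Adam([poses], lr=1e-3)

for step in range(200):
    # Refine the shape with the current poses held fixed...
    shape_opt.zero_grad()
    consistency_loss(shape_code, poses.detach()).backward()
    shape_opt.step()

    # ...then refine the poses with the current shape held fixed.
    pose_opt.zero_grad()
    consistency_loss(shape_code.detach(), poses).backward()
    pose_opt.step()
```

Per the abstract, FvOR performs these refinement steps with learnable neural network modules rather than plain gradient updates; only the high-level alternating joint-refinement structure is mirrored here.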
