Paper Title

PanoDepth: A Two-Stage Approach for Monocular Omnidirectional Depth Estimation

Authors

Yuyan Li, Zhixin Yan, Ye Duan, Liu Ren

Abstract

Omnidirectional 3D information is essential for a wide range of applications such as Virtual Reality, Autonomous Driving, Robotics, etc. In this paper, we propose a novel, model-agnostic, two-stage pipeline for omnidirectional monocular depth estimation. Our proposed framework, PanoDepth, takes a single 360° image as input, produces one or more synthesized views in the first stage, and feeds the original image and the synthesized images into the subsequent stereo matching stage. In the second stage, we propose a differentiable Spherical Warping Layer to handle omnidirectional stereo geometry efficiently and effectively. By utilizing the explicit stereo-based geometric constraints in the stereo matching stage, PanoDepth can generate dense, high-quality depth. We conducted extensive experiments and ablation studies to evaluate PanoDepth, covering both the full pipeline and the individual modules in each stage. Our results show that PanoDepth outperforms state-of-the-art approaches by a large margin on 360° monocular depth estimation.
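The key technical component of the second stage is the differentiable Spherical Warping Layer. The sketch below is a minimal illustration of the underlying idea only: it warps a synthesized equirectangular view into the reference view for a single depth hypothesis, assuming a vertical camera baseline. The function name `spherical_warp`, the angle conventions, and the default baseline value are our own assumptions, not the paper's released code.

```python
# Minimal sketch of differentiable spherical warping for equirectangular
# stereo, assuming a vertical baseline. Illustrative only; not the
# authors' implementation.
import torch
import torch.nn.functional as F

def spherical_warp(target_img, depth_hypothesis, baseline=0.26):
    """Warp `target_img` into the reference view for one depth hypothesis.

    target_img:       (B, C, H, W) equirectangular image of the synthesized view.
    depth_hypothesis: (B, 1, H, W) candidate depth of the reference view, in meters.
    baseline:         vertical camera displacement in meters (assumed value).
    """
    B, _, H, W = target_img.shape
    device = target_img.device

    # Pixel grid -> spherical angles of the reference view:
    # longitude theta in [-pi, pi], latitude phi in [-pi/2, pi/2].
    xs = torch.linspace(-1.0, 1.0, W, device=device) * torch.pi
    ys = torch.linspace(-1.0, 1.0, H, device=device) * (torch.pi / 2)
    phi, theta = torch.meshgrid(ys, xs, indexing="ij")  # each (H, W)

    d = depth_hypothesis.squeeze(1)  # (B, H, W)

    # Back-project each pixel to a 3D point at the hypothesized depth.
    x = d * torch.cos(phi) * torch.sin(theta)
    y = d * torch.sin(phi)
    z = d * torch.cos(phi) * torch.cos(theta)

    # Express the point in the target camera, shifted by the vertical baseline.
    y = y - baseline

    # Re-project to spherical angles of the target view.
    r = torch.sqrt(x * x + y * y + z * z).clamp(min=1e-6)
    theta_t = torch.atan2(x, z)
    phi_t = torch.asin((y / r).clamp(-1.0, 1.0))

    # Normalize angles to [-1, 1] and resample the target image differentiably.
    grid = torch.stack((theta_t / torch.pi, phi_t / (torch.pi / 2)), dim=-1)
    return F.grid_sample(target_img, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)
```

In a cost-volume setting, a warp of this kind would be applied once per depth hypothesis, and the warped views compared against the reference image to produce matching costs; because the warp is built from differentiable tensor operations, gradients flow through it during training.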
