Paper Title

Fusion of Range and Stereo Data for High-Resolution Scene-Modeling

Paper Authors

Georgios D. Evangelidis, Miles Hansard, Radu Horaud

Paper Abstract

This paper addresses the problem of range-stereo fusion, for the construction of high-resolution depth maps. In particular, we combine low-resolution depth data with high-resolution stereo data, in a maximum a posteriori (MAP) formulation. Unlike existing schemes that build on MRF optimizers, we infer the disparity map from a series of local energy minimization problems that are solved hierarchically, by growing sparse initial disparities obtained from the depth data. The accuracy of the method is not compromised, owing to three properties of the data-term in the energy function. Firstly, it incorporates a new correlation function that is capable of providing refined correlations and disparities, via subpixel correction. Secondly, the correlation scores rely on an adaptive cost aggregation step, based on the depth data. Thirdly, the stereo and depth likelihoods are adaptively fused, based on the scene texture and camera geometry. These properties lead to a more selective growing process which, unlike previous seed-growing methods, avoids the tendency to propagate incorrect disparities. The proposed method gives rise to an intrinsically efficient algorithm, which runs at 3FPS on 2.0MP images on a standard desktop computer. The strong performance of the new method is established both by quantitative comparisons with state-of-the-art methods, and by qualitative comparisons using real depth-stereo data-sets.
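
For readers unfamiliar with seed-growing stereo, the following is a minimal Python sketch of the general idea the abstract builds on: sparse disparities (here assumed to come from reprojected low-resolution depth samples) are propagated to neighbouring pixels, and each candidate match is kept only if its local correlation score is high enough, which is what keeps incorrect disparities from spreading. This is an illustrative simplification, not the paper's algorithm; it omits the refined correlation function with subpixel correction, the depth-based adaptive cost aggregation, and the adaptive stereo-depth likelihood fusion, and all function names and thresholds are hypothetical.

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-9))

def grow_disparities(img_l, img_r, seeds, radius=3, search=2, min_score=0.6):
    """Grow sparse seed disparities (e.g. reprojected depth samples) into a
    denser map, accepting a pixel only if its best local correlation is high.
    `seeds` is an iterable of (row, col, disparity) triples; pixels that are
    never accepted stay NaN."""
    h, w = img_l.shape
    disp = np.full((h, w), np.nan)
    queue = list(seeds)
    while queue:
        r, c, d = queue.pop()
        if not np.isnan(disp[r, c]) or r < radius or r >= h - radius:
            continue
        if c < radius or c >= w - radius:
            continue
        ref = img_l[r - radius:r + radius + 1, c - radius:c + radius + 1]
        best_d, best_s = None, -1.0
        # search a small disparity window around the predicted value
        for dd in range(-search, search + 1):
            cr = c - int(round(d)) + dd
            if cr < radius or cr >= w - radius:
                continue
            cand = img_r[r - radius:r + radius + 1, cr - radius:cr + radius + 1]
            s = zncc(ref, cand)
            if s > best_s:
                best_s, best_d = s, float(c - cr)
        if best_d is None or best_s < min_score:
            continue  # reject weak matches rather than propagate them
        disp[r, c] = best_d
        # enqueue 4-connected neighbours with the accepted disparity as prediction
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and np.isnan(disp[nr, nc]):
                queue.append((nr, nc, best_d))
    return disp
```

In the paper's MAP formulation, the acceptance rule above would be replaced by a local energy that adaptively weights the stereo correlation term against the depth likelihood, depending on scene texture and camera geometry.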
