Title
Single-Image Depth Prediction Makes Feature Matching Easier
Authors
Abstract
Good local features improve the robustness of many 3D re-localization and multi-view reconstruction pipelines. The problem is that viewing angle and distance severely impact the recognizability of a local feature. Attempts to improve appearance invariance by choosing better local feature points or by leveraging outside information have come with prerequisites that made some of them impractical. In this paper, we propose a surprisingly effective enhancement to local feature extraction, which improves matching. We show that CNN-based depths inferred from single RGB images are quite helpful, despite their flaws. They allow us to pre-warp images and rectify perspective distortions, significantly enhancing SIFT and BRISK features and enabling more good matches, even when cameras are looking at the same scene but in opposite directions.
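The rectification idea can be illustrated with a minimal sketch (not the authors' code): back-project a CNN-predicted depth map through the camera intrinsics, fit a dominant plane, rotate its normal onto the optical axis, and use the induced homography K R K⁻¹ to pre-warp the image before extracting SIFT or BRISK features. The intrinsics and the synthetic slanted-plane depth map below are illustrative assumptions; a real pipeline would use the network's depth output and handle multiple local surfaces rather than one global plane.

```python
import numpy as np

# Hypothetical pinhole intrinsics; in the paper's setting the depth map
# would come from a monocular depth-prediction CNN.
fx = fy = 500.0
cx = cy = 64.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def plane_normal_from_depth(depth, K):
    """Least-squares plane fit to the back-projected depth pixels.

    Returns the unit normal of the fitted plane in camera coordinates,
    oriented so that it points toward the camera (positive z).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    X = (u - K[0, 2]) / K[0, 0] * depth   # back-project to 3D
    Y = (v - K[1, 2]) / K[1, 1] * depth
    P = np.stack([X, Y, depth], axis=-1).reshape(-1, 3)
    _, _, Vt = np.linalg.svd(P - P.mean(axis=0))
    n = Vt[-1]                            # direction of least variance
    return n if n[2] > 0 else -n

def rectifying_homography(n, K):
    """Homography K R K^-1, where R rotates the plane normal n onto the
    optical axis, making the planar region fronto-parallel."""
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(n, z)                    # rotation axis
    s, c = np.linalg.norm(v), float(n @ z)
    if s < 1e-12:
        R = np.eye(3)                     # already fronto-parallel
    else:
        vx = np.array([[0.0, -v[2], v[1]],
                       [v[2], 0.0, -v[0]],
                       [-v[1], v[0], 0.0]])
        R = np.eye(3) + vx + vx @ vx * ((1.0 - c) / s**2)  # Rodrigues
    return K @ R @ np.linalg.inv(K)

# Synthetic slanted plane: inverse depth linear in u corresponds exactly
# to a 3D plane, here with true normal (1, 0, 1) / sqrt(2).
u, _ = np.meshgrid(np.arange(128), np.arange(128))
depth = 1.0 / (0.5 + 0.001 * (u - cx))

n = plane_normal_from_depth(depth, K)
H = rectifying_homography(n, K)
# An image warped with H (e.g. cv2.warpPerspective) would then be fed to
# a standard SIFT/BRISK extractor instead of the original image.
```

Features detected in the warped image are mapped back through H⁻¹ so that matching and pose estimation still operate in the original image coordinates.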