Paper Title
6D Camera Relocalization in Visually Ambiguous Extreme Environments
Paper Authors
Paper Abstract
We propose a novel method to reliably estimate the pose of a camera given a sequence of images acquired in extreme environments such as deep seas or extraterrestrial terrains. Data acquired under these challenging conditions are corrupted by textureless surfaces, image degradation, and the presence of repetitive and highly ambiguous structures. When naively deployed, state-of-the-art methods can fail in those scenarios, as confirmed by our empirical analysis. In this paper, we attempt to make camera relocalization work in these extreme situations. To this end, we propose (i) a hierarchical localization system that leverages temporal information, and (ii) a novel environment-aware image enhancement method to boost robustness and accuracy. Our extensive experimental results demonstrate the superior performance of our method under two extreme settings: localizing an autonomous underwater vehicle and localizing a planetary rover in a Mars-like desert. In addition, our method achieves performance comparable to state-of-the-art methods on an indoor benchmark (the 7-Scenes dataset) using only 20% of the training data.
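The abstract only names the hierarchical, temporally informed localization pipeline without detailing it. As a rough illustration of the general idea, and not the authors' implementation, the sketch below shows how a pose prior from the previous frame could restrict candidate image retrieval in a coarse-to-fine relocalization pipeline; all function names, parameters, and the motion bound are hypothetical.

```python
import numpy as np

def temporal_prior_filter(db_positions, prev_position, max_travel_m):
    """Keep only database images whose camera positions lie within the distance
    the vehicle could plausibly have travelled since the previous frame.
    Hypothetical helper used purely to illustrate the temporal-prior idea."""
    dists = np.linalg.norm(db_positions - prev_position, axis=1)
    return np.where(dists <= max_travel_m)[0]

def retrieve_candidates(query_descriptor, db_descriptors, db_positions,
                        prev_position=None, max_travel_m=5.0, top_k=10):
    """Coarse retrieval step of a hierarchical pipeline: rank database images by
    global-descriptor similarity, optionally restricted by the temporal prior.
    The fine step (local feature matching + PnP/RANSAC) is omitted here."""
    candidates = np.arange(len(db_descriptors))
    if prev_position is not None:
        candidates = temporal_prior_filter(db_positions, prev_position, max_travel_m)
    # Cosine similarity, assuming L2-normalized global descriptors.
    sims = db_descriptors[candidates] @ query_descriptor
    order = np.argsort(-sims)[:top_k]
    return candidates[order]  # indices of retrieved database images
```

In this sketch, the temporal prior simply shrinks the retrieval search space so that visually ambiguous but spatially distant database images cannot be returned; the actual system described in the paper may combine temporal information differently.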