Paper Title
MD-SLAM: Multi-cue Direct SLAM
Paper Authors
Paper Abstract
Simultaneous Localization and Mapping (SLAM) systems are fundamental building blocks for any autonomous robot navigating in unknown environments. The SLAM implementation heavily depends on the sensor modality employed on the mobile platform. For this reason, assumptions on the scene's structure are often made to maximize estimation accuracy. This paper presents a novel direct 3D SLAM pipeline that works independently for RGB-D and LiDAR sensors. Building upon prior work on multi-cue photometric frame-to-frame alignment, our proposed approach provides an easy-to-extend and generic SLAM system. Our pipeline requires only minor adaptations within the projection model to handle different sensor modalities. We couple a position tracking system with an appearance-based relocalization mechanism that handles large loop closures. Loop closures are validated by the same direct registration algorithm used for odometry estimation. We present comparative experiments with state-of-the-art approaches on publicly available benchmarks using RGB-D cameras and 3D LiDARs. Our system performs well in heterogeneous datasets compared to other sensor-specific methods while making no assumptions about the environment. Finally, we release an open-source C++ implementation of our system.
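The abstract's central design claim is that switching between RGB-D and LiDAR requires only swapping the projection model while the rest of the pipeline stays untouched. A minimal sketch of that idea is below, assuming a hypothetical `ProjectionModel` interface (the names and intrinsics here are illustrative, not the actual MD-SLAM API): a pinhole model serves RGB-D cameras, while a spherical range-image model serves rotating 3D LiDARs.

```cpp
#include <cmath>

// Hypothetical sketch: the only sensor-specific piece is the mapping
// from a 3D point in the sensor frame to image coordinates (u, v).
// Tracking, registration, and loop validation would use this interface
// without knowing which sensor produced the data.
struct ProjectionModel {
  virtual ~ProjectionModel() = default;
  // Returns false if the point is not observable by this sensor model.
  virtual bool project(double x, double y, double z,
                       double& u, double& v) const = 0;
};

// Pinhole model for an RGB-D camera (fx, fy, cx, cy are assumed intrinsics).
struct PinholeProjection : ProjectionModel {
  double fx, fy, cx, cy;
  PinholeProjection(double fx_, double fy_, double cx_, double cy_)
      : fx(fx_), fy(fy_), cx(cx_), cy(cy_) {}
  bool project(double x, double y, double z,
               double& u, double& v) const override {
    if (z <= 0.0) return false;  // behind the image plane
    u = fx * x / z + cx;
    v = fy * y / z + cy;
    return true;
  }
};

// Spherical model for a rotating 3D LiDAR: azimuth/elevation to a
// range-image pixel of the given resolution and vertical field of view.
struct SphericalProjection : ProjectionModel {
  int width, height;
  double fov_up, fov_down;  // vertical field-of-view bounds [rad]
  SphericalProjection(int w, int h, double up, double down)
      : width(w), height(h), fov_up(up), fov_down(down) {}
  bool project(double x, double y, double z,
               double& u, double& v) const override {
    const double kPi = std::acos(-1.0);
    const double range = std::sqrt(x * x + y * y + z * z);
    if (range < 1e-6) return false;
    const double azimuth = std::atan2(y, x);       // in [-pi, pi]
    const double elevation = std::asin(z / range);
    u = 0.5 * (1.0 - azimuth / kPi) * width;
    v = (fov_up - elevation) / (fov_up - fov_down) * height;
    return v >= 0.0 && v < height;
  }
};
```

Under this sketch, the direct alignment and loop-closure validation code only ever calls `project(...)`, which is what makes a minor adaptation of the projection model sufficient to support a new sensor modality.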