Paper Title
Unified Multi-Modal Landmark Tracking for Tightly Coupled Lidar-Visual-Inertial Odometry
Paper Authors
Paper Abstract
We present an efficient multi-sensor odometry system for mobile platforms that jointly optimizes visual, lidar, and inertial information within a single integrated factor graph. The system runs in real time at full framerate using fixed-lag smoothing. To enable such tight integration, we present a new method for extracting 3D line and planar primitives from lidar point clouds. This approach overcomes the suboptimality of typical frame-to-frame tracking methods by treating the primitives as landmarks and tracking them over multiple scans. True integration of lidar features with standard visual features and IMU measurements is made possible by a subtle passive synchronization of the lidar and camera frames. The lightweight formulation of the 3D features allows for real-time execution on a single CPU. Our proposed system has been tested on a variety of platforms and scenarios, including underground exploration with a legged robot and outdoor scanning with a dynamically moving handheld device, for a total duration of 96 min and a traveled distance of 2.4 km. In these test sequences, using only one exteroceptive sensor leads to failure due to either underconstrained geometry (affecting lidar) or textureless areas caused by aggressive lighting changes (affecting vision). In these conditions, our factor graph naturally uses the best information available from each sensor modality without any hard switches.
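The core technical step the abstract describes is extracting planar (and line) primitives from lidar point clouds and tracking them as landmarks across scans. The sketch below is a minimal illustration of that idea, assuming a generic SVD-based least-squares plane fit scored by point-to-plane residuals; it is not the paper's exact parametrization, and all function names and thresholds here are hypothetical.

```python
# Minimal sketch of planar-primitive extraction: fit a plane to a lidar
# point patch and score it with point-to-plane residuals. This is a
# generic least-squares formulation, not the paper's exact method.
import numpy as np


def fit_plane(points: np.ndarray):
    """Least-squares plane fit to an (N, 3) patch of lidar points.

    Returns (normal, d) for the plane n.x + d = 0 and the RMS
    point-to-plane residual, which a front end could threshold to
    accept or reject the patch as a landmark candidate.
    """
    centroid = points.mean(axis=0)
    # The right-singular vector with the smallest singular value of the
    # centered points is the direction of least scatter, i.e. the normal.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    d = -normal @ centroid
    rms = np.sqrt(np.mean((points @ normal + d) ** 2))
    return normal, d, rms


def point_to_plane_residuals(points: np.ndarray, normal: np.ndarray, d: float):
    """Signed distances to the plane; tracking the primitive as a landmark
    amounts to minimizing these residuals jointly over multiple scans."""
    return points @ normal + d


if __name__ == "__main__":
    # Synthetic patch of the plane z = 0.1x + 0.05y + 2 with 2 cm noise.
    rng = np.random.default_rng(0)
    xy = rng.uniform(-5.0, 5.0, size=(200, 2))
    z = 0.1 * xy[:, 0] + 0.05 * xy[:, 1] + 2.0 + rng.normal(0.0, 0.02, 200)
    patch = np.column_stack([xy, z])

    n, d, rms = fit_plane(patch)
    print(f"normal={np.round(n, 3)}, d={d:.3f}, rms={rms:.4f} m")
```

In a full pipeline, residuals of this kind would enter the factor graph as lidar landmark factors alongside IMU and visual feature factors, which is what allows the fixed-lag smoother to weight each modality by the information it actually provides instead of hard-switching between sensors.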