Paper Title

Deep IMU Bias Inference for Robust Visual-Inertial Odometry with Factor Graphs

Paper Authors

Russell Buchanan, Varun Agrawal, Marco Camurri, Frank Dellaert, Maurice Fallon

Paper Abstract

Visual Inertial Odometry (VIO) is one of the most established state estimation methods for mobile platforms. However, when visual tracking fails, VIO algorithms quickly diverge due to rapid error accumulation during inertial data integration. This error is typically modeled as a combination of additive Gaussian noise and a slowly changing bias which evolves as a random walk. In this work, we propose to train a neural network to learn the true bias evolution. We implement and compare two common sequential deep learning architectures: LSTMs and Transformers. Our approach follows from recent learning-based inertial estimators, but, instead of learning a motion model, we target IMU bias explicitly, which allows us to generalize to locomotion patterns unseen in training. We show that our proposed method improves state estimation in visually challenging situations across a wide range of motions by quadrupedal robots, walking humans, and drones. Our experiments show an average 15% reduction in drift rate, with much larger reductions when there is total vision failure. Importantly, we also demonstrate that models trained with one locomotion pattern (human walking) can be applied to another (quadruped robot trotting) without retraining.
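
As context for the error model mentioned in the abstract, here is a minimal sketch of the conventional IMU formulation (gyroscope shown; the accelerometer is treated analogously; the notation is our assumption, not taken from the paper):

$$
\tilde{\boldsymbol{\omega}}_t = \boldsymbol{\omega}_t + \mathbf{b}_t^{g} + \mathbf{n}_t^{g}, \qquad
\mathbf{b}_{t+1}^{g} = \mathbf{b}_t^{g} + \mathbf{w}_t^{g}, \qquad
\mathbf{n}_t^{g} \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma}^{g}), \quad
\mathbf{w}_t^{g} \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma}^{bg}\,\Delta t)
$$

Per the abstract, the paper's idea is to replace the zero-mean random-walk assumption on the bias $\mathbf{b}_t$ with a learned prediction of the bias evolution, incorporated as a factor in the graph. Below is a hypothetical PyTorch sketch of the kind of LSTM bias regressor the abstract describes; the architecture, dimensions, and windowing scheme are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch: regress the IMU bias from a window of raw
# measurements with an LSTM. All names and sizes are assumptions.
import torch
import torch.nn as nn

class BiasLSTM(nn.Module):
    def __init__(self, hidden_dim: int = 128):
        super().__init__()
        # Input: 6-axis IMU samples (3 gyro + 3 accel) per time step.
        self.lstm = nn.LSTM(input_size=6, hidden_size=hidden_dim,
                            batch_first=True)
        # Output: 6-D bias estimate (3 gyro bias + 3 accel bias).
        self.head = nn.Linear(hidden_dim, 6)

    def forward(self, imu_window: torch.Tensor) -> torch.Tensor:
        # imu_window: (batch, seq_len, 6) raw IMU measurements.
        features, _ = self.lstm(imu_window)
        # Predict the bias at the end of the window from the last
        # hidden state.
        return self.head(features[:, -1, :])

# Usage: predict a bias from a 1-second window at an assumed 200 Hz.
model = BiasLSTM()
bias = model(torch.randn(1, 200, 6))  # -> shape (1, 6)
```

In a factor-graph VIO backend, such a predicted bias could serve as a prior or between-factor on the bias variables, in place of (or alongside) the standard random-walk factor; the exact factor formulation is not specified in the abstract.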
