Paper Title
RaLL: End-to-end Radar Localization on Lidar Map Using Differentiable Measurement Model
Paper Authors
Paper Abstract
Compared to onboard cameras and laser scanners, radar sensors provide lighting- and weather-invariant sensing, which is naturally suitable for long-term localization under adverse conditions. However, radar data is sparse and noisy, which makes radar mapping challenging. On the other hand, the most popular maps currently available are built by lidar. In this paper, we propose an end-to-end deep learning framework for Radar Localization on Lidar Map (RaLL) to bridge this gap, which not only achieves robust radar localization but also exploits mature lidar mapping techniques, thus reducing the cost of radar mapping. We first embed both sensor modalities into a common feature space with a neural network. Then multiple offsets are applied to the map modality for exhaustive similarity evaluation against the current radar modality, yielding a regression of the current pose. Finally, we apply this differentiable measurement model in a Kalman Filter (KF) to learn the whole sequential localization process in an end-to-end manner. \textit{The whole learning system is differentiable, with the network-based measurement model at the front-end and the KF at the back-end.} To validate the feasibility and effectiveness, we employ multi-session multi-scene datasets collected from the real world, and the results demonstrate that our proposed system achieves superior performance over $90km$ of driving, even in a generalization scenario where the model is trained in the UK and tested in South Korea. We also release the source code publicly.
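The pipeline described in the abstract, offset-based exhaustive similarity evaluation regressing a pose measurement that is then fused by a Kalman filter, can be illustrated with a toy sketch. This is not the authors' actual network: it assumes a soft-argmax over candidate offsets as the differentiable pose regression, a 2-D pose for brevity, and an identity measurement matrix in the KF update; all function names here are illustrative.

```python
import numpy as np

def measure_pose(prior_pose, offsets, similarity_scores):
    """Soft-argmax over exhaustively evaluated offsets: a
    softmax-weighted average of candidate offsets acts as a
    differentiable pose regression (illustrative stand-in for
    the network-based measurement model)."""
    w = np.exp(similarity_scores - similarity_scores.max())
    w /= w.sum()
    return prior_pose + w @ offsets  # measured pose z

def kf_update(mu, P, z, R):
    """Standard KF update with identity measurement matrix:
    fuse the predicted pose mu (covariance P) with the
    measurement z (covariance R)."""
    K = P @ np.linalg.inv(P + R)       # Kalman gain
    mu_new = mu + K @ (z - mu)         # posterior mean
    P_new = (np.eye(len(mu)) - K) @ P  # posterior covariance
    return mu_new, P_new

# Toy example: three candidate x-offsets around a 2-D prior pose.
offsets = np.array([[-1.0, 0.0], [0.0, 0.0], [1.0, 0.0]])
scores = np.array([0.1, 2.0, 0.1])  # similarity per offset
z = measure_pose(np.zeros(2), offsets, scores)
mu, P = kf_update(np.zeros(2), np.eye(2), z, 0.5 * np.eye(2))
```

Because both the soft-argmax and the KF update are composed of differentiable operations, gradients can flow from a localization loss back through the filter into the measurement network, which is the end-to-end property the abstract emphasizes.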