Paper Title

2D LiDAR and Camera Fusion Using Motion Cues for Indoor Layout Estimation

Authors

Jieyu Li, Robert Stevenson

Abstract

This paper presents a novel indoor layout estimation system based on the fusion of 2D LiDAR and intensity camera data. A ground robot explores an indoor space with a single floor and vertical walls, and collects a sequence of intensity images and 2D LiDAR scans. The LiDAR provides accurate depth information, while the camera captures high-resolution data for semantic interpretation. The alignment of sensor outputs and image segmentation are computed jointly, by aligning LiDAR points, as samples of the room contour, to ground-wall boundaries in the images. The alignment problem is decoupled into a top-down view projection and a 2D similarity transformation estimation, which can be solved using the vertical vanishing point and the motion of the two sensors. A recursive random sample consensus algorithm is implemented to generate, evaluate, and optimize multiple hypotheses over the sequential measurements. The system allows the geometric interpretations from the different sensors to be analyzed jointly without offline calibration. The ambiguity in ground-wall boundary extraction from images is removed with the assistance of LiDAR observations, improving the accuracy of semantic segmentation. Localization and mapping are refined using the fused data, enabling the system to work reliably in scenes with low texture or few geometric features.
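The abstract decouples the LiDAR-to-image alignment into a top-down view projection followed by a 2D similarity transformation, with a RANSAC-style scheme handling multiple hypotheses over sequential measurements. The sketch below is not the authors' implementation; it only illustrates the 2D similarity step, assuming the problem has already been reduced to matching two planar point sets (top-down-projected LiDAR contour samples vs. ground-wall boundary samples from the image). It uses a closed-form Umeyama-style fit and a plain RANSAC loop in place of the paper's recursive multi-hypothesis algorithm; all function names, thresholds, and data are illustrative.

```python
# Minimal sketch (not the authors' code): estimate a 2D similarity transform
# dst ~= s * R @ src + t between two planar point sets, robustly via RANSAC.
import numpy as np

def fit_similarity_2d(src, dst):
    """Closed-form (Umeyama-style) 2D similarity fit from paired points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    # Correct for a possible reflection so R is a proper rotation.
    D = np.diag([1.0, np.sign(np.linalg.det(U) * np.linalg.det(Vt))])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

def ransac_similarity_2d(src, dst, iters=200, tol=0.05, seed=0):
    """Generate hypotheses from minimal 2-point samples and keep the one
    with the most inliers (residual below `tol`, in map units)."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, 0
    for _ in range(iters):
        idx = rng.choice(len(src), size=2, replace=False)
        s, R, t = fit_similarity_2d(src[idx], dst[idx])
        resid = np.linalg.norm((s * src) @ R.T + t - dst, axis=1)
        n_inliers = int((resid < tol).sum())
        if n_inliers > best_inliers:
            best, best_inliers = (s, R, t), n_inliers
    return best, best_inliers

if __name__ == "__main__":
    # Synthetic check: a known similarity plus small noise should be recovered.
    rng = np.random.default_rng(1)
    src = rng.uniform(-5, 5, size=(100, 2))
    theta, scale, trans = 0.3, 1.2, np.array([0.5, -2.0])
    R_true = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
    dst = scale * src @ R_true.T + trans + rng.normal(0, 0.01, size=src.shape)
    (s, R, t), n = ransac_similarity_2d(src, dst)
    print(f"scale={s:.3f}, inliers={n}, t={t.round(3)}")
```

In the paper's pipeline the correspondences would come from the top-down projection (driven by the vertical vanishing point) and from sensor motion across frames; here both point sets are simply given as 2D arrays for illustration.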
