Paper Title
LAPTNet: LiDAR-Aided Perspective Transform Network
Paper Authors
Paper Abstract
Semantic grids are a useful representation of the environment around a robot. They can be used in autonomous vehicles to concisely represent the scene around the car, capturing vital information for downstream tasks like navigation or collision assessment. Information from different sensors can be used to generate these grids. Some methods rely only on RGB images, whereas others choose to incorporate information from other sensors, such as radar or LiDAR. In this paper, we present an architecture that fuses LiDAR and camera information to generate semantic grids. By using the 3D information from a LiDAR point cloud, the LiDAR-Aided Perspective Transform Network (LAPTNet) is able to associate features in the camera plane to the bird's eye view without having to predict any depth information about the scene. Compared to state-of-the-art camera-only methods, LAPTNet achieves an improvement of up to 8.8 points (or 38.13%) over competing approaches for the classes proposed in the NuScenes dataset validation split.
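To make the core idea concrete, the following is a minimal sketch (not the authors' code) of a LiDAR-aided perspective transform: LiDAR points supply the 3D geometry needed to map image-plane features into a bird's-eye-view (BEV) grid, so no depth has to be predicted. Function names, tensor shapes, and the per-cell mean pooling are illustrative assumptions.

```python
import numpy as np

def lidar_aided_bev_features(points_ego, feats, K, T_cam_from_ego,
                             bev_range=(-50.0, 50.0), bev_res=0.5):
    """Splat image features into a BEV grid using LiDAR geometry.

    points_ego     : (N, 3) LiDAR points in the ego/vehicle frame.
    feats          : (C, H, W) image feature map (assumed to share the
                     image resolution, for simplicity of this sketch).
    K              : (3, 3) camera intrinsics.
    T_cam_from_ego : (4, 4) transform from the ego frame to the camera frame.
    """
    C, H, W = feats.shape
    lo, hi = bev_range
    n_cells = int((hi - lo) / bev_res)

    # 1. Transform LiDAR points into the camera frame.
    pts_h = np.concatenate([points_ego, np.ones((len(points_ego), 1))], axis=1)
    pts_cam = (T_cam_from_ego @ pts_h.T).T[:, :3]

    # 2. Project onto the image plane; keep only points in front of the camera.
    in_front = pts_cam[:, 2] > 1e-3
    uvw = (K @ pts_cam[in_front].T).T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)
    pts_kept = points_ego[in_front]

    valid = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    uv, pts_kept = uv[valid], pts_kept[valid]

    # 3. Gather the image feature at each projected pixel.
    point_feats = feats[:, uv[:, 1], uv[:, 0]].T            # (M, C)

    # 4. Scatter into BEV cells indexed by each point's ego-frame (x, y).
    ix = ((pts_kept[:, 0] - lo) / bev_res).astype(int)
    iy = ((pts_kept[:, 1] - lo) / bev_res).astype(int)
    in_grid = (ix >= 0) & (ix < n_cells) & (iy >= 0) & (iy < n_cells)
    ix, iy, point_feats = ix[in_grid], iy[in_grid], point_feats[in_grid]

    bev = np.zeros((C, n_cells, n_cells))
    counts = np.zeros((n_cells, n_cells))
    for f, x, y in zip(point_feats, ix, iy):
        bev[:, y, x] += f
        counts[y, x] += 1
    bev /= np.maximum(counts, 1)                             # mean-pool per cell
    return bev
```

In this sketch the BEV features would then be consumed by a segmentation head to predict the semantic grid; the paper's actual architecture and fusion details may differ.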