Paper Title
MSMDFusion: Fusing LiDAR and Camera at Multiple Scales with Multi-Depth Seeds for 3D Object Detection
Paper Authors
Paper Abstract
Fusing LiDAR and camera information is essential for achieving accurate and reliable 3D object detection in autonomous driving systems. This is challenging due to the difficulty of combining multi-granularity geometric and semantic features from two drastically different modalities. Recent approaches aim at exploring the semantic densities of camera features through lifting points in 2D camera images (referred to as seeds) into 3D space, and then incorporate 2D semantics via cross-modal interaction or fusion techniques. However, depth information is under-investigated in these approaches when lifting points into 3D space, thus 2D semantics cannot be reliably fused with 3D points. Moreover, their multi-modal fusion strategy, which is implemented as concatenation or attention, either cannot effectively fuse 2D and 3D information or is unable to perform fine-grained interactions in the voxel space. To this end, we propose a novel framework with better utilization of the depth information and fine-grained cross-modal interaction between LiDAR and camera, which consists of two important components. First, a Multi-Depth Unprojection (MDU) method with depth-aware designs is used to enhance the depth quality of the lifted points at each interaction level. Second, a Gated Modality-Aware Convolution (GMA-Conv) block is applied to modulate voxels involved with the camera modality in a fine-grained manner and then aggregate multi-modal features into a unified space. Together they provide the detection head with more comprehensive features from LiDAR and camera. On the nuScenes test benchmark, our proposed method, abbreviated as MSMDFusion, achieves state-of-the-art 3D object detection results with 71.5% mAP and 74.0% NDS, and strong tracking results with 74.0% AMOTA without using test-time augmentation and ensemble techniques. The code is available at https://github.com/SxJyJay/MSMDFusion.
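To make the multi-depth unprojection idea concrete, the following is a minimal sketch (not the authors' MDU implementation) of lifting 2D seed pixels into 3D at several candidate depths, so that each seed contributes multiple 3D hypotheses carrying its camera feature. The function name `lift_seeds`, the uniform candidate depths, and the identity camera-to-LiDAR transform in the usage example are illustrative assumptions.

```python
# Minimal sketch of multi-depth unprojection of 2D seed pixels (illustrative,
# not the authors' implementation): each pixel is back-projected along its
# camera ray at several candidate depths, and its 2D semantic feature is
# replicated for every resulting 3D point.
import numpy as np

def lift_seeds(uv, feats, depth_candidates, K, cam_to_lidar):
    """Unproject 2D seed pixels into 3D at multiple candidate depths.

    uv:               (N, 2) pixel coordinates of seed points.
    feats:            (N, C) 2D semantic features sampled at those pixels.
    depth_candidates: (D,) candidate depths in meters.
    K:                (3, 3) camera intrinsic matrix.
    cam_to_lidar:     (4, 4) homogeneous camera-to-LiDAR transform.

    Returns (N*D, 3) points in the LiDAR frame and (N*D, C) features,
    i.e. each seed yields D depth hypotheses.
    """
    K_inv = np.linalg.inv(K)
    ones = np.ones((uv.shape[0], 1))
    rays = (K_inv @ np.hstack([uv, ones]).T).T            # (N, 3) rays with z = 1
    pts_cam = rays[:, None, :] * depth_candidates[None, :, None]  # (N, D, 3)
    pts_cam = pts_cam.reshape(-1, 3)
    pts_hom = np.hstack([pts_cam, np.ones((pts_cam.shape[0], 1))])
    pts_lidar = (cam_to_lidar @ pts_hom.T).T[:, :3]       # transform to LiDAR frame
    feats_rep = np.repeat(feats, len(depth_candidates), axis=0)
    return pts_lidar, feats_rep

# Usage: two seeds, three depth hypotheses each (hypothetical values).
uv = np.array([[320.0, 240.0], [100.0, 200.0]])
feats = np.random.randn(2, 16)
depths = np.array([5.0, 10.0, 20.0])
K = np.array([[1000.0, 0.0, 320.0], [0.0, 1000.0, 240.0], [0.0, 0.0, 1.0]])
cam_to_lidar = np.eye(4)
pts, f = lift_seeds(uv, feats, depths, K, cam_to_lidar)
print(pts.shape, f.shape)  # (6, 3) (6, 16)
```

In MSMDFusion the lifted multi-depth seeds are then voxelized and fused with LiDAR voxel features through the GMA-Conv block; the sketch above only covers the geometric lifting step.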