Paper Title

MaskRange: A Mask-classification Model for Range-view based LiDAR Segmentation

Authors

Yi Gu, Yuming Huang, Chengzhong Xu, Hui Kong

Abstract

Range-view based LiDAR segmentation methods are attractive for practical applications due to their direct inheritance from efficient 2D CNN architectures. In literature, most range-view based methods follow the per-pixel classification paradigm. Recently, in the image segmentation domain, another paradigm formulates segmentation as a mask-classification problem and has achieved remarkable performance. This raises an interesting question: can the mask-classification paradigm benefit the range-view based LiDAR segmentation and achieve better performance than the counterpart per-pixel paradigm? To answer this question, we propose a unified mask-classification model, MaskRange, for the range-view based LiDAR semantic and panoptic segmentation. Along with the new paradigm, we also propose a novel data augmentation method to deal with overfitting, context-reliance, and class-imbalance problems. Extensive experiments are conducted on the SemanticKITTI benchmark. Among all published range-view based methods, our MaskRange achieves state-of-the-art performance with $66.10$ mIoU on semantic segmentation and promising results with $53.10$ PQ on panoptic segmentation with high efficiency. Our code will be released.
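To make the mask-classification paradigm mentioned in the abstract concrete, below is a minimal sketch of how such a model's outputs can be turned into per-pixel semantic predictions on a range image, assuming the standard MaskFormer-style inference step: each query predicts a class distribution plus a binary mask, and the two are combined into per-pixel class scores. The function name `masks_to_semantic`, the query count of 100, and the 64 x 2048 range-image resolution are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code) of MaskFormer-style semantic inference
# on a range image. Shapes are illustrative: 100 queries, 19 classes
# (as in SemanticKITTI) plus one "no object" slot, 64 x 2048 range image.
import torch

def masks_to_semantic(class_logits: torch.Tensor,
                      mask_logits: torch.Tensor) -> torch.Tensor:
    """
    class_logits: [num_queries, num_classes + 1]  (last index = "no object")
    mask_logits:  [num_queries, H, W]             (per-query binary mask logits)
    returns:      [num_classes, H, W]             (per-pixel class scores)
    """
    # Per-query class probabilities, dropping the "no object" slot.
    cls_prob = class_logits.softmax(dim=-1)[:, :-1]   # [Q, C]
    # Per-query, per-pixel mask probabilities.
    mask_prob = mask_logits.sigmoid()                  # [Q, H, W]
    # A pixel's score for class c is the sum over queries of
    # (prob. the query is class c) * (prob. the pixel belongs to its mask).
    return torch.einsum("qc,qhw->chw", cls_prob, mask_prob)

# Usage with random tensors standing in for real network outputs.
scores = masks_to_semantic(torch.randn(100, 20), torch.randn(100, 64, 2048))
pred = scores.argmax(dim=0)   # [64, 2048] per-pixel semantic labels
```

Because the per-pixel prediction falls out of this final aggregation step, the same set of query outputs can also be grouped into instance masks, which is what allows a single mask-classification model to serve both semantic and panoptic segmentation.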
