Paper Title
SegGroup: Seg-Level Supervision for 3D Instance and Semantic Segmentation
Paper Authors
Paper Abstract
Most existing point cloud instance and semantic segmentation methods rely heavily on strong supervision signals, which require a point-level label for every point in the scene. However, such strong supervision incurs large annotation costs, raising the need to study efficient annotation. In this paper, we discover that the locations of instances matter for both instance and semantic 3D scene segmentation. By fully exploiting these locations, we design a weakly supervised point cloud segmentation method that only requires clicking on one point per instance to indicate its location for annotation. Using over-segmentation as pre-processing, we extend these location annotations into segments as seg-level labels. We further design a segment grouping network (SegGroup) to generate point-level pseudo labels under seg-level supervision by hierarchically grouping unlabeled segments into relevant nearby labeled segments, so that existing point-level supervised segmentation models can directly consume these pseudo labels for training. Experimental results show that our seg-level supervised method (SegGroup) achieves results comparable to fully annotated point-level supervised methods. Moreover, it outperforms recent weakly supervised methods under a fixed annotation budget. Code is available at https://github.com/AnTao97/SegGroup.
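The pipeline the abstract describes (expand one click per instance into a seg-level label via over-segmentation, then group unlabeled segments into nearby labeled ones to obtain point-level pseudo labels) can be illustrated with a minimal sketch. This is not the paper's method: SegGroup learns grouping scores from segment features with a network, whereas the sketch below uses a greedy nearest-centroid rule with a hypothetical distance threshold; all function names are illustrative.

```python
import numpy as np

def clicks_to_seg_labels(seg_ids, click_indices, click_labels):
    """Expand one-click-per-instance annotations into seg-level labels.

    seg_ids: (N,) segment id per point from an over-segmentation.
    click_indices: (K,) point index of each annotated click.
    click_labels: (K,) label of each click.
    Returns a dict mapping segment id -> label.
    """
    seg_labels = {}
    for idx, label in zip(click_indices, click_labels):
        seg_labels[int(seg_ids[idx])] = label
    return seg_labels

def group_segments(points, seg_ids, seg_labels, num_rounds=10, max_dist=0.5):
    """Greedy stand-in for the SegGroup network: in each round, assign an
    unlabeled segment the label of its nearest labeled segment centroid,
    so labels spread outward hierarchically from the clicked segments.

    points: (N, 3) point coordinates; max_dist is a hypothetical threshold.
    Returns (N,) point-level pseudo labels (-1 for still-unlabeled points).
    """
    unique_segs = np.unique(seg_ids)
    centroids = {s: points[seg_ids == s].mean(axis=0) for s in unique_segs}
    labels = dict(seg_labels)
    for _ in range(num_rounds):
        labeled = [s for s in unique_segs if s in labels]
        changed = False
        for s in unique_segs:
            if s in labels:
                continue
            dists = [np.linalg.norm(centroids[s] - centroids[t]) for t in labeled]
            nearest = labeled[int(np.argmin(dists))]
            if min(dists) < max_dist:
                labels[s] = labels[nearest]
                changed = True
        if not changed:
            break
    return np.array([labels.get(int(s), -1) for s in seg_ids])
```

Under these assumptions, `group_segments(points, seg_ids, clicks_to_seg_labels(seg_ids, click_indices, click_labels))` yields dense pseudo labels that a standard point-level supervised segmentation model could consume for training, mirroring the role the pseudo labels play in the paper.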