Paper Title
OccuSeg: Occupancy-aware 3D Instance Segmentation
Paper Authors
Paper Abstract
3D instance segmentation, with a variety of applications in robotics and augmented reality, is in high demand these days. Unlike 2D images, which are projective observations of the environment, 3D models provide a metric reconstruction of the scene without occlusion or scale ambiguity. In this paper, we define the "3D occupancy size" as the number of voxels occupied by each instance. It can be predicted robustly, and on this basis we propose OccuSeg, an occupancy-aware 3D instance segmentation scheme. Our multi-task learning produces both an occupancy signal and embedding representations, where the training of spatial and feature embeddings varies with their difference in scale awareness. Our clustering scheme benefits from a reliable comparison between the predicted occupancy size and the clustered occupancy size, which encourages hard samples to be correctly clustered and avoids over-segmentation. The proposed approach achieves state-of-the-art performance on three real-world datasets, i.e. ScanNetV2, S3DIS and SceneNN, while maintaining high efficiency.
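As a minimal illustration of the "3D occupancy size" definition above (the voxel labels here are a hypothetical toy example, not data from the paper), the occupancy size of each instance is just a count of voxels per instance label:

```python
from collections import Counter

# Hypothetical per-voxel instance labels from a 3D reconstruction.
# The "3D occupancy size" of an instance is the number of voxels
# carrying that instance's label.
instance_labels = [0, 0, 1, 1, 1, 2]

occupancy_size = Counter(instance_labels)
print(dict(occupancy_size))  # {0: 2, 1: 3, 2: 1}
```

In the paper's scheme, the network regresses a per-voxel prediction of this quantity, which is then compared against the size of each cluster produced during grouping.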