Paper Title
CoMFormer: Continual Learning in Semantic and Panoptic Segmentation
Paper Authors
Paper Abstract
Continual learning for segmentation has recently seen increasing interest. However, all previous works focus on the narrower task of semantic segmentation and disregard panoptic segmentation, an important task with real-world impact. In this paper, we present the first continual learning model capable of operating on both semantic and panoptic segmentation. Inspired by recent transformer approaches that consider segmentation as a mask-classification problem, we design CoMFormer. Our method carefully exploits the properties of transformer architectures to learn new classes over time. Specifically, we propose a novel adaptive distillation loss along with a mask-based pseudo-labeling technique to effectively prevent forgetting. To evaluate our approach, we introduce a novel continual panoptic segmentation benchmark on the challenging ADE20K dataset. Our CoMFormer outperforms all the existing baselines by both forgetting old classes less and learning new classes more effectively. In addition, we report an extensive evaluation in the large-scale continual semantic segmentation scenario, showing that CoMFormer also significantly outperforms state-of-the-art methods.
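To make the pseudo-labeling idea concrete, the following is a minimal sketch of how mask-based pseudo-labels can be assembled in a continual setting. All names, the dictionary layout, and the confidence threshold are illustrative assumptions, not CoMFormer's actual procedure: at a new learning step, ground truth covers only the new classes, so masks for old classes are recovered from the previous model's confident predictions.

```python
# Hypothetical sketch of mask-based pseudo-labeling for continual
# segmentation. The data layout (dicts with class_id/score/mask) and
# the 0.7 threshold are illustrative assumptions, not the paper's
# exact method.

def pseudo_label(old_predictions, new_ground_truth, threshold=0.7):
    """Merge confident old-class predictions with new-class labels.

    Each entry is a dict: {"class_id": int, "score": float, "mask": ...}.
    Ground-truth entries are assigned score 1.0 by convention; old-class
    predictions are kept only when their confidence passes the threshold,
    so unreliable masks do not pollute the training signal.
    """
    labels = [dict(gt, score=1.0) for gt in new_ground_truth]
    for pred in old_predictions:
        if pred["score"] >= threshold:
            labels.append(pred)
    return labels


# Toy example: two old-class predictions (one confident, one not)
# and one new-class ground-truth mask.
old_preds = [
    {"class_id": 3, "score": 0.91, "mask": "mask_a"},   # kept as pseudo-label
    {"class_id": 5, "score": 0.40, "mask": "mask_b"},   # discarded (low score)
]
new_gt = [{"class_id": 12, "score": 1.0, "mask": "mask_c"}]

merged = pseudo_label(old_preds, new_gt)
# merged contains the new-class mask plus the confident old-class mask
```

The key design point this illustrates is that the old model acts as a teacher for classes that are absent from the current step's annotations, which is what counters forgetting in label-incremental segmentation.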