Paper Title

PLOP: Learning without Forgetting for Continual Semantic Segmentation

Authors

Arthur Douillard, Yifu Chen, Arnaud Dapogny, Matthieu Cord

Abstract

Deep learning approaches are nowadays ubiquitously used to tackle computer vision tasks such as semantic segmentation, requiring large datasets and substantial computational power. Continual learning for semantic segmentation (CSS) is an emerging trend that consists in updating an old model by sequentially adding new classes. However, continual learning methods are usually prone to catastrophic forgetting. This issue is further aggravated in CSS where, at each step, old classes from previous iterations are collapsed into the background. In this paper, we propose Local POD, a multi-scale pooling distillation scheme that preserves long- and short-range spatial relationships at feature level. Furthermore, we design an entropy-based pseudo-labelling of the background w.r.t. classes predicted by the old model to deal with background shift and avoid catastrophic forgetting of the old classes. Our approach, called PLOP, significantly outperforms state-of-the-art methods in existing CSS scenarios, as well as in newly proposed challenging benchmarks.
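The abstract describes Local POD as a multi-scale pooling distillation scheme that matches old- and new-model features pooled along the width and height axes, over sub-regions of increasing granularity. Below is a minimal PyTorch sketch of that idea; the function names (`pod_embedding`, `local_pod_loss`) and the scale set `(1, 2, 4)` are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def pod_embedding(feat: torch.Tensor) -> torch.Tensor:
    """Pool a feature map (B, C, H, W) along width and height and
    concatenate the two pooled statistics into one flat embedding."""
    w_pooled = feat.mean(dim=3)  # average over width  -> (B, C, H)
    h_pooled = feat.mean(dim=2)  # average over height -> (B, C, W)
    return torch.cat([w_pooled, h_pooled], dim=2).flatten(1)  # (B, C*(H+W))

def local_pod_loss(feat_old: torch.Tensor,
                   feat_new: torch.Tensor,
                   scales=(1, 2, 4)) -> torch.Tensor:
    """Multi-scale pooling distillation (hypothetical sketch): compare POD
    embeddings of old- and new-model features over a grid of sub-regions.
    Scale 1 captures long-range relations over the whole map; finer grids
    (2x2, 4x4) capture short-range, local relations."""
    loss = feat_old.new_zeros(())
    _, _, H, W = feat_old.shape
    for s in scales:
        h_step, w_step = H // s, W // s
        for i in range(s):
            for j in range(s):
                ro = feat_old[:, :, i*h_step:(i+1)*h_step, j*w_step:(j+1)*w_step]
                rn = feat_new[:, :, i*h_step:(i+1)*h_step, j*w_step:(j+1)*w_step]
                loss = loss + (pod_embedding(ro) - pod_embedding(rn)).pow(2).mean()
    return loss / len(scales)
```

In a continual step, `feat_old` would come from the frozen previous-step model and `feat_new` from the model being trained, with this loss added to the segmentation objective to penalise drift in spatial statistics.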
