Title

Overcoming Catastrophic Forgetting in Incremental Object Detection via Elastic Response Distillation

Authors

Tao Feng, Mang Wang, Hangjie Yuan

Abstract

Traditional object detectors are ill-equipped for incremental learning, and fine-tuning a well-trained detection model directly on new data alone leads to catastrophic forgetting. Knowledge distillation is a flexible way to mitigate catastrophic forgetting. In Incremental Object Detection (IOD), previous work mainly focuses on distilling a combination of features and responses; however, it under-explores the information contained in responses. In this paper, we propose a response-based incremental distillation method, dubbed Elastic Response Distillation (ERD), which focuses on elastically learning responses from the classification head and the regression head. First, our method transfers category knowledge while equipping the student detector with the ability to retain localization information during incremental learning. In addition, we evaluate the quality of all locations and provide valuable responses via the Elastic Response Selection (ERS) strategy. Finally, we elucidate that knowledge from different responses should be assigned different importance during incremental distillation. Extensive experiments conducted on MS COCO demonstrate that our method achieves state-of-the-art results, substantially narrowing the performance gap towards full training.
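To make the response-distillation idea concrete, below is a minimal PyTorch sketch of distilling classification and regression responses from a frozen teacher detector onto a student. It is an illustration under simplifying assumptions, not the paper's exact method: the function name, the keep_ratio parameter, and the temperature-scaled KL term are hypothetical, and ranking locations by the teacher's maximum class confidence is only a stand-in for the paper's ERS statistic and per-response importance weighting.

```python
import torch
import torch.nn.functional as F

def response_distillation_loss(
    stu_cls_logits,   # (N, C) student classification responses
    tea_cls_logits,   # (N, C) teacher classification responses
    stu_bbox_pred,    # (N, 4) student regression responses
    tea_bbox_pred,    # (N, 4) teacher regression responses
    keep_ratio=0.1,   # fraction of locations distilled (hypothetical stand-in for ERS)
    temperature=2.0,  # softening temperature for the classification KD term
    cls_weight=1.0,   # relative importance of the classification-head response
    reg_weight=1.0,   # relative importance of the regression-head response
):
    """Distill classification and regression responses from a frozen teacher.

    Simplified sketch: locations are ranked by the teacher's maximum class
    confidence and only the top fraction is distilled. The real ERS selection
    and the elastic per-head statistics follow the paper, not this code.
    """
    with torch.no_grad():
        # Rank candidate locations by teacher confidence; keep the most informative.
        tea_scores = tea_cls_logits.softmax(dim=-1).max(dim=-1).values
        num_keep = max(1, int(keep_ratio * tea_scores.numel()))
        keep_idx = tea_scores.topk(num_keep).indices

    # Classification head: KL divergence between temperature-softened distributions.
    t = temperature
    cls_loss = F.kl_div(
        F.log_softmax(stu_cls_logits[keep_idx] / t, dim=-1),
        F.softmax(tea_cls_logits[keep_idx] / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)

    # Regression head: retain the teacher's localization knowledge
    # on the same selected locations.
    reg_loss = F.smooth_l1_loss(stu_bbox_pred[keep_idx], tea_bbox_pred[keep_idx])

    return cls_weight * cls_loss + reg_weight * reg_loss
```

In incremental training, a term like this would be added to the standard detection loss computed on the new-class annotations, so the student learns new categories while the distillation term anchors its responses on the old ones; the separate cls_weight and reg_weight knobs reflect the abstract's point that responses from different heads should carry different importance.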
