Paper Title
Multi-Faceted Distillation of Base-Novel Commonality for Few-shot Object Detection
Paper Authors
Paper Abstract
Most existing methods for few-shot object detection follow the fine-tuning paradigm, which implicitly assumes that class-agnostic, generalizable knowledge can be learned from base classes with abundant samples and transferred to novel classes with limited samples via such a two-stage training strategy. However, this is not necessarily true, since the object detector can hardly distinguish class-agnostic knowledge from class-specific knowledge automatically without explicit modeling. In this work, we propose to explicitly learn three types of class-agnostic commonalities between base and novel classes: recognition-related semantic commonalities, localization-related semantic commonalities, and distribution commonalities. We design a unified distillation framework based on a memory bank, which is able to distill all three types of commonalities jointly and efficiently. Extensive experiments demonstrate that our method can be readily integrated into most existing fine-tuning-based methods and consistently improves performance by a large margin.
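The abstract does not spell out the distillation mechanics. Below is a minimal, illustrative sketch of one plausible ingredient: a memory bank of per-class prototype features, and a distillation loss that pulls a student RoI feature's soft similarity distribution over the prototypes toward that of a base-trained teacher. All names and choices here (MemoryBank, commonality_distill, EMA momentum, cosine similarity, temperature tau, KL divergence) are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of memory-bank-based commonality distillation (not the paper's code).
import torch
import torch.nn.functional as F


class MemoryBank:
    def __init__(self, num_classes: int, feat_dim: int, momentum: float = 0.9):
        # One running prototype per class, updated with an exponential moving average.
        self.protos = torch.zeros(num_classes, feat_dim)
        self.momentum = momentum

    @torch.no_grad()
    def update(self, feats: torch.Tensor, labels: torch.Tensor):
        # feats: (N, feat_dim) RoI features; labels: (N,) class indices.
        for c in labels.unique():
            mean_feat = feats[labels == c].mean(dim=0)
            self.protos[c] = self.momentum * self.protos[c] + (1 - self.momentum) * mean_feat


def commonality_distill(student_feats, teacher_feats, bank: MemoryBank, tau: float = 0.1):
    # Soft similarity of each RoI feature to every class prototype in the memory bank.
    protos = F.normalize(bank.protos, dim=1)
    s = F.normalize(student_feats, dim=1) @ protos.t() / tau  # student similarities
    t = F.normalize(teacher_feats, dim=1) @ protos.t() / tau  # teacher (base-trained) similarities
    # KL divergence aligns the student's distribution with the teacher's,
    # transferring class-agnostic structure captured by the shared prototypes.
    return F.kl_div(F.log_softmax(s, dim=1), F.softmax(t, dim=1), reduction="batchmean")
```

In this reading, the same prototype bank could serve the three commonality types by pairing it with different feature heads (classification, localization) and a distribution-matching term, which is consistent with the abstract's claim that one memory-bank framework handles all three distillations jointly.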