Paper Title
Class-Incremental Learning with Cross-Space Clustering and Controlled Transfer
Paper Authors
Paper Abstract
In class-incremental learning, the model is expected to learn new classes continually while maintaining knowledge of previous classes. The challenge here lies in preserving the model's ability to effectively represent prior classes in the feature space, while adapting it to represent incoming new classes. We propose two distillation-based objectives for class-incremental learning that leverage the structure of the feature space both to maintain accuracy on previous classes and to enable learning of new classes. Our first objective, termed cross-space clustering (CSC), uses the feature space structure of the previous model to characterize directions of optimization that maximally preserve a class: directions that all instances of a specific class should collectively optimize towards, and those that they should collectively optimize away from. Apart from minimizing forgetting, this indirectly encourages the model to cluster all instances of a class in the current feature space, and gives rise to a sense of "herd immunity", allowing all samples of a class to jointly combat the forgetting of that class. Our second objective, termed controlled transfer (CT), tackles incremental learning from an understudied perspective of inter-class transfer. CT explicitly approximates the semantic similarities between incrementally arriving classes and prior classes, and conditions the current model on them. This allows the model to learn new classes in a way that maximizes positive forward transfer from similar prior classes, thus increasing plasticity, and minimizes negative backward transfer onto dissimilar prior classes, thereby strengthening stability. We perform extensive experiments on two benchmark datasets, adding our method (CSCCT) on top of three prominent class-incremental learning methods, and observe consistent performance improvements across a variety of experimental settings.
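To make the two objectives more concrete, below is a minimal PyTorch sketch of how such distillation terms could look. It is based only on the description in the abstract: the exact formulations, temperatures, and weightings in the paper may differ, and all function names and arguments here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def cross_space_clustering_loss(feats_cur, feats_prev, labels):
    """Sketch of CSC: pull all current-model features of a class toward that
    class's centroid computed in the (frozen) previous model's feature space,
    so the whole class is optimized collectively in a shared direction."""
    loss, count = feats_cur.new_zeros(()), 0
    for c in labels.unique():
        mask = labels == c
        # Class centroid in the previous feature space (target direction).
        target = F.normalize(feats_prev[mask].mean(dim=0), dim=0).detach()
        cur = F.normalize(feats_cur[mask], dim=1)
        # Maximize cosine similarity of every instance to the centroid.
        loss = loss + (1.0 - cur @ target).sum()
        count += int(mask.sum())
    return loss / max(count, 1)

def controlled_transfer_loss(logits_cur, logits_prev, n_old_classes, tau=2.0):
    """Sketch of CT: use the previous model's distribution over *old* classes
    (for new-class samples) as a proxy for semantic similarity, and keep the
    current model's distribution over old classes close to it."""
    target = F.softmax(logits_prev[:, :n_old_classes] / tau, dim=1).detach()
    log_pred = F.log_softmax(logits_cur[:, :n_old_classes] / tau, dim=1)
    return F.kl_div(log_pred, target, reduction="batchmean") * tau ** 2
```

In practice, terms like these would be weighted and added to the base method's classification and distillation losses, which is consistent with the abstract's description of CSCCT as a plug-in on top of existing class-incremental learning methods.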