Paper Title
Better Knowledge Retention through Metric Learning
Paper Authors
Paper Abstract
In continual learning, new categories may be introduced over time, and an ideal learning system should perform well on both the original categories and the new ones. While deep neural nets have achieved resounding success in the classical supervised setting, they are known to forget knowledge acquired in prior episodes of learning if the examples encountered in the current episode differ drastically from those encountered in prior episodes. In this paper, we propose a new method that both leverages the expressive power of deep neural nets and is resilient to forgetting when new categories are introduced. We find that the proposed method reduces forgetting by 2.3x to 6.9x on CIFAR-10 compared to existing methods, and by 1.8x to 2.7x on ImageNet compared to an oracle baseline.
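The abstract names metric learning as the mechanism for retaining knowledge, but does not spell out the loss used. As a generic illustration only (not the paper's actual objective), a common metric-learning loss is the triplet margin loss, which pulls same-class embeddings together and pushes different-class embeddings apart. A minimal sketch, assuming NumPy; `triplet_loss` and the toy embeddings are hypothetical:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet margin loss: encourage the anchor to be closer to a
    same-class (positive) embedding than to a different-class (negative)
    embedding by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)  # distance to same class
    d_neg = np.linalg.norm(anchor - negative)  # distance to other class
    return max(0.0, d_pos - d_neg + margin)

# Toy 2-D embeddings: the positive is near the anchor, the negative is far,
# so the margin is already satisfied and the loss is zero.
anchor = np.array([0.0, 0.0])
positive = np.array([0.1, 0.0])
negative = np.array([3.0, 0.0])

loss = triplet_loss(anchor, positive, negative)
```

Losses of this kind shape the embedding space directly rather than the classifier head, which is one plausible reason metric-learning objectives can be less prone to catastrophic forgetting when new categories arrive.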