Paper Title

Generative Negative Replay for Continual Learning

Paper Authors

Gabriele Graffieti, Davide Maltoni, Lorenzo Pellegrini, Vincenzo Lomonaco

Paper Abstract

Learning continually is a key aspect of intelligence and a necessary ability to solve many real-life problems. One of the most effective strategies to control catastrophic forgetting, the Achilles' heel of continual learning, is storing part of the old data and replaying them interleaved with new experiences (also known as the replay approach). Generative replay, which uses generative models to provide replay patterns on demand, is particularly intriguing; however, it was shown to be effective mainly under simplified assumptions, such as simple scenarios and low-dimensional data. In this paper, we show that, while the generated data are usually not able to improve the classification accuracy for the old classes, they can be effective as negative examples (or antagonists) to better learn the new classes, especially when the learning experiences are small and contain examples of just one or a few classes. The proposed approach is validated on complex class-incremental and data-incremental continual learning scenarios (CORe50 and ImageNet-1000) composed of high-dimensional data and a large number of training experiences: a setup where existing generative replay approaches usually fail.
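
The abstract's core idea can be illustrated compactly: generated replay samples are never used as positive targets for the old classes (where their imperfect quality would hurt), but only to push down the new-class probabilities. Below is a minimal sketch assuming a PyTorch-style classifier; the function name `negative_replay_loss` and the exact form of the penalty are hypothetical illustrations of this general idea, not the authors' published loss.

```python
import torch
import torch.nn.functional as F

def negative_replay_loss(logits_real, targets_real, logits_gen, new_class_ids):
    """Hypothetical sketch of negative replay.

    Real samples from the current experience get standard cross-entropy.
    Generated (replayed) samples act only as negatives: the probability
    mass the model assigns them on the *new* classes is pushed down,
    while the old-class heads are never fit to imperfect generations.
    """
    # Standard supervised term on the current experience's real data.
    ce = F.cross_entropy(logits_real, targets_real)

    # Negative term: average probability that generated samples are
    # assigned to the new classes; minimizing it treats them as antagonists.
    probs_gen = F.softmax(logits_gen, dim=1)
    neg = probs_gen[:, new_class_ids].sum(dim=1).mean()

    return ce + neg

# Toy usage: a 10-way classifier where classes 8 and 9 are new in this experience.
logits_real = torch.randn(4, 10)
targets_real = torch.tensor([8, 9, 8, 9])
logits_gen = torch.randn(6, 10)   # logits computed on generator-replayed samples
loss = negative_replay_loss(logits_real, targets_real, logits_gen, [8, 9])
```

The design choice this mirrors is the asymmetry described in the abstract: generated data sharpen the decision boundary of the new classes without being trusted as faithful examples of the old ones.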
