Paper Title
Progressive Learning without Forgetting
Paper Authors
Paper Abstract
Learning from changing tasks and sequential experience without forgetting previously obtained knowledge is a challenging problem for artificial neural networks. In this work, we focus on two challenging problems in the paradigm of Continual Learning (CL) without involving any old data: (i) the accumulation of catastrophic forgetting caused by the gradually fading knowledge space from which the model learns previous knowledge; (ii) the uncontrolled tug-of-war dynamics between stability and plasticity during the learning of new tasks. To tackle these problems, we present Progressive Learning without Forgetting (PLwF) and a credit-assignment regime in the optimizer. PLwF densely introduces model functions from previous tasks to construct a knowledge space that contains the most reliable knowledge on each task together with the distribution information of different tasks, while credit assignment controls the tug-of-war dynamics by removing gradient conflict through projection. Extensive ablation experiments demonstrate the effectiveness of PLwF and credit assignment. In comparison with other CL methods, we report notably better results even without relying on any raw data.