Paper Title
Overcoming Catastrophic Forgetting via Direction-Constrained Optimization
Paper Authors
Paper Abstract
This paper studies a new design of the optimization algorithm for training deep learning models with a fixed architecture of the classification network in a continual learning framework. The training data is non-stationary, and the non-stationarity is imposed by a sequence of distinct tasks. We first analyze a deep model trained on only one learning task in isolation and identify a region in network parameter space where the model performance is close to the recovered optimum. We provide empirical evidence that this region resembles a cone that expands along the convergence direction. We study the principal directions of the trajectory of the optimizer after convergence and show that traveling along a few top principal directions can quickly bring the parameters outside the cone, but this is not the case for the remaining directions. We argue that catastrophic forgetting in a continual learning setting can be alleviated when the parameters are constrained to stay within the intersection of the plausible cones of the individual tasks encountered so far during training. Based on this observation, we present our direction-constrained optimization (DCO) method, where for each task we introduce a linear autoencoder to approximate its corresponding top forbidden principal directions. They are then incorporated into the loss function in the form of a regularization term for the purpose of learning the upcoming tasks without forgetting. Furthermore, in order to control memory growth as the number of tasks increases, we propose a memory-efficient version of our algorithm called compressed DCO (DCO-COMP) that allocates a memory of fixed size for storing all autoencoders. We empirically demonstrate that our algorithm performs favorably compared to other state-of-the-art regularization-based continual learning methods.
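To make the described mechanism concrete, below is a minimal, hypothetical PyTorch sketch of a DCO-style regularizer. It assumes that the forbidden directions of a finished task are approximated by a tied-weight linear autoencoder fitted to post-convergence parameter-update vectors, and that learning a new task adds a penalty on the displacement from each past task's converged parameters projected through that task's autoencoder. All names (`LinearAutoencoder`, `dco_penalty`, `anchors`) are illustrative placeholders and not taken from the paper's code.

```python
# Sketch only: one plausible reading of the DCO regularization idea, not the
# authors' implementation.
import torch
import torch.nn as nn


def flat_params(model):
    """Concatenate all model parameters into one detached 1-D snapshot vector."""
    return torch.cat([p.detach().reshape(-1) for p in model.parameters()])


class LinearAutoencoder(nn.Module):
    """Tied-weight linear autoencoder; after fitting, its code space roughly
    spans the top principal directions of the vectors it was trained on."""

    def __init__(self, dim, code_dim):
        super().__init__()
        self.encoder = nn.Linear(dim, code_dim, bias=False)

    def forward(self, x):
        code = self.encoder(x)             # project onto the learned directions
        return code @ self.encoder.weight  # reconstruct with tied (transposed) weights

    def fit(self, directions, epochs=200, lr=1e-2):
        """Fit on rows of `directions` (e.g. post-convergence update vectors)."""
        opt = torch.optim.Adam(self.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            loss = ((self(directions) - directions) ** 2).mean()
            loss.backward()
            opt.step()
        for p in self.parameters():        # freeze once fitted
            p.requires_grad_(False)


def dco_penalty(model, anchors, autoencoders):
    """Sum, over past tasks, of the squared projection of the current parameter
    displacement onto that task's (approximate) forbidden directions.

    `anchors[t]` is the flattened parameter snapshot at the end of task t and
    `autoencoders[t]` the autoencoder fitted for that task; both are
    illustrative placeholders, not identifiers from the paper.
    """
    theta = torch.cat([p.reshape(-1) for p in model.parameters()])
    penalty = theta.new_zeros(())
    for anchor, ae in zip(anchors, autoencoders):
        displacement = theta - anchor
        penalty = penalty + (ae(displacement) ** 2).sum()
    return penalty
```

Under these assumptions, the objective while training a new task would be the task loss plus `lambda * dco_penalty(model, anchors, autoencoders)`, with each completed task contributing one stored anchor and one autoencoder; the compressed DCO-COMP variant mentioned above would instead keep the total autoencoder storage at a fixed size, which is not shown in this sketch.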