Paper Title

Efficient Meta-Learning for Continual Learning with Taylor Expansion Approximation

Authors

Xiaohan Zou, Tong Lin

Abstract

Continual learning aims to alleviate catastrophic forgetting when handling consecutive tasks under non-stationary distributions. Gradient-based meta-learning algorithms have shown the capability to implicitly solve the transfer-interference trade-off between different examples. However, they still suffer from catastrophic forgetting in the continual learning setting, since data from previous tasks are no longer available. In this work, we propose a novel, efficient meta-learning algorithm for online continual learning, in which the regularization terms and learning rates are adapted based on a Taylor approximation of each parameter's importance, so as to mitigate forgetting. The proposed method expresses the gradient of the meta-loss in closed form and thus avoids computing second-order derivatives, which are computationally prohibitive. We also use Proximal Gradient Descent to further improve computational efficiency and accuracy. Experiments on diverse benchmarks show that our method achieves better or on-par performance with much higher efficiency compared to state-of-the-art approaches.
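
To make the general recipe in the abstract concrete, below is a minimal NumPy sketch of one way such an update could look: estimate each parameter's importance with a first-order Taylor approximation, use it to scale both a quadratic penalty anchored at the previous task's weights and the per-parameter learning rate, and apply the penalty through a closed-form proximal step. The specific importance estimate, penalty form, and learning-rate schedule here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical sketch of importance-weighted regularization with a proximal update.
# The importance estimate, regularizer, and learning-rate scaling are assumptions
# chosen for illustration; they are not taken from the paper itself.

def taylor_importance(grad, theta, theta_prev):
    """First-order Taylor estimate of how much the loss changes when each
    parameter moves: |g_i * (theta_i - theta_prev_i)|."""
    return np.abs(grad * (theta - theta_prev))

def proximal_update(theta, grad, theta_star, importance,
                    base_lr=0.1, reg_strength=1.0):
    """One update step: shrink the learning rate of important parameters,
    then apply the closed-form proximal operator of the quadratic penalty
    lam_i * (theta_i - theta_star_i)^2 anchored at the old-task weights."""
    lr = base_lr / (1.0 + importance)      # important weights move less
    lam = reg_strength * importance        # important weights are anchored harder
    theta_half = theta - lr * grad         # plain gradient step on the task loss
    # prox of lam*(x - theta_star)^2 with step size lr has a closed form
    return (theta_half + 2.0 * lr * lam * theta_star) / (1.0 + 2.0 * lr * lam)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    theta_star = rng.normal(size=5)        # parameters after the previous task
    theta = theta_star.copy()
    theta_prev = theta_star - 0.1          # snapshot from an earlier update
    grad = rng.normal(size=5)              # gradient of the current task loss
    imp = taylor_importance(grad, theta, theta_prev)
    print(proximal_update(theta, grad, theta_star, imp))
```

Because the penalty is quadratic, the proximal step has the closed form used above, so no second-order derivatives are needed in this toy version either.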
