Paper Title
La-MAML: Look-ahead Meta Learning for Continual Learning
Paper Authors
Abstract
The continual learning problem involves training models with limited capacity to perform well on a set of an unknown number of sequentially arriving tasks. While meta-learning shows great potential for reducing interference between old and new tasks, the current training procedures tend to be either slow or offline, and sensitive to many hyper-parameters. In this work, we propose Look-ahead MAML (La-MAML), a fast optimisation-based meta-learning algorithm for online continual learning, aided by a small episodic memory. Our proposed modulation of per-parameter learning rates in our meta-learning update allows us to draw connections to prior work on hypergradients and meta-descent. This provides a more flexible and efficient way to mitigate catastrophic forgetting compared to conventional prior-based methods. La-MAML achieves performance superior to other replay-based, prior-based and meta-learning-based approaches for continual learning on real-world visual classification benchmarks. Source code can be found here: https://github.com/montrealrobotics/La-MAML
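The core mechanism in the abstract — an inner "look-ahead" SGD step whose per-parameter learning rates are themselves meta-learned against an objective over memory plus current data — can be illustrated on a toy problem. The sketch below is a hypothetical minimal illustration, not the authors' implementation: the quadratic inner and meta losses, the targets `a` and `b`, and the helper names are all illustrative assumptions, and the hypergradients are derived analytically for this toy case rather than via automatic differentiation.

```python
import numpy as np

# Toy sketch of one La-MAML-style meta-update (illustrative, not the paper's code).
# Inner task loss:  L_in(w)   = 0.5 * ||w - a||^2   ("current task")
# Meta loss:        L_meta(w) = 0.5 * ||w - b||^2   ("memory + current batch")

def inner_grad(w, a):
    return w - a          # grad of 0.5 * ||w - a||^2

def meta_grad(w, b):
    return w - b          # grad of 0.5 * ||w - b||^2

def la_maml_step(w, alpha, a, b, eta=0.1):
    """One look-ahead meta-update with learnable per-parameter LRs alpha."""
    g_in = inner_grad(w, a)
    w_fast = w - alpha * g_in            # inner SGD step (the "look-ahead")
    g_meta = meta_grad(w_fast, b)        # meta-loss gradient at look-ahead weights
    # Analytic hypergradients for this quadratic toy (chain rule through w_fast):
    d_alpha = -g_meta * g_in             # dL_meta/dalpha
    d_w = g_meta * (1.0 - alpha)         # dL_meta/dw
    # Update LRs first; clipping at zero gates updates on conflicting parameters.
    alpha_new = np.maximum(alpha - eta * d_alpha, 0.0)
    # Then update weights using the freshly updated per-parameter LRs.
    w_new = w - alpha_new * d_w
    return w_new, alpha_new

w = np.zeros(3)
alpha = np.full(3, 0.5)
a = np.array([1.0, 1.0, 1.0])   # current-task target
b = np.array([1.0, 0.0, 1.0])   # meta/memory target (conflicts at index 1)
for _ in range(50):
    w, alpha = la_maml_step(w, alpha, a, b)
```

Where the two objectives agree (indices 0 and 2), the learning rates grow and the weights converge on the shared target; where they conflict (index 1), the meta-gradient on `alpha` pushes the learning rate down, which is the per-parameter gating the abstract contrasts with conventional prior-based regularisation.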