Paper Title
Memory Efficient Continual Learning with Transformers
Paper Authors
Paper Abstract
In many real-world scenarios, data to train machine learning models becomes available over time. Unfortunately, these models struggle to continually learn new concepts without forgetting what has been learnt in the past. This phenomenon is known as catastrophic forgetting, and it is difficult to prevent due to practical constraints. For instance, the amount of data that can be stored or the computational resources that can be used might be limited. Moreover, applications increasingly rely on large pre-trained neural networks, such as pre-trained Transformers, since the resources or data might not be available in sufficiently large quantities for practitioners to train the model from scratch. In this paper, we devise a method to incrementally train a model on a sequence of tasks by using pre-trained Transformers and extending them with Adapters. Unlike existing approaches, our method is able to scale to a large number of tasks without significant overhead and allows sharing information across tasks. On both image and text classification tasks, we empirically demonstrate that our method maintains good predictive performance without retraining the model or increasing the number of model parameters over time. The resulting model is also significantly faster at inference time compared to Adapter-based state-of-the-art methods.
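The abstract describes extending a frozen pre-trained Transformer with Adapters rather than retraining the full model. The sketch below is a minimal illustration of a bottleneck Adapter of the kind commonly inserted into Transformer layers, not the authors' implementation; the names `Adapter` and `bottleneck_dim` are illustrative assumptions.

```python
# Minimal sketch of a bottleneck Adapter (illustrative only, not the paper's code).
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Down-project, apply a non-linearity, up-project, then add a residual."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The residual connection preserves the frozen backbone's representation;
        # only the small adapter (and a task head) would be trained per task.
        return x + self.up(self.act(self.down(x)))


if __name__ == "__main__":
    hidden_states = torch.randn(2, 16, 768)   # (batch, sequence, hidden_dim)
    adapter = Adapter(hidden_dim=768)
    print(adapter(hidden_states).shape)        # torch.Size([2, 16, 768])
```

In this style of approach, the pre-trained Transformer weights stay fixed, so only the small adapter parameters grow with new tasks, which is what keeps the memory and compute overhead low.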