Paper Title

Continual Pre-Training Mitigates Forgetting in Language and Vision

Paper Authors

Andrea Cossu, Tinne Tuytelaars, Antonio Carta, Lucia Passaro, Vincenzo Lomonaco, Davide Bacciu

Paper Abstract

Pre-trained models are nowadays a fundamental component of machine learning research. In continual learning, they are commonly used to initialize the model before training on the stream of non-stationary data. However, pre-training is rarely applied during continual learning. We formalize and investigate the characteristics of the continual pre-training scenario in both language and vision environments, where a model is continually pre-trained on a stream of incoming data and only later fine-tuned to different downstream tasks. We show that continually pre-trained models are robust against catastrophic forgetting and we provide strong empirical evidence supporting the fact that self-supervised pre-training is more effective in retaining previous knowledge than supervised protocols. Code is provided at https://github.com/AndreaCossu/continual-pretraining-nlp-vision.
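
To make the protocol described in the abstract concrete, below is a minimal PyTorch sketch of the continual pre-training scenario: an encoder is pre-trained with a self-supervised objective over a stream of incoming experiences and only afterwards adapted to a downstream task. The toy Gaussian data, the masked-reconstruction objective, and the frozen-encoder probe used for downstream adaptation are illustrative assumptions made here for brevity; they are not the architectures, datasets, or evaluation protocol of the paper (see the linked repository for those).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Shared encoder plus a self-supervised head used only during pre-training.
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
pretrain_head = nn.Linear(64, 32)

def pretrain_loss(x):
    """Illustrative self-supervised objective: reconstruct the input from a masked view."""
    masked = x * (torch.rand_like(x) > 0.25).float()
    return nn.functional.mse_loss(pretrain_head(encoder(masked)), x)

# Non-stationary stream of pre-training experiences (toy Gaussian blobs whose
# mean shifts over time stand in for real incoming data).
stream = [torch.randn(256, 32) + shift for shift in (0.0, 1.0, 2.0)]

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(pretrain_head.parameters()), lr=1e-3
)
for experience in stream:  # continual pre-training phase
    for _ in range(50):
        opt.zero_grad()
        pretrain_loss(experience).backward()
        opt.step()

# Only afterwards: fit a separate head on a downstream task. Keeping the
# continually pre-trained encoder frozen is a simplification of the paper's
# downstream fine-tuning step.
x_down, y_down = torch.randn(128, 32), torch.randint(0, 2, (128,))
with torch.no_grad():
    feats = encoder(x_down)
downstream_head = nn.Linear(64, 2)
opt_ft = torch.optim.Adam(downstream_head.parameters(), lr=1e-2)
for _ in range(200):
    opt_ft.zero_grad()
    loss = nn.functional.cross_entropy(downstream_head(feats), y_down)
    loss.backward()
    opt_ft.step()
print(f"downstream fine-tuning loss: {loss.item():.3f}")
```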
