Paper Title

Evaluating Parameter Efficient Learning for Generation

Paper Authors

Peng Xu, Mostofa Patwary, Shrimai Prabhumoye, Virginia Adams, Ryan J. Prenger, Wei Ping, Nayeon Lee, Mohammad Shoeybi, Bryan Catanzaro

Paper Abstract

Parameter efficient learning methods (PERMs) have recently gained significant attention as they provide an efficient way for pre-trained language models (PLMs) to adapt to a downstream task. However, these conclusions are mostly drawn from in-domain evaluations over the full training set. In this paper, we present comparisons between PERMs and finetuning from three new perspectives: (1) the effect of sample and model size to in-domain evaluations, (2) generalization to unseen domains and new datasets, and (3) the faithfulness of generations. Our results show that for in-domain settings (a) there is a cross point of sample size for which PERMs will perform better than finetuning when training with fewer samples, and (b) larger PLMs have larger cross points. For cross-domain and cross-dataset cases, we show that (a) Adapter (Houlsby et al., 2019) performs the best amongst all the PERMs studied here, and (b) it outperforms finetuning if the task dataset is below a certain size. We also compare the faithfulness of generations and show that PERMs can achieve better faithfulness score than finetuning, especially for small training set, by as much as 6%. Finally, we apply Adapter to MT-NLG 530b (Smith et al., 2022) and achieve new state-of-the-art results on Xsum (Narayan et al., 2018) for all ROUGE scores (ROUGE-1 49.17, ROUGE-2 27.20, ROUGE-L 40.98).
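The abstract compares adapter-based tuning (Houlsby et al., 2019) against full finetuning. As a point of reference, the sketch below shows the standard adapter bottleneck: a small down-projection/up-projection pair with a residual connection, inserted into a frozen PLM so that only the adapter weights are updated. The PyTorch framing, layer sizes, and variable names are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of an adapter bottleneck module in the style of Houlsby et al. (2019).
# The surrounding PLM is assumed frozen; only these parameters would be trained.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    def __init__(self, hidden_size: int, bottleneck_size: int = 64):
        super().__init__()
        self.down_proj = nn.Linear(hidden_size, bottleneck_size)
        self.activation = nn.GELU()
        self.up_proj = nn.Linear(bottleneck_size, hidden_size)
        # Near-identity initialization so training starts close to the frozen PLM.
        nn.init.zeros_(self.up_proj.weight)
        nn.init.zeros_(self.up_proj.bias)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection: output = input + up(activation(down(input))).
        return hidden_states + self.up_proj(self.activation(self.down_proj(hidden_states)))


# Usage example with hypothetical dimensions (batch=2, sequence=16, hidden=768).
hidden = torch.randn(2, 16, 768)
adapter = Adapter(hidden_size=768)
out = adapter(hidden)  # same shape as the input
trainable = sum(p.numel() for p in adapter.parameters() if p.requires_grad)
print(out.shape, trainable)
```

Because the bottleneck is small relative to the hidden size, the number of trainable parameters stays a small fraction of the full model, which is the efficiency trade-off the abstract evaluates against finetuning.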
