Paper Title

AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning

Paper Authors

Yaqing Wang, Sahaj Agarwal, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao

Paper Abstract

Standard fine-tuning of large pre-trained language models (PLMs) for downstream tasks requires updating hundreds of millions to billions of parameters, and storing a large copy of the PLM weights for every task resulting in increased cost for storing, sharing and serving the models. To address this, parameter-efficient fine-tuning (PEFT) techniques were introduced where small trainable components are injected in the PLM and updated during fine-tuning. We propose AdaMix as a general PEFT method that tunes a mixture of adaptation modules -- given the underlying PEFT method of choice -- introduced in each Transformer layer while keeping most of the PLM weights frozen. For instance, AdaMix can leverage a mixture of adapters like Houlsby or a mixture of low rank decomposition matrices like LoRA to improve downstream task performance over the corresponding PEFT methods for fully supervised and few-shot NLU and NLG tasks. Further, we design AdaMix such that it matches the same computational cost and the number of tunable parameters as the underlying PEFT method. By only tuning 0.1-0.2% of PLM parameters, we show that AdaMix outperforms SOTA parameter-efficient fine-tuning and full model fine-tuning for both NLU and NLG tasks.
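
To make the mixture-of-adaptations idea concrete, below is a minimal, illustrative sketch (not the paper's implementation) of a mixture of LoRA-style low-rank modules wrapped around a single frozen linear projection. The class name `MixtureOfLoRA`, the hyperparameters (`rank`, `num_experts`, `scaling`), and the specific mechanism assumed here for matching the cost of a single module (randomly routing each training pass through one module and averaging module weights at inference) are assumptions made for illustration; the abstract only states that AdaMix matches the computational cost and tunable-parameter count of the underlying PEFT method.

```python
import torch
import torch.nn as nn


class MixtureOfLoRA(nn.Module):
    """Illustrative sketch: a mixture of LoRA-style low-rank adaptation
    modules around one frozen linear projection of a pre-trained model."""

    def __init__(self, base_linear: nn.Linear, rank: int = 8,
                 num_experts: int = 4, scaling: float = 1.0):
        super().__init__()
        self.base = base_linear
        # Keep the pre-trained (PLM) weights frozen; only the adaptation
        # modules below are trainable.
        for p in self.base.parameters():
            p.requires_grad_(False)

        in_dim, out_dim = base_linear.in_features, base_linear.out_features
        # num_experts independent low-rank decompositions B_i @ A_i.
        self.lora_A = nn.ParameterList(
            [nn.Parameter(torch.randn(rank, in_dim) * 0.01)
             for _ in range(num_experts)]
        )
        self.lora_B = nn.ParameterList(
            [nn.Parameter(torch.zeros(out_dim, rank))
             for _ in range(num_experts)]
        )
        self.num_experts = num_experts
        self.scaling = scaling

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        frozen_out = self.base(x)
        if self.training:
            # Assumed routing: send each forward pass through one randomly
            # chosen module, so per-step compute matches a single LoRA module.
            i = torch.randint(self.num_experts, (1,)).item()
            a, b = self.lora_A[i], self.lora_B[i]
        else:
            # Assumed inference-time collapse: average the low-rank factors,
            # so only one module's worth of extra parameters is served.
            a = torch.stack([p for p in self.lora_A]).mean(dim=0)
            b = torch.stack([p for p in self.lora_B]).mean(dim=0)
        return frozen_out + self.scaling * (x @ a.t() @ b.t())


# Example (illustrative): wrap the query projection of one attention layer.
# layer.attention.self.query = MixtureOfLoRA(layer.attention.self.query)
```

Under these assumptions, each training step touches only one low-rank pair, so FLOPs stay at the level of a single LoRA module, and the averaged factors used at inference mean only one module's worth of extra parameters is stored and served per task.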
