Paper Title

To Adapt or to Fine-tune: A Case Study on Abstractive Summarization

Paper Authors

Zheng Zhao, Pinzhen Chen

Paper Abstract

Recent advances in the field of abstractive summarization leverage pre-trained language models rather than train a model from scratch. However, such models are sluggish to train and accompanied by a massive overhead. Researchers have proposed a few lightweight alternatives such as smaller adapters to mitigate the drawbacks. Nonetheless, it remains uncertain whether using adapters benefits the task of summarization, in terms of improved efficiency without an unpleasant sacrifice in performance. In this work, we carry out multifaceted investigations on fine-tuning and adapters for summarization tasks with varying complexity: language, domain, and task transfer. In our experiments, fine-tuning a pre-trained language model generally attains a better performance than using adapters; the performance gap positively correlates with the amount of training data used. Notably, adapters exceed fine-tuning under extremely low-resource conditions. We further provide insights on multilinguality, model convergence, and robustness, hoping to shed light on the pragmatic choice of fine-tuning or adapters in abstractive summarization.
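The abstract contrasts full fine-tuning of a pre-trained model with lightweight adapter tuning. The sketch below is a minimal illustration in plain PyTorch, not the authors' code: it assumes a standard bottleneck adapter with a residual connection, and names such as `Adapter`, `bottleneck_dim`, and `trainable_parameters` are illustrative choices rather than details from the paper.

```python
# Minimal sketch (illustrative, not the paper's implementation): a bottleneck
# adapter plus a helper that selects which parameters to train.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Small bottleneck module inserted into a frozen pre-trained layer."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)  # project down
        self.up = nn.Linear(bottleneck_dim, hidden_dim)    # project back up
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the pre-trained representation intact.
        return x + self.up(self.act(self.down(x)))


def trainable_parameters(model: nn.Module, use_adapters: bool):
    """Full fine-tuning updates every weight; adapter tuning freezes the
    backbone and updates only parameters whose names contain 'adapter'."""
    if use_adapters:
        for name, param in model.named_parameters():
            param.requires_grad = "adapter" in name
    return [p for p in model.parameters() if p.requires_grad]
```

Under such a setup only a small fraction of the parameters is updated when adapters are used, which is the efficiency-versus-quality trade-off the abstract examines across language, domain, and task transfer.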
