Paper Title

An Experimental Evaluation of Transformer-based Language Models in the Biomedical Domain

Paper Authors

Paul Grouchy, Shobhit Jain, Michael Liu, Kuhan Wang, Max Tian, Nidhi Arora, Hillary Ngai, Faiza Khan Khattak, Elham Dolatabadi, Sedef Akinli Kocak

Paper Abstract

With the growing amount of text in health data, there have been rapid advances in large pre-trained models that can be applied to a wide variety of biomedical tasks with minimal task-specific modifications. Emphasizing the cost of these models, which makes technical replication challenging, this paper summarizes experiments conducted in replicating BioBERT, along with further pre-training and careful fine-tuning in the biomedical domain. We also investigate the effectiveness of domain-specific and domain-agnostic pre-trained models across downstream biomedical NLP tasks. Our findings confirm that pre-trained models can be impactful in some downstream NLP tasks (QA and NER) in the biomedical domain; however, this improvement may not justify the high cost of domain-specific pre-training.
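As a rough illustration of the comparison the abstract describes (fine-tuning a domain-specific checkpoint versus a domain-agnostic one on a downstream biomedical NER task), here is a minimal sketch using the Hugging Face Transformers library. This is not code from the paper; the checkpoint names and label count are illustrative assumptions.

```python
# Sketch (assumption, not from the paper): set up a domain-specific model (BioBERT)
# and a domain-agnostic model (vanilla BERT) for token classification (NER),
# then fine-tune each on the same biomedical NER dataset and compare scores.
from transformers import AutoTokenizer, AutoModelForTokenClassification

NUM_NER_LABELS = 3  # e.g., B/I/O tags for a single entity type (assumption)

for checkpoint in ("dmis-lab/biobert-base-cased-v1.1",  # domain-specific
                   "bert-base-cased"):                   # domain-agnostic
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForTokenClassification.from_pretrained(
        checkpoint, num_labels=NUM_NER_LABELS
    )
    # ...fine-tune `model` on the NER training set and compare held-out F1...
```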
