Paper Title


Will It Blend? Mixing Training Paradigms & Prompting for Argument Quality Prediction

Authors

Michiel van der Meer, Myrthe Reuver, Urja Khurana, Lea Krause, Selene Báez Santamaría

Abstract


This paper describes our contributions to the Shared Task of the 9th Workshop on Argument Mining (2022). Our approach uses Large Language Models for the task of Argument Quality Prediction. We perform prompt engineering using GPT-3, and also investigate the training paradigms multi-task learning, contrastive learning, and intermediate-task training. We find that a mixed prediction setup outperforms single models. Prompting GPT-3 works best for predicting argument validity, and argument novelty is best estimated by a model trained using all three training paradigms.
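To illustrate the prompting setup described in the abstract, below is a minimal sketch of querying GPT-3 for argument validity on a single topic/premise/conclusion triple. It assumes the legacy openai-python (pre-1.0) completion interface that was current at the time of the paper; the prompt wording, model name, and decoding parameters are illustrative assumptions, not the authors' exact configuration.

```python
import os

import openai

# Assumption: legacy openai-python (<1.0) Completion API and an
# instruction-tuned GPT-3 model available in 2022.
openai.api_key = os.environ["OPENAI_API_KEY"]

# Hypothetical zero-shot prompt for the validity sub-task.
PROMPT = (
    "Decide whether the conclusion is a valid inference from the premise.\n"
    "Answer with 'valid' or 'invalid'.\n\n"
    "Topic: {topic}\n"
    "Premise: {premise}\n"
    "Conclusion: {conclusion}\n"
    "Answer:"
)


def predict_validity(topic: str, premise: str, conclusion: str,
                     model: str = "text-davinci-002") -> str:
    """Query GPT-3 for a validity judgement on one argument pair."""
    response = openai.Completion.create(
        model=model,
        prompt=PROMPT.format(topic=topic, premise=premise,
                             conclusion=conclusion),
        max_tokens=3,     # only the short class label is needed
        temperature=0.0,  # deterministic decoding for classification
    )
    return response["choices"][0]["text"].strip().lower()


if __name__ == "__main__":
    print(predict_validity(
        topic="School uniforms",
        premise="Uniforms reduce visible income differences between students.",
        conclusion="Schools should therefore require uniforms.",
    ))
```

The same pattern extends to novelty prediction by swapping in a prompt that asks whether the conclusion adds new content beyond the premise, though the paper finds that sub-task is better handled by the fine-tuned model combining all three training paradigms.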
