Paper Title
On the Use of BERT for Automated Essay Scoring: Joint Learning of Multi-Scale Essay Representation
Paper Authors
Paper Abstract
In recent years, pre-trained models have become dominant in most natural language processing (NLP) tasks. However, in the area of Automated Essay Scoring (AES), pre-trained models such as BERT have not been used effectively enough to outperform other deep learning models such as LSTM. In this paper, we introduce a novel multi-scale essay representation for BERT that can be jointly learned. We also employ multiple losses and transfer learning from out-of-domain essays to further improve performance. Experimental results show that our approach benefits substantially from the joint learning of multi-scale essay representations and achieves near state-of-the-art results among all deep learning models on the ASAP task. Our multi-scale essay representation also generalizes well to the CommonLit Readability Prize dataset, which suggests that the novel text representation proposed in this paper may be an effective new choice for long-text tasks.
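To make the abstract's idea concrete, below is a minimal PyTorch sketch of what jointly learning document-scale and segment-scale essay representations on top of BERT might look like. The class name MultiScaleScorer, the chunk-based segment pooling, the num_segments parameter, and the MSE-plus-ranking combined loss are illustrative assumptions, not the paper's released implementation.

```python
# Minimal sketch (illustrative assumptions, not the paper's released code):
# a BERT encoder whose document-scale ([CLS]) and segment-scale (chunked,
# mean-pooled) representations are concatenated and fed to a score
# regressor, trained with a combined MSE + pairwise ranking loss.
import torch
import torch.nn as nn
from transformers import BertModel


class MultiScaleScorer(nn.Module):  # hypothetical name
    def __init__(self, model_name="bert-base-uncased", num_segments=4):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        self.num_segments = num_segments
        # Document-scale and segment-scale features are concatenated.
        self.regressor = nn.Linear(hidden * 2, 1)

    def forward(self, input_ids, attention_mask):
        tokens = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        doc_repr = tokens[:, 0]                      # [CLS]: document scale
        # Segment scale: mean-pool equal token chunks (padding ignored for
        # brevity), then average the chunk vectors.
        chunks = tokens.chunk(self.num_segments, dim=1)
        seg_repr = torch.stack([c.mean(dim=1) for c in chunks]).mean(dim=0)
        joint = torch.cat([doc_repr, seg_repr], dim=-1)
        return self.regressor(joint).squeeze(-1)     # predicted score


def combined_loss(pred, gold, alpha=0.5):
    """One possible 'multiple losses' combination (an assumption for
    illustration): MSE plus an in-batch pairwise margin ranking term."""
    mse = nn.functional.mse_loss(pred, gold)
    diff_pred = pred.unsqueeze(0) - pred.unsqueeze(1)
    sign_gold = torch.sign(gold.unsqueeze(0) - gold.unsqueeze(1))
    rank = torch.relu(-diff_pred * sign_gold).mean()
    return alpha * mse + (1 - alpha) * rank
```

In a real setup, essays would be tokenized to a fixed maximum length and scores normalized per prompt before training; the out-of-domain transfer the abstract mentions would correspond to pre-training this scorer on essays from other prompts before fine-tuning on the target prompt.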