Paper Title

CUP: Curriculum Learning based Prompt Tuning for Implicit Event Argument Extraction

Authors

Jiaju Lin, Qin Chen, Jie Zhou, Jian Jin, Liang He

Abstract

Implicit event argument extraction (EAE) aims to identify arguments that may be scattered across the document. Most previous work focuses on learning the direct relations between arguments and the given trigger, while the implicit relations with long-range dependency are not well studied. Moreover, recent neural network based approaches rely on a large amount of labeled data for training, which is often unavailable due to the high labeling cost. In this paper, we propose a Curriculum learning based Prompt tuning (CUP) approach, which resolves implicit EAE through four learning stages. The stages are defined according to the relations with the trigger node in a semantic graph, which effectively captures the long-range dependency between arguments and the trigger. In addition, we integrate a prompt-based encoder-decoder model to elicit related knowledge from pre-trained language models (PLMs) in each stage, where the prompt templates are adapted with the learning progress to enhance the reasoning for arguments. Experimental results on two well-known benchmark datasets show the great advantages of our proposed approach. In particular, we outperform the state-of-the-art models in both fully-supervised and low-data scenarios.
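
To make the staged prompt-tuning idea concrete, below is a minimal Python sketch of how a curriculum over argument-to-trigger distances could drive prompt tuning of an encoder-decoder PLM. The BART checkpoint, the prompt templates, the stage split, and the load_stage loader are illustrative assumptions for this sketch, not the paper's released implementation.

```python
# A minimal, illustrative sketch of curriculum-staged prompt tuning with an
# encoder-decoder PLM. The BART checkpoint, prompt templates, stage split, and
# load_stage loader are assumptions for illustration, not the authors' code.
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Hypothetical curriculum: stages ordered by graph distance to the trigger node,
# from same-sentence arguments (easy) to document-level arguments (hard).
curriculum = [
    ("stage1_same_sentence", "In the <trigger> event, the {role} is <mask>."),
    ("stage2_one_hop", "Linked to <trigger> in the semantic graph, the {role} is <mask>."),
    # ... stages 3 and 4 would cover longer-range, cross-sentence arguments.
]

def load_stage(stage_name):
    """Placeholder loader; a real setup would yield (document, role, argument) triples."""
    yield ("Acme announced the acquisition of the startup on Monday.", "buyer", "Acme")

def build_inputs(document, template, role, gold_argument):
    """Concatenate the document with the filled prompt; the gold argument is the target."""
    source = document + " " + template.format(role=role)
    batch = tokenizer(source, return_tensors="pt", truncation=True, max_length=1024)
    batch["labels"] = tokenizer(gold_argument, return_tensors="pt").input_ids
    return batch

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
for stage_name, template in curriculum:  # easier stages are tuned before harder ones
    for document, role, gold_argument in load_stage(stage_name):
        batch = build_inputs(document, template, role, gold_argument)
        loss = model(**batch).loss  # seq2seq loss on generating the argument span
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

In this reading, the prompt template changes with the learning stage, so each stage elicits knowledge from the PLM that matches the current distance between the argument and the trigger.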
