Paper Title
Structured Tuning for Semantic Role Labeling
Paper Authors
Paper Abstract
Recent neural network-driven semantic role labeling (SRL) systems have shown impressive improvements in F1 scores. These improvements are due to expressive input representations, which, at least at the surface, are orthogonal to knowledge-rich constrained decoding mechanisms that helped linear SRL models. Introducing the benefits of structure to inform neural models presents a methodological challenge. In this paper, we present a structured tuning framework to improve models using softened constraints only at training time. Our framework leverages the expressiveness of neural networks and provides supervision with structured loss components. We start with a strong baseline (RoBERTa) to validate the impact of our approach, and show that our framework outperforms the baseline by learning to comply with declarative constraints. Additionally, our experiments with smaller training sizes show that we can achieve consistent improvements under low-resource scenarios.
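To make the idea of "softened constraints as training-time loss components" concrete, below is a minimal sketch in a PyTorch style. It is not the paper's exact formulation: the per-token framing, the hinge-style relaxation, the "unique core roles" constraint chosen as the example, and the names `unique_core_role_penalty`, `core_label_ids`, and `lam` are all illustrative assumptions. It only shows the general pattern of adding a differentiable constraint penalty to the standard cross-entropy objective.

```python
import torch
import torch.nn.functional as F

def unique_core_role_penalty(logits, core_label_ids):
    """Soft penalty encouraging each core role to appear at most once per predicate.

    logits: (seq_len, num_labels) label scores for the tokens of one predicate.
    core_label_ids: indices of the labels treated as core roles (assumption).
    Returns a differentiable penalty that grows when the total probability
    mass assigned to any single core role across the sequence exceeds 1.
    """
    probs = F.softmax(logits, dim=-1)                # (seq_len, num_labels)
    core_mass = probs[:, core_label_ids].sum(dim=0)  # total mass per core role
    # Hinge-style relaxation of "each core role occurs at most once".
    return torch.clamp(core_mass - 1.0, min=0.0).sum()

def training_loss(logits, gold, core_label_ids, lam=1.0):
    """Cross-entropy plus a weighted soft-constraint term (training time only)."""
    ce = F.cross_entropy(logits, gold)
    constraint = unique_core_role_penalty(logits, core_label_ids)
    return ce + lam * constraint

# Illustrative usage with random logits for one predicate.
seq_len, num_labels = 12, 20
logits = torch.randn(seq_len, num_labels, requires_grad=True)
gold = torch.randint(num_labels, (seq_len,))
loss = training_loss(logits, gold, core_label_ids=[0, 1, 2, 3, 4, 5], lam=0.5)
loss.backward()
```

At inference time nothing changes: the constraint term is dropped and the model is decoded as usual, since the constraints were only used as additional supervision during training.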