Paper Title

AMR Parsing via Graph-Sequence Iterative Inference

Paper Authors

Deng Cai, Wai Lam

Paper Abstract

We propose a new end-to-end model that treats AMR parsing as a series of dual decisions on the input sequence and the incrementally constructed graph. At each time step, our model performs multiple rounds of attention, reasoning, and composition that aim to answer two critical questions: (1) which part of the input sequence to abstract; and (2) where in the output graph to construct the new concept. We show that the answers to these two questions are mutually causal. We design a model based on iterative inference that helps achieve better answers from both perspectives, leading to greatly improved parsing accuracy. Our experimental results outperform all previously reported Smatch scores by large margins. Remarkably, without the help of any large-scale pre-trained language model (e.g., BERT), our model already surpasses the previous state of the art, which uses BERT. With the help of BERT, we push the state-of-the-art results to 80.2% on LDC2017T10 (AMR 2.0) and 75.4% on LDC2014T12 (AMR 1.0).
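
Below is a minimal, hypothetical sketch of the graph-sequence iterative inference loop the abstract describes, using a toy dot-product attention in place of the paper's learned attention networks. It is not the authors' implementation; every name here (attend, iterative_inference, seq_states, graph_states) is invented for illustration.

```python
# A toy sketch of iterative inference over two views: the input
# sequence and the partially built graph. NOT the paper's model;
# all names and the attention mechanism are illustrative only.
import numpy as np

def attend(query, memory):
    """Toy dot-product attention: a softmax-weighted summary of `memory`."""
    scores = memory @ query                  # (num_items,)
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return memory.T @ weights                # (dim,) summary vector

def iterative_inference(seq_states, graph_states, init_state, num_rounds=3):
    """Alternate between the two questions for `num_rounds` rounds:
    (1) which part of the input sequence to abstract, and
    (2) where in the partial graph to attach the new concept.
    Each answer refines the query used to ask the other question."""
    state = init_state
    for _ in range(num_rounds):
        seq_summary = attend(state, seq_states)      # question (1)
        state = state + seq_summary                  # refine with sequence view
        graph_summary = attend(state, graph_states)  # question (2)
        state = state + graph_summary                # refine with graph view
    return state

# Usage: random toy states standing in for encoder outputs.
rng = np.random.default_rng(0)
seq_states = rng.normal(size=(7, 16))    # 7 input tokens, 16-dim states
graph_states = rng.normal(size=(3, 16))  # 3 concepts built so far
decision = iterative_inference(seq_states, graph_states,
                               init_state=rng.normal(size=16))
print(decision.shape)  # (16,) -> features for predicting the next concept
```

Each round feeds the latest answer to one question back into the query for the other, which mirrors the mutual-causality idea in the abstract: a better guess about where to attach in the graph sharpens the search over the sequence, and vice versa.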
