Paper Title
QA4QG: Using Question Answering to Constrain Multi-Hop Question Generation
Paper Authors
Abstract
Multi-hop question generation (MQG) aims to generate complex questions that require reasoning over multiple pieces of information in the input passage. Most existing work on MQG has focused on exploring graph-based networks to equip the traditional sequence-to-sequence framework with reasoning ability. However, these models do not take full advantage of the constraint between questions and answers. Furthermore, studies on multi-hop question answering (QA) suggest that Transformers can replace the graph structure for multi-hop reasoning. Therefore, in this work, we propose a novel framework, QA4QG, a QA-augmented BART-based framework for MQG. It augments the standard BART model with an additional multi-hop QA module to further constrain the generated question. Our results on the HotpotQA dataset show that QA4QG outperforms all state-of-the-art models, with an increase of 8 BLEU-4 and 8 ROUGE points over the best previously reported results. Our work suggests the advantage of introducing pre-trained language models and a QA module for the MQG task.
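One common way such a QA-augmented objective could be realized is as a joint loss: the standard generation loss plus a weighted QA-consistency penalty from the auxiliary QA module. The sketch below is a minimal illustration of that idea only; the function names, the weighting constant `LAMBDA`, and the additive form are assumptions for exposition, not the paper's actual QA4QG implementation.

```python
# Hedged sketch: combining a question-generation loss with a
# QA-module loss to constrain the generated question.
# All names here (joint_loss, LAMBDA) are illustrative assumptions.

LAMBDA = 0.5  # assumed trade-off weight between the two losses


def joint_loss(qg_loss: float, qa_loss: float, lam: float = LAMBDA) -> float:
    """Total training loss: BART generation loss plus a weighted
    QA-consistency term that penalizes questions the QA module
    cannot answer with the target answer."""
    return qg_loss + lam * qa_loss


# Toy example with made-up loss values:
print(joint_loss(2.0, 1.0))  # 2.5
```

In such a setup, a larger `lam` would push the generator harder toward questions that the QA module can resolve to the given answer, at the cost of fluency-oriented generation loss.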