Paper Title

Structured Knowledge Grounding for Question Answering

Authors

Yujie Lu, Siqi Ouyang, Kairui Zhou

Abstract

Can language models (LMs) ground question-answering (QA) tasks in a knowledge base via their inherent relational reasoning ability? While previous models that use only LMs have seen some success on many QA tasks, more recent methods incorporate knowledge graphs (KGs) to complement the implicit knowledge of LMs with more logic-driven knowledge. However, how to effectively extract information from structured data such as KGs remains an open question for LMs, and current models rely on graph techniques to extract knowledge. In this paper, we propose to leverage LMs alone to combine language and knowledge for knowledge-based question answering, with flexibility, breadth of coverage, and structured reasoning. Specifically, we devise a knowledge construction method that retrieves the relevant context with a dynamic hop, which is more comprehensive than traditional GNN-based techniques, and a deep fusion mechanism to further bridge the information-exchange bottleneck between language and knowledge. Extensive experiments show that our model consistently achieves state-of-the-art performance on the CommonsenseQA benchmark, showcasing the possibility of leveraging LMs alone to robustly ground QA into a knowledge base.
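The abstract does not include code, but the "dynamic hop" retrieval it describes can be sketched: starting from question entities, expand KG neighbors breadth-first and stop early once a hop yields no new entities, rather than using a fixed hop count. The toy graph, entity names, and function below are illustrative assumptions, not the paper's actual implementation.

```python
from collections import deque

# Toy knowledge graph: entity -> list of (relation, neighbor) edges.
# Entities and relations here are made up for illustration.
KG = {
    "bird": [("capable_of", "fly"), ("is_a", "animal")],
    "fly": [("requires", "wings")],
    "animal": [("is_a", "living_thing")],
    "wings": [],
    "living_thing": [],
}

def retrieve_context(seed_entities, max_hops=3):
    """Dynamic-hop retrieval sketch: expand neighbors breadth-first,
    stopping early once a hop adds no new entities, instead of always
    expanding a fixed number of hops."""
    visited = set(seed_entities)
    frontier = deque(seed_entities)
    triples = []
    for _ in range(max_hops):
        next_frontier = deque()
        while frontier:
            head = frontier.popleft()
            for rel, tail in KG.get(head, []):
                triples.append((head, rel, tail))
                if tail not in visited:
                    visited.add(tail)
                    next_frontier.append(tail)
        if not next_frontier:  # dynamic stop: neighborhood exhausted
            break
        frontier = next_frontier
    return triples
```

The retrieved triples would then be serialized into text and fed to the LM as context, which is what lets the approach avoid a separate GNN module.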
