Paper Title
Inferential Text Generation with Multiple Knowledge Sources and Meta-Learning
Paper Authors
Paper Abstract
We study the problem of generating inferential texts of events for a variety of commonsense relations, such as \textit{if-else} relations. Existing approaches typically use limited evidence from training examples and learn each relation individually. In this work, we use multiple knowledge sources as fuel for the model. Existing commonsense knowledge bases like ConceptNet are dominated by taxonomic knowledge (e.g., \textit{isA} and \textit{relatedTo} relations) and contain only limited inferential knowledge. We use not only structured commonsense knowledge bases but also natural language snippets from search-engine results. These sources are incorporated into a generative base model via a key-value memory network. In addition, we introduce a meta-learning-based multi-task learning algorithm. For each targeted commonsense relation, we regard learning from examples of the other relations as the meta-training process, and evaluation on examples of the targeted relation as the meta-test process. We conduct experiments on the Event2Mind and ATOMIC datasets. Results show that both the integration of multiple knowledge sources and the use of the meta-learning algorithm improve performance.
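The key-value memory lookup mentioned in the abstract can be sketched as follows. This is a minimal illustration under assumptions of our own: knowledge entries (e.g., ConceptNet triples or search-engine snippets) are assumed to be pre-encoded as paired key/value vectors, and plain dot-product attention is used; the function names, shapes, and attention form are illustrative, not the paper's exact architecture.

```python
# Hypothetical sketch of a key-value memory read; not the paper's released code.
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_read(query, keys, values):
    """Attend over memory keys with the query; return the weighted sum of values.

    query:  (d,)   encoding of the input event
    keys:   (n, d) encodings of retrieved knowledge entries
    values: (n, d) value encodings paired with each key
    """
    scores = keys @ query      # (n,) dot-product attention scores
    weights = softmax(scores)  # (n,) attention distribution over memory slots
    return weights @ values    # (d,) knowledge-aware context vector

# Toy usage with random encodings.
rng = np.random.default_rng(0)
d, n = 8, 5
ctx = memory_read(rng.standard_normal(d),
                  rng.standard_normal((n, d)),
                  rng.standard_normal((n, d)))
print(ctx.shape)
```

In a full model, the returned context vector would be fed into the generative base model (e.g., concatenated with the decoder state) so that generation can draw on the retrieved knowledge.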
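The meta-learning setup described in the abstract — treating examples from the other relations as meta-training tasks and the targeted relation as meta-test — can be sketched with a first-order MAML-style loop. The linear model, squared loss, and all hyperparameters below are illustrative stand-ins for the actual generation model, chosen only to make the task structure concrete.

```python
# Hypothetical first-order MAML-style sketch; the real model is a text generator.
import numpy as np

def loss_and_grad(w, X, y):
    """Mean squared error and its gradient for a linear model (stand-in loss)."""
    err = X @ w - y
    return (err ** 2).mean(), 2 * X.T @ err / len(y)

def meta_train(tasks, w, inner_lr=0.05, meta_lr=0.1, steps=100):
    """Learn an initialization from the non-targeted relations.

    Each task is a (X_support, y_support, X_query, y_query) tuple standing in
    for one commonsense relation. For each task we take one inner-loop
    adaptation step on the support set, then accumulate the query-set gradient
    at the adapted parameters (first-order approximation) into the meta-update.
    """
    for _ in range(steps):
        meta_grad = np.zeros_like(w)
        for X_s, y_s, X_q, y_q in tasks:
            _, g = loss_and_grad(w, X_s, y_s)
            w_adapted = w - inner_lr * g            # inner-loop adaptation
            _, g_q = loss_and_grad(w_adapted, X_q, y_q)
            meta_grad += g_q                        # first-order meta-gradient
        w = w - meta_lr * meta_grad / len(tasks)    # meta-update of the init
    return w
```

At meta-test time, the learned initialization `w` would be fine-tuned and evaluated on examples of the targeted relation only.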