Paper Title
Inductive Learning on Commonsense Knowledge Graph Completion
Paper Authors
Paper Abstract
A commonsense knowledge graph (CKG) is a special type of knowledge graph (KG) in which entities are composed of free-form text. However, most existing CKG completion methods focus on the setting where all entities are present at training time. Although this setting is standard for conventional KG completion, it has limitations for CKG completion: at test time, entities in a CKG can be unseen because they may have unseen text/names, and they may be disconnected from the training graph because CKGs are generally very sparse. Here, we propose to study the inductive learning setting for CKG completion, in which unseen entities may appear at test time. We develop a novel learning framework named InductivE. Unlike previous approaches, InductivE guarantees inductive learning capability by computing entity embeddings directly from raw entity attributes/text. InductivE consists of a free-text encoder, a graph encoder, and a KG completion decoder. Specifically, the free-text encoder first extracts a textual representation of each entity based on a pre-trained language model and word embeddings. The graph encoder is a gated relational graph convolutional neural network that learns from a densified graph for more informative entity representation learning. We develop a method that densifies CKGs by adding edges among semantically related entities, providing more supportive information for unseen entities and thereby improving the generalization of entity embeddings to unseen entities. Finally, InductivE employs Conv-TransE as the CKG completion decoder. Experimental results show that InductivE significantly outperforms state-of-the-art baselines in both standard and inductive settings on the ATOMIC and ConceptNet benchmarks. InductivE performs especially well in the inductive scenario, where it achieves more than a 48% improvement over existing methods.
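The sketch below illustrates, in simplified form, the two inductive ingredients the abstract describes: (1) computing entity embeddings directly from free-form entity text with a pre-trained language model, and (2) densifying the graph by adding edges between semantically similar entities so that unseen or disconnected nodes receive supportive neighbors. This is not the authors' implementation; the model name, mean pooling, the helper names encode_entities/densify_edges, and the value of k are illustrative assumptions rather than the paper's reported configuration.

```python
# Minimal sketch of free-text entity encoding and CKG densification.
# Assumptions (not from the paper): bert-base-uncased, mean pooling, k nearest neighbors.
import torch
from transformers import AutoModel, AutoTokenizer


def encode_entities(texts, model_name="bert-base-uncased", device="cpu"):
    """Mean-pool the last hidden states of a pre-trained LM over each entity's text."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name).to(device).eval()
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt").to(device)
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state          # (N, L, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()   # (N, L, 1)
    return (hidden * mask).sum(1) / mask.sum(1)            # (N, H)


def densify_edges(embeddings, k=5):
    """Add synthetic edges from each entity to its k most similar entities (cosine similarity)."""
    normed = torch.nn.functional.normalize(embeddings, dim=-1)
    sim = normed @ normed.t()
    sim.fill_diagonal_(-1.0)                               # exclude self-loops
    _, neighbors = sim.topk(k, dim=-1)
    return [(i, j.item()) for i in range(sim.size(0)) for j in neighbors[i]]


if __name__ == "__main__":
    entities = ["person x buys a coffee", "personx purchases a drink", "go for a run"]
    emb = encode_entities(entities)
    print(densify_edges(emb, k=1))   # similarity edges added to the training graph
```

In the full framework, these text-derived embeddings and the densified graph would feed a gated R-GCN graph encoder and a Conv-TransE decoder for link prediction; because the embeddings come from entity text rather than a fixed embedding table, entities unseen at training time can still be scored at test time.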