Paper Title

Ruleformer: Context-aware Differentiable Rule Mining over Knowledge Graph

Paper Authors

Zezhong Xu, Peng Ye, Hui Chen, Meng Zhao, Huajun Chen, Wen Zhang

Paper Abstract

Rule mining is an effective approach to reasoning over knowledge graphs (KGs). Existing works mainly concentrate on mining rules. However, several rules may apply to the inference of a single relation, and how to select appropriate rules for completing different triples has not been discussed. In this paper, we propose to take context information into consideration, which helps select suitable rules for inference tasks. Based on this idea, we propose Ruleformer, a transformer-based rule mining approach. It consists of two blocks: 1) an encoder that extracts context information from the subgraph of the head entity with a modified attention mechanism, and 2) a decoder that aggregates the subgraph information from the encoder output and generates the probability of relations for each reasoning step. The basic idea behind Ruleformer is to regard the rule mining process as a sequence-to-sequence task. To make the subgraph a sequence input to the encoder while retaining the graph structure, we devise a relational attention mechanism in the Transformer. Experimental results show the necessity of considering context information in rule mining tasks and the effectiveness of our model.
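
The abstract describes an encoder-decoder setup in which relation types bias attention over the head entity's subgraph, and the decoder emits a probability distribution over relations at each reasoning step. The PyTorch sketch below illustrates that idea under stated assumptions: the class names, tensor shapes, and the additive per-relation attention bias are hypothetical choices for illustration, not the paper's actual implementation.

```python
# A minimal, hypothetical sketch of the Ruleformer idea from the abstract:
# an encoder whose attention is biased by relation (edge) types, and a
# decoder step that predicts a distribution over relations. All names,
# dimensions, and the edge-bias formulation are assumptions.
import torch
import torch.nn as nn


class RelationalSelfAttention(nn.Module):
    """Self-attention over subgraph entities with an additive bias
    derived from the relation type between each entity pair."""

    def __init__(self, dim: int, num_relations: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # One learned scalar bias per relation type; index 0 = "no edge".
        self.edge_bias = nn.Embedding(num_relations + 1, 1)
        self.scale = dim ** -0.5

    def forward(self, x, edge_type):
        # x: (batch, n, dim); edge_type: (batch, n, n) long tensor of relation ids.
        scores = self.q(x) @ self.k(x).transpose(-2, -1) * self.scale
        scores = scores + self.edge_bias(edge_type).squeeze(-1)  # inject graph structure
        return torch.softmax(scores, dim=-1) @ self.v(x)


class RuleDecoderStep(nn.Module):
    """One decoding step: attend to the encoded subgraph context and
    predict a distribution over relations for the next rule atom."""

    def __init__(self, dim: int, num_relations: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.out = nn.Linear(dim, num_relations)

    def forward(self, query, context):
        # query: (batch, 1, dim) state of the partially generated rule body.
        h, _ = self.attn(query, context, context)
        return torch.log_softmax(self.out(h.squeeze(1)), dim=-1)


if __name__ == "__main__":
    dim, num_rel, n = 32, 10, 5
    enc = RelationalSelfAttention(dim, num_rel)
    dec = RuleDecoderStep(dim, num_rel)
    entities = torch.randn(2, n, dim)                 # subgraph node features
    edges = torch.randint(0, num_rel + 1, (2, n, n))  # relation ids per entity pair
    context = enc(entities, edges)
    step_logits = dec(torch.randn(2, 1, dim), context)
    print(step_logits.shape)  # (2, num_relations): one step of rule decoding
```

An additive per-relation bias is one common way to make attention relation-aware while keeping the subgraph a sequence input; the paper's actual relational attention mechanism may instead modify the keys and values directly.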
