Paper Title


RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs

Paper Authors

Meng Qu, Junkun Chen, Louis-Pascal Xhonneux, Yoshua Bengio, Jian Tang

Paper Abstract


This paper studies learning logic rules for reasoning on knowledge graphs. Logic rules provide interpretable explanations when used for prediction and can generalize to other tasks, making them critical to learn. Existing methods either suffer from searching in a large search space (e.g., neural logic programming) or from ineffective optimization due to sparse rewards (e.g., techniques based on reinforcement learning). To address these limitations, this paper proposes a probabilistic model called RNNLogic. RNNLogic treats logic rules as a latent variable, and simultaneously trains a rule generator as well as a reasoning predictor with logic rules. We develop an EM-based algorithm for optimization. In each iteration, the reasoning predictor is first updated to explore some generated logic rules for reasoning. Then in the E-step, we select a set of high-quality rules from all generated rules with both the rule generator and reasoning predictor via posterior inference; and in the M-step, the rule generator is updated with the rules selected in the E-step. Experiments on four datasets demonstrate the effectiveness of RNNLogic.
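The EM-style training loop described above can be sketched schematically. Everything below is a hypothetical toy stand-in, not the paper's actual models: the "rule generator" is just a weighted categorical distribution over candidate rule ids, and the "reasoning predictor" is a fixed scoring function; the real RNNLogic uses an RNN generator and a learned predictor over knowledge-graph triples.

```python
import random

# Toy sketch of the EM loop from the abstract (all components are stand-ins):
# 1) the generator proposes rules, 2) E-step selects high-quality rules using
# both the generator's weight (prior) and the predictor's score (likelihood),
# 3) M-step updates the generator toward the selected rules.

CANDIDATE_RULES = ["r1", "r2", "r3", "r4"]  # hypothetical rule ids
TRUE_USEFUL = {"r1", "r3"}                  # toy ground truth: rules that help

def predictor_score(rule):
    # Stand-in for the reasoning predictor's assessment of a rule's quality.
    return 1.0 if rule in TRUE_USEFUL else 0.1

def em_train(num_iters=5, rules_per_iter=4, top_k=2):
    # Generator parameters: unnormalized weights over candidate rules.
    gen_weights = {r: 1.0 for r in CANDIDATE_RULES}
    for _ in range(num_iters):
        # Generator proposes rules; the predictor then "explores" them.
        proposed = random.choices(
            CANDIDATE_RULES,
            weights=[gen_weights[r] for r in CANDIDATE_RULES],
            k=rules_per_iter,
        )
        # E-step: posterior-like ranking combining generator prior and
        # predictor score; keep the top-k rules.
        ranked = sorted(
            set(proposed),
            key=lambda r: gen_weights[r] * predictor_score(r),
            reverse=True,
        )
        selected = ranked[:top_k]
        # M-step: update the generator toward the selected high-quality rules.
        for r in selected:
            gen_weights[r] += 1.0
    return gen_weights
```

Over iterations, rules the predictor scores highly tend to be selected and thus gain generator weight, so the generator gradually concentrates on useful rules.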
