Paper Title


Building Rule Hierarchies for Efficient Logical Rule Learning from Knowledge Graphs

Authors

Yulong Gu, Yu Guan, Paolo Missier

Abstract


Many systems have been developed in recent years to mine logical rules from large-scale Knowledge Graphs (KGs), on the grounds that representing regularities as rules enables both the interpretable inference of new facts and the explanation of known facts. Among these systems, walk-based methods, which generate instantiated rules containing constants by abstracting sampled paths in KGs, demonstrate strong predictive performance and expressivity. However, due to the large volume of possible rules, these systems do not scale well, and computational resources are often wasted on generating and evaluating unpromising rules. In this work, we address these scalability issues by proposing new methods for pruning unpromising rules using rule hierarchies. The approach consists of two phases. Firstly, since rule hierarchies are not readily available in walk-based methods, we build a Rule Hierarchy Framework (RHF), which leverages a collection of subsumption frameworks to build a proper rule hierarchy from a set of learned rules. Secondly, we adapt RHF to an existing rule learner, in which we design and implement two methods for Hierarchical Pruning (HPMs) that utilize the generated hierarchies to remove irrelevant and redundant rules. Through experiments on four public benchmark datasets, we show that the application of HPMs is effective in removing unpromising rules, leading to significant reductions in runtime as well as in the number of learned rules, without compromising predictive performance.
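The abstract does not spell out how the subsumption-based hierarchy or the pruning works internally. As a purely illustrative sketch (not the paper's actual RHF/HPM implementation), the following assumes a rule is a `(head, body-atom set)` pair and that a rule subsumes another when they share a head and its body atoms are a proper subset of the other's; all function and variable names here are invented for illustration:

```python
# Hypothetical sketch of building a rule hierarchy via subsumption and
# pruning specializations of unpromising rules. A rule is modelled as
# (head, frozenset_of_body_atoms); the subset-based subsumption test and
# the score threshold are simplifying assumptions, not the paper's method.

def subsumes(general, specific):
    """General rule: same head, strictly fewer (subset of) body atoms."""
    return general[0] == specific[0] and general[1] < specific[1]

def build_hierarchy(rules):
    """Map each rule to its direct specializations (children)."""
    children = {r: [] for r in rules}
    for g in rules:
        for s in rules:
            if subsumes(g, s):
                # Keep only direct edges: no intermediate rule m with g > m > s.
                if not any(subsumes(g, m) and subsumes(m, s)
                           for m in rules if m not in (g, s)):
                    children[g].append(s)
    return children

def hierarchical_prune(rules, score, threshold):
    """Evaluate rules top-down; skip descendants of low-scoring rules."""
    children = build_hierarchy(rules)
    roots = [r for r in rules if not any(subsumes(g, r) for g in rules)]
    kept, stack = [], list(roots)
    while stack:
        r = stack.pop()
        if score(r) >= threshold:
            kept.append(r)
            stack.extend(children[r])  # only descend into promising rules
    return kept
```

The design choice this illustrates is the one the abstract claims: by never descending below an unpromising rule, whole branches of specializations are neither generated nor evaluated, which is where the runtime savings come from.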
