Paper title
DeepE: a deep neural network for knowledge graph embedding
Paper authors
Paper abstract
Recently, neural network based methods have shown their power to learn more expressive features for knowledge graph embedding (KGE). However, the performance of deep methods often falls behind that of shallow ones on simple graphs. One possible reason is that deep models are difficult to train, while shallow models may suffice to accurately represent the structure of simple KGs. In this paper, we propose a neural network based model, named DeepE, to address this problem. DeepE stacks multiple building blocks to predict the tail entity from the head entity and the relation. Each building block is the sum of a linear and a non-linear function, so the stacked blocks are equivalent to a group of learning functions with different non-linear depths. Hence, DeepE allows deep functions to learn deep features and shallow functions to learn shallow features. Through extensive experiments, we find that DeepE outperforms other state-of-the-art baseline methods. A major advantage of DeepE is its robustness: it achieves Mean Rank (MR) scores 6%, 30%, and 65% lower than the best baseline methods on FB15k-237, WN18RR, and YAGO3-10, respectively. Our design makes it possible to train much deeper networks for KGE, e.g., 40 layers on FB15k-237, without sacrificing precision on simple relations.
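To make the building-block idea concrete, below is a minimal PyTorch sketch of the architecture as the abstract describes it: each block adds a linear branch to a non-linear branch, and stacked blocks map a (head, relation) pair to a predicted tail embedding. The embedding dimensions, the MLP form of the non-linear branch, the concatenation of head and relation, and the dot-product scoring against all entities are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class DeepEBlock(nn.Module):
    """One building block: the sum of a linear and a non-linear function,
    so a stack of blocks realizes functions of varying non-linear depth."""
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)      # linear branch
        self.nonlinear = nn.Sequential(        # non-linear branch (assumed MLP form)
            nn.Linear(dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, dim),
        )

    def forward(self, x):
        return self.linear(x) + self.nonlinear(x)

class DeepE(nn.Module):
    """Stacks blocks to predict a tail embedding from (head, relation);
    scoring against all entities via dot product is an assumption."""
    def __init__(self, num_entities, num_relations,
                 dim=200, hidden_dim=400, num_blocks=4):
        super().__init__()
        self.entity_emb = nn.Embedding(num_entities, dim)
        self.relation_emb = nn.Embedding(num_relations, dim)
        self.input_proj = nn.Linear(2 * dim, dim)  # fuse head and relation (assumed)
        self.blocks = nn.Sequential(
            *[DeepEBlock(dim, hidden_dim) for _ in range(num_blocks)]
        )

    def forward(self, head_idx, rel_idx):
        h = self.entity_emb(head_idx)
        r = self.relation_emb(rel_idx)
        x = self.input_proj(torch.cat([h, r], dim=-1))
        t_pred = self.blocks(x)
        # score every candidate tail entity
        return t_pred @ self.entity_emb.weight.t()

# usage: scores for a toy batch of two (head, relation) queries
model = DeepE(num_entities=1000, num_relations=50)
scores = model(torch.tensor([0, 1]), torch.tensor([3, 7]))
print(scores.shape)  # torch.Size([2, 1000])
```

Because each block's output is linear-branch plus non-linear-branch, a path through the stack that always takes the linear branch stays shallow, while paths through more non-linear branches become deeper; this is one way to read the abstract's claim that deep and shallow features can be learned simultaneously.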