Title
R-GCN: The R Could Stand for Random
Authors
Abstract
The inception of the Relational Graph Convolutional Network (R-GCN) marked a milestone in the Semantic Web domain as a widely cited method that generalises end-to-end hierarchical representation learning to Knowledge Graphs (KGs). R-GCNs generate representations for nodes of interest by repeatedly aggregating parameterised, relation-specific transformations of their neighbours. However, in this paper, we argue that the R-GCN's main contribution lies in this "message passing" paradigm, rather than the learned weights. To this end, we introduce the "Random Relational Graph Convolutional Network" (RR-GCN), which leaves all parameters untrained and thus constructs node embeddings by aggregating randomly transformed random representations from neighbours, i.e., with no learned parameters. We empirically show that RR-GCNs can compete with fully trained R-GCNs in both node classification and link prediction settings.
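The core idea in the abstract — random initial node features propagated through fixed, untrained relation-specific transforms — can be sketched in a few lines. This is a hypothetical illustration of one such propagation step on a toy graph, not the authors' implementation; the function name `rr_gcn_layer`, the toy edge list, and the mean-aggregation-plus-tanh choice are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

num_nodes, dim, num_relations = 5, 8, 2
# Toy KG as (source, relation, target) triples — an assumed example graph.
edges = [(0, 0, 1), (1, 0, 2), (2, 1, 3), (3, 1, 4), (4, 0, 0)]

H = rng.standard_normal((num_nodes, dim))           # random initial node representations
W = rng.standard_normal((num_relations, dim, dim))  # random, untrained relation-specific weights

def rr_gcn_layer(H, edges, W):
    """One propagation step: aggregate randomly transformed neighbour features.

    No parameter here is ever trained — the point of the RR-GCN argument.
    """
    out = np.zeros_like(H)
    degree = np.zeros(H.shape[0])
    for s, r, t in edges:
        out[t] += H[s] @ W[r]   # message from s to t through relation r's random transform
        degree[t] += 1
    degree[degree == 0] = 1     # leave isolated nodes at zero without dividing by zero
    return np.tanh(out / degree[:, None])

Z = rr_gcn_layer(H, edges, W)
print(Z.shape)  # one layer of embeddings built with no learned parameters
```

Stacking such layers would give each node a receptive field over multi-hop neighbourhoods, which is what lets the untrained network remain competitive in the experiments the abstract describes.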