Paper Title


Graph Neighborhood Attentive Pooling

Authors

Kefato, Zekarias T., Girdzijauskas, Sarunas

Abstract


Network representation learning (NRL) is a powerful technique for learning low-dimensional vector representations of high-dimensional and sparse graphs. Most studies explore the structure and metadata associated with the graph using random walks and employ unsupervised or semi-supervised learning schemes. Learning in these methods is context-free, because only a single representation per node is learned. Recent studies have questioned the sufficiency of a single representation and proposed a context-sensitive approach that proved to be highly effective in applications such as link prediction and ranking. However, most of these methods rely on additional textual features that require RNNs or CNNs to capture high-level features, or rely on a community detection algorithm to identify multiple contexts of a node. In this study, without requiring additional features or a community detection algorithm, we propose a novel context-sensitive algorithm called GAP that learns to attend to different parts of a node's neighborhood using attentive pooling networks. We show the efficacy of GAP using three real-world datasets on link prediction and node clustering tasks and compare it against 10 popular and state-of-the-art (SOTA) baselines. GAP consistently outperforms them and achieves up to ~9% and ~20% gain over the best performing methods on link prediction and clustering tasks, respectively.
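The abstract's key mechanism, attentive pooling, computes a soft-alignment matrix between two sets of vectors and pools each set with attention weights derived from the other, yielding mutually context-sensitive summaries. The sketch below is a minimal NumPy illustration of the generic attentive pooling operation (in the style of dos Santos et al., 2016), not GAP's exact architecture; the function name, shapes, and the bilinear parameter `U` are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attentive_pooling(Q, A, U):
    """Illustrative attentive pooling over two neighborhood views.

    Q: (n, d) embeddings of one view, A: (m, d) embeddings of the other,
    U: (d, d) trainable bilinear parameter (here just a random matrix).
    Returns one fixed-size, context-sensitive summary vector per view.
    """
    # Soft alignment: G[i, j] scores the interaction of Q[i] with A[j].
    G = np.tanh(Q @ U @ A.T)           # (n, m)
    # Importance of each row of Q given all of A, and vice versa.
    sigma_q = softmax(G.max(axis=1))   # (n,) attention over Q's rows
    sigma_a = softmax(G.max(axis=0))   # (m,) attention over A's rows
    # Attention-weighted sums: each view is summarized *conditioned on* the other.
    rq = sigma_q @ Q                   # (d,)
    ra = sigma_a @ A                   # (d,)
    return rq, ra

rng = np.random.default_rng(0)
rq, ra = attentive_pooling(rng.normal(size=(5, 8)),   # 5 neighbors, dim 8
                           rng.normal(size=(7, 8)),   # 7 neighbors, dim 8
                           rng.normal(size=(8, 8)))
```

Because the attention over `Q` depends on `A` (and vice versa), the same node pooled against different counterparts produces different representations, which is what makes the resulting embeddings context-sensitive rather than a single fixed vector per node.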
