Paper Title
Translating Subgraphs to Nodes Makes Simple GNNs Strong and Efficient for Subgraph Representation Learning

Authors

Dongkwan Kim, Alice Oh

Abstract
Subgraph representation learning has emerged as an important problem, but it is by default approached with specialized graph neural networks on a large global graph. These models demand extensive memory and computational resources, yet struggle to model the hierarchical structures of subgraphs. In this paper, we propose Subgraph-To-Node (S2N) translation, a novel formulation for learning representations of subgraphs. Specifically, given a set of subgraphs in the global graph, we construct a new graph by coarsely transforming subgraphs into nodes. Demonstrating both theoretical and empirical evidence, S2N not only significantly reduces memory and computational costs compared to state-of-the-art models but also outperforms them by capturing both local and global structures of the subgraphs. By leveraging graph coarsening methods, our method outperforms baselines even in a data-scarce setting with insufficient subgraphs. Our experiments on eight benchmarks demonstrate that fine-tuned models with S2N translation can process 183--711 times more subgraph samples than state-of-the-art models at a better or similar performance level.
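The core S2N idea described above, translating each subgraph into a single node of a new, much smaller graph, can be sketched as follows. This is an illustrative reading only: the function name `s2n_translate` and the specific edge-weight rule (counting inter-subgraph edges in the global graph, plus shared nodes) are assumptions for the sketch, not the paper's exact construction.

```python
from itertools import combinations

def s2n_translate(global_edges, subgraphs):
    """Coarsely translate subgraphs to nodes (illustrative sketch).

    Each subgraph becomes one node of a new graph; two subgraph-nodes
    are connected with a weight counting how strongly the subgraphs
    interact in the global graph (edges between them and shared nodes).
    The paper's exact weighting scheme may differ.
    """
    # Build an adjacency map of the global graph.
    adj = {}
    for u, v in global_edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    weights = {}
    for i, j in combinations(range(len(subgraphs)), 2):
        a, b = set(subgraphs[i]), set(subgraphs[j])
        # Global edges running from subgraph i into subgraph j.
        w = sum(1 for u in a for v in adj.get(u, ()) if v in b)
        # Overlapping nodes also indicate interaction.
        w += len(a & b)
        if w:
            weights[(i, j)] = w
    return weights
```

For a path graph 1-2-3-4-5 with subgraphs {1,2}, {3,4}, {4,5}, the translated graph has three nodes; the first and second subgraphs are linked via the global edge (2,3), while the second and third are linked both by edges and by their shared node 4. Classifying subgraphs then reduces to node classification on this far smaller graph, which is where the memory and speed savings come from.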