Title

Decoupling the Depth and Scope of Graph Neural Networks

Authors

Hanqing Zeng, Muhan Zhang, Yinglong Xia, Ajitesh Srivastava, Andrey Malevich, Rajgopal Kannan, Viktor Prasanna, Long Jin, Ren Chen

Abstract

State-of-the-art Graph Neural Networks (GNNs) have limited scalability with respect to the graph and model sizes. On large graphs, increasing the model depth often means exponential expansion of the scope (i.e., receptive field). Beyond just a few layers, two fundamental challenges emerge: 1. degraded expressivity due to oversmoothing, and 2. expensive computation due to neighborhood explosion. We propose a design principle to decouple the depth and scope of GNNs -- to generate representation of a target entity (i.e., a node or an edge), we first extract a localized subgraph as the bounded-size scope, and then apply a GNN of arbitrary depth on top of the subgraph. A properly extracted subgraph consists of a small number of critical neighbors, while excluding irrelevant ones. The GNN, no matter how deep it is, smooths the local neighborhood into informative representation rather than oversmoothing the global graph into "white noise". Theoretically, decoupling improves the GNN expressive power from the perspectives of graph signal processing (GCN), function approximation (GraphSAGE) and topological learning (GIN). Empirically, on seven graphs (with up to 110M nodes) and six backbone GNN architectures, our design achieves significant accuracy improvement with orders of magnitude reduction in computation and hardware cost.
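
To make the decoupling concrete, here is a minimal sketch using PyTorch Geometric. The plain k-hop extractor below stands in for the paper's subgraph extraction step (the paper considers more careful extractors as well), and the names `DecoupledGNN` and `embed_target` are illustrative, not from the paper. The key point: the scope is a bounded 2-hop subgraph, while the model depth is a free parameter that may exceed the hop count.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from torch_geometric.utils import k_hop_subgraph

class DecoupledGNN(torch.nn.Module):
    """A plain GCN stack; its depth is independent of the scope size."""
    def __init__(self, in_dim, hidden_dim, out_dim, num_layers):
        super().__init__()
        dims = [in_dim] + [hidden_dim] * (num_layers - 1) + [out_dim]
        self.convs = torch.nn.ModuleList(
            GCNConv(dims[i], dims[i + 1]) for i in range(num_layers))

    def forward(self, x, edge_index):
        for i, conv in enumerate(self.convs):
            x = conv(x, edge_index)
            if i < len(self.convs) - 1:
                x = F.relu(x)
        return x

def embed_target(model, target, x, edge_index, num_hops=2):
    # Scope: a bounded subgraph around the target node. Message passing
    # never leaves this subgraph, no matter how many layers the model has.
    subset, sub_edge_index, mapping, _ = k_hop_subgraph(
        target, num_hops, edge_index,
        relabel_nodes=True, num_nodes=x.size(0))
    out = model(x[subset], sub_edge_index)  # depth may exceed num_hops
    return out[mapping]                     # row(s) for the target node(s)

# Toy usage: a 5-layer GNN applied to a 2-hop scope.
num_nodes, in_dim = 100, 16
x = torch.randn(num_nodes, in_dim)
edge_index = torch.randint(0, num_nodes, (2, 400))
model = DecoupledGNN(in_dim, hidden_dim=64, out_dim=32, num_layers=5)
z = embed_target(model, target=0, x=x, edge_index=edge_index)  # shape [1, 32]
```

Because each extra layer only re-smooths the same bounded neighborhood, depth no longer implies an exponentially growing receptive field, which is the mechanism behind both the expressivity and the cost claims in the abstract.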
