Paper Title
L$^2$-GCN: Layer-Wise and Learned Efficient Training of Graph Convolutional Networks
Paper Authors
Paper Abstract
Graph convolutional networks (GCNs) are increasingly popular in many applications, yet remain notoriously hard to train over large graph datasets. They need to compute node representations recursively from their neighbors. Current GCN training algorithms suffer from either high computational costs that grow exponentially with the number of layers, or high memory usage for loading the entire graph and node embeddings. In this paper, we propose a novel efficient layer-wise training framework for GCNs (L-GCN) that disentangles feature aggregation and feature transformation during training, hence greatly reducing time and memory complexity. We present a theoretical analysis of L-GCN under the graph isomorphism framework, showing that under mild conditions L-GCN leads to GCNs as powerful as those produced by the more costly conventional training algorithm. We further propose L$^2$-GCN, which learns a controller for each layer that can automatically adjust the number of training epochs per layer in L-GCN. Experiments show that L-GCN is at least an order of magnitude faster than state-of-the-art methods, with consistent memory usage independent of dataset size, while maintaining comparable prediction performance. With the learned controller, L$^2$-GCN can further cut the training time in half. Our code is available at https://github.com/Shen-Lab/L2-GCN.
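As a concrete illustration of the layer-wise idea described in the abstract, below is a minimal PyTorch sketch of greedy, layer-by-layer GCN training that separates feature aggregation from feature transformation. All names and hyperparameters here (`train_layer_wise`, `epochs_per_layer`, the dense adjacency, the throwaway per-layer classifier) are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
# Minimal sketch of layer-wise GCN training (assumed interface, not the
# authors' code): each layer aggregates once, then trains its transform
# against an auxiliary classifier before the next layer is touched.
import torch
import torch.nn as nn

def train_layer_wise(adj_norm, x, labels, num_layers=2, hidden=16,
                     epochs_per_layer=100, lr=0.01):
    """Greedily train one GCN layer at a time.

    adj_norm: (N, N) normalized adjacency (dense here for simplicity).
    x:        (N, F) input node features.
    labels:   (N,) node class labels.
    """
    num_classes = int(labels.max()) + 1
    h = x
    weights = []
    for layer in range(num_layers):
        # Aggregation happens once, outside the inner loop: no recursive
        # neighbor expansion, so the cost does not grow with network depth.
        agg = adj_norm @ h                       # fixed input for this layer
        w = nn.Linear(agg.shape[1], hidden)      # feature transformation
        clf = nn.Linear(hidden, num_classes)     # throwaway auxiliary classifier
        opt = torch.optim.Adam(
            list(w.parameters()) + list(clf.parameters()), lr=lr)
        for _ in range(epochs_per_layer):
            opt.zero_grad()
            logits = clf(torch.relu(w(agg)))
            loss = nn.functional.cross_entropy(logits, labels)
            loss.backward()
            opt.step()
        # Freeze this layer's output and feed it to the next layer.
        h = torch.relu(w(agg)).detach()
        weights.append(w)
    return weights, h
```

In this sketch, each layer's training budget is the fixed `epochs_per_layer`; per the abstract, L$^2$-GCN would instead learn a per-layer controller that decides when to stop training each layer.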