Paper Title
Triple Sparsification of Graph Convolutional Networks without Sacrificing the Accuracy
Paper Authors
Paper Abstract
Graph Neural Networks (GNNs) are widely used to perform different machine learning tasks on graphs. As graphs grow larger and GNNs grow deeper, training and inference time become costly, in addition to the memory requirement. Thus, graph sparsification or model compression, without sacrificing accuracy, becomes a viable approach for graph learning tasks. The few existing techniques study only the sparsification of graphs and GNN models. In this paper, we develop a SparseGCN pipeline to study all possible forms of sparsification in GNNs. We provide a theoretical analysis and empirically show that our approach can add up to 11.6% additional sparsity to the embedding matrix without sacrificing accuracy on commonly used benchmark graph datasets.
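For intuition, the sketch below shows one plausible form of the embedding-matrix sparsification the abstract refers to: magnitude-based pruning of a hidden embedding matrix between GCN layers. This is a minimal illustration under assumed PyTorch tooling, not the paper's actual SparseGCN method; the function name, the pruning criterion, and the use of the reported 11.6% figure as the pruning fraction are all illustrative.

```python
import torch

def sparsify_embeddings(h: torch.Tensor, extra_sparsity: float) -> torch.Tensor:
    """Zero out the smallest-magnitude entries of an embedding matrix.

    extra_sparsity is the fraction of entries to prune. Magnitude-based
    pruning is an assumption made for illustration; the paper's actual
    sparsification criterion may differ.
    """
    k = int(extra_sparsity * h.numel())
    if k == 0:
        return h
    # The k-th smallest absolute value over all entries serves as the threshold.
    threshold = h.abs().flatten().kthvalue(k).values
    # Keep only entries strictly above the threshold; the rest become zero.
    return h * (h.abs() > threshold)

# Usage: prune hidden embeddings between two GCN layers.
h = torch.randn(2708, 16)  # e.g., a Cora-sized hidden embedding matrix
h_sparse = sparsify_embeddings(h, extra_sparsity=0.116)
print(f"zeroed fraction: {(h_sparse == 0).float().mean().item():.3f}")
```

In a full pipeline, a step like this would sit alongside graph sparsification (dropping edges) and model compression (pruning weights), which together motivate the "triple" in the paper's title.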