Title
Investigating Transfer Learning in Graph Neural Networks
Authors
Abstract
Graph neural networks (GNNs) build on the success of deep learning models by extending them for use in graph spaces. Transfer learning has proven extremely successful for traditional deep learning problems, leading to faster training and improved performance. Despite the increasing interest in GNNs and their use cases, there is little research on their transferability. This research demonstrates that transfer learning is effective with GNNs, and describes how the choice of source task and GNN architecture affects the ability to learn generalisable knowledge. We perform experiments using real-world and synthetic data within the contexts of node classification and graph classification. To this end, we also provide a general methodology for transfer learning experimentation and present a novel algorithm for generating synthetic graph classification tasks. We compare the performance of GCN, GraphSAGE and GIN across both the synthetic and real-world datasets. Our results demonstrate empirically that GNNs with inductive operations yield statistically significantly improved transfer. Further, we show that similarity in community structure between source and target tasks supports statistically significant improvements in transfer over and above the use of node attributes alone.
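To make the transfer-learning setting concrete, the following is a minimal NumPy sketch, not the paper's actual pipeline: a single GCN layer's weights stand in for a model pre-trained on a source task, and only a new classification head is fitted on the target graph. The graphs, features, labels, and the function names (`normalise_adj`, `gcn_embed`, `train_head`) are all illustrative assumptions, not artifacts from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalise_adj(A):
    """Symmetrically normalised adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_embed(A_norm, X, W):
    """One GCN layer: ReLU(A_norm @ X @ W)."""
    return np.maximum(A_norm @ X @ W, 0.0)

def train_head(H, y, n_classes, steps=200, lr=0.5):
    """Fit a softmax classifier on frozen embeddings by gradient descent."""
    n, d = H.shape
    V = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[y]                      # one-hot labels
    for _ in range(steps):
        logits = H @ V
        logits -= logits.max(axis=1, keepdims=True)
        P = np.exp(logits)
        P /= P.sum(axis=1, keepdims=True)
        V -= lr * H.T @ (P - Y) / n               # cross-entropy gradient
    return V

def toy_graph(n):
    """Random undirected graph with node features and binary node labels."""
    A = (rng.random((n, n)) < 0.2).astype(float)
    A = np.triu(A, 1)
    A = A + A.T
    X = rng.normal(size=(n, 4))
    y = rng.integers(0, 2, size=n)
    return normalise_adj(A), X, y

A_src, X_src, y_src = toy_graph(30)   # source task (pre-training data)
A_tgt, X_tgt, y_tgt = toy_graph(30)   # target task (fine-tuning data)

# "Pre-trained" GCN weights: here just initialised with the source task in
# scope; in the paper's setting they would be learned end-to-end on the
# source graph before being carried over.
W = rng.normal(scale=0.5, size=(4, 8))

# Transfer: reuse W on the target graph and fit only a new classification head.
H_tgt = gcn_embed(A_tgt, X_tgt, W)
V = train_head(H_tgt, y_tgt, n_classes=2)
acc = ((H_tgt @ V).argmax(axis=1) == y_tgt).mean()
print(f"target accuracy with transferred GCN layer: {acc:.2f}")
```

The same skeleton extends to the abstract's full comparison by swapping `gcn_embed` for a GraphSAGE- or GIN-style aggregation and fine-tuning all weights rather than the head alone.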