Paper Title
Robust Optimization as Data Augmentation for Large-scale Graphs
Paper Authors
Abstract
Data augmentation helps neural networks generalize better by enlarging the training set, but it remains an open question how to effectively augment graph data to enhance the performance of GNNs (Graph Neural Networks). While most existing graph regularizers focus on manipulating graph topological structures by adding/removing edges, we offer a method to augment node features for better performance. We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training. By making the model invariant to small fluctuations in input data, our method helps models generalize to out-of-distribution samples and boosts model performance at test time. FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks. FLAG is also highly flexible and scalable, and is deployable with arbitrary GNN backbones and large-scale datasets. We demonstrate the efficacy and stability of our method through extensive experiments and ablation studies. We also provide intuitive observations for a deeper understanding of our method.
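The core idea — iteratively perturbing node features along the gradient of the loss while reusing the same backward pass to accumulate parameter gradients ("free" adversarial training) — can be sketched as follows. This is an illustrative toy, not the authors' implementation: it uses a linear model in place of a GNN, mean-squared-error loss, hand-written gradients, and hypothetical names (`flag_step`, `ascent_steps`, `step_size`); the real FLAG operates on GNN node features within an autograd framework.

```python
import numpy as np

def flag_step(W, X, y, ascent_steps=3, step_size=1e-2, lr=1e-1):
    """One FLAG-style training step (hedged sketch, not the paper's code).

    Model: y_hat = X @ W, a linear stand-in for a GNN readout.
    Loss:  mean squared error, with gradients written out by hand.
    """
    n = X.shape[0]
    # Initialize the feature perturbation uniformly at random.
    delta = np.random.uniform(-step_size, step_size, size=X.shape)
    grad_W_acc = np.zeros_like(W)
    for _ in range(ascent_steps):
        Xp = X + delta                       # perturbed node features
        err = Xp @ W - y                     # residual of the perturbed input
        # One "shared" backward pass: gradients w.r.t. both the
        # parameters W and the input features (i.e., the perturbation).
        grad_W = Xp.T @ err * (2.0 / n)
        grad_X = err @ W.T * (2.0 / n)
        grad_W_acc += grad_W / ascent_steps  # accumulate parameter gradient
        delta = delta + step_size * np.sign(grad_X)  # gradient *ascent* on delta
    # One descent step on the gradient accumulated across ascent steps.
    return W - lr * grad_W_acc

# Tiny demo on synthetic data: training on adversarially perturbed
# features should still fit the clean data.
np.random.seed(0)
X = np.random.randn(20, 5)
W_true = np.random.randn(5, 1)
y = X @ W_true
W = np.zeros((5, 1))
loss_before = float(np.mean((X @ W - y) ** 2))
for _ in range(200):
    W = flag_step(W, X, y)
loss_after = float(np.mean((X @ W - y) ** 2))
```

The point of the loop is that each inner iteration both strengthens the perturbation (ascent on `delta`) and contributes one share of the parameter gradient, so the adversarial augmentation adds no extra backward passes beyond ordinary multi-step training.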