Paper Title

Learning Robust Representation through Graph Adversarial Contrastive Learning

Authors

Jiayan Guo, Shangyang Li, Yue Zhao, Yan Zhang

Abstract

Existing studies show that node representations generated by graph neural networks (GNNs) are vulnerable to adversarial attacks, such as unnoticeable perturbations of the adjacency matrix and node features. It is therefore essential to learn robust representations in graph neural networks. To improve the robustness of graph representation learning, we propose a novel Graph Adversarial Contrastive Learning framework (GraphACL) that introduces adversarial augmentations into graph self-supervised learning. In this framework, we maximize the mutual information between the local and global representations of a perturbed graph and its adversarial augmentations, where the adversarial graphs can be generated by either supervised or unsupervised approaches. Based on the Information Bottleneck Principle, we theoretically prove that our method obtains a much tighter bound, thus improving the robustness of graph representation learning. Empirically, we evaluate several methods on a range of node classification benchmarks, and the results demonstrate that GraphACL achieves accuracy comparable to previous supervised methods.
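The abstract only outlines the method, so below is a minimal PyTorch sketch of the core idea it describes: generate an adversarial augmentation of the input and maximize a DGI-style mutual-information objective between local (per-node) representations of the adversarial view and the global summary of the clean view. Everything in this sketch is an assumption for illustration, not the paper's actual implementation: the one-layer dense GCN encoder, the bilinear discriminator, the FGSM-style feature attack, and all names (GCNEncoder, graphacl_loss, fgsm_feature_augment) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNEncoder(nn.Module):
    """One-layer dense GCN producing local (per-node) representations."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        # Symmetrically normalize A + I, then propagate: D^-1/2 (A+I) D^-1/2 X W
        a = adj + torch.eye(adj.size(0))
        d = a.sum(dim=1).clamp(min=1e-6).pow(-0.5)
        a_norm = d.unsqueeze(1) * a * d.unsqueeze(0)
        return F.relu(self.lin(a_norm @ x))

def local_global_scores(h, s, w):
    # Bilinear discriminator score between each local rep and a global summary.
    return h @ w @ s

def graphacl_loss(encoder, w, x, x_adv, adj):
    """DGI-style local-global MI objective: local reps of the adversarial view
    should score high against the clean global summary (positives), while
    locals of a corrupted graph (row-shuffled features) score low (negatives)."""
    h_adv = encoder(x_adv, adj)                       # locals, adversarial view
    s = torch.sigmoid(encoder(x, adj).mean(dim=0))    # global summary, clean view
    h_neg = encoder(x[torch.randperm(x.size(0))], adj)
    pos = local_global_scores(h_adv, s, w)
    neg = local_global_scores(h_neg, s, w)
    logits = torch.cat([pos, neg])
    labels = torch.cat([torch.ones_like(pos), torch.zeros_like(neg)])
    return F.binary_cross_entropy_with_logits(logits, labels)

def fgsm_feature_augment(encoder, w, x, adj, eps=1e-2):
    """Unsupervised adversarial augmentation (an assumption; the paper also
    allows supervised attacks): one FGSM step on node features in the
    direction that increases the contrastive loss itself, so no labels needed."""
    x_req = x.clone().requires_grad_(True)
    loss = graphacl_loss(encoder, w, x, x_req, adj)
    grad, = torch.autograd.grad(loss, x_req)
    return (x + eps * grad.sign()).detach()

# Toy usage on a random graph; in practice x and adj come from a dataset.
torch.manual_seed(0)
n, in_dim, hid_dim = 100, 16, 32
x = torch.randn(n, in_dim)
adj = (torch.rand(n, n) < 0.05).float()
adj = ((adj + adj.t()) > 0).float()                   # symmetrize

encoder = GCNEncoder(in_dim, hid_dim)
w = nn.Parameter(torch.empty(hid_dim, hid_dim))
nn.init.xavier_uniform_(w)
opt = torch.optim.Adam(list(encoder.parameters()) + [w], lr=1e-3)

for step in range(100):
    x_adv = fgsm_feature_augment(encoder, w, x, adj)  # adversarial augmentation
    loss = graphacl_loss(encoder, w, x, x_adv, adj)   # maximize local-global MI
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The one design choice worth noting in this sketch is that the augmentation is produced without labels, by ascending the gradient of the contrastive loss itself; this matches the abstract's claim that adversarial graphs can be generated in an unsupervised manner, while a supervised variant would instead attack a classification loss.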
