Paper Title
Graph Contrastive Learning with Implicit Augmentations
Paper Authors
Paper Abstract
Existing graph contrastive learning methods rely on augmentation techniques based on random perturbations (e.g., randomly adding or dropping edges and nodes). Nevertheless, altering certain edges or nodes can unexpectedly change the graph characteristics, and choosing the optimal perturbation ratio for each dataset requires onerous manual tuning. In this paper, we introduce Implicit Graph Contrastive Learning (iGCL), which utilizes augmentations in a latent space learned from a Variational Graph Auto-Encoder by reconstructing the graph topological structure. Importantly, instead of explicitly sampling augmentations from the latent distributions, we further propose an upper bound for the expected contrastive loss to improve the efficiency of our learning algorithm. Thus, graph semantics can be preserved within the augmentations in an intelligent way, without arbitrary manual design or prior human knowledge. Experimental results on both graph-level and node-level tasks show that the proposed method achieves state-of-the-art performance compared to other baselines, and ablation studies demonstrate the effectiveness of each module in iGCL.
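To make the idea concrete, the sketch below illustrates the explicit-sampling baseline that the abstract contrasts with: two "implicit" augmented views are drawn from a VGAE-style Gaussian posterior N(mu, sigma^2) via the reparameterization trick, and an NT-Xent contrastive loss is computed between them. This is a minimal NumPy illustration under assumed posterior parameters, not the paper's actual iGCL algorithm (which replaces this sampling with an upper bound on the expected loss); the function names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent(mu, logvar, rng):
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

# Hypothetical VGAE posterior parameters for 4 nodes with latent dim 8
# (in practice these come from a trained graph encoder).
mu = rng.standard_normal((4, 8))
logvar = 0.1 * rng.standard_normal((4, 8))

# Two views sampled from the same posterior serve as implicit augmentations.
z1 = sample_latent(mu, logvar, rng)
z2 = sample_latent(mu, logvar, rng)

def nt_xent(z1, z2, tau=0.5):
    """Standard NT-Xent contrastive loss; positives sit on the diagonal."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                            # pairwise similarities
    logits = sim - sim.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

loss = nt_xent(z1, z2)
print(float(loss))
```

Because each call to `sample_latent` draws fresh noise, every training step sees a different pair of views; iGCL's expected-loss bound removes this per-step sampling entirely.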