Paper Title
Toward Enhanced Robustness in Unsupervised Graph Representation Learning: A Graph Information Bottleneck Perspective
Paper Authors
Paper Abstract
Recent studies have revealed that Graph Neural Networks (GNNs) are vulnerable to adversarial attacks. Most existing robust graph learning methods measure model robustness using label information, rendering them infeasible when labels are unavailable. A straightforward direction is to employ the widely used Infomax technique from typical Unsupervised Graph Representation Learning (UGRL) to learn robust unsupervised representations. Nonetheless, directly transplanting Infomax from typical UGRL to robust UGRL may rest on a biased assumption. In light of this limitation of Infomax, we propose a novel unbiased robust UGRL method called Robust Graph Information Bottleneck (RGIB), which is grounded in the Information Bottleneck (IB) principle. RGIB learns node representations that are robust to adversarial perturbations by preserving the original information in the benign graph while eliminating the adversarial information in the adversarial graph. Optimizing RGIB poses two main challenges: 1) the high complexity of adversarial attacks that jointly perturb node features and graph structure during training; and 2) mutual information estimation on adversarially attacked graphs. To tackle these problems, we further propose an efficient adversarial training strategy that uses only feature perturbations, together with an effective mutual information estimator based on a subgraph-level summary. Moreover, we theoretically establish a connection between RGIB and the robustness of downstream classifiers, showing that RGIB provides a lower bound on the adversarial risk of downstream classifiers. Extensive experiments on several benchmarks and downstream tasks demonstrate the effectiveness and superiority of the proposed method.
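To make the "preserve benign / eliminate adversarial" trade-off concrete, one hedged, illustrative reading of the RGIB idea (an IB-style objective sketched from the abstract alone, not the paper's exact formulation) is the following, where G denotes the benign graph, \hat{G} an adversarially perturbed version, Z the learned node representations of encoder parameters \theta, and \beta > 0 a trade-off weight:

    \max_{\theta} \; I(Z; G) \;-\; \beta \, I(Z; \hat{G})

That is, maximize the mutual information between the representations and the benign graph while suppressing the mutual information carried over from the adversarial graph.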
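The efficiency argument in the abstract, perturbing only node features while the graph structure stays fixed during training, can also be sketched in code. Below is a minimal, self-contained PyTorch sketch written under our own assumptions: the two-layer GCN encoder, the cosine-similarity surrogate loss, and all hyperparameters are hypothetical stand-ins for the paper's actual RGIB objective and mutual information estimators, which are not reproduced here.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DenseGCNEncoder(nn.Module):
        """Tiny two-layer GCN over a dense, symmetrically normalized adjacency."""
        def __init__(self, in_dim, hid_dim, out_dim):
            super().__init__()
            self.lin1 = nn.Linear(in_dim, hid_dim)
            self.lin2 = nn.Linear(hid_dim, out_dim)

        def forward(self, x, adj):
            h = F.relu(adj @ self.lin1(x))
            return adj @ self.lin2(h)

    def normalize_adj(a):
        """Compute D^{-1/2} (A + I) D^{-1/2} with self-loops added."""
        a = a + torch.eye(a.size(0))
        d_inv_sqrt = a.sum(dim=1).pow(-0.5)
        return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)

    def feature_pgd(encoder, x, adj, z_clean, steps=5, eps=0.05, alpha=0.02):
        """Feature-only PGD attack: perturb node features (structure stays fixed,
        mirroring the abstract's efficiency argument) to push representations away
        from the clean ones. The distance-based attack loss is an assumption."""
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(steps):
            z_adv = encoder(x + delta, adj)
            loss = -F.cosine_similarity(z_adv, z_clean.detach(), dim=-1).mean()
            loss.backward()
            with torch.no_grad():
                delta += alpha * delta.grad.sign()  # ascent on the attack loss
                delta.clamp_(-eps, eps)             # project onto the L-inf ball
            delta.grad.zero_()
        return delta.detach()

    # Toy usage on a random graph; the training loss below is a crude surrogate
    # (keeping clean and adversarial views aligned), not the paper's
    # mutual-information-based RGIB loss.
    torch.manual_seed(0)
    n, d = 32, 16
    x = torch.randn(n, d)
    adj = normalize_adj((torch.rand(n, n) < 0.1).float())

    encoder = DenseGCNEncoder(d, 32, 8)
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

    for epoch in range(10):
        z_clean = encoder(x, adj)
        delta = feature_pgd(encoder, x, adj, z_clean)
        z_adv = encoder(x + delta, adj)
        loss = -F.cosine_similarity(z_adv, z_clean, dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

Restricting the inner attack to feature perturbations keeps each step differentiable and avoids the combinatorial cost of searching over discrete edge modifications, which is one plausible reading of the efficiency claim above.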