Paper Title
Uncertainty-aware Attention Graph Neural Network for Defending Adversarial Attacks
Paper Authors
Paper Abstract
With the increasing popularity of graph-based learning, graph neural networks (GNNs) have emerged as an essential tool for gaining insights from graphs. However, unlike conventional CNNs, which have been extensively explored and exhaustively tested, concerns remain about the robustness of GNNs in critical settings such as financial services. The main reason is that existing GNNs usually serve as black boxes for prediction and do not report the uncertainty of their predictions. On the other hand, recent advances in Bayesian deep learning for CNNs have demonstrated success in quantifying and explaining such uncertainties to fortify CNN models. Motivated by these observations, we propose UAG, the first systematic solution for defending against adversarial attacks on GNNs by identifying and exploiting hierarchical uncertainties in GNNs. UAG develops a Bayesian Uncertainty Technique (BUT) to explicitly capture uncertainties in GNNs and further employs an Uncertainty-aware Attention Technique (UAT) to defend against adversarial attacks on GNNs. Extensive experiments show that our proposed defense approach outperforms state-of-the-art solutions by a significant margin.
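The abstract does not spell out how BUT and UAT are realized. The snippet below is a minimal sketch of one plausible reading, assuming Monte Carlo dropout as a stand-in for the Bayesian uncertainty estimate and an attention rule that down-weights messages from high-uncertainty neighbors; all names in it (SimpleGCNLayer, mc_dropout_uncertainty, uncertainty_aware_attention) are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: MC-dropout uncertainty per node, then attention
# logits penalized by that uncertainty before aggregation. Not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGCNLayer(nn.Module):
    """Dense-adjacency GCN layer with dropout so stochastic forward passes are possible."""
    def __init__(self, in_dim, out_dim, p_drop=0.5):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        self.drop = nn.Dropout(p_drop)

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) normalized adjacency
        return adj @ self.drop(F.relu(self.lin(x)))


def mc_dropout_uncertainty(model, x, adj, n_samples=20):
    """Per-node predictive entropy from Monte Carlo dropout (assumed proxy for BUT)."""
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x, adj), dim=-1) for _ in range(n_samples)]
        ).mean(0)  # (N, C) averaged predictive distribution
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)  # (N,)
    # normalize by log(C) so the uncertainty lies in [0, 1]
    return entropy / torch.log(torch.tensor(float(probs.size(-1))))


def uncertainty_aware_attention(scores, adj, node_uncertainty):
    """Down-weight attention toward high-uncertainty neighbors (assumed UAT analog)."""
    # scores: (N, N) raw attention logits; node_uncertainty: (N,)
    damped = scores - node_uncertainty.unsqueeze(0)        # penalize uncertain sources
    damped = damped.masked_fill(adj == 0, float("-inf"))   # restrict to existing edges
    return F.softmax(damped, dim=-1)
```

In this reading, the entropy of repeated stochastic forward passes serves as the uncertainty signal, and subtracting it from the attention logits before the softmax is one simple way to make aggregation attend less to nodes the model is unsure about.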