Paper Title

Adversarial Immunization for Certifiable Robustness on Graphs

Authors

Shuchang Tao, Huawei Shen, Qi Cao, Liang Hou, Xueqi Cheng

Abstract

Despite achieving strong performance on semi-supervised node classification tasks, graph neural networks (GNNs) are vulnerable to adversarial attacks, similar to other deep learning models. Existing research focuses on developing either robust GNN models or attack detection methods against adversarial attacks on graphs. However, little attention has been paid to the potential and practice of immunization against adversarial attacks on graphs. In this paper, we propose and formulate the graph adversarial immunization problem, i.e., vaccinating an affordable fraction of node pairs, connected or unconnected, to improve the certifiable robustness of a graph against any admissible adversarial attack. We further propose an effective algorithm, called AdvImmune, which optimizes with meta-gradients in a discrete way to circumvent the computationally expensive combinatorial optimization that arises when solving the adversarial immunization problem. Experiments are conducted on two citation networks and one social network. Experimental results demonstrate that the proposed AdvImmune method remarkably improves the ratio of robust nodes by 12%, 42%, and 65% on the three datasets respectively, with an affordable immune budget of only 5% of edges.
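As a rough illustration of the discrete selection step the abstract describes (not the authors' code), the sketch below greedily picks the node pairs with the largest estimated robustness gain under a fixed immune budget. The input matrix `meta_grad` is a hypothetical stand-in for the meta-gradients of the robustness certificate with respect to adjacency entries, which a full implementation of AdvImmune would compute by differentiating through the certification objective.

```python
import numpy as np

def select_immune_pairs(meta_grad, budget):
    """Greedily choose `budget` node pairs to immunize.

    meta_grad: (n, n) array where entry (i, j) is a hypothetical estimate
        of the gain in certified robustness from immunizing pair (i, j).
    Returns a list of (i, j) unordered node pairs, largest gain first.
    """
    n = meta_grad.shape[0]
    iu = np.triu_indices(n, k=1)            # consider each unordered pair once
    gains = meta_grad[iu]
    top = np.argsort(gains)[::-1][:budget]  # greedy: pick the largest gains
    return [(int(iu[0][k]), int(iu[1][k])) for k in top]
```

This one-shot top-k selection is the simplest discrete surrogate; an iterative variant would recompute the meta-gradients after each immunized pair, at higher cost.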
