Paper Title

Node Copying for Protection Against Graph Neural Network Topology Attacks

Paper Authors

Florence Regol, Soumyasundar Pal, Mark Coates

Paper Abstract

Adversarial attacks can affect the performance of existing deep learning models. With the increased interest in graph-based machine learning techniques, there have been investigations which suggest that these models are also vulnerable to attacks. In particular, corruptions of the graph topology can severely degrade the performance of graph-based learning algorithms. This is because the prediction capability of these algorithms relies mostly on the similarity structure imposed by the graph connectivity. Therefore, detecting the location of the corruption and correcting the induced errors becomes crucial. There has been some recent work which tackles the detection problem; however, these methods do not address the effect of the attack on the downstream learning task. In this work, we propose an algorithm that uses node copying to mitigate the degradation in classification that is caused by adversarial attacks. The proposed methodology is applied only after the model for the downstream task is trained, and the added computation cost scales well for large graphs. Experimental results show the effectiveness of our approach on several real-world datasets.
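As a rough illustration of the node-copying idea, the sketch below replaces the connectivity of a suspected-corrupted node with that of a feature-similar node and then leaves the already-trained classifier to be re-run on the repaired graph. This is not the authors' algorithm: the cosine-similarity heuristic, the `copy_node` and `most_similar_node` helpers, and the toy graph are assumptions introduced only for this example.

```python
# Minimal, hypothetical sketch of node copying on an adjacency matrix.
# The connectivity of a suspected-corrupted node is overwritten with that of
# a feature-similar "source" node; a trained GNN would then be re-applied to
# the repaired graph without retraining. All choices here are illustrative
# assumptions, not the method described in the paper.
import numpy as np

def copy_node(adj: np.ndarray, target: int, source: int) -> np.ndarray:
    """Return a copy of `adj` where `target`'s edges are replaced by `source`'s."""
    repaired = adj.copy()
    repaired[target, :] = adj[source, :]
    repaired[:, target] = adj[:, source]
    repaired[target, target] = 0  # keep the graph free of self-loops
    return repaired

def most_similar_node(features: np.ndarray, target: int) -> int:
    """Pick the node whose features are closest to `target` (cosine similarity)."""
    norm = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    sims = norm @ norm[target]
    sims[target] = -np.inf  # exclude the target itself
    return int(np.argmax(sims))

# Toy usage: 4 nodes, node 0 is suspected to have corrupted edges.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]])
X = np.random.rand(4, 8)  # node features
src = most_similar_node(X, target=0)
A_repaired = copy_node(A, target=0, source=src)
# The trained downstream model would now be evaluated on (A_repaired, X).
```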
