Paper Title
Stealing Links from Graph Neural Networks
Paper Authors
Paper Abstract
Graph data, such as chemical networks and social networks, may be deemed confidential/private because the data owner often spends lots of resources collecting the data, or because the data contains sensitive information, e.g., social relationships. Recently, neural networks were extended to graph data, where they are known as graph neural networks (GNNs). Due to their superior performance, GNNs have many applications, such as healthcare analytics, recommender systems, and fraud detection. In this work, we propose the first attacks to steal a graph from the outputs of a GNN model that is trained on the graph. Specifically, given black-box access to a GNN model, our attacks can infer whether there exists a link between any pair of nodes in the graph used to train the model. We call our attacks link stealing attacks. We propose a threat model to systematically characterize an adversary's background knowledge along three dimensions, which in total leads to a comprehensive taxonomy of 8 different link stealing attacks. We propose multiple novel methods to realize these 8 attacks. Extensive experiments on 8 real-world datasets show that our attacks are effective at stealing links, e.g., AUC (area under the ROC curve) is above 0.95 in multiple cases. Our results indicate that the outputs of a GNN model reveal rich information about the structure of the graph used to train the model.
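As background for the abstract's core claim, the intuition behind the simplest (unsupervised) form of such an attack can be sketched: query the black-box GNN for the posteriors (class-probability outputs) of two nodes and predict a link when the posteriors are sufficiently similar, since message passing tends to make connected nodes' outputs alike. The toy posteriors, the cosine-similarity metric, and the 0.9 threshold below are illustrative assumptions for this sketch, not the paper's exact methods.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two posterior vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict_link(posterior_u, posterior_v, threshold=0.9):
    # Hypothesis: nodes that are connected in the training graph tend to
    # receive similar posteriors from the GNN, because message passing
    # aggregates information across neighbors. Predict a link when the
    # similarity exceeds a threshold (0.9 here is an arbitrary choice).
    return cosine_similarity(posterior_u, posterior_v) >= threshold

# Toy posteriors standing in for black-box GNN query results (illustrative).
linked_pair = (np.array([0.7, 0.2, 0.1]), np.array([0.65, 0.25, 0.1]))
unlinked_pair = (np.array([0.7, 0.2, 0.1]), np.array([0.1, 0.15, 0.75]))

print(predict_link(*linked_pair))    # similar posteriors -> predict a link
print(print_result := predict_link(*unlinked_pair))  # dissimilar -> no link
```

In the paper's taxonomy, richer background knowledge (e.g., partial graph structure or a shadow dataset) allows replacing this fixed threshold with a learned classifier over posterior-distance features.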