Paper Title

Explain Graph Neural Networks to Understand Weighted Graph Features in Node Classification

Paper Authors

Xiaoxiao Li, Joao Saude

Paper Abstract

Real-world data collected from different applications, which carry additional topological structure and connection information, can naturally be represented as weighted graphs. For the node labeling problem, Graph Neural Networks (GNNs) are a powerful tool that can mimic experts' decisions on node labeling. GNNs combine node features, connection patterns, and graph structure by using a neural network to embed node information and pass it through edges in the graph. We want to identify the patterns in the input data used by the GNN model to make a decision and examine whether the model works as we desire. However, due to the complex data representation and non-linear transformations, explaining decisions made by GNNs is challenging. In this work, we propose new graph feature explanation methods to identify the informative components and important node features. In addition, we propose a pipeline to identify the key factors used for node classification. We use four datasets (two synthetic and two real) to validate our methods. Our results demonstrate that our explanation approach can mimic data patterns used for node classification by human interpretation and can disentangle different features in the graphs. Furthermore, our explanation methods can be used for understanding data, debugging GNN models, and examining model decisions.
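Since the abstract only describes the mechanism in prose, a minimal sketch may help make it concrete. The layer below implements a generic message-passing step on a weighted graph, followed by a standard gradient-saliency probe, assuming a PyTorch setting. The name WeightedGraphLayer, the toy adjacency matrix, and the saliency check are illustrative assumptions, not the authors' actual model or explanation method.

```python
import torch
import torch.nn as nn

class WeightedGraphLayer(nn.Module):
    """One message-passing step on a weighted graph (illustrative sketch).

    Each node's embedding is updated from an edge-weighted mean of its
    neighbors' features followed by a learned linear map, mirroring the
    "embed node information and pass it through edges" step described in
    the abstract. This is a generic layer, not the authors' model.
    """

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (num_nodes, in_dim) node feature matrix
        # adj: (num_nodes, num_nodes) weighted adjacency matrix
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1e-12)
        messages = (adj / deg) @ x    # edge-weighted neighbor aggregation
        return self.linear(messages)  # a nonlinearity would follow in deeper GNNs

# Toy weighted graph: 4 nodes, 3 input features, 2 node classes.
adj = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 2., 0.],
                    [0., 2., 0., 1.],
                    [0., 0., 1., 0.]])
x = torch.randn(4, 3, requires_grad=True)
layer = WeightedGraphLayer(3, 2)
logits = layer(x, adj)  # (4, 2) per-node class scores

# Generic gradient-based saliency (a common baseline, not necessarily the
# paper's method): how strongly each input feature of each node influences
# node 0's score for class 1.
logits[0, 1].backward()
saliency = x.grad.abs()  # (4, 3) feature-importance map
```

The two quantities this sketch exposes, the edge-weighted aggregation over the graph and the per-feature gradients, correspond to the two targets named in the abstract: informative graph components and important node features.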
