Paper Title
XGNN: Towards Model-Level Explanations of Graph Neural Networks
Paper Authors
Paper Abstract
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information, and have achieved promising performance on many graph tasks. However, GNNs are mostly treated as black boxes and lack human-intelligible explanations. They therefore cannot be fully trusted or used in certain application domains unless the GNN models can be explained. In this work, we propose a novel approach, known as XGNN, to interpret GNNs at the model level. Our approach provides high-level insights and a generic understanding of how GNNs work. In particular, we propose to explain GNNs by training a graph generator so that the generated graph patterns maximize a certain prediction of the model. We formulate graph generation as a reinforcement learning task in which, at each step, the graph generator predicts how to add an edge to the current graph. The graph generator is trained via a policy gradient method based on information from the trained GNN. In addition, we incorporate several graph rules to encourage the generated graphs to be valid. Experimental results on both synthetic and real-world datasets show that our proposed method helps understand and verify trained GNNs. Furthermore, our experimental results indicate that the generated graphs can provide guidance on how to improve trained GNNs.
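The reinforcement-learning formulation described in the abstract can be sketched as a minimal REINFORCE loop. This is only an illustrative sketch, not the paper's implementation: `gnn_score` is a hypothetical stand-in for the trained GNN's predicted probability of the target class (here it simply rewards triangle motifs), and the policy is a flat logit table over candidate edges rather than the paper's learned graph-generator network.

```python
import math
import random

def gnn_score(edges, n):
    """Hypothetical stand-in for a trained GNN's class probability.

    In XGNN this would be p(target class | graph); here we reward
    triangles so the toy generator has a motif to discover.
    """
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Each triangle is counted 6 times (3 vertices x 2 directions).
    tri = sum(len(adj[u] & adj[v]) for u in range(n) for v in adj[u])
    return tri / 6.0

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def train_generator(n=4, steps=3, episodes=300, lr=0.1, seed=0):
    """REINFORCE over edge additions: at each step the policy picks one
    unused candidate edge; the episode reward is the (stand-in) GNN score
    of the final graph, and logits are updated by the policy gradient."""
    rng = random.Random(seed)
    cand = [(u, v) for u in range(n) for v in range(u + 1, n)]
    logits = [0.0] * len(cand)
    baseline = 0.0  # running-mean baseline to reduce variance
    for _ in range(episodes):
        chosen, trajectory = [], []
        mask = [True] * len(cand)
        for _ in range(steps):
            valid = [i for i in range(len(cand)) if mask[i]]
            probs = softmax([logits[i] for i in valid])
            a = rng.choices(valid, weights=probs)[0]
            trajectory.append((valid, a))
            mask[a] = False
            chosen.append(cand[a])
        r = gnn_score(chosen, n)
        baseline += 0.05 * (r - baseline)
        adv = r - baseline
        # grad of log softmax at action a: (1 - p_a) for a, -p_i otherwise
        for valid, a in trajectory:
            probs = softmax([logits[i] for i in valid])
            for i, p in zip(valid, probs):
                grad = (1.0 - p) if i == a else -p
                logits[i] += lr * adv * grad
    return logits, cand
```

The paper's graph rules (e.g., degree limits, chemical validity) would enter this sketch as extra terms in the reward or as hard masks on the candidate-edge set at each step.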