Paper Title
Explainability in subgraphs-enhanced Graph Neural Networks
Paper Authors

Paper Abstract
Recently, subgraphs-enhanced Graph Neural Networks (SGNNs) have been introduced to enhance the expressive power of Graph Neural Networks (GNNs), which has been proved to be no higher than that of the 1-dimensional Weisfeiler-Leman isomorphism test. The new paradigm suggests using subgraphs extracted from the input graph to improve the model's expressiveness, but the additional complexity exacerbates an already challenging problem in GNNs: explaining their predictions. In this work, we adapt PGExplainer, one of the most recent explainers for GNNs, to SGNNs. The proposed explainer accounts for the contribution of all the different subgraphs and can produce meaningful explanations that humans can interpret. Experiments on both real and synthetic datasets show that our framework successfully explains the decision process of an SGNN on graph classification tasks.
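The abstract describes the approach only at a high level. The snippet below is a minimal, hypothetical sketch (not the authors' implementation) of the general idea: a PGExplainer-style MLP scores the edges of each extracted subgraph, and the per-subgraph edge masks are then aggregated back onto the edges of the original graph. The names (`EdgeMaskPredictor`, `aggregate_masks`) and the simple averaging aggregation are assumptions made for illustration.

```python
# Hypothetical sketch of a PGExplainer-style explainer for an SGNN.
# Assumptions (not from the paper): an MLP scores each edge from its endpoint
# embeddings, and per-subgraph masks are averaged back onto original edges.
import torch
import torch.nn as nn


class EdgeMaskPredictor(nn.Module):
    """Maps the concatenated endpoint embeddings of an edge to an importance score."""

    def __init__(self, emb_dim: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, node_emb: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # node_emb: [num_nodes, emb_dim]; edge_index: [2, num_edges]
        src, dst = edge_index
        edge_feat = torch.cat([node_emb[src], node_emb[dst]], dim=-1)
        return torch.sigmoid(self.mlp(edge_feat)).squeeze(-1)  # one score per edge


def aggregate_masks(per_subgraph_masks, edge_ids, num_edges_original):
    """Average the importance each original edge receives across the subgraphs
    it appears in (edges absent from every subgraph keep importance 0)."""
    total = torch.zeros(num_edges_original)
    count = torch.zeros(num_edges_original)
    for mask, ids in zip(per_subgraph_masks, edge_ids):
        total.index_add_(0, ids, mask)
        count.index_add_(0, ids, torch.ones_like(mask))
    return total / count.clamp(min=1)


if __name__ == "__main__":
    torch.manual_seed(0)
    predictor = EdgeMaskPredictor(emb_dim=16)
    # Two toy subgraphs of an original graph that has 6 edges in total.
    masks, ids = [], []
    for sub_edges in ([0, 1, 2, 3], [2, 3, 4, 5]):
        edge_ids = torch.tensor(sub_edges)                  # original-edge indices
        edge_index = torch.randint(0, 5, (2, len(sub_edges)))  # toy connectivity
        node_emb = torch.randn(5, 16)                       # toy GNN embeddings
        masks.append(predictor(node_emb, edge_index).detach())
        ids.append(edge_ids)
    print(aggregate_masks(masks, ids, num_edges_original=6))
```

In this sketch, an original edge that appears in several subgraphs receives the mean of its per-subgraph scores, which gives a single, human-readable importance mask over the input graph; other aggregation rules (e.g., maximum or learned weighting) would fit the same interface.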