Paper Title
Towards Explanation for Unsupervised Graph-Level Representation Learning
Paper Authors
Paper Abstract
Due to the superior performance of Graph Neural Networks (GNNs) in various domains, there is increasing interest in the GNN explanation problem: "\emph{which fraction of the input graph is the most crucial in determining the model's decision?}" Existing explanation methods focus on supervised settings, \eg, node classification and graph classification, while the explanation of unsupervised graph-level representation learning remains unexplored. The opaqueness of graph representations may lead to unexpected risks when they are deployed in high-stakes decision-making scenarios. In this paper, we advance the Information Bottleneck principle (IB) to tackle the proposed explanation problem for unsupervised graph representations, which leads to a novel principle, \textit{Unsupervised Subgraph Information Bottleneck} (USIB). We also theoretically analyze the connection between graph representations and explanatory subgraphs on the label space, which reveals that the expressiveness and robustness of representations benefit the fidelity of explanatory subgraphs. Experimental results on both synthetic and real-world datasets demonstrate the superiority of our developed explainer and the validity of our theoretical analysis.
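For context, the classic Information Bottleneck seeks a compressed representation of an input that stays maximally informative about a target. A natural way to instantiate this for the unsupervised setting described above (a sketch assuming standard IB notation, not the paper's verbatim objective) is to replace the label with the learned graph representation $Z$ and search over explanatory subgraphs $G_s$ of the input graph $G$:

\[
\max_{G_s \subseteq G} \; I(Z; G_s) \;-\; \beta \, I(G; G_s),
\]

where $I(\cdot;\cdot)$ denotes mutual information and $\beta > 0$ trades off the informativeness of the subgraph about the representation against its compression of the input. The symbols $Z$, $G_s$, $G$, and $\beta$ here are illustrative notation; the paper's precise USIB formulation may differ.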