Paper Title

Encoding Concepts in Graph Neural Networks

Paper Authors

Lucie Charlotte Magister, Pietro Barbiero, Dmitry Kazhdan, Federico Siciliano, Gabriele Ciravegna, Fabrizio Silvestri, Mateja Jamnik, Pietro Lio

Paper Abstract

The opaque reasoning of Graph Neural Networks induces a lack of human trust. Existing graph network explainers attempt to address this issue by providing post-hoc explanations, however, they fail to make the model itself more interpretable. To fill this gap, we introduce the Concept Encoder Module, the first differentiable concept-discovery approach for graph networks. The proposed approach makes graph networks explainable by design by first discovering graph concepts and then using these to solve the task. Our results demonstrate that this approach allows graph networks to: (i) attain model accuracy comparable with their equivalent vanilla versions, (ii) discover meaningful concepts that achieve high concept completeness and purity scores, (iii) provide high-quality concept-based logic explanations for their prediction, and (iv) support effective interventions at test time: these can increase human trust as well as significantly improve model performance.
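
The abstract describes an explainable-by-design pipeline: a GNN computes node embeddings, a concept encoder turns them into concept activations, and the final prediction is made from those concepts, which is what makes test-time interventions possible. Below is a minimal, hypothetical sketch of that pipeline in plain PyTorch. The module names (SimpleGNNLayer, ConceptEncoder, ConceptGNN), the prototype-based soft clustering, and all hyperparameters are illustrative assumptions, not the paper's exact Concept Encoder Module.

# Hypothetical sketch of a "concepts by design" GNN, as described in the abstract.
# The clustering scheme (soft assignment to learnable prototypes) is an assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGNNLayer(nn.Module):
    """One round of mean-neighbourhood message passing over a dense adjacency matrix."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # Add self-loops and normalise so each node averages over its neighbourhood.
        adj = adj + torch.eye(adj.size(0), device=adj.device)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return F.relu(self.lin((adj / deg) @ x))


class ConceptEncoder(nn.Module):
    """Softly assigns each node embedding to one of k learnable concept prototypes."""

    def __init__(self, emb_dim, n_concepts, temperature=0.5):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_concepts, emb_dim))
        self.temperature = temperature

    def forward(self, h):
        # Negative squared distance to each prototype -> soft concept score per node.
        dists = torch.cdist(h, self.prototypes) ** 2
        return F.softmax(-dists / self.temperature, dim=-1)  # (n_nodes, n_concepts)


class ConceptGNN(nn.Module):
    """GNN that first produces concept activations, then predicts the label from them."""

    def __init__(self, in_dim, hidden_dim, n_concepts, n_classes):
        super().__init__()
        self.gnn1 = SimpleGNNLayer(in_dim, hidden_dim)
        self.gnn2 = SimpleGNNLayer(hidden_dim, hidden_dim)
        self.concept_encoder = ConceptEncoder(hidden_dim, n_concepts)
        self.readout = nn.Linear(n_concepts, n_classes)  # weights are per-concept, hence inspectable

    def forward(self, x, adj):
        h = self.gnn2(self.gnn1(x, adj), adj)
        node_concepts = self.concept_encoder(h)      # per-node concept scores
        graph_concepts = node_concepts.mean(dim=0)   # pool to a graph-level concept vector
        logits = self.readout(graph_concepts)
        return logits, node_concepts


# Tiny smoke test on a random 6-node graph with 4 input features.
if __name__ == "__main__":
    x = torch.randn(6, 4)
    adj = (torch.rand(6, 6) > 0.6).float()
    adj = ((adj + adj.t()) > 0).float()              # symmetrise the random adjacency
    model = ConceptGNN(in_dim=4, hidden_dim=16, n_concepts=5, n_classes=2)
    logits, concepts = model(x, adj)
    print(logits.shape, concepts.shape)              # torch.Size([2]) torch.Size([6, 5])

Because the readout sees only the pooled concept vector, a test-time intervention can be emulated by clamping individual entries of graph_concepts before the linear layer. This mirrors the style of intervention the abstract alludes to, though the paper's actual mechanism may differ.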
