Paper Title
Fundamental Limits in Formal Verification of Message-Passing Neural Networks
Paper Authors
Paper Abstract
Output reachability and adversarial robustness are among the most relevant safety properties of neural networks. We show that in the context of Message Passing Neural Networks (MPNN), a common Graph Neural Network (GNN) model, formal verification is impossible. In particular, we show that output reachability of graph-classifier MPNNs, working over graphs of unbounded size, non-trivial degree and sufficiently expressive node labels, cannot be verified formally: there is no algorithm that answers correctly (with yes or no), given an MPNN, whether there exists some valid input to the MPNN such that the corresponding output satisfies a given specification. However, we also show that output reachability and adversarial robustness of node-classifier MPNNs can be verified formally when a limit on the degree of input graphs is given a priori. We discuss the implications of these results with the aim of obtaining a complete picture of the possibility, in principle, of formally verifying GNNs, depending on the expressiveness of the involved GNN models and input-output specifications.
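As a rough formalization of the decision problem underlying the negative result (the symbols N, \mathcal{G}, and \psi below are illustrative names for an MPNN, a class of input graphs, and an output specification; they are not the paper's own notation), output reachability can be read as follows:

% A hedged sketch, not the paper's formalization: N, \mathcal{G}, and \psi
% stand for an MPNN, a class of admissible input graphs, and an output
% specification, respectively.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
  \textsc{OutputReach}(N,\psi):\qquad
  \text{decide whether } \exists\, G \in \mathcal{G}\ \text{such that}\ N(G) \models \psi .
\]
% Negative result of the abstract: no algorithm decides this problem for
% graph-classifier MPNNs when \mathcal{G} is the class of all graphs of
% unbounded size, non-trivial degree and sufficiently expressive node labels.
% Positive result: for node-classifier MPNNs, this question (and the
% corresponding adversarial-robustness question) becomes decidable once a
% maximum degree d on input graphs is fixed a priori, i.e. over \mathcal{G}_{\le d}.
\end{document}

Read this way, the contrast in the abstract is between quantifying over an unrestricted class of input graphs (undecidable) and over a degree-bounded class fixed in advance (decidable).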