Paper Title

Differentiable Logics for Neural Network Training and Verification

Paper Authors

Natalia Slusarz, Ekaterina Komendantskaya, Matthew L. Daggitt, Robert Stewart

Paper Abstract

The rising popularity of neural networks (NNs) in recent years and their increasing prevalence in real-world applications have drawn attention to the importance of their verification. While verification is known to be computationally hard in theory, many techniques have been proposed for solving it in practice. It has been observed in the literature that, by default, neural networks rarely satisfy the logical constraints we want to verify. A good course of action is therefore to train the given NN to satisfy said constraints prior to verification. This idea is sometimes referred to as continuous verification, referring to the loop between training and verification. Usually, training with constraints is implemented by specifying a translation from a given formal logic language into loss functions, which are then used to train the neural network. Because these functions need to be differentiable for training purposes, such translations are called differentiable logics (DLs). This raises several research questions. What kinds of differentiable logics are possible? What difference does a specific choice of DL make in the context of continuous verification? What are the desirable criteria for a DL, viewed from the perspective of the resulting loss function? In this extended abstract we discuss and answer these questions.
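The translation from logic to loss can be made concrete with a small example. Below is a minimal sketch, assuming a DL2-style translation (DL2 is one concrete DL from the literature, not necessarily the one the authors advocate): a comparison becomes a non-negative penalty that is zero exactly when the constraint holds, and conjunction and disjunction combine penalties in a way that preserves differentiability. The helper names `leq`, `conj`, and `disj` are hypothetical, introduced here for illustration.

```python
import torch

def leq(a, b):
    """Translate the constraint a <= b into a non-negative penalty:
    zero exactly when the constraint holds, growing with the violation."""
    return torch.relu(a - b)

def conj(p, q):
    """Conjunction: both constraints must hold, so violations add up."""
    return p + q

def disj(p, q):
    """Disjunction: one satisfied constraint (penalty zero) suffices,
    so the product of the penalties is zero."""
    return p * q

# Example constraint: a network output y should lie in [0, 1].
x = torch.tensor(0.5, requires_grad=True)
y = 3.0 * x  # stand-in for a network's output
zero, one = torch.tensor(0.0), torch.tensor(1.0)
loss = conj(leq(zero, y), leq(y, one))  # 0 <= y  and  y <= 1
loss.backward()  # differentiable, hence usable as a training loss
print(loss.item(), x.grad.item())  # 0.5 and 3.0: the gradient pushes y toward [0, 1]
```

Different DLs in the literature differ precisely in these choices, for instance in how conjunction aggregates penalties, which is why the specific choice of DL shapes the resulting loss landscape and matters for continuous verification.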
