Paper Title
Constrained Training of Neural Networks via Theorem Proving
Paper Authors
Paper Abstract
We introduce a theorem proving approach to the specification and generation of temporal logical constraints for training neural networks. We formalise a deep embedding of linear temporal logic over finite traces (LTL$_f$) and an associated evaluation function characterising its semantics within the higher-order logic of the Isabelle theorem prover. We then proceed to formalise a loss function $\mathcal{L}$ that we formally prove to be sound, and differentiable to a function $d\mathcal{L}$. We subsequently use Isabelle's automatic code generation mechanism to produce OCaml versions of LTL$_f$, $\mathcal{L}$ and $d\mathcal{L}$ that we integrate with PyTorch via OCaml bindings for Python. We show that, when used for training in an existing deep learning framework for dynamic movement, our approach produces expected results for common movement specification patterns such as obstacle avoidance and patrolling. The distinctive benefit of our approach is the fully rigorous method for constrained training, eliminating many of the risks inherent to ad-hoc implementations of logical aspects directly in an "unsafe" programming language such as Python.
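To illustrate the deep-embedding idea the abstract describes, here is a minimal Python sketch of LTL$_f$ formulas as an AST together with an evaluation function giving the standard finite-trace semantics. This is not the paper's artefact: the actual formalisation lives in Isabelle/HOL and is exported to OCaml via code generation, and all constructor and function names below are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import FrozenSet, List

# Hypothetical mirror of an LTL_f deep embedding: each connective is an
# AST constructor, and `holds` characterises the finite-trace semantics.
class Formula: pass

@dataclass(frozen=True)
class TT(Formula): pass          # truth constant

@dataclass(frozen=True)
class Prop(Formula):             # atomic proposition
    name: str

@dataclass(frozen=True)
class Not(Formula):
    sub: Formula

@dataclass(frozen=True)
class And(Formula):
    left: Formula
    right: Formula

@dataclass(frozen=True)
class Next(Formula):
    sub: Formula

@dataclass(frozen=True)
class Until(Formula):
    left: Formula
    right: Formula

def holds(phi: Formula, trace: List[FrozenSet[str]], i: int = 0) -> bool:
    """Evaluate phi over a finite trace (list of sets of true atoms),
    starting at position i."""
    if isinstance(phi, TT):
        return True
    if isinstance(phi, Prop):
        return phi.name in trace[i]
    if isinstance(phi, Not):
        return not holds(phi.sub, trace, i)
    if isinstance(phi, And):
        return holds(phi.left, trace, i) and holds(phi.right, trace, i)
    if isinstance(phi, Next):
        # strong next: false at the last position of a finite trace
        return i + 1 < len(trace) and holds(phi.sub, trace, i + 1)
    if isinstance(phi, Until):
        # some future position satisfies the right argument, and the
        # left argument holds at every position strictly before it
        return any(
            holds(phi.right, trace, j)
            and all(holds(phi.left, trace, k) for k in range(i, j))
            for j in range(i, len(trace))
        )
    raise TypeError(f"unknown formula: {phi!r}")

# Derived operators, defined as in the usual LTL_f presentations.
def Eventually(phi: Formula) -> Formula:
    return Until(TT(), phi)

def Always(phi: Formula) -> Formula:
    return Not(Eventually(Not(phi)))

# Obstacle avoidance, one of the movement patterns the paper evaluates,
# is expressible as "always not obstacle":
trace = [frozenset({"start"}), frozenset(), frozenset({"goal"})]
print(holds(Always(Not(Prop("obstacle"))), trace))  # True
print(holds(Eventually(Prop("goal")), trace))       # True
```

In the paper's pipeline this Boolean evaluation is accompanied by a quantitative loss $\mathcal{L}$ (proved sound with respect to the semantics) and its derivative $d\mathcal{L}$, which is what makes the constraints usable as a training signal in PyTorch.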