Paper title
BERN-NN: Tight Bound Propagation For Neural Networks Using Bernstein Polynomial Interval Arithmetic
Paper authors
Paper abstract
In this paper, we present BERN-NN, an efficient tool for performing bound propagation on Neural Networks (NNs). Bound propagation is a critical step in a wide range of NN model checkers and reachability analysis tools. Given a bounded input set, bound propagation algorithms aim to compute tight bounds on the output of the NN. To date, linear and convex optimization have been used to perform bound propagation. Since neural networks are highly non-convex, state-of-the-art bound propagation techniques suffer from introducing large errors. To circumvent this drawback, BERN-NN approximates the bounds of each neuron using a class of polynomials called Bernstein polynomials. Bernstein polynomials enjoy several interesting properties that allow BERN-NN to obtain tighter bounds than techniques relying on linear and convex approximations. BERN-NN is efficiently parallelized on graphics processing units (GPUs). Extensive numerical results show that the bounds obtained by BERN-NN are orders of magnitude tighter than those obtained by state-of-the-art verifiers based on linear programming and linear interval arithmetic. Moreover, BERN-NN is both faster and produces tighter outputs compared to convex programming approaches like alpha-CROWN.
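The key property the abstract alludes to is the Bernstein enclosure: because the Bernstein basis polynomials are nonnegative and sum to one on the domain, any Bernstein-form polynomial lies between its smallest and largest coefficient. The sketch below (an illustration of this property, not BERN-NN's actual implementation; the function names and the choice of `tanh` as the nonlinearity are ours) builds the degree-n Bernstein approximant of a scalar activation on an interval and reads off interval bounds directly from its coefficients.

```python
from math import comb, tanh

def bernstein_coeffs(f, n, lo, hi):
    # Sample f at n+1 equally spaced points of [lo, hi]; these samples are
    # exactly the coefficients of the degree-n Bernstein approximant of f.
    return [f(lo + (hi - lo) * k / n) for k in range(n + 1)]

def bernstein_eval(coeffs, x, lo, hi):
    # Evaluate the Bernstein-form polynomial at x in [lo, hi].
    # Each basis term comb(n, k) * t^k * (1 - t)^(n - k) is nonnegative and
    # the terms sum to 1, so the result is a convex combination of coeffs.
    n = len(coeffs) - 1
    t = (x - lo) / (hi - lo)
    return sum(c * comb(n, k) * t**k * (1 - t)**(n - k)
               for k, c in enumerate(coeffs))

# Degree-8 Bernstein approximant of tanh on [-2, 2].
coeffs = bernstein_coeffs(tanh, 8, -2.0, 2.0)

# Enclosure property: the approximant never leaves [min(coeffs), max(coeffs)],
# giving sound lower/upper bounds without any optimization.
lower, upper = min(coeffs), max(coeffs)
```

In BERN-NN these per-neuron polynomial enclosures are composed through the layers (rather than evaluated pointwise as here), which is what lets the tool track the network's non-convexity more faithfully than linear or convex relaxations.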