Paper Title
Risk Verification of Stochastic Systems with Neural Network Controllers
Paper Authors
Abstract
Motivated by the fragility of neural network (NN) controllers in safety-critical applications, we present a data-driven framework for verifying the risk of stochastic dynamical systems with NN controllers. Given a stochastic control system, an NN controller, and a specification equipped with a notion of trace robustness (e.g., constraint functions or signal temporal logic), we collect trajectories from the system that may or may not satisfy the specification. In particular, each of the trajectories produces a robustness value that indicates how well (severely) the specification is satisfied (violated). We then compute risk metrics over these robustness values to estimate the risk that the NN controller will not satisfy the specification. We are further interested in quantifying the difference in risk between two systems, and we show how the risk estimated from a nominal system can provide an upper bound on the risk of a perturbed version of the system. In particular, the tightness of this bound depends on the closeness of the systems in terms of the closeness of their system trajectories. For Lipschitz continuous and incrementally input-to-state stable systems, we show how to exactly quantify system closeness with varying degrees of conservatism, while we estimate system closeness for more general systems from data in our experiments. We demonstrate our risk verification approach on two case studies: an underwater vehicle and an F1/10 autonomous car.
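The core step of the framework, computing a risk metric over sampled robustness values, can be illustrated with a minimal sketch. This is not the paper's implementation: the choice of conditional value-at-risk (CVaR) as the risk metric, the level `beta = 0.9`, and the robustness data below are all illustrative assumptions.

```python
def empirical_cvar(losses, beta=0.9):
    """Empirical conditional value-at-risk (CVaR) at level beta:
    the mean of the worst (1 - beta) fraction of the losses.
    (Illustrative estimator; the paper may use a different one.)"""
    xs = sorted(losses)
    k = int(beta * len(xs))  # index of the empirical beta-quantile (VaR)
    tail = xs[k:]            # the worst (1 - beta) fraction of losses
    return sum(tail) / len(tail)

# Hypothetical robustness values from sampled closed-loop trajectories:
# positive = specification satisfied, negative = violated.
robustness = [0.8, 0.5, -0.1, 0.3, 0.9, -0.4, 0.6, 0.2, 0.7, 0.1]

# Risk is assessed over the loss, i.e., the negated robustness,
# so that violations (negative robustness) become large losses.
losses = [-r for r in robustness]
risk = empirical_cvar(losses, beta=0.9)
```

A positive risk estimate here indicates that the worst-case tail of trajectories violates the specification, even though most sampled trajectories satisfy it.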