Paper Title
Safety Verification of Model Based Reinforcement Learning Controllers
Paper Authors
Paper Abstract
Model-based reinforcement learning (RL) has emerged as a promising tool for developing controllers for real-world systems (e.g., robotics, autonomous driving, etc.). However, real systems often have constraints imposed on their state space that must be satisfied to ensure the safety of the system and its environment. Developing a verification tool for RL algorithms is challenging because the non-linear structure of neural networks impedes analytical verification of such models or controllers. To this end, we present a novel safety verification framework for model-based RL controllers using reachable set analysis. The proposed framework can efficiently handle models and controllers that are represented using neural networks. Additionally, if a controller fails to satisfy the safety constraints in general, the proposed framework can also be used to identify the subset of initial states from which the controller can be safely executed.
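To make the idea of reachable set analysis over neural-network models and controllers concrete, below is a minimal sketch using interval bound propagation. This is only one possible instantiation of such a framework, not the paper's actual algorithm; the network weights, the state/action dimensions, the horizon, and the `safe_box` constraint are all hypothetical placeholders.

```python
# Hedged sketch: interval-bound reachability for a learned dynamics model and a
# neural-network controller. All weights, bounds, and the safe box below are
# illustrative assumptions, not the setup described in the paper.
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Propagate an axis-aligned box through x -> W @ x + b."""
    center = (lo + hi) / 2.0
    radius = (hi - lo) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius
    return new_center - new_radius, new_center + new_radius

def relu_bounds(lo, hi):
    """ReLU is monotone, so box bounds map elementwise."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def network_bounds(lo, hi, layers):
    """layers: list of (W, b) pairs; ReLU between layers, linear output."""
    for i, (W, b) in enumerate(layers):
        lo, hi = affine_bounds(lo, hi, W, b)
        if i < len(layers) - 1:
            lo, hi = relu_bounds(lo, hi)
    return lo, hi

def reachable_sets(x_lo, x_hi, controller, dynamics, horizon):
    """Over-approximate the reachable state boxes over `horizon` steps."""
    boxes = [(x_lo, x_hi)]
    for _ in range(horizon):
        u_lo, u_hi = network_bounds(x_lo, x_hi, controller)   # action bounds
        z_lo = np.concatenate([x_lo, u_lo])                    # joint (x, u) box
        z_hi = np.concatenate([x_hi, u_hi])
        x_lo, x_hi = network_bounds(z_lo, z_hi, dynamics)      # next-state bounds
        boxes.append((x_lo, x_hi))
    return boxes

def verify(boxes, safe_lo, safe_hi):
    """Safety certificate: every reachable box must lie inside the safe box."""
    return all(np.all(lo >= safe_lo) and np.all(hi <= safe_hi) for lo, hi in boxes)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    state_dim, act_dim, hidden = 2, 1, 8
    # Hypothetical small networks standing in for the learned controller/model.
    controller = [(rng.normal(scale=0.3, size=(hidden, state_dim)), np.zeros(hidden)),
                  (rng.normal(scale=0.3, size=(act_dim, hidden)), np.zeros(act_dim))]
    dynamics = [(rng.normal(scale=0.3, size=(hidden, state_dim + act_dim)), np.zeros(hidden)),
                (rng.normal(scale=0.3, size=(state_dim, hidden)), np.zeros(state_dim))]
    boxes = reachable_sets(np.array([-0.1, -0.1]), np.array([0.1, 0.1]),
                           controller, dynamics, horizon=5)
    safe_box = (np.full(state_dim, -1.0), np.full(state_dim, 1.0))
    print("controller certified safe:", verify(boxes, *safe_box))
```

Because the box over-approximation ignores correlations between state and action intervals, a failed check is inconclusive; in that spirit, the same routine could be rerun on a partition of candidate initial boxes to identify a subset of initial states for which the certificate does hold.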