Paper Title
Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning
Paper Authors
Paper Abstract
Deep Neural Network-based systems are now the state of the art in many robotics tasks, but their application in safety-critical domains remains dangerous without formal guarantees on network robustness. Small perturbations to sensor inputs (from noise or adversarial examples) are often enough to change network-based decisions, which was recently shown to cause an autonomous vehicle to swerve into another lane. In light of these dangers, numerous algorithms have been developed as defenses against these adversarial inputs, some of which provide formal robustness guarantees or certificates. This work leverages research on certified adversarial robustness to develop an online certifiably robust defense for deep reinforcement learning algorithms. The proposed defense computes guaranteed lower bounds on state-action values during execution to identify and choose a robust action under a worst-case deviation in input space due to possible adversaries or noise. Moreover, the resulting policy comes with a certificate of solution quality, even though the true state and optimal action are unknown to the certifier due to the perturbations. The approach is demonstrated on a Deep Q-Network policy and is shown to increase robustness to noise and adversaries in pedestrian collision avoidance scenarios and a classic control task. This work extends one of our prior works with new performance guarantees, extensions to other RL algorithms, expanded results aggregated across more scenarios, an extension into scenarios with adversarial behavior, comparisons with a more computationally expensive method, and visualizations that provide intuition about the robustness algorithm.
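To make the action-selection rule concrete, the following Python sketch illustrates the idea of choosing the action whose certified lower bound on its Q-value is highest under an l-infinity bounded perturbation of the observed state. This is only an illustration, not the authors' implementation: all function and variable names are hypothetical, and the bounds here come from simple interval bound propagation for readability, whereas the paper builds on tighter certified-bound techniques.

```python
import numpy as np

def interval_bound_propagation(layers, state, eps):
    """Propagate the l-infinity ball [state - eps, state + eps] through an MLP.

    `layers` is a list of (W, b) weight/bias pairs; ReLU is applied between
    layers but not after the last one. Returns elementwise lower and upper
    bounds on the network outputs (the Q-values). Interval arithmetic is the
    loosest certified bound; it is used here only to keep the sketch short.
    """
    lower, upper = state - eps, state + eps
    for i, (W, b) in enumerate(layers):
        center = (lower + upper) / 2.0
        radius = (upper - lower) / 2.0
        # Affine layer: the center maps through W, the radius through |W|.
        new_center = W @ center + b
        new_radius = np.abs(W) @ radius
        lower, upper = new_center - new_radius, new_center + new_radius
        if i < len(layers) - 1:  # ReLU on hidden layers only
            lower = np.maximum(lower, 0.0)
            upper = np.maximum(upper, 0.0)
    return lower, upper

def robust_action(layers, observed_state, eps):
    """Choose the action whose certified worst-case Q-value is highest."""
    q_lower, _ = interval_bound_propagation(layers, observed_state, eps)
    return int(np.argmax(q_lower))

# Toy usage: a random 4-input, 2-action Q-network and a noisy observation.
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((16, 4)), rng.standard_normal(16)),
          (rng.standard_normal((2, 16)), rng.standard_normal(2))]
print(robust_action(layers, observed_state=rng.standard_normal(4), eps=0.1))
```

Given an observed (and possibly perturbed) state and a bound eps on the perturbation magnitude, robust_action returns the action whose worst-case value over the uncertainty set is largest, mirroring the decision rule described in the abstract; the returned lower bound on that value is what provides the certificate of solution quality.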