Paper Title

Model-based Safe Deep Reinforcement Learning via a Constrained Proximal Policy Optimization Algorithm

Paper Authors

Ashish Kumar Jayant, Shalabh Bhatnagar

Paper Abstract

During initial iterations of training in most Reinforcement Learning (RL) algorithms, agents perform a significant number of random exploratory steps. In the real world, this can limit the practicality of these algorithms as it can lead to potentially dangerous behavior. Hence, safe exploration is a critical issue in applying RL algorithms in the real world. This problem has recently been well studied under the Constrained Markov Decision Process (CMDP) framework, where, in addition to single-stage rewards, an agent receives single-stage costs or penalties depending on the state transitions. The prescribed cost functions are responsible for mapping undesirable behavior at any given time-step to a scalar value. The goal then is to find a feasible policy that maximizes reward returns while constraining the cost returns to be below a prescribed threshold during training as well as deployment. We propose an On-policy Model-based Safe Deep RL algorithm in which we learn the transition dynamics of the environment in an online manner and find a feasible optimal policy using Lagrangian relaxation-based Proximal Policy Optimization. We use an ensemble of neural networks with different initializations to tackle epistemic and aleatoric uncertainty issues faced during environment model learning. We compare our approach with relevant model-free and model-based approaches in Constrained RL using the challenging Safe Reinforcement Learning benchmark, the OpenAI Safety Gym. We demonstrate that our algorithm is more sample-efficient and results in lower cumulative hazard violations than constrained model-free approaches. Further, our approach shows better reward performance than other constrained model-based approaches in the literature.
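The abstract names two building blocks: a Lagrangian relaxation-based PPO update for the constrained (CMDP) objective, and an ensemble of differently initialized neural networks as the learned dynamics model. The sketch below is a minimal illustration of both ideas under stated assumptions (PyTorch; deterministic ensemble outputs; illustrative names such as ppo_lagrangian_loss, dual_ascent_step, and DynamicsEnsemble). It is not the authors' implementation, and details such as the loss rescaling are common practice rather than claims about the paper.

# Minimal sketch of the two components named in the abstract (assumed PyTorch API;
# function and class names are illustrative, not the paper's code).
import torch
import torch.nn as nn


def ppo_lagrangian_loss(ratio, adv_reward, adv_cost, lam, clip_eps=0.2):
    """Clipped PPO surrogate applied to the Lagrangian advantage A_r - lam * A_c."""
    adv = adv_reward - lam * adv_cost                        # penalize costly actions
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * adv
    # Rescaling by (1 + lam) keeps the loss magnitude bounded as lam grows
    # (a common stabilization trick in PPO-Lagrangian implementations).
    return -torch.min(unclipped, clipped).mean() / (1.0 + lam)


def dual_ascent_step(lam, mean_cost_return, cost_limit, lr=0.05):
    """Raise the multiplier when observed cost returns exceed the threshold, else let it decay."""
    return max(0.0, lam + lr * (mean_cost_return - cost_limit))


class DynamicsEnsemble(nn.Module):
    """Ensemble of independently initialized MLPs predicting next-state deltas.
    Disagreement across members is a proxy for epistemic uncertainty; probabilistic
    (mean/variance) heads could additionally capture aleatoric noise."""

    def __init__(self, obs_dim, act_dim, n_models=5, hidden=200):
        super().__init__()
        self.members = nn.ModuleList([
            nn.Sequential(
                nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, obs_dim),
            )
            for _ in range(n_models)
        ])

    def forward(self, obs, act):
        x = torch.cat([obs, act], dim=-1)
        preds = torch.stack([m(x) for m in self.members])    # (n_models, batch, obs_dim)
        return preds.mean(dim=0), preds.std(dim=0)            # predicted delta and disagreement

A typical outer loop under these assumptions would fit the ensemble on real transitions collected online, generate model rollouts for policy optimization, update the policy with the clipped Lagrangian loss, and adjust lam by dual ascent using the measured cost returns against the prescribed limit.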
