Paper Title


A Deep Reinforcement Learning Approach to Rare Event Estimation

Authors

Anthony Corso, Kyu-Young Kim, Shubh Gupta, Grace Gao, Mykel J. Kochenderfer

Abstract


An important step in the design of autonomous systems is to evaluate the probability that a failure will occur. In safety-critical domains, the failure probability is extremely small, making the evaluation of a policy through Monte Carlo sampling inefficient. Adaptive importance sampling approaches have been developed for rare event estimation but do not scale well to sequential systems with long horizons. In this work, we develop two adaptive importance sampling algorithms that can efficiently estimate the probability of rare events for sequential decision-making systems. The basis for these algorithms is the minimization of the Kullback-Leibler divergence between a state-dependent proposal distribution and a target distribution over trajectories, but the resulting algorithms resemble policy gradient and value-based reinforcement learning. We apply multiple importance sampling to reduce the variance of our estimate and to address the issue of multi-modality in the optimal proposal distribution. We demonstrate our approach on a control task with both continuous and discrete action spaces and show accuracy improvements over several baselines.
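To make the core idea concrete, here is a minimal sketch (not from the paper) of plain importance sampling for a rare tail event: estimating P(Z > 4) for a standard normal, where naive Monte Carlo would almost never see a hit. The proposal distribution, a normal shifted to the failure threshold, is an illustrative choice; the paper's algorithms extend this single-variable idea to state-dependent proposals over whole trajectories, learned by minimizing a KL divergence.

```python
import math
import random

def is_estimate(n: int, threshold: float = 4.0, seed: int = 0) -> float:
    """Importance-sampling estimate of P(Z > threshold) for Z ~ N(0, 1).

    Samples are drawn from a shifted proposal N(threshold, 1) so that the
    "failure" event x > threshold is no longer rare, then each hit is
    reweighted by the likelihood ratio phi(x) / phi(x - mu) = exp(mu^2/2 - mu*x).
    """
    rng = random.Random(seed)
    mu = threshold  # shift the proposal mean onto the rare-event boundary
    total = 0.0
    for _ in range(n):
        x = rng.gauss(mu, 1.0)          # draw from the proposal N(mu, 1)
        if x > threshold:               # indicator of the rare event
            total += math.exp(mu * mu / 2.0 - mu * x)  # importance weight
    return total / n

p_hat = is_estimate(100_000)
# Exact tail probability via the complementary error function for reference.
p_true = 0.5 * math.erfc(4.0 / math.sqrt(2.0))  # approx 3.17e-5
```

With 100,000 samples, the naive estimator would expect only about three hits, whereas here roughly half the proposal draws land in the event region, giving a far lower-variance estimate from the same budget.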
