Paper Title
A Natural Actor-Critic Algorithm with Downside Risk Constraints
Paper Authors
Paper Abstract
Existing work on risk-sensitive reinforcement learning - both for symmetric and downside risk measures - has typically used direct Monte-Carlo estimation of policy gradients. While this approach yields unbiased gradient estimates, it also suffers from high variance and decreased sample efficiency compared to temporal-difference methods. In this paper, we study prediction and control with aversion to downside risk which we gauge by the lower partial moment of the return. We introduce a new Bellman equation that upper bounds the lower partial moment, circumventing its non-linearity. We prove that this proxy for the lower partial moment is a contraction, and provide intuition into the stability of the algorithm by variance decomposition. This allows sample-efficient, on-line estimation of partial moments. For risk-sensitive control, we instantiate Reward Constrained Policy Optimization, a recent actor-critic method for finding constrained policies, with our proxy for the lower partial moment. We extend the method to use natural policy gradients and demonstrate the effectiveness of our approach on three benchmark problems for risk-sensitive reinforcement learning.
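For reference, a minimal sketch of the downside-risk measure the abstract refers to, under the common definition of the lower partial moment (the paper's exact target and order are not specified here): with return G, target \tau, and order n,

\[
\mathrm{LPM}_n(\tau) \;=\; \mathbb{E}\big[\max(\tau - G,\, 0)^{\,n}\big],
\]

so only outcomes falling below the target \tau contribute to the risk. In an RCPO-style Lagrangian relaxation, one would then maximize expected return subject to a constraint of the form \(\mathrm{LPM}_n(\tau) \le c\), e.g. via a saddle-point objective such as \(L(\pi, \lambda) = \mathbb{E}_\pi[G] - \lambda\,(\mathrm{LPM}_n(\tau) - c)\); this objective is an assumption based on the abstract's description, not the paper's exact construction.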