Paper Title

Sample Complexity of Policy Gradient Finding Second-Order Stationary Points

Authors

Long Yang, Qian Zheng, Gang Pan

Abstract

The goal of policy-based reinforcement learning (RL) is to find the maximum of its objective. However, due to the inherent non-concavity of this objective, convergence to a first-order stationary point (FOSP) does not guarantee that policy gradient methods find a maximum: an FOSP can be a minimum or even a saddle point, which is undesirable for RL. Fortunately, if all saddle points are \emph{strict}, the second-order stationary points (SOSP) are exactly the local maxima. We therefore adopt SOSP, rather than FOSP, as the convergence criterion for characterizing the sample complexity of policy gradient. Our result shows that policy gradient converges to an $(\epsilon,\sqrt{\epsilon\chi})$-SOSP with probability at least $1-\widetilde{\mathcal{O}}(\delta)$ after a total cost of $\mathcal{O}\left(\dfrac{\epsilon^{-\frac{9}{2}}}{(1-\gamma)\sqrt{\chi}}\log\dfrac{1}{\delta}\right)$, where $\gamma\in(0,1)$. This significantly improves on the state-of-the-art result, which requires $\mathcal{O}\left(\dfrac{\epsilon^{-9}\chi^{\frac{3}{2}}}{\delta}\log\dfrac{1}{\epsilon\chi}\right)$. Our analysis is based on the key idea of decomposing the parameter space $\mathbb{R}^p$ into three non-intersecting regions: non-stationary points, saddle points, and the locally optimal region, and then making a local improvement of the RL objective in each region. This technique can potentially be generalized to a broad class of policy gradient methods.
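For concreteness, the second-order criterion referred to above can be sketched as follows, using the standard formulation from the non-convex optimization literature; the paper's exact constants and normalization may differ. Writing $J(\theta)$ for the RL objective over parameters $\theta\in\mathbb{R}^p$ and taking $\chi$ to be its Hessian-Lipschitz constant (as is conventional for this notion), a point $\theta$ is an $(\epsilon,\sqrt{\epsilon\chi})$-SOSP of the maximization problem if
$$\|\nabla J(\theta)\|\le\epsilon \qquad\text{and}\qquad \lambda_{\max}\!\left(\nabla^{2}J(\theta)\right)\le\sqrt{\epsilon\chi},$$
i.e., the gradient is small and the Hessian admits no direction of significant positive curvature. Under the strict-saddle condition mentioned in the abstract, such a point is (approximately) a local maximum rather than a saddle point.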
