Paper Title
Optimal Estimation of Off-Policy Policy Gradient via Double Fitted Iteration
Authors
Abstract
Policy gradient (PG) estimation becomes a challenge when we are not allowed to sample with the target policy but only have access to a dataset generated by some unknown behavior policy. Conventional methods for off-policy PG estimation often suffer from either significant bias or exponentially large variance. In this paper, we propose the double Fitted PG estimation (FPG) algorithm. FPG can work with an arbitrary policy parameterization, assuming access to a Bellman-complete value function class. In the case of linear value function approximation, we provide a tight finite-sample upper bound on the policy gradient estimation error, which is governed by the amount of distribution mismatch measured in feature space. We also establish the asymptotic normality of the FPG estimation error with a precise covariance characterization, which is further shown to be statistically optimal with a matching Cramér-Rao lower bound. Empirically, we evaluate the performance of FPG on both policy gradient estimation and policy optimization, using either softmax tabular or ReLU policy networks. Under various metrics, our results show that FPG significantly outperforms existing off-policy PG estimation methods based on importance sampling and variance reduction techniques.
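To make the estimation setting concrete, below is a minimal sketch of off-policy policy gradient estimation in the spirit of the double fitted iteration described in the abstract: one fitted iteration estimates the Q-function of the target policy from offline data, a second fitted iteration estimates the gradient of the value function, and the two are combined at the initial state distribution. Everything here is an illustrative assumption, not the paper's experimental setup or implementation: the random tabular MDP, one-hot features, softmax tabular policy, uniform-random behavior policy, and all hyperparameters are stand-ins chosen for brevity.

```python
# Hedged sketch: off-policy PG estimation via two fitted (least-squares) iterations.
# All problem sizes, features, and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 5, 3, 0.9
d, d_theta = nS * nA, nS * nA                   # one-hot features; tabular softmax params

# Illustrative random MDP (not from the paper).
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a, s']
R = rng.uniform(size=(nS, nA))                  # expected rewards
mu0 = np.ones(nS) / nS                          # initial state distribution

# Softmax tabular target policy pi_theta(a|s).
theta = rng.normal(size=(nS, nA))
expv = np.exp(theta - theta.max(axis=1, keepdims=True))
pi = expv / expv.sum(axis=1, keepdims=True)

# Score function grad_theta log pi(a|s), flattened into R^{d_theta}.
score = np.zeros((nS, nA, d_theta))
for s in range(nS):
    for a in range(nA):
        g = np.zeros((nS, nA))
        g[s] = -pi[s]
        g[s, a] += 1.0
        score[s, a] = g.ravel()

# One-hot feature map phi(s, a).
Phi_sa = np.eye(d).reshape(nS, nA, d)

# Offline dataset collected by a uniform-random behavior policy (unknown to the estimator).
N = 5000
S = rng.integers(nS, size=N)
A = rng.integers(nA, size=N)
Rew = R[S, A] + 0.1 * rng.normal(size=N)
Sp = np.array([rng.choice(nS, p=P[s, a]) for s, a in zip(S, A)])
Phi = Phi_sa[S, A]                              # (N, d) regression features

# Per-state expectation of features under pi, reused by both fitted iterations.
A_pi = np.einsum('sa,sad->sd', pi, Phi_sa)      # (nS, d)

# Fitted iteration 1: Q^pi(s,a) ~= phi(s,a)^T w via regression onto Bellman targets.
w = np.zeros(d)
for _ in range(200):
    target = Rew + gamma * (A_pi[Sp] @ w)
    w, *_ = np.linalg.lstsq(Phi, target, rcond=None)
Q = Phi_sa @ w                                  # (nS, nA) fitted Q values

# Fitted iteration 2: G(s,a) ~= phi(s,a)^T W, where G approximates grad_theta Q^pi.
# Its Bellman-style target mixes the score-weighted fitted Q with the next-step gradient:
#   G(s,a) = gamma * E_{s'|s,a} sum_{a'} pi(a'|s') [ score(s',a') Q(s',a') + G(s',a') ]
B_pi = np.einsum('sa,sak,sa->sk', pi, score, Q) # (nS, d_theta)
W = np.zeros((d, d_theta))
for _ in range(200):
    target = gamma * (B_pi[Sp] + A_pi[Sp] @ W)  # (N, d_theta)
    W, *_ = np.linalg.lstsq(Phi, target, rcond=None)

# Combine at the initial distribution:
#   grad J ~= E_{s~mu0, a~pi}[ score(s,a) Q(s,a) + G(s,a) ]
G = Phi_sa @ W                                  # (nS, nA, d_theta)
pg = np.einsum('s,sa,sak->k', mu0, pi, score * Q[..., None] + G)
print("estimated off-policy policy gradient (first 5 coords):", np.round(pg[:5], 4))
```

The first regression is standard fitted Q evaluation with linear (here one-hot) features; the second reuses the fitted Q inside a Bellman-style target for the gradient function, which is what makes the iteration "double". This is only a conceptual sketch under the stated tabular assumptions, not a reproduction of FPG's finite-sample or asymptotic guarantees.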