Paper Title
Is Temporal Difference Learning Optimal? An Instance-Dependent Analysis
Paper Authors
Paper Abstract
We address the problem of policy evaluation in discounted Markov decision processes, and provide instance-dependent guarantees on the $\ell_\infty$-error under a generative model. We establish both asymptotic and non-asymptotic versions of local minimax lower bounds for policy evaluation, thereby providing an instance-dependent baseline by which to compare algorithms. Theory-inspired simulations show that the widely-used temporal difference (TD) algorithm is strictly suboptimal when evaluated in a non-asymptotic setting, even when combined with Polyak-Ruppert iterate averaging. We remedy this issue by introducing and analyzing variance-reduced forms of stochastic approximation, showing that they achieve non-asymptotic, instance-dependent optimality up to logarithmic factors.
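Below is a minimal sketch, not the paper's implementation, illustrating the two algorithmic ingredients the abstract refers to: tabular TD(0) with Polyak-Ruppert iterate averaging, and an epoch-based variance-reduced variant of stochastic approximation that recenters updates around a Monte Carlo estimate of the Bellman operator. All names (`P`, `r`, `gamma`, the step-size schedule, and the epoch/sample parameters) are illustrative assumptions; the generative model is simulated by drawing one successor state per state at each step.

```python
import numpy as np

def td_polyak_ruppert(P, r, gamma, num_iters=10_000, rng=None):
    """TD(0) policy evaluation with Polyak-Ruppert iterate averaging (sketch).

    P : (S, S) transition matrix of the Markov chain induced by the policy.
    r : (S,) expected reward vector under the policy.
    gamma : discount factor in [0, 1).
    """
    rng = rng or np.random.default_rng(0)
    S = len(r)
    theta = np.zeros(S)       # current TD iterate
    theta_bar = np.zeros(S)   # Polyak-Ruppert running average

    for k in range(1, num_iters + 1):
        alpha = 1.0 / (1.0 + (1.0 - gamma) * k)  # illustrative step-size choice
        # Generative model: one sampled successor state for every state.
        ns = np.array([rng.choice(S, p=P[s]) for s in range(S)])
        td_target = r + gamma * theta[ns]
        theta = theta + alpha * (td_target - theta)
        theta_bar += (theta - theta_bar) / k      # running average of iterates
    return theta_bar

def vr_stochastic_approximation(P, r, gamma, num_epochs=10, inner_iters=1_000,
                                recenter_samples=1_000, rng=None):
    """Variance-reduced stochastic approximation (SVRG-style recentering, sketch).

    Each epoch forms a Monte Carlo estimate of the Bellman operator
    T(v) = r + gamma * P v at a reference point, then runs recentered
    TD-style inner updates around that estimate.
    """
    rng = rng or np.random.default_rng(1)
    S = len(r)
    theta_ref = np.zeros(S)
    for _ in range(num_epochs):
        # Lower-variance estimate of T(theta_ref) from many generative-model samples.
        T_ref = np.zeros(S)
        for _ in range(recenter_samples):
            ns = np.array([rng.choice(S, p=P[s]) for s in range(S)])
            T_ref += r + gamma * theta_ref[ns]
        T_ref /= recenter_samples

        theta = theta_ref.copy()
        for k in range(1, inner_iters + 1):
            alpha = 1.0 / (1.0 + (1.0 - gamma) * k)
            ns = np.array([rng.choice(S, p=P[s]) for s in range(S)])
            # Recentered update: stochastic operator at theta minus the same
            # stochastic operator at theta_ref, plus the averaged estimate T_ref.
            recentered = (r + gamma * theta[ns]) - (r + gamma * theta_ref[ns]) + T_ref
            theta = theta + alpha * (recentered - theta)
        theta_ref = theta
    return theta_ref
```

Both routines return an estimate of the value function of the evaluated policy; the variance-reduced version trades extra generative-model samples per epoch for lower-variance updates, which is the mechanism the abstract credits for closing the gap to the instance-dependent lower bound.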