Paper title
Dynamic mean field programming
Paper authors
Paper abstract
A dynamic mean field theory is developed for finite state and action Bayesian reinforcement learning in the large state space limit. In an analogy with statistical physics, the Bellman equation is studied as a disordered dynamical system; the Markov decision process transition probabilities are interpreted as couplings and the value functions as deterministic spins that evolve dynamically. Thus, the mean rewards and transition probabilities are considered to be quenched random variables. The theory reveals that, under certain assumptions, the state-action values are statistically independent across state-action pairs in the asymptotic state space limit, and provides the form of the distribution exactly. The results hold in the finite and discounted infinite horizon settings, for both value iteration and policy evaluation. The state-action value statistics can be computed from a set of mean field equations, which we call dynamic mean field programming (DMFP). For policy evaluation the equations are exact. For value iteration, approximate equations are obtained by appealing to extreme value theory or bounds. The results provide analytic insight into the statistical structure of tabular reinforcement learning, for example revealing the conditions under which reinforcement learning is equivalent to a set of independent multi-armed bandit problems.
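As a point of reference for the dynamical-system reading described in the abstract (transition probabilities as couplings, value functions as spins evolving in time), the following is a minimal sketch of the standard tabular Bellman updates; the symbols Q_t, r, p, pi, and gamma are conventional notation assumed here, not notation taken from the paper itself.

% Value iteration (Bellman optimality) update: the quenched couplings p(s'|s,a)
% drive the deterministic dynamics of the state-action values Q_t(s,a).
\[
  Q_{t+1}(s,a) \;=\; r(s,a) \;+\; \gamma \sum_{s'} p(s' \mid s,a)\, \max_{a'} Q_t(s',a').
\]
% Policy evaluation replaces the max over a' with an expectation under a fixed policy pi;
% this is the case for which the abstract states the DMFP equations are exact.
\[
  Q^{\pi}_{t+1}(s,a) \;=\; r(s,a) \;+\; \gamma \sum_{s'} p(s' \mid s,a) \sum_{a'} \pi(a' \mid s')\, Q^{\pi}_t(s',a').
\]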