Paper Title

Randomized Value Functions via Posterior State-Abstraction Sampling

Paper Authors

Dilip Arumugam, Benjamin Van Roy

Paper Abstract

State abstraction has been an essential tool for dramatically improving the sample efficiency of reinforcement-learning algorithms. Indeed, by exposing and accentuating various types of latent structure within the environment, different classes of state abstraction have enabled improved theoretical guarantees and empirical performance. When dealing with state abstractions that capture structure in the value function, however, a standard assumption is that the true abstraction has been supplied or unrealistically computed a priori, leaving open the question of how to efficiently uncover such latent structure while jointly seeking out optimal behavior. Taking inspiration from the bandit literature, we propose that an agent seeking out latent task structure must explicitly represent and maintain its uncertainty over that structure as part of its overall uncertainty about the environment. We introduce a practical algorithm for doing this using two posterior distributions over state abstractions and abstract-state values. In empirically validating our approach, we find that substantial performance gains lie in the multi-task setting where tasks share a common, low-dimensional representation.
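The abstract describes the algorithm only at a high level, so the following is a minimal, hypothetical sketch of the two-posterior sampling pattern it mentions, not the authors' actual method: a tabular agent that, at each episode, samples a state abstraction from a posterior over abstractions and abstract-state action values from a posterior over values, acts greedily under both samples, and then updates the posteriors from observed data. The Dirichlet and Gaussian posteriors, all names, and the toy environment are assumptions made purely for illustration.

```python
# Hypothetical sketch of posterior sampling over state abstractions and
# abstract-state values, in the spirit of the abstract above. The
# Dirichlet/Gaussian posteriors and the toy environment are stand-ins,
# not the paper's actual algorithm.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_abstract, n_actions = 20, 4, 3

# Posterior over abstractions: per ground state, Dirichlet counts over
# which abstract state it maps to.
abstraction_counts = np.ones((n_states, n_abstract))

# Posterior over abstract-state action values: independent Gaussians
# with known unit observation noise (mean and precision per entry).
q_mean = np.zeros((n_abstract, n_actions))
q_prec = np.ones((n_abstract, n_actions))

def sample_abstraction():
    """Draw one abstraction phi: ground state -> abstract state."""
    phi = np.empty(n_states, dtype=int)
    for s in range(n_states):
        p = rng.dirichlet(abstraction_counts[s])
        phi[s] = rng.choice(n_abstract, p=p)
    return phi

def sample_q():
    """Draw abstract-state action values from the Gaussian posterior."""
    return q_mean + rng.standard_normal(q_mean.shape) / np.sqrt(q_prec)

def step(s, a):
    """Toy environment stub: noisy reward, uniform random transition."""
    r = float(a == s % n_actions) + 0.1 * rng.standard_normal()
    return int(rng.integers(n_states)), r

for episode in range(50):
    phi = sample_abstraction()  # one draw from the abstraction posterior
    q = sample_q()              # one draw from the abstract-value posterior
    s = int(rng.integers(n_states))
    for t in range(10):
        a = int(np.argmax(q[phi[s]]))  # greedy w.r.t. both samples
        s_next, r = step(s, a)
        z = phi[s]
        # Conjugate Gaussian update of the sampled abstract state's value.
        q_prec[z, a] += 1.0
        q_mean[z, a] += (r - q_mean[z, a]) / q_prec[z, a]
        # Placeholder credit assignment for the abstraction posterior; a
        # principled update would score abstractions by data likelihood.
        abstraction_counts[s, z] += 1.0
        s = s_next
```

Resampling both posteriors at episode boundaries mirrors posterior-sampling reinforcement learning (Thompson sampling), which is presumably the mechanism behind the "randomized value functions" of the title.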
