Paper Title

Learning to Represent Action Values as a Hypergraph on the Action Vertices

Paper Authors

Arash Tavakoli, Mehdi Fatemi, Petar Kormushev

Paper Abstract

Action-value estimation is a critical component of many reinforcement learning (RL) methods whereby sample complexity relies heavily on how fast a good estimator for action value can be learned. By viewing this problem through the lens of representation learning, good representations of both state and action can facilitate action-value estimation. While advances in deep learning have seamlessly driven progress in learning state representations, given the specificity of the notion of agency to RL, little attention has been paid to learning action representations. We conjecture that leveraging the combinatorial structure of multi-dimensional action spaces is a key ingredient for learning good representations of action. To test this, we set forth the action hypergraph networks framework -- a class of functions for learning action representations in multi-dimensional discrete action spaces with a structural inductive bias. Using this framework, we realise an agent class by combining it with deep Q-networks, which we dub hypergraph Q-networks. We show the effectiveness of our approach on a myriad of domains: illustrative prediction problems under minimal confounding effects, Atari 2600 games, and discretised physical control benchmarks.
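To make the framework's core idea concrete, below is a minimal, hypothetical PyTorch sketch of a hypergraph-style value decomposition over the action dimensions. It is not the authors' reference implementation: the class name HypergraphQNetwork, the torso architecture, the hidden size, and the max_rank parameter are all illustrative assumptions. Only the overall scheme follows the abstract: the action dimensions are treated as hypergraph vertices, each hyperedge (a subset of dimensions) contributes a score for every joint sub-action of those dimensions, and Q-values are sums of the matching hyperedge scores.

```python
import itertools
import math
import torch
import torch.nn as nn

class HypergraphQNetwork(nn.Module):
    """Illustrative sketch of a hypergraph Q-network head (assumption,
    not the paper's reference implementation).

    Action dimensions are the hypergraph vertices. Each hyperedge (a
    subset of dimensions) gets its own linear head scoring every joint
    sub-action of those dimensions; Q(s, a) is the sum of the matching
    hyperedge scores.
    """

    def __init__(self, state_dim, branch_sizes, max_rank=2, hidden=128):
        super().__init__()
        self.branch_sizes = tuple(branch_sizes)
        dims = range(len(self.branch_sizes))
        # All non-empty subsets of action dimensions up to rank `max_rank`.
        self.hyperedges = [e for r in range(1, max_rank + 1)
                           for e in itertools.combinations(dims, r)]
        self.torso = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        # One linear head per hyperedge, one output per joint sub-action.
        self.heads = nn.ModuleList(
            nn.Linear(hidden, math.prod(self.branch_sizes[i] for i in e))
            for e in self.hyperedges)

    def forward(self, state):
        """Return the joint action-value tensor, shape (batch, n_1, ..., n_d)."""
        z = self.torso(state)
        q = torch.zeros(state.shape[0], *self.branch_sizes, device=state.device)
        for e, head in zip(self.hyperedges, self.heads):
            # Reshape each head's output so it broadcasts over the
            # action dimensions not covered by this hyperedge.
            shape = [state.shape[0]] + [self.branch_sizes[i] if i in e else 1
                                        for i in range(len(self.branch_sizes))]
            q = q + head(z).view(shape)
        return q

# Example: a 3-dimensional discrete action space with branches (3, 3, 2).
net = HypergraphQNetwork(state_dim=8, branch_sizes=(3, 3, 2))
q = net(torch.randn(4, 8))              # shape (4, 3, 3, 2)
greedy = q.flatten(1).argmax(dim=1)     # flat index of the greedy joint action
```

In this sketch, max_rank controls the structural inductive bias: rank-1 hyperedges alone give a fully factorised (branching) head, while including the single full-rank hyperedge recovers an unfactorised joint-action head, with intermediate ranks trading expressiveness against parameter count.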
