Paper Title
ACE: Cooperative Multi-agent Q-learning with Bidirectional Action-Dependency
Paper Authors
Paper Abstract
Multi-agent reinforcement learning (MARL) suffers from the non-stationarity problem: the learning targets keep changing at every iteration because multiple agents update their policies simultaneously. Starting from first principles, in this paper we address the non-stationarity problem by proposing bidirectional action-dependent Q-learning (ACE). Central to the development of ACE is a sequential decision-making process in which only one agent is allowed to act at a time. Within this process, at the inference stage each agent maximizes its value function given the actions taken by the preceding agents. In the learning phase, each agent minimizes a TD error that depends on how the subsequent agents have reacted to its chosen action. Given this design of bidirectional dependency, ACE effectively turns a multi-agent MDP into a single-agent MDP. We implement the ACE framework by identifying a proper network representation to formulate the action dependency, so that the sequential decision process is computed implicitly in one forward pass. To validate ACE, we compare it with strong baselines on two MARL benchmarks. Empirical experiments demonstrate that ACE outperforms the state-of-the-art algorithms on Google Research Football and the StarCraft Multi-Agent Challenge by a large margin. In particular, on SMAC tasks, ACE achieves a 100% success rate on almost all the hard and super-hard maps. We further study extensive research problems regarding ACE, including extension, generalization, and practicability. Code is made available to facilitate further research.
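The sequential decision-making process at inference time can be illustrated with a minimal sketch: agent i selects its action greedily from a Q-function conditioned on the state and the actions already chosen by agents 1..i-1. The random linear Q-functions, the agent/action/state sizes, and all names below are hypothetical stand-ins for the learned networks described in the abstract, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 3    # hypothetical number of agents
N_ACTIONS = 4   # hypothetical per-agent action-space size
STATE_DIM = 5   # hypothetical state dimension

def make_q(i):
    """Build a toy Q_i(s, a_1..a_{i-1}) -> scores over agent i's actions.

    A random linear map over [state ; one-hot encodings of preceding
    actions] stands in for a learned action-dependent Q-network.
    """
    in_dim = STATE_DIM + i * N_ACTIONS
    W = rng.normal(size=(in_dim, N_ACTIONS))

    def q(state, preceding_actions):
        feats = [state]
        for a in preceding_actions:
            onehot = np.zeros(N_ACTIONS)
            onehot[a] = 1.0
            feats.append(onehot)
        return np.concatenate(feats) @ W

    return q

q_fns = [make_q(i) for i in range(N_AGENTS)]

def sequential_act(state):
    """Greedy sequential decision-making: agent i conditions on a_1..a_{i-1}."""
    actions = []
    for i in range(N_AGENTS):
        scores = q_fns[i](state, actions)
        actions.append(int(np.argmax(scores)))
    return actions

state = rng.normal(size=STATE_DIM)
joint_action = sequential_act(state)
print(joint_action)
```

In training, the other half of the bidirectional dependency would enter through the TD target: agent i's regression target depends on the actions the subsequent agents i+1..N pick in response, which is what collapses the joint decision into a single-agent MDP. ACE's actual network computes this whole chain implicitly in one forward pass rather than with an explicit Python loop.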