Paper Title
AlphaZero-Inspired Game Learning: Faster Training by Using MCTS Only at Test Time
Paper Authors
Paper Abstract
Recently, the seminal algorithms AlphaGo and AlphaZero have started a new era in game learning and deep reinforcement learning. While the achievements of AlphaGo and AlphaZero (playing Go and other complex games at superhuman level) are truly impressive, these architectures have the drawback that they require high computational resources. Many researchers are looking for methods that are similar to AlphaZero but have lower computational demands and are thus more easily reproducible. In this paper, we pick an important element of AlphaZero, the Monte Carlo Tree Search (MCTS) planning stage, and combine it with temporal difference (TD) learning agents. We wrap MCTS for the first time around TD n-tuple networks, and we use this wrapping only at test time to create versatile agents while keeping the computational demands low. We apply this new architecture to several complex games (Othello, ConnectFour, Rubik's Cube) and show the advantages achieved with this AlphaZero-inspired MCTS wrapper. In particular, we present results showing that this agent is the first one trained on standard hardware (no GPU or TPU) to beat the very strong Othello program Edax up to and including level 7 (most other learning-from-scratch algorithms could only defeat Edax up to level 2).
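To make the test-time wrapping concrete, here is a minimal sketch under stated assumptions: it is not the authors' implementation, and `NimState`, `value_fn`, and the parameter choices are hypothetical stand-ins for the paper's games and its TD n-tuple network. The sketch only illustrates the wrapping idea: an AlphaZero-style MCTS plans at move-selection time and evaluates leaves with an already trained value function instead of random rollouts, while training itself uses no search.

```python
# Minimal sketch of a test-time MCTS wrapper around a pre-trained value function.
# Not the authors' code: NimState and value_fn are hypothetical stand-ins for the
# paper's games and TD n-tuple network; only the wrapping idea is illustrated.
import math
import random


class NimState:
    """Tiny two-player Nim game (take 1-3 stones, last stone wins) to keep the sketch runnable."""

    def __init__(self, stones=10):
        self.stones = stones

    def legal_moves(self):
        return [m for m in (1, 2, 3) if m <= self.stones]

    def apply(self, move):
        return NimState(self.stones - move)

    def is_terminal(self):
        return self.stones == 0

    def terminal_value(self):
        # The player to move faces an empty pile, so the previous player won.
        return -1.0


def value_fn(state):
    """Stand-in for the trained TD n-tuple network: value in [-1, 1] for the player to move."""
    return -1.0 if state.stones % 4 == 0 else 1.0  # hand-crafted heuristic, not a learned net


class Node:
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children, self.visits, self.value_sum = [], 0, 0.0


def mcts_move(root_state, iterations=200, c_uct=1.4):
    """Test-time planning: run UCT search, evaluate leaves with value_fn, return a move."""
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes via the UCT rule.
        while node.children and len(node.children) == len(node.state.legal_moves()):
            node = max(
                node.children,
                key=lambda n: n.value_sum / n.visits
                + c_uct * math.sqrt(math.log(node.visits) / n.visits),
            )
        # 2. Expansion: add one untried child unless the node is terminal.
        if not node.state.is_terminal():
            tried = {c.move for c in node.children}
            move = random.choice([m for m in node.state.legal_moves() if m not in tried])
            child = Node(node.state.apply(move), parent=node, move=move)
            node.children.append(child)
            node = child
        # 3. Evaluation: the trained value function replaces a random rollout.
        value = node.state.terminal_value() if node.state.is_terminal() else value_fn(node.state)
        # 4. Backpropagation (negamax): each node stores values from the viewpoint
        #    of the player who made the move leading to it.
        value = -value
        while node is not None:
            node.visits += 1
            node.value_sum += value
            value = -value
            node = node.parent
    # As in AlphaZero, play the most visited move at the root.
    return max(root.children, key=lambda n: n.visits).move


if __name__ == "__main__":
    print(mcts_move(NimState(stones=10)))  # expected to prefer taking 2 stones
```

Because the search only wraps a fixed, previously trained value function, the costly MCTS iterations occur solely at move-selection time and add nothing to the training budget, which is the computational saving the abstract refers to.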