Paper Title


Towards Learning Multi-agent Negotiations via Self-Play

Author

Tang, Yichuan Charlie

Abstract

Making sophisticated, robust, and safe sequential decisions is at the heart of intelligent systems. This is especially critical for planning in complex multi-agent environments, where agents need to anticipate other agents' intentions and possible future actions. Traditional methods formulate the problem as a Markov Decision Process, but the solutions often rely on various assumptions and become brittle when presented with corner cases. In contrast, deep reinforcement learning (Deep RL) has been very effective at finding policies by simultaneously exploring, interacting, and learning from environments. Leveraging the powerful Deep RL paradigm, we demonstrate that an iterative procedure of self-play can create progressively more diverse environments, leading to the learning of sophisticated and robust multi-agent policies. We demonstrate this in a challenging multi-agent simulation of merging traffic, where agents must interact and negotiate with others in order to successfully merge on or off the road. While the environment starts off simple, we increase its complexity by iteratively adding an increasingly diverse set of agents to the agent "zoo" as training progresses. Qualitatively, we find that through self-play, our policies automatically learn interesting behaviors such as defensive driving, overtaking, yielding, and the use of signal lights to communicate intentions to other agents. In addition, quantitatively, we show a dramatic improvement of the success rate of merging maneuvers from 63% to over 98%.
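The iterative self-play procedure the abstract describes — train a policy against the current agent "zoo", then add it back to the zoo so later training rounds face a more diverse set of opponents — can be sketched as a simple loop. This is a toy illustration, not the paper's implementation: here a "policy" is just a scalar skill value standing in for network weights, and `train_against` is a hypothetical training step.

```python
import random

def self_play_training(num_rounds=4, seed=0):
    """Toy sketch of the iterative self-play loop from the abstract.

    A 'policy' is represented as a scalar skill level (a stand-in for
    learned network weights). The key structural idea is the growing
    agent zoo: each round's new policy trains against opponents sampled
    from the zoo, then joins it, so the environment becomes progressively
    more diverse.
    """
    rng = random.Random(seed)

    def train_against(opponents):
        # Hypothetical training step: the new policy learns to slightly
        # exceed the strongest opponent it trained against.
        return max(opponents) + rng.uniform(0.1, 0.5)

    zoo = [0.0]  # start simple, e.g. a single rule-based agent
    for _ in range(num_rounds):
        # Populate the environment with a sample of zoo agents.
        opponents = rng.sample(zoo, k=min(3, len(zoo)))
        new_policy = train_against(opponents)
        zoo.append(new_policy)  # grow the zoo -> more diverse environment
    return zoo
```

In the paper's setting, each round would involve full Deep RL training in the merging-traffic simulator rather than a scalar update, but the zoo-growth structure is the same.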
