Paper Title

MARLlib: A Scalable and Efficient Multi-agent Reinforcement Learning Library

Paper Authors

Hu, Siyi; Zhong, Yifan; Gao, Minquan; Wang, Weixun; Dong, Hao; Liang, Xiaodan; Li, Zhihui; Chang, Xiaojun; Yang, Yaodong

Paper Abstract

A significant challenge facing researchers in the area of multi-agent reinforcement learning (MARL) pertains to the identification of a library that can offer fast and compatible development for multi-agent tasks and algorithm combinations, while obviating the need to consider compatibility issues. In this paper, we present MARLlib, a library designed to address the aforementioned challenge by leveraging three key mechanisms: 1) a standardized multi-agent environment wrapper, 2) an agent-level algorithm implementation, and 3) a flexible policy mapping strategy. By utilizing these mechanisms, MARLlib can effectively disentangle the intertwined nature of the multi-agent task and the learning process of the algorithm, with the ability to automatically alter the training strategy based on the current task's attributes. The MARLlib library's source code is publicly accessible on GitHub: \url{https://github.com/Replicable-MARL/MARLlib}.
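
To make the three mechanisms concrete, the sketch below follows the quick-start pattern of the linked repository: the task is created through a standardized environment wrapper, the algorithm is instantiated at the agent level independently of the task, and the policy mapping is chosen via a sharing option at training time. The specific entry points and arguments (`marl.make_env`, `marl.algos.mappo`, `marl.build_model`, `fit`, `share_policy`) are assumptions based on the repository's documented usage and should be verified against the README.

```python
# Minimal MARLlib usage sketch (assumed API, based on the repository's quick-start;
# verify names and arguments against the linked README).
from marllib import marl

# 1) Standardized multi-agent environment wrapper: the task is created through a
#    uniform interface regardless of the underlying benchmark (MPE assumed here).
env = marl.make_env(environment_name="mpe", map_name="simple_spread")

# 2) Agent-level algorithm implementation: the algorithm (MAPPO assumed here) is
#    instantiated independently of the task, with benchmark-specific hyperparameters.
mappo = marl.algos.mappo(hyperparam_source="mpe")

# Build the agent model from the environment and algorithm specifications.
model = marl.build_model(env, mappo, {"core_arch": "mlp", "encode_layer": "128-128"})

# 3) Flexible policy mapping: parameter sharing across agents is selected with a
#    single option, decoupled from the task definition itself.
mappo.fit(
    env,
    model,
    stop={"timesteps_total": 1_000_000},
    share_policy="all",
    num_workers=2,
)
```

With this separation, switching the benchmark, the algorithm, or the parameter-sharing scheme only changes one of the three calls above, which is the compatibility the abstract claims the library provides.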
