Paper Title

Proficiency Constrained Multi-Agent Reinforcement Learning for Environment-Adaptive Multi UAV-UGV Teaming

Authors

Qifei Yu, Zhexin Shen, Yijiang Pang, Rui Liu

Abstract

A mixed aerial and ground robot team, which includes both unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs), is widely used for disaster rescue, social security, precision agriculture, and military missions. However, team capability and the corresponding configuration vary because robots differ in motion speed, perception range, reachable area, and resilience to dynamic environments. Due to the heterogeneous robots inside a team and their differing resilience, it is challenging to perform a task with an optimal balance between reasonable task allocation and maximum utilization of robot capability. To address this challenge for effective mixed ground and aerial teaming, this paper develops a novel teaming method, proficiency-aware multi-agent deep reinforcement learning (Mix-RL), to guide ground and aerial cooperation by considering the best alignment between robot capabilities, task requirements, and environment conditions. Mix-RL largely exploits robot capabilities while remaining aware of how those capabilities adapt to task requirements and environment conditions. Mix-RL's effectiveness in guiding mixed teaming was validated on the task "social security for criminal vehicle tracking".
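The abstract describes Mix-RL only at a high level: multi-agent deep RL whose reward reflects the alignment between each robot's capabilities, the task requirements, and the environment conditions. The sketch below is a minimal, hypothetical illustration of that idea as reward shaping; the capability fields, weights, and scoring rule are assumptions for illustration, not the paper's actual formulation.

```python
# Hypothetical sketch of proficiency-aware reward shaping for a UAV-UGV team.
# Not the paper's Mix-RL implementation; fields and weights are illustrative.
from dataclasses import dataclass


@dataclass
class Capability:
    speed: float               # max motion speed (m/s)
    sense_range: float         # perception radius (m)
    terrain_resilience: float  # 0..1, robustness to rough or occluded terrain


@dataclass
class TaskContext:
    required_speed: float      # speed the task segment demands (m/s)
    required_range: float      # sensing range the task segment demands (m)
    terrain_difficulty: float  # 0..1, from the current environment condition


def proficiency_score(cap: Capability, ctx: TaskContext) -> float:
    """Score in [0, 1] for how well one robot matches the task and environment.
    Each term saturates at 1 so surplus capability is not over-rewarded."""
    speed_fit = min(cap.speed / ctx.required_speed, 1.0)
    range_fit = min(cap.sense_range / ctx.required_range, 1.0)
    env_fit = 1.0 - max(ctx.terrain_difficulty - cap.terrain_resilience, 0.0)
    return (speed_fit + range_fit + env_fit) / 3.0


def shaped_reward(task_reward: float, cap: Capability, ctx: TaskContext,
                  weight: float = 0.5) -> float:
    """Combine the raw task reward (e.g. tracking progress) with the alignment
    term, so the learned policy both completes the task and assigns it to the
    robot whose capability best fits the conditions."""
    return task_reward + weight * proficiency_score(cap, ctx)


# Example: a fast, wide-sensing UAV on an open-road tracking segment.
uav = Capability(speed=15.0, sense_range=80.0, terrain_resilience=0.9)
segment = TaskContext(required_speed=12.0, required_range=60.0, terrain_difficulty=0.2)
print(shaped_reward(task_reward=1.0, cap=uav, ctx=segment))
```

In an actual multi-agent training loop, a term like this would typically be added to each agent's step reward so that task allocation emerges from the learned policies rather than from a fixed assignment rule.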
