Paper Title
Co-Training an Observer and an Evading Target
Paper Authors
Paper Abstract
Reinforcement learning (RL) is already widely used in domains such as robotics, but only sparsely in sensor management. In this paper, we apply the popular Proximal Policy Optimization (PPO) approach to a multi-agent UAV tracking scenario. While data recorded from real scenarios can accurately reflect the real world, the required amount of data is not always available. Simulation data, in contrast, is typically cheap to generate, but the target behavior it uses is often naive and only vaguely represents the real world. We therefore utilize multi-agent RL to jointly generate protagonist and antagonist policies, overcoming the data generation problem because the policies are generated on the fly and adapt continuously. This way, we are able to clearly outperform baseline methods and robustly generate competitive policies. In addition, we investigate explainable artificial intelligence (XAI) by interpreting feature saliency and by generating an easy-to-read decision tree as a simplified policy.
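
To illustrate the co-training idea described in the abstract, the sketch below shows an alternating training loop in which an observer and an evading target collect experience against each other and both update after every episode. The `TrackingEnv` and `Agent` classes are hypothetical stand-ins (a toy 1-D pursuit with a zero-sum reward and a stub learner), not the authors' PPO implementation; a real version would replace `Agent.update` with clipped PPO policy-gradient steps.

```python
# Minimal sketch of co-training two adversarial policies.
# TrackingEnv and Agent are hypothetical stand-ins, not the paper's code.
import random

class TrackingEnv:
    """Toy 1-D pursuit stand-in for the UAV tracking scenario."""
    def __init__(self):
        self.observer, self.target = 0.0, 5.0

    def reset(self):
        self.observer, self.target = 0.0, 5.0
        return self.observer - self.target

    def step(self, obs_action, tgt_action):
        self.observer += obs_action
        self.target += tgt_action
        gap = abs(self.observer - self.target)
        # Zero-sum reward: the observer wants a small gap, the target a large one.
        return self.observer - self.target, -gap, gap

class Agent:
    """Placeholder learner; a real PPO agent would update its policy network."""
    def act(self, state):
        return random.uniform(-1.0, 1.0)

    def update(self, trajectory):
        pass  # Clipped PPO policy-gradient steps would go here.

env = TrackingEnv()
observer, target = Agent(), Agent()

for episode in range(10):
    state = env.reset()
    traj_obs, traj_tgt = [], []
    for _ in range(50):
        a_obs, a_tgt = observer.act(state), target.act(state)
        state, r_obs, r_tgt = env.step(a_obs, a_tgt)
        traj_obs.append((state, a_obs, r_obs))
        traj_tgt.append((state, a_tgt, r_tgt))
    # Both policies adapt continuously against each other ("on the fly").
    observer.update(traj_obs)
    target.update(traj_tgt)
```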
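The XAI step of distilling a learned policy into an easy-to-read decision tree can be sketched as behavior cloning: sample states, query the trained policy for its actions, and fit a shallow tree to the resulting pairs. The `policy` function and the feature names below are hypothetical placeholders for the trained PPO observer, under the assumption of a small discrete action set.

```python
# Hedged sketch: distill a (stand-in) policy into a readable decision tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

def policy(state):
    # Hypothetical stand-in for the trained PPO observer: turn toward the target.
    return int(state[0] > 0)  # 0 = turn left, 1 = turn right

# Sample states (e.g., relative bearing and distance to the target).
states = rng.uniform(-1.0, 1.0, size=(1000, 2))
actions = np.array([policy(s) for s in states])

# Keep the tree shallow so the extracted rules stay human-readable.
tree = DecisionTreeClassifier(max_depth=3).fit(states, actions)
print(export_text(tree, feature_names=["bearing", "distance"]))
```

A shallow depth limit trades fidelity to the original policy for readability, which is the point of using the tree as a simplified, inspectable surrogate.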