Paper Title
Deep Reinforcement Learning Based Multi-Access Edge Computing Schedule for Internet of Vehicles
Authors
Abstract
As intelligent transportation systems are being broadly deployed, unmanned aerial vehicles (UAVs) can assist terrestrial base stations by acting as multi-access edge computing (MEC) nodes to provide better wireless network communication for the Internet of Vehicles (IoV). We propose a UAV-assisted approach that delivers better wireless network service while retaining the maximum Quality of Experience (QoE) for the vehicles on the lane. In this paper, we present a Multi-Agent Graph Convolutional Deep Reinforcement Learning (M-AGCDRL) algorithm, which combines each agent's local observations with a low-resolution global map as input to learn a policy for each agent. The agents share their information with one another through graph attention networks, resulting in an effective joint policy. Simulation results show that the M-AGCDRL method achieves a better QoE for IoVs and good overall performance.
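The information sharing described above can be illustrated with a minimal, self-contained sketch of a single-head graph-attention aggregation step. This is not the authors' M-AGCDRL implementation: the dot-product attention scores, the fully connected adjacency, and all variable names (`graph_attention_share`, `features`, `adjacency`) are illustrative assumptions, standing in for the learned attention of a graph attention network.

```python
import numpy as np

def graph_attention_share(features, adjacency):
    """Illustrative single-head graph-attention step: each agent
    re-weights its neighbours' feature vectors by softmax-normalised
    attention scores (here simple dot-product similarity, not the
    learned attention used in the paper)."""
    scores = features @ features.T                      # pairwise similarity
    scores = np.where(adjacency > 0, scores, -np.inf)   # mask non-neighbours
    scores -= scores.max(axis=1, keepdims=True)         # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)       # softmax per agent
    return weights @ features                           # attention-weighted mix

# 3 hypothetical UAV agents with 2-d local feature vectors,
# fully connected communication graph (self-loops included)
feats = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])
adj = np.ones((3, 3))
mixed = graph_attention_share(feats, adj)
```

Each row of `mixed` is a convex combination of all agents' features, so every agent's representation now carries information from its neighbours, which is the mechanism that lets the agents form an effective joint policy.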