Paper Title

Information Freshness-Aware Task Offloading in Air-Ground Integrated Edge Computing Systems

Paper Authors

Xianfu Chen, Celimuge Wu, Tao Chen, Zhi Liu, Honggang Zhang, Mehdi Bennis, Hang Liu, Yusheng Ji

Paper Abstract

This paper studies the problem of information freshness-aware task offloading in an air-ground integrated multi-access edge computing system, which is deployed by an infrastructure provider (InP). A third-party real-time application service provider provides computing services to its subscribed mobile users (MUs), using the limited communication and computation resources of the InP under a long-term business agreement. Due to the dynamic characteristics of the system, the interactions among the MUs are modelled as a non-cooperative stochastic game, in which the control policies are coupled and each MU aims to selfishly maximize its own expected long-term payoff. To approach the Nash equilibrium solutions, we propose that each MU behaves in accordance with its local system states and the conjectured behaviours of the other MUs, based on which the stochastic game is transformed into a single-agent Markov decision process. Moreover, we derive a novel online deep reinforcement learning (RL) scheme that adopts two separate double deep Q-networks for each MU to approximate the Q-factor and the post-decision Q-factor. With the proposed deep RL scheme, each MU in the system is able to make decisions without a priori statistical knowledge of the dynamics. Numerical experiments examine the potential of the proposed scheme in balancing the age of information and the energy consumption.
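The abstract only names the learning machinery, so for intuition here is a minimal double deep Q-network (double DQN) learner sketched in PyTorch. Per the abstract, each MU would maintain two such learners, one approximating the Q-factor and one the post-decision Q-factor. Everything below (the QNetwork and DoubleDQN names, layer sizes, gamma, and the AoI/energy reward noted in the comments) is our own illustrative assumption, not the paper's implementation.

```python
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Small MLP approximating a Q-factor over discrete offloading actions."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


class DoubleDQN:
    """One double deep Q-network: the online net selects the greedy next
    action, a slowly synchronized target net evaluates it, which tempers
    the overestimation bias of plain Q-learning."""

    def __init__(self, state_dim: int, action_dim: int,
                 gamma: float = 0.99, lr: float = 1e-3):
        self.online = QNetwork(state_dim, action_dim)
        self.target = QNetwork(state_dim, action_dim)
        self.target.load_state_dict(self.online.state_dict())
        self.gamma = gamma
        self.opt = torch.optim.Adam(self.online.parameters(), lr=lr)

    def update(self, s, a, r, s_next, done):
        """One TD step on a batch of transitions. Here r would encode the
        AoI/energy trade-off, e.g. r = -(w_aoi * aoi + w_energy * energy)
        with illustrative weights w_aoi, w_energy."""
        q = self.online(s).gather(1, a)                           # Q(s, a)
        with torch.no_grad():
            a_star = self.online(s_next).argmax(1, keepdim=True)  # select
            q_next = self.target(s_next).gather(1, a_star)        # evaluate
            y = r + self.gamma * (1.0 - done) * q_next
        loss = nn.functional.smooth_l1_loss(q, y)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        return loss.item()

    def sync_target(self):
        self.target.load_state_dict(self.online.state_dict())


# Per the abstract, each MU keeps two such learners: one for the Q-factor
# and one for the post-decision Q-factor. The paper couples their learning
# targets through the post-decision state; that coupling is not reproduced
# in this sketch.
q_factor = DoubleDQN(state_dim=8, action_dim=4)
pds_q_factor = DoubleDQN(state_dim=8, action_dim=4)
```

As a design note, post-decision states are commonly used in RL to factor the transition into a known, deterministic part (handled analytically) and an unknown, stochastic part (left to learning), which is consistent with the abstract's claim that each MU can make decisions without a priori statistical knowledge of the dynamics.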
