Paper Title

A Unified Framework for Joint Energy and AoI Optimization via Deep Reinforcement Learning for NOMA MEC-based Networks

Paper Authors

Abolfazl Zakeri, Mohammad Parvini, Mohammad Reza Javan, Nader Mokari, Eduard A. Jorswieck

Paper Abstract

In this paper, we design a novel scheduling and resource allocation algorithm for a smart mobile edge computing (MEC)-assisted radio access network. Different from previous energy efficiency (EE)-based or average age of information (AAoI)-based network designs, we propose a unified metric for simultaneously optimizing the ESE and AAoI of the network. To further improve the system capacity, non-orthogonal multiple access (NOMA) is proposed as a candidate among multiple access schemes for future cellular networks. Our main aim is to maximize the long-term objective function under AoI, NOMA, and resource capacity constraints using stochastic optimization. To overcome the complexities and unknown dynamics of the network parameters (e.g., wireless channels and interference), we apply the concept of reinforcement learning and implement a deep Q-network (DQN). Simulation results illustrate the effectiveness of the proposed framework and analyze the impact of different parameters on network performance. Based on the results, our proposed reward function converges quickly with a negligible loss value. Moreover, the results show that our approach outperforms existing state-of-the-art baselines, for example by up to 64% in the objective function and up to 51% in AAoI.
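To make the unified-metric idea in the abstract concrete, below is a minimal Python sketch of a reward that trades off an energy-efficiency term against the average age of information, as a DQN reward signal might. The age-update rule, the `energy_efficiency` normalization, and the weight `w` are illustrative assumptions; the paper's exact ESE definition and constraints are given in the full text.

```python
import numpy as np

def update_ages(ages, served_mask):
    """AoI dynamics (assumed form): the age of a user's information
    resets to 1 when its status update is delivered this slot,
    and increments by 1 otherwise."""
    return np.where(served_mask, 1, ages + 1)

def unified_reward(throughput_bits, energy_joules, ages, w=0.5):
    """Toy unified metric: a weighted trade-off between an
    energy-efficiency term (bits per joule) and the AAoI.
    The weight w and the scaling are illustrative assumptions,
    not the paper's definition of ESE."""
    ee = throughput_bits / max(energy_joules, 1e-9)  # bits per joule
    aaoi = ages.mean()                               # average AoI over users
    return w * ee - (1 - w) * aaoi

# Example: 4 users; users 0 and 2 are served in the current slot.
ages = np.array([3, 1, 5, 2])
served = np.array([True, False, True, False])
ages = update_ages(ages, served)
print(unified_reward(throughput_bits=1.2e6, energy_joules=0.8, ages=ages))
```

In a DQN setup such as the one the abstract describes, a scalar reward of this kind would be returned by the environment at each scheduling slot, so maximizing the long-term return jointly favors high energy efficiency and low AAoI.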
