Title
Deep Reinforcement Learning for Multi-Agent Power Control in Heterogeneous Networks
Authors
Abstract
We consider a typical heterogeneous network (HetNet), in which multiple access points (APs) are deployed to serve users by reusing the same spectrum band. Since different APs and users may cause severe interference to each other, advanced power control techniques are needed to manage the interference and enhance the sum-rate of the whole network. Conventional power control techniques first collect instantaneous global channel state information (CSI) and then calculate sub-optimal solutions. Nevertheless, it is challenging to collect instantaneous global CSI in the HetNet, in which global CSI typically changes rapidly. In this paper, we exploit deep reinforcement learning (DRL) to design a multi-agent power control algorithm for the HetNet. To be specific, by treating each AP as an agent with a local deep neural network (DNN), we propose a multiple-actor-shared-critic (MASC) method to train the local DNNs separately in an online trial-and-error manner. With the proposed algorithm, each AP can independently use its local DNN to control the transmit power with only local observations. Simulation results show that the proposed algorithm outperforms conventional power control algorithms in terms of both the converged average sum-rate and the computational complexity.
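The multiple-actor-shared-critic structure described above can be sketched minimally as follows. This is not the paper's implementation: the network sizes, the observation dimension, and the sigmoid mapping of actor outputs to transmit powers are all illustrative assumptions. It only shows the architectural idea: one local actor per AP acting on local observations, and a single shared critic that sees the global state-action pair during training.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyMLP:
    """One-hidden-layer network; a stand-in for each agent's local DNN.
    Layer sizes are arbitrary illustrative choices, not the paper's."""
    def __init__(self, n_in, n_out, hidden=16):
        self.W1 = rng.normal(0.0, 0.1, (n_in, hidden))
        self.W2 = rng.normal(0.0, 0.1, (hidden, n_out))

    def forward(self, x):
        return np.tanh(x @ self.W1) @ self.W2

N_APS, OBS_DIM = 3, 4  # hypothetical: 3 APs, 4-dimensional local observation

# One local actor per AP: maps a local observation to a transmit power.
actors = [TinyMLP(OBS_DIM, 1) for _ in range(N_APS)]

# A single shared critic: during centralized training it evaluates the
# global observation-action pair; it is not needed at execution time.
critic = TinyMLP(N_APS * OBS_DIM + N_APS, 1)

def act(local_obs):
    """Decentralized execution: each AP picks its transmit power
    independently from its own observation (squashed into (0, 1))."""
    return np.array([
        1.0 / (1.0 + np.exp(-actors[i].forward(local_obs[i])[0]))
        for i in range(N_APS)
    ])

obs = rng.normal(size=(N_APS, OBS_DIM))       # one local observation per AP
powers = act(obs)                             # per-AP transmit powers
value = critic.forward(np.concatenate([obs.ravel(), powers]))[0]
```

At execution time only `act` is needed, which is the point of the scheme: the shared critic guides training, while each AP's deployed policy depends solely on local information.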