Paper Title

Communication and Computation O-RAN Resource Slicing for URLLC Services Using Deep Reinforcement Learning

Authors

Abderrahime Filali, Boubakr Nour, Soumaya Cherkaoui, Abdellatif Kobbane

Abstract

The evolution of the future beyond-5G/6G networks towards a service-aware network is based on network slicing technology. With network slicing, communication service providers seek to meet all the requirements imposed by the verticals, including ultra-reliable low-latency communication (URLLC) services. In addition, the open radio access network (O-RAN) architecture paves the way for flexible sharing of network resources by introducing more programmability into the RAN. RAN slicing is an essential part of end-to-end network slicing since it ensures efficient sharing of communication and computation resources. However, due to the stringent requirements of URLLC services and the dynamics of the RAN environment, RAN slicing is challenging. In this article, we propose a two-level RAN slicing approach based on the O-RAN architecture to allocate the communication and computation RAN resources among URLLC end-devices. For each RAN slicing level, we model the resource slicing problem as a single-agent Markov decision process and design a deep reinforcement learning algorithm to solve it. Simulation results demonstrate the efficiency of the proposed approach in meeting the desired quality of service requirements.
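The abstract frames each RAN slicing level as a single-agent Markov decision process solved with deep reinforcement learning. The paper's actual agents are deep networks operating on real O-RAN state; as a minimal sketch of the MDP framing only, the toy below uses tabular Q-learning, with an invented environment (resource-block budget, URLLC demand, and reward shape are all assumptions, not values from the paper): the agent learns how many resource blocks to grant a URLLC slice so that its QoS is met without excessive over-provisioning.

```python
import random


class ToySlicingEnv:
    """Hypothetical toy MDP for communication-resource slicing: at each
    step the agent grants 0..total_rbs resource blocks to a URLLC slice.
    Numbers are illustrative assumptions, not taken from the paper."""

    def __init__(self, total_rbs=10, urllc_demand=6, seed=0):
        self.total_rbs = total_rbs
        self.urllc_demand = urllc_demand  # RBs needed to meet URLLC QoS
        self.rng = random.Random(seed)

    def reset(self):
        # State: a coarse URLLC load level observed by the agent.
        self.state = self.rng.randrange(self.total_rbs + 1)
        return self.state

    def step(self, action):
        # Reward: +1 if the URLLC QoS is met, -1 if violated, minus a
        # per-RB cost for over-provisioning beyond the actual demand.
        met = action >= self.urllc_demand
        reward = (1.0 if met else -1.0) - 0.3 * max(0, action - self.urllc_demand)
        self.state = self.rng.randrange(self.total_rbs + 1)
        return self.state, reward


def q_learning(env, steps=30000, alpha=0.1, gamma=0.9, eps=0.3):
    """Tabular Q-learning stand-in for the paper's DRL agent."""
    actions = range(env.total_rbs + 1)
    q = {(s, a): 0.0 for s in range(env.total_rbs + 1) for a in actions}
    s = env.reset()
    for _ in range(steps):
        # Epsilon-greedy exploration over the allocation actions.
        if env.rng.random() < eps:
            a = env.rng.randrange(env.total_rbs + 1)
        else:
            a = max(actions, key=lambda x: q[(s, x)])
        s2, r = env.step(a)
        best_next = max(q[(s2, x)] for x in actions)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2
    return q


env = ToySlicingEnv()
q = q_learning(env)
# Greedy policy: RBs granted to the URLLC slice in each load state.
policy = {s: max(range(env.total_rbs + 1), key=lambda a: q[(s, a)])
          for s in range(env.total_rbs + 1)}
```

The learned greedy policy grants at least the URLLC demand in every state, since under-provisioning is penalized far more heavily than a small surplus; the paper replaces this toy table with deep networks and a two-level (communication and computation) decomposition.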
