Title
Interplay between Distributed AI Workflow and URLLC
Authors
Abstract
Distributed artificial intelligence (AI) has recently achieved tremendous breakthroughs in various communication services, ranging from fault-tolerant factory automation to smart cities. When distributed learning runs over a set of wirelessly connected devices, random channel fluctuations and the incumbent services simultaneously running on the same network affect its performance. In this paper, we investigate the interplay between a distributed AI workflow and ultra-reliable low-latency communication (URLLC) services running concurrently over the same network. Using 3GPP-compliant simulations of a factory automation use case, we show the impact of various distributed AI settings (e.g., model size and the number of participating devices) on the convergence time of distributed AI and on the application-layer performance of URLLC. Our simulation results show that the impact of distributed AI on the availability of URLLC devices is significant unless we leverage the existing 5G-NR quality-of-service handling mechanisms to separate the traffic of the two services. Moreover, with a proper distributed AI configuration (e.g., proper user selection), we can substantially reduce network resource utilization, leading to lower latency for distributed AI and higher availability for URLLC users. Our results provide important insights for future 6G and AI standardization.
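The user-selection idea above can be illustrated with a minimal sketch of one federated-averaging round: only a fraction of devices upload model updates, which directly reduces the number of transmissions competing with URLLC traffic. All names and values here (NUM_DEVICES, MODEL_SIZE, the 0.25 selection fraction, the random local update) are illustrative assumptions, not parameters from the paper.

```python
import random

NUM_DEVICES = 20   # assumed number of wireless devices in the workflow
MODEL_SIZE = 1000  # assumed number of model parameters per update

def local_update(global_model, seed):
    """Stand-in for a device's local training: a small random perturbation."""
    rng = random.Random(seed)
    return [w + rng.uniform(-0.01, 0.01) for w in global_model]

def select_users(num_devices, fraction):
    """Pick a subset of devices; fewer uploads -> lower network utilization."""
    k = max(1, int(num_devices * fraction))
    return random.sample(range(num_devices), k)

def fedavg_round(global_model, fraction):
    """One round: selected devices train locally, server averages their models."""
    selected = select_users(NUM_DEVICES, fraction)
    updates = [local_update(global_model, s) for s in selected]
    averaged = [sum(ws) / len(ws) for ws in zip(*updates)]
    return averaged, len(selected)

model = [0.0] * MODEL_SIZE
model, n_uploads = fedavg_round(model, fraction=0.25)
print(n_uploads)  # devices that transmitted this round (5 of 20)
```

Lowering the selection fraction trades per-round update quality for fewer simultaneous uplink transmissions, which is the lever the abstract points to for protecting URLLC availability.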