Paper Title
Going Beyond RF: How AI-enabled Multimodal Beamforming will Shape the NextG Standard
Paper Authors
Paper Abstract
Incorporating artificial intelligence and machine learning (AI/ML) methods within the 5G wireless standard promises autonomous network behavior and ultra-low-latency reconfiguration. However, the effort so far has focused purely on learning from radio frequency (RF) signals. Future standards and next-generation (nextG) networks beyond 5G will have two significant evolutions over state-of-the-art 5G implementations: (i) a massive number of antenna elements, scaling up to hundreds-to-thousands in number, and (ii) inclusion of AI/ML in the critical path of the network reconfiguration process, with access to sensor feeds from a variety of RF and non-RF sources. While the former allows unprecedented flexibility in 'beamforming', where signals combine constructively at a target receiver, the latter equips the network with enhanced situational awareness not captured by any single, isolated data modality. This survey presents a thorough analysis of the different approaches used for beamforming today, focusing on mmWave bands, and then makes a compelling case for incorporating non-RF sensor data from multiple modalities, such as LiDAR, radar, and GPS, to increase beamforming directional accuracy and reduce processing time. This so-called multimodal beamforming will require deep-learning-based fusion techniques, which will serve to augment the current RF-only and classical signal processing methods that do not scale well to massive antenna arrays. The survey describes relevant deep learning architectures for multimodal beamforming, identifies computational challenges and the role of edge computing in this process, discusses dataset generation tools, and finally lists open challenges that the community should tackle to realize this transformative vision of the future of beamforming.
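The abstract contrasts classical signal-processing beamforming with learned multimodal approaches. As a point of reference for the classical baseline it mentions, here is a minimal NumPy sketch of conjugate (matched) beamforming on a uniform linear array, where per-element phase shifts make the signals combine constructively toward a chosen direction. All names and parameters below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def steering_vector(n_elements, theta_rad, spacing=0.5):
    """Array response of a uniform linear array; spacing in wavelengths."""
    k = np.arange(n_elements)
    return np.exp(1j * 2 * np.pi * spacing * k * np.sin(theta_rad))

def array_gain_db(weights, theta_rad, spacing=0.5):
    """Gain (dB) of the weighted array toward direction theta."""
    a = steering_vector(len(weights), theta_rad, spacing)
    return 20 * np.log10(np.abs(weights.conj() @ a))

n = 64                                   # massive-array regime
target = np.deg2rad(30)                  # assumed user direction
# Matched (conjugate) weights: align every element's phase at the target.
w = steering_vector(n, target) / np.sqrt(n)

print(array_gain_db(w, target))          # peak array gain, 10*log10(64) ~ 18.1 dB
print(array_gain_db(w, np.deg2rad(10)))  # much lower gain off-target
```

The survey's point is that exhaustively sweeping such steering directions to find the best beam scales poorly with array size, which is exactly where learned predictions from non-RF modalities are meant to help.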