Paper Title
Improving Learning Efficiency for Wireless Resource Allocation with Symmetric Prior
Paper Authors
Paper Abstract
Improving learning efficiency is paramount for learning resource allocation with deep neural networks (DNNs) in wireless communications over highly dynamic environments. Incorporating domain knowledge into learning is a promising way to deal with this issue and is an emerging topic in the wireless community. In this article, we first briefly summarize two classes of approaches to using domain knowledge: introducing mathematical models or prior knowledge into deep learning. Then, we consider a kind of symmetric prior, permutation equivariance, which is widely present in wireless tasks. To explain how such a generic prior can be harnessed to improve learning efficiency, we resort to ranking, which jointly sorts the input and output of a DNN. We use power allocation among subcarriers, probabilistic content caching, and interference coordination to illustrate the improvement in learning efficiency gained by exploiting this property. From the case studies, we find that the number of training samples required to achieve a given system performance decreases with the number of subcarriers or contents, owing to an interesting phenomenon: "sample hardening". Simulation results show that the number of training samples, the number of free parameters in the DNNs, and the training time can be reduced dramatically by harnessing the prior knowledge. The samples required to train a DNN after ranking can be reduced by a factor of $15 \sim 2{,}400$ to achieve the same system performance as the counterpart without using the prior.
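As a rough illustration of the ranking idea described in the abstract, the minimal sketch below (Python with NumPy) sorts the DNN input, runs a network on the ranked input, and then applies the inverse permutation to the output so the allocation lines up with the original subcarrier order. The `dnn` function here is a hypothetical stand-in (a toy water-filling-like heuristic), not the paper's trained model.

```python
import numpy as np

def dnn(x):
    # Toy stand-in for a trained DNN mapping channel gains to powers:
    # a water-filling-like heuristic, assumed here only for illustration.
    p = np.maximum(x - x.mean() + 1.0 / len(x), 0.0)
    return p / p.sum()  # normalize to a unit power budget

def allocate_with_ranking(gains):
    """Rank the input, run the DNN on the sorted input, then undo the
    sort on the output so each power matches its original subcarrier."""
    order = np.argsort(gains)        # permutation that ranks the input
    ranked_out = dnn(gains[order])   # the DNN only ever sees sorted inputs
    inverse = np.argsort(order)      # permutation that undoes the ranking
    return ranked_out[inverse]

gains = np.array([0.9, 0.1, 0.5, 1.3])
print(allocate_with_ranking(gains))
```

Because the input is always presented in sorted order, the DNN effectively learns over a much smaller region of the input space, which is one intuition for why far fewer training samples suffice after ranking.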