Paper Title
Delay-aware Resource Allocation in Fog-assisted IoT Networks Through Reinforcement Learning
Paper Authors
Paper Abstract
Fog nodes deployed in the vicinity of IoT devices are a promising means of providing low-latency services, since tasks can be offloaded from IoT devices to them. Mobile IoT consists of mobile IoT devices such as vehicles, wearable devices, and smartphones. Owing to time-varying channel conditions, traffic loads, and computing loads, it is challenging to improve the quality of service (QoS) of mobile IoT devices. Since task delay consists of both transmission delay and computing delay, we investigate the allocation of both radio resources in the wireless channel and computation resources in the fog node to minimize the delay of all tasks while satisfying their QoS constraints. We formulate the resource allocation problem as an integer non-linear program in which both radio and computation resources are taken into account. Because IoT tasks are dynamic, the resource allocations for different tasks are coupled with one another, and future information is impractical to obtain. Therefore, we design an online reinforcement learning algorithm that makes sub-optimal decisions in real time based on the system's experience-replay data. The performance of the designed algorithm is demonstrated by extensive simulation results.
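For illustration, the kind of delay-minimization problem described in the abstract can be sketched as follows. All notation here (task size $S_i$, required CPU cycles $C_i$, achievable rate $R_{i,k}$ on channel $k$, fog CPU budget $F$, deadline $D_i^{\max}$) is an assumption for this sketch and is not the paper's notation.

```latex
% Illustrative formulation sketch; all symbols are assumptions, not the paper's notation.
% x_{i,k} \in \{0,1\}: task i served on channel k;  f_i: CPU cycles per second granted to task i.
\begin{align}
\min_{\{x_{i,k}\},\,\{f_i\}}\ & \sum_{i}\Big(\underbrace{\frac{S_i}{\sum_{k} x_{i,k} R_{i,k}}}_{\text{transmission delay}}
  + \underbrace{\frac{C_i}{f_i}}_{\text{computing delay}}\Big) \\
\text{s.t.}\ & \frac{S_i}{\sum_{k} x_{i,k} R_{i,k}} + \frac{C_i}{f_i} \le D_i^{\max} \quad \forall i \quad \text{(QoS constraint)} \\
             & \sum_{i} x_{i,k} \le 1 \quad \forall k, \qquad \sum_{i} f_i \le F, \qquad x_{i,k} \in \{0,1\}.
\end{align}
```

A problem of this shape is an integer non-linear program because of the binary channel-assignment variables combined with the non-linear delay terms.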
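To make the online, experience-replay-based decision making concrete, below is a minimal sketch of such a learning loop for per-task radio/compute allocation. It uses tabular Q-learning with a replay buffer and a toy delay model; the state discretization, action space, and reward are illustrative assumptions and do not reproduce the paper's algorithm or simulation setup.

```python
# Minimal sketch only: tabular Q-learning with an experience-replay buffer for
# per-task joint radio/compute allocation. The environment, state discretization,
# action space and reward below are illustrative assumptions, not the paper's model.
import random
import numpy as np

N_CHANNELS, N_CPU_LEVELS = 3, 3           # assumed action space: (channel, CPU share)
N_ACTIONS = N_CHANNELS * N_CPU_LEVELS
N_LEVELS = 4                              # assumed discretization of each state feature
N_STATES = N_LEVELS * N_LEVELS            # state = (channel quality, fog load)

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_ACTIONS))       # action-value table
replay, REPLAY_CAP = [], 10_000           # experience-replay buffer
alpha, gamma, eps = 0.1, 0.9, 0.1         # learning rate, discount, exploration

def observe_state():
    """Toy observation: random channel-quality and fog-load levels."""
    chan_q, fog_load = rng.integers(N_LEVELS), rng.integers(N_LEVELS)
    return int(chan_q * N_LEVELS + fog_load)

def step(state, action):
    """Toy model: reward is the negative of (transmission delay + computing delay)."""
    chan_q, fog_load = divmod(state, N_LEVELS)
    channel, cpu_level = divmod(action, N_CPU_LEVELS)
    tx_delay = 1.0 / (1 + chan_q + channel)         # better channel -> lower delay
    comp_delay = (1 + fog_load) / (1 + cpu_level)   # more CPU share -> lower delay
    return -(tx_delay + comp_delay), observe_state()

state = observe_state()
for t in range(20_000):
    # epsilon-greedy online decision for the task that just arrived
    action = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(Q[state].argmax())
    reward, next_state = step(state, action)
    replay.append((state, action, reward, next_state))
    if len(replay) > REPLAY_CAP:
        replay.pop(0)

    # learn from a random mini-batch of past experience (experience replay)
    for s, a, r, s2 in random.sample(replay, k=min(32, len(replay))):
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    state = next_state

print("Greedy (channel, CPU-level) choice per state:",
      [divmod(int(a), N_CPU_LEVELS) for a in Q.argmax(axis=1)])
```

The replay buffer here plays the same conceptual role as the "experience replay data" mentioned in the abstract: past (state, action, reward, next state) tuples are re-sampled to refine the value estimates, so the agent can keep making real-time allocation decisions without knowledge of future task arrivals.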