Paper Title

PORA: Predictive Offloading and Resource Allocation in Dynamic Fog Computing Systems

Authors

Xin Gao, Xi Huang, Simeng Bian, Ziyu Shao, and Yang Yang

Abstract

In multi-tiered fog computing systems, to accelerate the processing of computation-intensive tasks for real-time IoT applications, resource-limited IoT devices can offload part of their workloads to nearby fog nodes, whereafter such workloads may be further offloaded to upper-tier fog nodes with greater computation capacities. Such hierarchical offloading, though promising to shorten processing latencies, may also induce excessive power consumption and latency for wireless transmissions. With the temporal variation of various system dynamics, such a trade-off makes it rather challenging to conduct effective and online offloading decision making. Meanwhile, the fundamental benefits of predictive offloading to fog computing systems still remain unexplored. In this paper, we focus on the problem of dynamic offloading and resource allocation with traffic prediction in multi-tiered fog computing systems. By formulating the problem as a stochastic network optimization problem, we aim to minimize the time-average power consumption with stability guarantees for all queues in the system. We exploit unique problem structures and propose PORA, an efficient and distributed predictive offloading and resource allocation scheme for multi-tiered fog computing systems. Our theoretical analysis and simulation results show that PORA incurs near-optimal power consumption with queue stability guarantees. Furthermore, PORA requires only a mild amount of predictive information to achieve a notable latency reduction, even in the presence of prediction errors.
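The power-versus-backlog trade-off described in the abstract is characteristic of Lyapunov drift-plus-penalty control. The sketch below is not the PORA algorithm from the paper; it is a minimal single-queue illustration, under assumed parameters (`V`, linear power cost, integer offload amounts), of how a per-slot controller can trade a penalty term (power) against queue drift to keep a queue stable while limiting power use.

```python
import random

def drift_plus_penalty_offload(Q, V, power_per_unit, b_max):
    """Pick how many task units to offload this slot by minimizing
    V * (power cost) - Q * (service), the standard drift-plus-penalty
    trade-off: a larger backlog Q pushes toward more offloading,
    while a larger V weights power savings more heavily."""
    best_b, best_obj = 0, float("inf")
    for b in range(b_max + 1):
        obj = V * power_per_unit * b - Q * b
        if obj < best_obj:
            best_b, best_obj = b, obj
    return best_b

def simulate(T=10_000, V=10.0, power_per_unit=1.0, b_max=5, seed=0):
    """Run T slots with random task arrivals; return the final queue
    backlog and the time-average power consumption."""
    rng = random.Random(seed)
    Q, total_power = 0, 0.0
    for _ in range(T):
        a = rng.randint(0, 3)           # task units arriving this slot
        b = drift_plus_penalty_offload(Q, V, power_per_unit, b_max)
        total_power += power_per_unit * b
        Q = max(Q + a - b, 0)           # queue-backlog update
    return Q, total_power / T
```

With these assumed parameters the controller offloads nothing until the backlog exceeds `V * power_per_unit`, so the queue stays bounded near that threshold while average power settles near the arrival rate; raising `V` lowers power at the cost of a longer queue, which mirrors the power/latency trade-off the paper analyzes.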
