Paper Title
AoI and Energy Consumption Oriented Dynamic Status Updating in Caching Enabled IoT Networks
Paper Authors
Paper Abstract
Caching has been regarded as a promising technique to alleviate the energy consumption of sensors in Internet of Things (IoT) networks by responding to users' requests with the data packets stored in the edge caching node (ECN). For real-time applications in caching enabled IoT networks, it is essential to develop dynamic status update strategies that strike a balance between the information freshness experienced by users and the energy consumed by the sensor, which, however, has not been well addressed. In this paper, we first depict the evolution of information freshness, in terms of age of information (AoI), at each user. Then, we formulate a dynamic status update optimization problem to minimize the expectation of a long-term accumulative cost, which jointly considers the users' AoI and the sensor's energy consumption. To solve this problem, the status updating procedure is cast as a Markov decision process (MDP), and a model-free reinforcement learning algorithm is proposed, which addresses the challenge posed by the unknown dynamics of the formulated MDP. Finally, simulations are conducted to validate the convergence of the proposed algorithm and its effectiveness compared with the zero-wait baseline policy.
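The abstract does not specify which model-free algorithm is used or the exact state space, AoI dynamics, request model, or cost weights. The following is a minimal illustrative sketch, not the paper's method: it assumes a simplified slotted model where the state is the cached packet's age at the ECN plus each user's AoI, the action is whether the sensor pushes a fresh update, the per-slot cost is the sum of user AoIs plus a weighted energy term, and tabular Q-learning (one common model-free choice) is run on sampled transitions. All numeric parameters (`NUM_USERS`, `W_ENERGY`, `P_REQ`, etc.) are assumptions for illustration.

```python
import random
from collections import defaultdict

# Illustrative assumptions only -- none of these values come from the paper.
NUM_USERS = 2      # users served by the edge caching node (ECN)
AOI_CAP = 10       # cap AoI values so the tabular state space stays finite
W_ENERGY = 5.0     # weight of the sensor's per-update energy cost
GAMMA = 0.9        # discount factor of the long-term accumulative cost
ALPHA = 0.1        # Q-learning step size
EPSILON = 0.1      # epsilon-greedy exploration rate
P_REQ = 0.4        # per-slot probability that a user requests the cached packet

def step(state, action):
    """Advance the assumed dynamics by one slot.

    state  = (age of the packet cached at the ECN, AoI of user 1, ..., user N)
    action = 1 -> the sensor transmits a fresh update to the ECN (costs energy)
             0 -> the ECN keeps serving the currently cached packet
    A requesting user's AoI drops to the cached packet's age; otherwise it ages by one.
    """
    cache_age, *user_aoi = state
    cache_age = 1 if action == 1 else min(cache_age + 1, AOI_CAP)
    next_aoi = []
    for a in user_aoi:
        if random.random() < P_REQ:           # user pulls the cached packet
            next_aoi.append(min(cache_age, AOI_CAP))
        else:                                 # no request: information keeps aging
            next_aoi.append(min(a + 1, AOI_CAP))
    cost = sum(next_aoi) + W_ENERGY * action  # joint AoI + energy cost
    return (cache_age, *next_aoi), cost

def q_learning(num_slots=200_000):
    """Model-free Q-learning: learns only from sampled transitions,
    without knowing the request probabilities or AoI dynamics."""
    Q = defaultdict(float)                    # Q[(state, action)] -> expected cost-to-go
    state = (1,) + (1,) * NUM_USERS
    for _ in range(num_slots):
        if random.random() < EPSILON:
            action = random.randint(0, 1)
        else:                                 # greedy w.r.t. cost: pick the smaller Q-value
            action = min((0, 1), key=lambda a: Q[(state, a)])
        next_state, cost = step(state, action)
        best_next = min(Q[(next_state, a)] for a in (0, 1))
        Q[(state, action)] += ALPHA * (cost + GAMMA * best_next - Q[(state, action)])
        state = next_state
    return Q

if __name__ == "__main__":
    Q = q_learning()
    probe = (3,) + (3,) * NUM_USERS           # a hypothetical moderately stale state
    print("Q(update) =", Q[(probe, 1)], " Q(wait) =", Q[(probe, 0)])
```

A zero-wait baseline in this toy model corresponds to choosing action 1 in every slot; comparing its average cost with that of the learned greedy policy mirrors, in spirit, the comparison described in the abstract.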