Paper Title


Optimising cost vs accuracy of decentralised analytics in fog computing environments

Authors

Valerio, Lorenzo, Passarella, Andrea, Conti, Marco

Abstract


The exponential growth of devices and data at the edges of the Internet is raising scalability and privacy concerns about approaches based exclusively on remote cloud platforms. Data gravity, a fundamental concept in Fog Computing, points towards decentralisation of computation for data analysis as a viable alternative to address those concerns. Decentralising AI tasks across several cooperative devices means identifying the optimal set of locations, or Collection Points (CPs for short), to use, in the continuum between full centralisation (i.e., all data on a single device) and full decentralisation (i.e., data kept at the source locations). We propose an analytical framework able to find the optimal operating point in this continuum, linking the accuracy of the learning task with the corresponding network and computational cost of moving data and running the distributed training at the CPs. We show through simulations that the model accurately predicts the optimal trade-off, which is quite often an intermediate point between full centralisation and full decentralisation, and that it yields a significant cost saving with respect to both extremes. Finally, the analytical model admits closed-form or numeric solutions, making it not only a performance evaluation instrument but also a design tool for configuring a given distributed learning task optimally before its deployment.
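The trade-off described above can be sketched with a toy optimisation. This is purely illustrative and is not the paper's actual model: all functional forms and constants below (`LAMBDA`, `C_NET`, `C_CPU`, `C_AGG`, the accuracy-loss curve) are assumptions chosen only to show how an intermediate number of Collection Points can minimise a combined accuracy-plus-cost objective.

```python
# Toy illustration (NOT the paper's model): choosing the number of
# Collection Points (CPs) k in the continuum between full centralisation
# (k = 1) and full decentralisation (k = N). All functional forms and
# constants are illustrative assumptions.

N = 64            # edge devices, each holding one unit of data
LAMBDA = 2.0      # assumed weight of the accuracy-loss term
C_NET = 0.01      # assumed network cost per data unit moved to a CP
C_CPU = 0.05      # assumed computation cost per sample trained at a CP
C_AGG = 0.02      # assumed coordination cost per participating CP

def total_cost(k):
    # Accuracy loss: assumed to grow as data is fragmented across more CPs.
    acc_loss = k / (N + k)
    # Network: the N - k sources that are not themselves CPs ship their data.
    net = C_NET * (N - k)
    # Computation: each CP trains on roughly N / k samples (in parallel).
    cpu = C_CPU * N / k
    # Aggregation/coordination overhead grows with the number of CPs.
    agg = C_AGG * k
    return LAMBDA * acc_loss + net + cpu + agg

best_k = min(range(1, N + 1), key=total_cost)
print(best_k)  # an intermediate operating point, neither 1 nor N
```

Under these assumed parameters the minimiser lands strictly between the two extremes, mirroring the paper's simulation finding that the optimal trade-off is often an intermediate point, with a lower total cost than either full centralisation or full decentralisation.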
