Paper Title
Inference Time Optimization Using BranchyNet Partitioning
Paper Authors
Paper Abstract
Deep Neural Network (DNN) applications with edge computing present a trade-off between responsiveness and computational resources. On one hand, edge computing can provide high responsiveness by deploying computational resources close to end devices, which may be prohibitive for the majority of cloud computing services. On the other hand, DNN inference requires computational power that may not be available on edge devices, but a cloud server can provide it. To resolve this trade-off, we partition a DNN between the edge device and the cloud server, so that the first DNN layers are processed at the edge and the remaining layers at the cloud. This paper proposes an optimal partitioning of a DNN according to network bandwidth, the computational resources of the edge and the cloud, and parameters inherent to the data. Our proposal aims to minimize inference time in order to support highly responsive applications. To this end, we show the equivalence between the DNN partitioning problem and the shortest-path problem, and find an optimal solution using Dijkstra's algorithm.