Paper Title

SC2 Benchmark: Supervised Compression for Split Computing

Authors

Yoshitomo Matsubara, Ruihan Yang, Marco Levorato, Stephan Mandt

Abstract

With the increasing demand for deep learning models on mobile devices, splitting neural network computation between the device and a more powerful edge server has become an attractive solution. However, existing split computing approaches often underperform compared to a naive baseline of remote computation on compressed data. Recent studies propose learning compressed representations that contain more relevant information for supervised downstream tasks, showing improved tradeoffs between compressed data size and supervised performance. However, existing evaluation metrics only provide an incomplete picture of split computing. This study introduces supervised compression for split computing (SC2) and proposes new evaluation criteria: minimizing computation on the mobile device, minimizing transmitted data size, and maximizing model accuracy. We conduct a comprehensive benchmark study using 10 baseline methods, three computer vision tasks, and over 180 trained models, and discuss various aspects of SC2. We also release sc2bench, a Python package for future research on SC2. Our proposed metrics and package will help researchers better understand the tradeoffs of supervised compression in split computing.
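The three evaluation axes above can be made concrete with a toy sketch. This is not the paper's actual sc2bench API; the `encoder`, `server_head`, and `evaluate_sc2` names, the 8-bit quantization bottleneck, and the threshold classifier are all hypothetical stand-ins, assumed here only to illustrate measuring on-device cost, transmitted payload size, and accuracy for a split pipeline.

```python
import time
import zlib

def encoder(x):
    # Hypothetical on-device bottleneck: quantize features to 8 bits
    # so the transmitted payload is small.
    return [round(v * 255) & 0xFF for v in x]

def server_head(z):
    # Hypothetical server-side head: predict 1 if the mean quantized
    # feature is above mid-range, else 0.
    return 1 if sum(z) / len(z) > 127 else 0

def evaluate_sc2(samples, labels):
    """Report the three SC2 axes for this toy split model:
    (1) on-device encoding time, (2) average transmitted bytes,
    (3) task accuracy."""
    t0 = time.perf_counter()
    payloads = [zlib.compress(bytes(encoder(x))) for x in samples]
    device_time = time.perf_counter() - t0  # axis 1: device computation

    total_bytes, correct = 0, 0
    for payload, y in zip(payloads, labels):
        total_bytes += len(payload)        # axis 2: transmitted data size
        z = list(zlib.decompress(payload)) # server reconstructs features
        if server_head(z) == y:
            correct += 1                   # axis 3: model accuracy

    return {
        "device_time_s": device_time,
        "avg_payload_bytes": total_bytes / len(samples),
        "accuracy": correct / len(labels),
    }

samples = [[0.9] * 64, [0.1] * 64]  # two synthetic 64-dim feature vectors
labels = [1, 0]
print(evaluate_sc2(samples, labels))
```

A real benchmark would replace the toy encoder with a learned compression model and report the same three quantities, tracing out the tradeoff surface the paper proposes.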
