Paper Title

AdaDeep: A Usage-Driven, Automated Deep Model Compression Framework for Enabling Ubiquitous Intelligent Mobiles

Authors

Liu, Sicong; Du, Junzhao; Nan, Kaiming; Zhou, Zimu; Wang, Atlas; Lin, Yingyan

Abstract

Recent breakthroughs in Deep Neural Networks (DNNs) have fueled a tremendously growing demand for bringing DNN-powered intelligence into mobile platforms. While the potential of deploying DNNs on resource-constrained platforms has been demonstrated by DNN compression techniques, the current practice suffers from two limitations: 1) merely stand-alone compression schemes are investigated, even though each compression technique only suits certain types of DNN layers; and 2) compression techniques are mostly optimized for DNNs' inference accuracy, without explicitly considering other application-driven system performance metrics (e.g., latency and energy cost) and the varying resource availability across platforms (e.g., storage and processing capability). To this end, we propose AdaDeep, a usage-driven, automated DNN compression framework for systematically exploring the desired trade-off between performance and resource constraints, from a holistic system level. Specifically, in a layer-wise manner, AdaDeep automatically selects the most suitable combination of compression techniques and the corresponding compression hyperparameters for a given DNN. Thorough evaluations on six datasets and across twelve devices demonstrate that AdaDeep can achieve up to $18.6\times$ latency reduction, $9.8\times$ energy-efficiency improvement, and $37.3\times$ storage reduction in DNNs while incurring negligible accuracy loss. Furthermore, AdaDeep also uncovers multiple novel combinations of compression techniques.
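To make the layer-wise selection idea concrete, the abstract's core loop can be sketched as follows. This is a toy illustration, not the authors' implementation: AdaDeep learns its selection policy (rather than using a fixed greedy rule), and the technique names, per-technique accuracy/cost estimates, and the reward weights below are all illustrative assumptions.

```python
# Toy sketch of usage-driven, layer-wise compression selection:
# for each layer, score candidate compression techniques by a reward
# that trades off estimated accuracy loss against resource-cost savings.
# All technique names and numbers are hypothetical placeholders.

TECHNIQUES = ["none", "weight_pruning", "svd_factorization", "depthwise_conv"]

# Hypothetical per-technique estimates: (accuracy delta, remaining cost factor),
# where cost factor 1.0 means no savings and 0.15 means 85% cost reduction.
EFFECTS = {
    "none": (0.0, 1.00),
    "weight_pruning": (-0.004, 0.40),
    "svd_factorization": (-0.006, 0.30),
    "depthwise_conv": (-0.010, 0.15),
}

def reward(acc_delta, cost_factor, alpha=1.0, beta=0.5):
    """Higher is better: penalize accuracy loss, reward cost savings.

    alpha/beta encode the usage-driven trade-off between accuracy
    and system cost (latency, energy, storage) named in the abstract.
    """
    return alpha * acc_delta + beta * (1.0 - cost_factor)

def select_per_layer(num_layers):
    """Greedy stand-in for a learned layer-wise selection controller."""
    plan = []
    for layer in range(num_layers):
        best = max(TECHNIQUES, key=lambda t: reward(*EFFECTS[t]))
        plan.append((layer, best))
    return plan

print(select_per_layer(3))
```

Shifting `beta` downward makes the scorer favor accuracy-preserving techniques; shifting it upward favors aggressive compression, mirroring how different platforms' resource budgets would steer the selection.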
