Paper Title

Cross-Stack Workload Characterization of Deep Recommendation Systems

Paper Authors

Samuel Hsia, Udit Gupta, Mark Wilkening, Carole-Jean Wu, Gu-Yeon Wei, David Brooks

Abstract

Deep learning-based recommendation systems form the backbone of most personalized cloud services. Though the computer architecture community has recently started to take notice of deep recommendation inference, the resulting solutions have taken wildly different approaches, ranging from near-memory processing to at-scale optimizations. To better design future hardware systems for deep recommendation inference, we must first systematically examine and characterize the underlying systems-level impact of design decisions across the different levels of the execution stack. In this paper, we characterize eight industry-representative deep recommendation models at three different levels of the execution stack: algorithms and software, systems platforms, and hardware microarchitectures. Through this cross-stack characterization, we first show that system deployment choices (i.e., CPUs or GPUs, batch size granularity) can yield up to a 15x speedup. To better understand the bottlenecks for further optimization, we examine both the software operator usage breakdown and CPU frontend and backend microarchitectural inefficiencies. Finally, we model the correlation between key algorithmic model architecture features and hardware bottlenecks, revealing that no single dominant algorithmic component lies behind each hardware bottleneck.
