Title

DiVa: An Accelerator for Differentially Private Machine Learning

Authors

Beomsik Park, Ranggi Hwang, Dongho Yoon, Yoonhyuk Choi, Minsoo Rhu

Abstract

The widespread deployment of machine learning (ML) is raising serious concerns on protecting the privacy of users who contributed to the collection of training data. Differential privacy (DP) is rapidly gaining momentum in the industry as a practical standard for privacy protection. Despite DP's importance, however, little has been explored within the computer systems community regarding the implication of this emerging ML algorithm on system designs. In this work, we conduct a detailed workload characterization on a state-of-the-art differentially private ML training algorithm named DP-SGD. We uncover several unique properties of DP-SGD (e.g., its high memory capacity and computation requirements vs. non-private ML), root-causing its key bottlenecks. Based on our analysis, we propose an accelerator for differentially private ML named DiVa, which provides a significant improvement in compute utilization, leading to 2.6x higher energy-efficiency vs. conventional systolic arrays.
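For readers unfamiliar with the algorithm the paper characterizes: DP-SGD differs from ordinary SGD in that it clips each example's gradient individually and adds Gaussian noise to the aggregate before the update. Materializing per-example gradients is what drives the extra memory capacity and compute cost the abstract refers to. The sketch below is a minimal NumPy illustration of one DP-SGD step in the style of Abadi et al.; the function name and default hyperparameters are illustrative, not taken from the paper or from DiVa's implementation.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1, rng=None):
    """One DP-SGD update. per_example_grads: shape (batch_size, num_params)."""
    rng = rng or np.random.default_rng()
    # 1. Clip each example's gradient to L2 norm <= clip_norm.
    #    This per-example step (absent in non-private SGD) forces the
    #    training pipeline to keep one gradient per example in memory.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(
        1.0, clip_norm / np.maximum(norms, 1e-12))
    # 2. Sum the clipped gradients and add Gaussian noise whose scale is
    #    calibrated to the clipping bound (sigma = noise_multiplier * clip_norm).
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=params.shape)
    # 3. Average over the batch and take a plain SGD step.
    return params - lr * noisy_sum / per_example_grads.shape[0]
```

A quick usage example under the same assumptions, with a toy 8-parameter model and a batch of 32 per-example gradients:

```python
rng = np.random.default_rng(0)
params = rng.normal(size=8)
grads = rng.normal(size=(32, 8))  # per-example gradients for one minibatch
params = dp_sgd_step(params, grads)
```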
