Title
Against Membership Inference Attack: Pruning is All You Need
Authors
Abstract
The large model size, high computational cost, and vulnerability to membership inference attacks (MIA) have impeded the adoption of deep learning and deep neural networks (DNNs), especially on mobile devices. To address these challenges, we envision that weight pruning can help defend DNNs against MIA while reducing model storage and computation. In this work, we propose a pruning algorithm, and we show that it can find a subnetwork that prevents privacy leakage from MIA while achieving accuracy competitive with the original DNN. We also verify our theoretical insights with experiments. Our experimental results show that the attack accuracy under model compression is up to 13.6% and 10% lower than that of the baseline and the Min-Max game, respectively.
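Below is a minimal sketch of magnitude-based weight pruning using PyTorch's built-in pruning utilities, to illustrate the kind of model compression the abstract refers to. It is a generic example under assumed settings (the toy architecture and the 50% sparsity level are illustrative), not the paper's specific pruning algorithm or its MIA evaluation.

```python
# Minimal sketch: magnitude-based (L1) unstructured weight pruning in PyTorch.
# NOTE: generic illustration only; not the paper's proposed algorithm.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small example classifier (hypothetical architecture for illustration).
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Prune 50% of the smallest-magnitude weights in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the pruning mask permanent

# Report the resulting weight sparsity of the subnetwork; in the paper's
# setting such a subnetwork would then be trained/fine-tuned and evaluated
# against membership inference attacks.
linears = [m for m in model.modules() if isinstance(m, nn.Linear)]
zeros = sum((m.weight == 0).sum().item() for m in linears)
total = sum(m.weight.numel() for m in linears)
print(f"overall weight sparsity: {zeros / total:.2%}")
```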