Paper Title
Federated Learning with Sparsified Model Perturbation: Improving Accuracy under Client-Level Differential Privacy
Paper Authors
Paper Abstract
Federated learning (FL) enables edge devices to collaboratively learn a shared model while keeping their training data local, and has received great attention recently for its privacy advantages over the traditional centralized learning paradigm. However, sensitive information about the training data can still be inferred from the model parameters shared in FL. Differential privacy (DP) is the state-of-the-art technique for defending against such attacks. The key challenge in achieving DP in FL lies in the adverse impact of DP noise on model accuracy, particularly for deep learning models with large numbers of parameters. This paper develops a novel differentially private FL scheme named Fed-SMP that provides a client-level DP guarantee while maintaining high model accuracy. To mitigate the impact of privacy protection on model accuracy, Fed-SMP leverages a new technique called Sparsified Model Perturbation (SMP), in which local models are first sparsified and then perturbed by Gaussian noise. We provide a tight end-to-end privacy analysis for Fed-SMP using Renyi DP and prove the convergence of Fed-SMP with both unbiased and biased sparsification. Extensive experiments on real-world datasets demonstrate the effectiveness of Fed-SMP in simultaneously improving model accuracy under the same DP guarantee and saving communication cost.
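The core SMP idea described above, sparsifying a local model update before adding Gaussian noise, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the choice of top-k (biased) sparsification, and the clipping step are assumptions based on common practice in differentially private FL.

```python
import numpy as np

def sparsified_model_perturbation(update, k, clip_norm, noise_std, rng=None):
    """Hypothetical sketch of SMP on one client's model update:
    1) clip the update to bound its L2 norm (standard in client-level DP),
    2) keep only the k largest-magnitude coordinates (biased top-k sparsification),
    3) add Gaussian noise to the retained coordinates.
    """
    rng = np.random.default_rng() if rng is None else rng
    flat = update.ravel().astype(float).copy()

    # Step 1: L2-norm clipping so the Gaussian mechanism's sensitivity is bounded.
    norm = np.linalg.norm(flat)
    if norm > clip_norm:
        flat *= clip_norm / norm

    # Step 2: biased top-k sparsification -- zero all but the k largest entries.
    keep = np.argpartition(np.abs(flat), -k)[-k:]
    mask = np.zeros(flat.shape, dtype=bool)
    mask[keep] = True
    flat[~mask] = 0.0

    # Step 3: Gaussian perturbation on the retained coordinates only.
    flat[mask] += rng.normal(0.0, noise_std, size=k)

    return flat.reshape(update.shape)
```

Because only k coordinates survive, the client can transmit just those (index, value) pairs, which is the source of the communication savings the abstract mentions; the noise standard deviation would be calibrated via the Renyi DP analysis in the paper.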