Paper Title

Differentially Private Federated Learning via Inexact ADMM with Multiple Local Updates

Paper Authors

Minseok Ryu, Kibaek Kim

Paper Abstract

Differential privacy (DP) techniques can be applied to the federated learning model to statistically guarantee data privacy against inference attacks on the communication among the learning agents. While ensuring strong data privacy, however, the DP techniques hinder achieving a greater learning performance. In this paper we develop a DP inexact alternating direction method of multipliers algorithm with multiple local updates for federated learning, where a sequence of convex subproblems is solved with the objective perturbation by random noises generated from a Laplace distribution. We show that our algorithm provides $\bar{\epsilon}$-DP for every iteration, where $\bar{\epsilon}$ is a privacy budget controlled by the user. We also present convergence analyses of the proposed algorithm. Using MNIST and FEMNIST datasets for the image classification, we demonstrate that our algorithm reduces the testing error by at most $31\%$ compared with the existing DP algorithm, while achieving the same level of data privacy. The numerical experiment also shows that our algorithm converges faster than the existing algorithm.
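To make the mechanism described in the abstract concrete, below is a minimal, hypothetical Python sketch of an objective-perturbed local update: a Laplace noise vector (with scale taken as sensitivity / $\bar{\epsilon}$) is added as a linear term to an agent's augmented-Lagrangian subproblem, which is then solved inexactly by a few gradient steps. The function name, the gradient-step inner solver, and the noise calibration are illustrative assumptions for exposition, not the authors' implementation.

```python
import numpy as np

def perturbed_local_update(x_local, z_global, dual, grad_fn,
                           rho=1.0, sensitivity=1.0, eps_bar=1.0,
                           lr=0.1, num_local_steps=5, rng=None):
    """Hypothetical sketch of one agent's local step in an inexact-ADMM-style
    DP federated update (not the paper's exact algorithm).

    The local objective f_i(x) + <dual, x - z> + (rho/2)||x - z||^2 is
    perturbed by a Laplace-noise linear term, and the perturbed subproblem
    is solved inexactly with a few gradient steps before the result is
    communicated to the server.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = x_local.copy()
    # Laplace noise for objective perturbation; scale = sensitivity / eps_bar
    # is an assumed calibration for the per-iteration privacy budget.
    noise = rng.laplace(loc=0.0, scale=sensitivity / eps_bar, size=x.shape)
    for _ in range(num_local_steps):        # multiple inexact local updates
        grad = (grad_fn(x)                  # gradient of the local loss f_i
                + dual                      # dual (Lagrange multiplier) term
                + rho * (x - z_global)      # augmented/proximal term
                + noise)                    # objective-perturbation term
        x -= lr * grad
    return x
```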
