Paper Title

Privacy-preserving Decentralized Aggregation for Federated Learning

Authors

Beomyeol Jeon, S. M. Ferdous, Muntasir Raihan Rahman, Anwar Walid

Abstract

Federated learning is a promising framework for learning over decentralized data spanning multiple regions. This approach avoids the expensive cost of aggregating training data centrally and can improve privacy, because distributed sites do not have to reveal privacy-sensitive data. In this paper, we develop a privacy-preserving decentralized aggregation protocol for federated learning. We formulate the distributed aggregation protocol with the Alternating Direction Method of Multipliers (ADMM) and examine its privacy weaknesses. Unlike prior work that uses differential privacy or homomorphic encryption for privacy, we develop a protocol that controls communication among participants in each round of aggregation to minimize privacy leakage. We establish its privacy guarantee against an honest-but-curious adversary. We also propose an efficient algorithm, inspired by combinatorial block design theory, to construct such a communication pattern. Our secure aggregation protocol, based on this novel group communication pattern, leads to an efficient algorithm for federated training with privacy guarantees. We evaluate our federated training algorithm on image classification and next-word prediction applications over benchmark datasets with 9 and 15 distributed sites. Evaluation results show that our algorithm performs comparably to the standard centralized federated learning method while preserving privacy; the degradation in test accuracy is only up to 0.73%.
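
The abstract does not spell out the ADMM formulation it refers to, but in the standard consensus-ADMM form the aggregation step amounts to an iterative averaging of local model parameters. The sketch below is a minimal illustration of that textbook consensus update for averaging local parameter vectors; the function name admm_average, the penalty rho, and the fixed round count are illustrative choices, not the paper's implementation.

```python
import numpy as np

def admm_average(local_params, rho=1.0, num_rounds=50):
    """Average local model vectors via consensus ADMM.

    Each site i solves  min (1/2) * ||x_i - theta_i||^2  subject to
    x_i = z for all i; the fixed point of z is the plain average of
    the theta_i, which is the quantity federated aggregation needs.
    """
    theta = [np.asarray(t, dtype=float) for t in local_params]
    n = len(theta)
    x = [t.copy() for t in theta]           # local primal variables
    u = [np.zeros_like(t) for t in theta]   # scaled dual variables
    z = np.zeros_like(theta[0])             # global consensus variable
    for _ in range(num_rounds):
        # Local update: closed form for the quadratic local objective.
        x = [(theta[i] + rho * (z - u[i])) / (1.0 + rho) for i in range(n)]
        # Consensus update: the averaging step whose communication the
        # paper's protocol restructures to limit what each site learns.
        z = np.mean([x[i] + u[i] for i in range(n)], axis=0)
        # Dual update.
        u = [u[i] + x[i] - z for i in range(n)]
    return z

# Converges to the plain average, e.g. [3.] for these three sites.
print(admm_average([np.array([1.0]), np.array([2.0]), np.array([6.0])]))
```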
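
The abstract credits combinatorial block design theory for the communication pattern but does not give the construction. As one concrete instance under that assumption, a 2-(9,3,1) design (the affine plane AG(2,3)) partitions interactions among 9 sites, matching the paper's 9-site experiments, into groups of 3 such that any two sites share exactly one group, which bounds how often any pair exchanges information. The helper name affine_plane_blocks below is hypothetical, not from the paper.

```python
import itertools

def affine_plane_blocks(q=3):
    """Blocks (lines) of the affine plane AG(2, q) over Z_q.

    Returns q*(q+1) blocks of size q over q^2 points; every pair of
    points appears together in exactly one block (lambda = 1).
    Points are labeled 0 .. q^2 - 1 via (x, y) -> x * q + y.
    """
    blocks = []
    # Lines of slope m: y = m*x + c  (q parallel classes of q lines each).
    for m in range(q):
        for c in range(q):
            blocks.append(sorted(x * q + (m * x + c) % q for x in range(q)))
    # Vertical lines x = c (one more parallel class).
    for c in range(q):
        blocks.append(sorted(c * q + y for y in range(q)))
    return blocks

blocks = affine_plane_blocks(3)  # 12 communication groups of 3 sites out of 9
# Sanity check: each of the C(9,2) = 36 site pairs occurs in exactly one group.
pair_counts = {}
for b in blocks:
    for pair in itertools.combinations(b, 2):
        pair_counts[pair] = pair_counts.get(pair, 0) + 1
assert len(pair_counts) == 36 and set(pair_counts.values()) == {1}
```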
