Paper Title
DeTrust-FL: Privacy-Preserving Federated Learning in Decentralized Trust Setting
Paper Authors
Paper Abstract
Federated learning has emerged as a privacy-preserving machine learning approach in which multiple parties train a single model without sharing their raw training data. It typically relies on multi-party computation techniques to provide strong privacy guarantees, ensuring that an untrusted or curious aggregator cannot obtain isolated replies from the parties involved in training and thereby preventing potential inference attacks. Until recently, some of these secure aggregation techniques were thought sufficient to fully protect against inference attacks by a curious aggregator. However, recent research has demonstrated that a curious aggregator can successfully launch a disaggregation attack to learn information about a target party's model updates. This paper presents DeTrust-FL, an efficient privacy-preserving federated learning framework that addresses the lack of transparency enabling isolation attacks, such as disaggregation attacks, during secure aggregation: it ensures that each party's model update is included in the aggregated model in a private and secure manner. DeTrust-FL proposes a decentralized trust consensus mechanism and incorporates a recently proposed decentralized functional encryption (FE) scheme in which all parties agree on a participation matrix before collaboratively generating decryption key fragments, thereby gaining control and trust over the secure aggregation process in a decentralized setting. Our experimental evaluation demonstrates that DeTrust-FL outperforms state-of-the-art FE-based secure multi-party aggregation solutions in training time and reduces the volume of data transferred. In contrast to existing approaches, this is achieved without creating any trust dependency on external trusted entities.
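The core idea the abstract describes, that decryption of the aggregate only succeeds when the full agreed participant set contributes, can be illustrated with a much simpler additive-masking sketch. All function names below are hypothetical, and deterministic pairwise masking stands in for the paper's actual decentralized FE construction; the point is only to show why an aggregator cannot isolate one party's update by dropping the others.

```python
import numpy as np

def pairwise_mask(party_id, n_parties, dim, round_seed):
    """Sum of deterministic pairwise masks for one party.

    For each pair (lo, hi), both parties derive the same mask from a
    shared per-round seed; lo adds it and hi subtracts it, so all masks
    cancel only in the sum over the full agreed participant set.
    """
    mask = np.zeros(dim)
    for other in range(n_parties):
        if other == party_id:
            continue
        lo, hi = min(party_id, other), max(party_id, other)
        rng = np.random.default_rng(round_seed * 10_000 + lo * 100 + hi)
        m = rng.standard_normal(dim)
        mask += m if party_id == lo else -m
    return mask

def encrypt_update(update, party_id, n_parties, round_seed):
    """Mask a party's model update before sending it to the aggregator."""
    return update + pairwise_mask(party_id, n_parties, update.shape[0], round_seed)

def aggregate(ciphertexts, participation_row):
    """Sum masked updates for one row of the agreed participation matrix.

    If any agreed party were excluded, the masks would not cancel and
    the result would be noise -- the aggregator cannot quietly isolate a
    target party's update by summing a smaller subset.
    """
    assert all(participation_row), "every agreed party must be included"
    return np.sum(ciphertexts, axis=0)
```

In DeTrust-FL the analogous guarantee comes from the jointly generated decryption key fragments rather than from pairwise masks, but the effect on the aggregator is the same: individual ciphertexts reveal nothing, and only the sum over the agreed set is recoverable.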