Title
Local and Central Differential Privacy for Robustness and Privacy in Federated Learning
Authors
Abstract
Federated Learning (FL) allows multiple participants to train machine learning models collaboratively by keeping their datasets local while only exchanging model updates. Alas, this is not necessarily free from privacy and robustness vulnerabilities, e.g., via membership, property, and backdoor attacks. This paper investigates whether and to what extent one can use Differential Privacy (DP) to protect both privacy and robustness in FL. To this end, we present a first-of-its-kind evaluation of Local and Central Differential Privacy (LDP/CDP) techniques in FL, assessing their feasibility and effectiveness. Our experiments show that both DP variants do defend against backdoor attacks, albeit with varying levels of protection-utility trade-offs, but anyway more effectively than other robustness defenses. DP also mitigates white-box membership inference attacks in FL, and our work is the first to show it empirically. Neither LDP nor CDP, however, defend against property inference. Overall, our work provides a comprehensive, reusable measurement methodology to quantify the trade-offs between robustness/privacy and utility in differentially private FL.
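The LDP/CDP mechanisms the abstract refers to follow the standard clip-and-noise recipe: under LDP each client perturbs its own update before sending it, while under CDP a trusted server noises only the aggregate. A minimal sketch under those assumptions (function names and the pure-Python vectors are illustrative, not the paper's actual implementation):

```python
import math
import random

def clip(update, c):
    """L2-clip a model update to norm bound c (standard DP sensitivity bound)."""
    norm = math.sqrt(sum(u * u for u in update))
    if norm > c:
        return [u * c / norm for u in update]
    return list(update)

def ldp_update(update, c, sigma):
    """Local DP sketch: the client itself clips its update and adds
    Gaussian noise before sending it to the server."""
    return [u + random.gauss(0.0, sigma * c) for u in clip(update, c)]

def cdp_aggregate(updates, c, sigma):
    """Central DP sketch: the server clips each received update,
    averages them, and adds Gaussian noise once to the aggregate."""
    clipped = [clip(u, c) for u in updates]
    n = len(clipped)
    avg = [sum(col) / n for col in zip(*clipped)]
    return [a + random.gauss(0.0, sigma * c / n) for a in avg]
```

The sketch makes the trade-off discussed in the abstract concrete: LDP injects noise per client (stronger threat model, more utility loss), whereas CDP adds noise once after aggregation (weaker trust assumption on the server, better utility at the same privacy level).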