Paper Title
Compression Boosts Differentially Private Federated Learning
Paper Authors
Abstract
Federated Learning allows distributed entities to train a common model collaboratively without sharing their own data. Although it avoids centralized data collection and aggregation by exchanging only parameter updates, it remains vulnerable to various inference and reconstruction attacks, where a malicious entity can learn private information about the participants' training data from the captured gradients. Differential Privacy is used to obtain theoretically sound privacy guarantees against such inference attacks by noising the exchanged update vectors. However, the added noise is proportional to the model size, which can be very large for modern neural networks, and this can result in poor model quality. In this paper, compressive sensing is used to reduce the model size and hence increase model quality without sacrificing privacy. We show experimentally, using two datasets, that our privacy-preserving proposal can reduce the communication costs by up to 95% with only a negligible performance penalty compared to traditional non-private federated learning schemes.
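The core idea above (compress the update so the differentially private noise scales with the compressed dimension rather than the full model size) can be sketched as follows. This is a minimal illustration, not the paper's exact mechanism: the random Gaussian measurement matrix, the dimensions `d` and `m`, and the noise scale `sigma` are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 10_000   # hypothetical full model/update dimension
m = 500      # hypothetical compressed dimension, m << d (here a 95% reduction)
sigma = 0.1  # hypothetical Gaussian-mechanism noise scale

# A client's local update vector (stand-in for a real gradient).
update = rng.standard_normal(d)

# Compressive-sensing measurement: a random Gaussian projection R^d -> R^m.
A = rng.standard_normal((m, d)) / np.sqrt(m)
compressed = A @ update  # the client transmits m numbers instead of d

# Differential privacy: noise is added to the *compressed* vector,
# so the total injected noise grows with m rather than with d.
private_update = compressed + sigma * rng.standard_normal(m)

print(private_update.shape)
```

In a full scheme, the server would aggregate these noised compressed updates and apply a sparse-recovery step to map them back to model space; the sketch only shows why compression reduces both the communication cost and the amount of noise required.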