Paper Title

Mitigating Leakage in Federated Learning with Trusted Hardware

Authors

Javad Ghareh Chamani, Dimitrios Papadopoulos

Abstract

In federated learning, multiple parties collaborate in order to train a global model over their respective datasets. Even though cryptographic primitives (e.g., homomorphic encryption) can help achieve data privacy in this setting, some partial information may still be leaked across parties if this is done non-judiciously. In this work, we study the federated learning framework of SecureBoost [Cheng et al., FL@IJCAI'19] as a specific such example, demonstrate a leakage-abuse attack based on its leakage profile, and experimentally evaluate the effectiveness of our attack. We then propose two secure versions relying on trusted execution environments. We implement and benchmark our protocols to demonstrate that they are 1.2-5.4X faster in computation and need 5-49X less communication than SecureBoost.
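As background for the homomorphic-encryption approach the abstract refers to (and not a reproduction of the paper's SecureBoost-based protocol), the following is a minimal sketch of additively homomorphic aggregation using textbook Paillier encryption: each party encrypts a local statistic, an aggregator combines the ciphertexts, and only the key holder recovers the sum. The primes, the `local_stats` values, and the helper names are illustrative assumptions, and the tiny parameters are insecure.

```python
# Toy additively homomorphic aggregation in the federated setting described above.
# Textbook Paillier with small, INSECURE parameters -- illustration only.
import math
import secrets

p, q = 104723, 104729          # assumed toy primes; real deployments need ~2048-bit moduli
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)   # Carmichael function of n
mu = pow(lam, -1, n)           # with g = n + 1, mu reduces to lam^-1 mod n

def encrypt(m: int) -> int:
    """Enc(m) = g^m * r^n mod n^2 for a random r."""
    r = secrets.randbelow(n - 1) + 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Dec(c) = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    return (pow(c, lam, n2) - 1) // n * mu % n

def add(c1: int, c2: int) -> int:
    """Homomorphic addition: multiplying ciphertexts adds the plaintexts."""
    return (c1 * c2) % n2

# Each party contributes an encrypted local statistic (e.g., a gradient sum);
# the aggregator combines them without seeing any individual contribution.
local_stats = [17, 5, 42]      # hypothetical per-party values
aggregate = encrypt(0)
for s in local_stats:
    aggregate = add(aggregate, encrypt(s))
print(decrypt(aggregate))      # 64 -- the key holder learns only the sum
```

Even under such a scheme, the decrypted aggregates that a protocol reveals form a leakage profile; the abstract's point is that this partial information, if exposed non-judiciously, can be exploited by a leakage-abuse attack.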
