Paper Title

An Overview of Federated Deep Learning Privacy Attacks and Defensive Strategies

Authors

David Enthoven, Zaid Al-Ars

Abstract

With increased attention to and legislation on data privacy, collaborative machine learning (ML) algorithms are being developed to ensure the protection of the private data used for processing. Federated learning (FL) is the most popular of these methods; it provides privacy preservation by facilitating collaborative training of a shared model without the need to exchange any private data with a centralized server. Rather, an abstraction of the data in the form of a machine learning model update is sent. Recent studies have shown that such model updates may still leak private information, so a more structured risk assessment is needed. In this paper, we analyze existing vulnerabilities of FL and subsequently perform a literature review of possible attack methods targeting FL's privacy-protection capabilities. These attack methods are then categorized by a basic taxonomy. Additionally, we provide a literature study of the most recent defensive strategies and algorithms for FL aimed at overcoming these attacks. These defensive strategies are categorized by their respective underlying defense principle. The paper concludes that applying a single defensive strategy is not enough to provide adequate protection against all available attack methods.
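To make concrete what "sending a model update instead of private data" means, the sketch below shows a FedAvg-style server aggregation step. This is an illustrative assumption for context only, not code from the paper; the function name federated_average and the toy client sizes are hypothetical.

```python
import numpy as np

def federated_average(client_updates, client_sizes):
    """Server-side aggregation: combine per-client model updates, weighted by
    local dataset size. Only weight vectors reach the server, never raw data."""
    total = sum(client_sizes)
    num_layers = len(client_updates[0])
    return [
        sum((n / total) * update[layer]
            for update, n in zip(client_updates, client_sizes))
        for layer in range(num_layers)
    ]

# Toy round: two clients send updates for a single-layer model.
client_updates = [
    [np.array([0.2, -0.1])],   # update from client A (100 local samples)
    [np.array([0.4,  0.3])],   # update from client B (300 local samples)
]
client_sizes = [100, 300]
new_global = federated_average(client_updates, client_sizes)
print(new_global)  # [array([0.35, 0.2])]
```

Even though the server only ever sees these exchanged weight updates, the abstract's point is that such updates can still leak private information, which is the attack surface the paper surveys.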
