Paper Title


Performance Weighting for Robust Federated Learning Against Corrupted Sources

Authors

Dimitris Stripelis, Marcin Abram, Jose Luis Ambite

Abstract


Federated Learning has emerged as a dominant computational paradigm for distributed machine learning. Its unique data privacy properties allow us to collaboratively train models while offering participating clients certain privacy-preserving guarantees. However, in real-world applications, a federated environment may consist of a mixture of benevolent and malicious clients, with the latter aiming to corrupt and degrade the federated model's performance. Different corruption schemes may be applied, such as model poisoning and data corruption. Here, we focus on the latter: the susceptibility of federated learning to various data corruption attacks. We show that the standard global aggregation scheme of local weights is inefficient in the presence of corrupted clients. To mitigate this problem, we propose a class of task-oriented, performance-based methods computed over a distributed validation dataset with the goal of detecting and mitigating corrupted clients. Specifically, we construct a robust weight aggregation scheme based on the geometric mean and demonstrate its effectiveness under random label shuffling and targeted label flipping attacks.
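To make the idea concrete, here is a minimal sketch of geometric-mean performance weighting. This is an illustration of the general technique described in the abstract, not the paper's exact algorithm: client parameters are represented as flat NumPy arrays, and the hypothetical `aggregate` function weights each client by the geometric mean of its accuracy across the partitions of a distributed validation set. Because the geometric mean collapses toward zero when any single factor is small, a client that performs poorly on even one validation partition (e.g., one training on corrupted labels) receives a near-zero aggregation weight.

```python
import numpy as np

def geometric_mean(scores, eps=1e-12):
    """Geometric mean of per-partition validation scores.

    eps guards against log(0) when a client scores exactly zero
    on some partition.
    """
    scores = np.asarray(scores, dtype=float)
    return float(np.exp(np.mean(np.log(scores + eps))))

def aggregate(client_models, client_val_scores):
    """Performance-weighted average of client parameter vectors.

    client_models: list of 1-D arrays (flattened model weights).
    client_val_scores: per client, a list of accuracies, one per
    partition of the distributed validation dataset.
    """
    weights = np.array([geometric_mean(s) for s in client_val_scores])
    weights = weights / weights.sum()      # normalize to a convex combination
    stacked = np.stack(client_models)      # shape: (num_clients, num_params)
    return weights @ stacked               # weighted global model
```

For example, a benevolent client scoring 0.9 on both validation partitions keeps a weight near 1, while a corrupted client scoring 0.9 and 0.001 drops to a geometric mean of about 0.03 and contributes almost nothing to the global model, whereas a plain arithmetic mean of its scores (about 0.45) would still give it substantial influence.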
