Paper Title

LoMar: A Local Defense Against Poisoning Attack on Federated Learning

Paper Authors

Xingyu Li, Zhe Qu, Shangqing Zhao, Bo Tang, Zhuo Lu, Yao Liu

Paper Abstract

Federated learning (FL) provides a highly efficient decentralized machine learning framework, where the training data remains distributed at remote clients in a network. Though FL enables a privacy-preserving mobile edge computing framework using IoT devices, recent studies have shown that this approach is susceptible to poisoning attacks from the side of remote clients. To address the poisoning attacks on FL, we provide a two-phase defense algorithm called Local Malicious Factor (LoMar). In phase I, LoMar scores model updates from each remote client by measuring the relative distribution over their neighbors using a kernel density estimation method. In phase II, an optimal threshold is approximated to distinguish malicious and clean updates from a statistical perspective. Comprehensive experiments on four real-world datasets have been conducted, and the experimental results show that our defense strategy can effectively protect the FL system. Specifically, the defense performance on the Amazon dataset under a label-flipping attack indicates that, compared with FG+Krum, LoMar increases the target label testing accuracy from 96.0% to 98.8%, and the overall averaged testing accuracy from 90.1% to 97.0%.
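The two phases described in the abstract map onto a small amount of code. The sketch below is an illustrative approximation under stated assumptions, not the authors' reference implementation: phase I scores each client's flattened update with a Gaussian kernel density estimate fitted over its k nearest neighbors, and phase II uses a simple quantile threshold as a stand-in for the paper's statistically approximated optimal threshold. The names `score_updates` and `filter_clients`, the neighbor count `k`, the fixed bandwidth, and the quantile are all hypothetical choices; LoMar's exact kernel, features, and threshold rule may differ.

```python
# Illustrative sketch of a KDE-based two-phase defense (NOT the paper's
# reference implementation; helper names and parameters are assumptions).
import numpy as np
from sklearn.neighbors import KernelDensity

def score_updates(updates: np.ndarray, k: int = 5,
                  bandwidth: float = 0.5) -> np.ndarray:
    """Phase I: score each flattened client update by the log-density of a
    Gaussian KDE fitted on its k nearest neighbors and evaluated at the
    update itself (a low score marks a relatively suspicious update)."""
    n = updates.shape[0]
    # Pairwise Euclidean distances between client update vectors.
    dists = np.linalg.norm(updates[:, None, :] - updates[None, :, :], axis=-1)
    scores = np.empty(n)
    for i in range(n):
        neighbors = np.argsort(dists[i])[1:k + 1]  # k nearest, excluding self
        kde = KernelDensity(kernel="gaussian", bandwidth=bandwidth)
        kde.fit(updates[neighbors])
        scores[i] = kde.score_samples(updates[i:i + 1])[0]  # log-density
    return scores

def filter_clients(scores: np.ndarray, quantile: float = 0.2) -> np.ndarray:
    """Phase II (stand-in): keep clients whose density score exceeds a
    quantile threshold; the paper instead approximates an optimal
    threshold from a statistical perspective."""
    threshold = np.quantile(scores, quantile)
    return np.where(scores > threshold)[0]

# Toy example: 10 clean updates around 0, plus 2 poisoned updates far away.
rng = np.random.default_rng(0)
updates = np.vstack([rng.normal(0.0, 0.1, (10, 4)),
                     rng.normal(3.0, 0.1, (2, 4))])
scores = score_updates(updates)
# Assuming 2 of 12 clients are malicious, keep the top 10 by density score;
# the poisoned updates (indices 10 and 11) fall below the threshold.
print(filter_clients(scores, quantile=2 / 12))
```

Fitting the KDE only over a client's own neighborhood is what makes the score local, matching the abstract's description of measuring each update's relative distribution over its neighbors rather than its distance to a single global aggregate.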
