Paper Title
How to Backdoor HyperNetwork in Personalized Federated Learning?
Paper Authors
Paper Abstract
This paper explores previously unknown backdoor risks in hypernetwork-based personalized federated learning (HyperNetFL) through poisoning attacks. Building on this analysis, we propose a novel model transferring attack (called HNTroj), the first of its kind, which transfers a locally trained backdoor-infected model into all legitimate and personalized local models generated by the HyperNetFL model, using consistent and effective malicious local gradients computed across all compromised clients throughout the training process. As a result, HNTroj reduces the number of compromised clients needed to launch the attack successfully, without any observable sudden shifts or degradation in model utility on legitimate data samples, making the attack stealthy. To defend against HNTroj, we adapt several backdoor-resistant FL training algorithms to HyperNetFL. Extensive experiments on several benchmark datasets show that HNTroj significantly outperforms data poisoning and model replacement attacks, and bypasses robust training algorithms even with modest numbers of compromised clients.
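To make the attack mechanism concrete, the sketch below illustrates (in PyTorch) the general idea of a compromised client reporting a "local gradient" that pulls a hypernetwork's generated weights toward a backdoor-infected model, rather than a gradient from honest local training. This is a minimal illustration under assumed details, not the paper's implementation: the `HyperNet` architecture, the embedding and target dimensions, and the source of `w_backdoor` are all hypothetical placeholders.

```python
import torch
import torch.nn as nn

# Hypothetical shapes: the hypernetwork maps a client embedding to the
# flattened weights of a small personalized model.
EMB_DIM, TARGET_DIM = 16, 1024

class HyperNet(nn.Module):
    """Toy stand-in for a HyperNetFL server model."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(EMB_DIM, 128), nn.ReLU(), nn.Linear(128, TARGET_DIM)
        )

    def forward(self, client_emb):
        return self.body(client_emb)

hnet = HyperNet()
client_emb = torch.randn(EMB_DIM)

# w_backdoor stands in for a locally trained backdoor-infected model
# (flattened); in the paper's setting it would come from training on
# trigger-poisoned data at a compromised client.
w_backdoor = torch.randn(TARGET_DIM)

# The compromised client computes a gradient of the distance between the
# hypernetwork's generated weights and the backdoored weights, and reports
# that instead of an honest local training gradient.
w_generated = hnet(client_emb)
loss = nn.functional.mse_loss(w_generated, w_backdoor)
loss.backward()
malicious_grads = [p.grad.clone() for p in hnet.parameters()]
```

If such gradients are reported consistently across compromised clients and training rounds, the server's hypernetwork is gradually steered to generate backdoored weights for every client, which matches the transfer behavior the abstract describes.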