Paper Title

FLVoogd: Robust And Privacy Preserving Federated Learning

Authors

Yuhang Tian, Rui Wang, Yanqi Qiao, Emmanouil Panaousis, Kaitai Liang

Abstract

In this work, we propose FLVoogd, an updated federated learning method in which servers and clients collaboratively eliminate Byzantine attacks while preserving privacy. In particular, servers use automatic Density-based Spatial Clustering of Applications with Noise (DBSCAN) combined with S2PC to cluster the benign majority without acquiring sensitive personal information. Meanwhile, clients build dual models and perform test-based distance controlling to adjust their local models toward the global one to achieve personalization. Our framework is automatic and adaptive: servers/clients do not need to tune parameters during training. In addition, our framework leverages Secure Multi-party Computation (SMPC) operations, including multiplications, additions, and comparisons, while costly operations, like division and square root, are not required. Evaluations are carried out on some conventional datasets from the image classification field. The results show that FLVoogd can effectively reject malicious uploads in most scenarios; meanwhile, it avoids data leakage on the server side.
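The server-side idea described above — cluster client updates by density and aggregate only the benign majority — can be sketched in plain Python. This is an illustrative simplification, not the paper's method: FLVoogd runs DBSCAN under S2PC on protected updates with automatically chosen parameters, whereas here `eps` and `min_pts` are hand-picked, the updates are plaintext 2-D toy vectors, and the helper names (`pairwise_cosine_dist`, `dbscan`) are our own.

```python
import numpy as np

def pairwise_cosine_dist(X):
    # Cosine distance matrix: 1 - cos(angle) between row vectors.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return 1.0 - Xn @ Xn.T

def dbscan(D, eps, min_pts):
    """Minimal DBSCAN over a precomputed distance matrix.
    Returns cluster labels; -1 marks noise (outlier) points."""
    n = D.shape[0]
    labels = np.full(n, -1)
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        neighbors = list(np.flatnonzero(D[i] <= eps))
        if len(neighbors) < min_pts:
            continue  # not a core point; stays noise unless a cluster claims it
        labels[i] = cluster
        queue = neighbors
        while queue:
            j = queue.pop()
            if not visited[j]:
                visited[j] = True
                nb = np.flatnonzero(D[j] <= eps)
                if len(nb) >= min_pts:
                    queue.extend(nb)  # j is also a core point: expand from it
            if labels[j] == -1:
                labels[j] = cluster
        cluster += 1
    return labels

# Toy round: 8 benign updates point one way, 2 poisoned updates another.
rng = np.random.default_rng(0)
benign = rng.normal([1.0, 1.0], 0.05, size=(8, 2))
malicious = rng.normal([-1.0, 1.0], 0.05, size=(2, 2))
updates = np.vstack([benign, malicious])

D = pairwise_cosine_dist(updates)
labels = dbscan(D, eps=0.05, min_pts=3)

# Aggregate only the largest cluster (the benign majority).
majority = max(set(labels[labels >= 0]), key=lambda c: (labels == c).sum())
keep = np.flatnonzero(labels == majority)
global_update = updates[keep].mean(axis=0)
```

With `min_pts=3`, the two poisoned updates cannot form a cluster of their own and are labeled noise, so only the benign majority contributes to `global_update`. Note the filtering uses only additions, multiplications, and comparisons on distances, in line with the abstract's point that division and square root can be avoided.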
