Paper Title
A Framework for Evaluating Privacy-Utility Trade-off in Vertical Federated Learning
Paper Authors
Paper Abstract
Federated learning (FL) has emerged as a practical solution for tackling data silo issues without compromising user privacy. One of its variants, vertical federated learning (VFL), has recently gained increasing attention because VFL matches enterprises' demand for leveraging more valuable features to build better machine learning models while preserving user privacy. Current work on VFL concentrates on developing a specific protection or attack mechanism for a particular VFL algorithm. In this work, we propose an evaluation framework that formulates the privacy-utility evaluation problem. We then use this framework as a guide to comprehensively evaluate a broad range of protection mechanisms against most of the state-of-the-art privacy attacks for three widely deployed VFL algorithms. These evaluations can help FL practitioners select appropriate protection mechanisms under specific requirements. Our evaluation results demonstrate that model inversion and most label inference attacks can be thwarted by existing protection mechanisms, whereas the model completion (MC) attack is difficult to prevent, which calls for more advanced MC-targeted protection mechanisms. Based on our evaluation results, we offer concrete advice on improving the privacy-preserving capability of VFL systems. The code is available at https://github.com/yankang18/Attack-Defense-VFL
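To make the privacy-utility trade-off described in the abstract concrete, the sketch below shows one way such an evaluation loop could be organized: sweep a protection parameter (here, the scale of Gaussian noise added to exchanged gradients) and record both the main-task utility and the success rate of a privacy attack. This is a minimal, hypothetical illustration, not the paper's actual framework or code from the linked repository; the functions and numbers are placeholders.

```python
# Hypothetical sketch of a privacy-utility trade-off sweep (illustrative only).
# In a real VFL evaluation, main_task_accuracy would come from training the VFL
# model under a protection mechanism, and attack_success_rate from running a
# privacy attack (e.g., label inference) against the protected system.
import numpy as np

rng = np.random.default_rng(0)

def main_task_accuracy(noise_scale: float) -> float:
    """Placeholder: utility degrades as more protective noise is injected."""
    return max(0.5, 0.90 - 0.08 * noise_scale + rng.normal(0, 0.005))

def attack_success_rate(noise_scale: float) -> float:
    """Placeholder: attack success drops as the protection gets stronger."""
    return max(0.1, 0.85 - 0.25 * noise_scale + rng.normal(0, 0.005))

if __name__ == "__main__":
    print(f"{'noise':>6} {'utility':>8} {'attack':>8}")
    for noise in [0.0, 0.5, 1.0, 2.0, 4.0]:
        print(f"{noise:>6.1f} {main_task_accuracy(noise):>8.3f} "
              f"{attack_success_rate(noise):>8.3f}")
```

A practitioner would pick the protection strength whose attack success falls below an acceptable threshold while the utility loss remains tolerable, which is exactly the selection problem the proposed framework formalizes.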