Paper Title

Additive Logistic Mechanism for Privacy-Preserving Self-Supervised Learning

Paper Authors

Yunhao Yang, Parham Gohari, Ufuk Topcu

Paper Abstract

We study the privacy risks associated with training a neural network's weights with self-supervised learning algorithms. Through empirical evidence, we show that the fine-tuning stage, in which the network weights are updated with an informative and often private dataset, is vulnerable to privacy attacks. To address these vulnerabilities, we design a post-training privacy-protection algorithm that adds noise to the fine-tuned weights and propose a novel differential privacy mechanism that samples noise from the logistic distribution. Compared to the two conventional additive noise mechanisms, namely the Laplace and Gaussian mechanisms, the proposed mechanism uses a bell-shaped distribution that resembles the distribution of the Gaussian mechanism, and it satisfies pure $\epsilon$-differential privacy similar to the Laplace mechanism. We apply membership inference attacks on both unprotected and protected models to quantify the trade-off between the models' privacy and performance. We show that the proposed protection algorithm can effectively reduce the attack accuracy to roughly 50\%, equivalent to random guessing, while maintaining a performance loss below 5\%.
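As a rough illustration of the post-training protection step described in the abstract, the sketch below adds logistic-distributed noise to a set of fine-tuned weights. It is not the paper's implementation: the noise-scale calibration `sensitivity / epsilon` is an assumption borrowed from the Laplace mechanism's calibration, the paper derives its own scale for the additive logistic mechanism, and the function name `add_logistic_noise` and the example weight matrix are hypothetical.

```python
import numpy as np

def add_logistic_noise(weights, sensitivity, epsilon):
    """Perturb fine-tuned weights with zero-mean logistic noise.

    NOTE: the scale `sensitivity / epsilon` mirrors the Laplace
    mechanism's calibration and is only an assumption; the paper's
    additive logistic mechanism uses its own calibration to satisfy
    pure epsilon-differential privacy.
    """
    scale = sensitivity / epsilon  # assumed calibration, not the paper's formula
    noise = np.random.logistic(loc=0.0, scale=scale, size=weights.shape)
    return weights + noise

# Hypothetical usage: protect one layer's fine-tuned weight matrix.
fine_tuned_weights = np.random.randn(256, 128)  # stand-in for fine-tuned weights
protected_weights = add_logistic_noise(fine_tuned_weights, sensitivity=1.0, epsilon=0.5)
```

The logistic distribution is bell-shaped like the Gaussian but has heavier tails, which is the property the abstract highlights when positioning the mechanism between the Laplace and Gaussian mechanisms.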
