Paper Title

Renyi Differential Privacy of Propose-Test-Release and Applications to Private and Robust Machine Learning

Paper Authors

Wang, Jiachen T., Mahloujifar, Saeed, Wang, Shouda, Jia, Ruoxi, Mittal, Prateek

Paper Abstract

Propose-Test-Release (PTR) is a differential privacy framework that works with the local sensitivity of functions, instead of their global sensitivity. This framework is typically used for releasing robust statistics such as the median or trimmed mean in a differentially private manner. While PTR is a common framework introduced over a decade ago, using it in applications such as robust SGD, where we need many adaptive robust queries, is challenging. This is mainly due to the lack of a Renyi Differential Privacy (RDP) analysis, an essential ingredient underlying the moments accountant approach for differentially private deep learning. In this work, we generalize the standard PTR and derive the first RDP bound for it when the target function has bounded global sensitivity. We show that our RDP bound for PTR yields tighter DP guarantees than the directly analyzed $(\epsilon, \delta)$-DP. We also derive the algorithm-specific privacy amplification bound of PTR under subsampling. We show that our bound is much tighter than the general upper bound and close to the lower bound. Our RDP bounds enable tighter privacy loss calculation for the composition of many adaptive runs of PTR. As an application of our analysis, we show that PTR and our theoretical results can be used to design differentially private variants of Byzantine-robust training algorithms that use robust statistics for gradient aggregation. We conduct experiments on the settings of label, feature, and gradient corruption across different datasets and architectures. We show that our PTR-based private and robust training algorithms significantly improve utility compared with the baselines.
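To make the abstract's description of PTR concrete, here is a minimal sketch of the classic Laplace-based Propose-Test-Release mechanism (propose a local-sensitivity bound, privately test that the dataset is far from any "unstable" neighbor, then release with noise scaled to the proposed bound). This is an illustration of the standard PTR framework the paper builds on, not the paper's generalized variant or its RDP analysis; the helper `dist_to_unstable_median` and all parameter names are illustrative choices, and the median distance computation follows the well-known Dwork-Lei style local-sensitivity formula.

```python
import numpy as np

def dist_to_unstable_median(data, b):
    """Illustrative helper: add/remove distance from `data` to the nearest
    dataset whose median has local sensitivity above the proposed bound `b`."""
    x = np.sort(data)
    n = len(x)
    m = n // 2
    for k in range(n):
        # Local sensitivity of the median at distance k from `data`.
        ls = 0.0
        for t in range(k + 2):
            j, i = m + t - k - 1, m + t
            lo = x[j] if 0 <= j < n else -np.inf  # index off the data range:
            hi = x[i] if 0 <= i < n else np.inf   # sensitivity is unbounded
            ls = max(ls, hi - lo)
        if ls > b:
            return k
    return n

def propose_test_release(data, f, proposed_sens, dist_to_unstable,
                         eps, delta, rng):
    """Minimal PTR sketch: (1) propose a local-sensitivity bound,
    (2) privately test that the dataset is far from any unstable neighbor,
    (3) release f(data) with noise scaled to the proposed local bound."""
    d = dist_to_unstable(data, proposed_sens)
    noisy_d = d + rng.laplace(scale=1.0 / eps)
    if noisy_d <= np.log(1.0 / delta) / eps:
        return None                      # test failed: refuse to answer
    return f(data) + rng.laplace(scale=proposed_sens / eps)

rng = np.random.default_rng(0)
data = np.linspace(9.0, 11.0, 101)       # dense data -> the median is stable
released = propose_test_release(data, np.median, proposed_sens=1.0,
                                dist_to_unstable=dist_to_unstable_median,
                                eps=1.0, delta=1e-6, rng=rng)
print(released)  # close to the true median, 10.0
```

The key point the abstract relies on: the released value is perturbed with noise proportional to the proposed local sensitivity bound (here 1.0), which for robust statistics like the median on well-concentrated data is far smaller than the global sensitivity; what had been missing, and what the paper supplies, is an RDP accounting of the many adaptive runs of this test-then-release loop.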
