Paper Title
Provable Membership Inference Privacy
Paper Authors
Paper Abstract
In applications involving sensitive data, such as finance and healthcare, the necessity of preserving data privacy can be a significant barrier to machine learning model development. Differential privacy (DP) has emerged as one canonical standard for provable privacy. However, DP's strong theoretical guarantees often come at the cost of a large drop in utility for machine learning, and DP guarantees themselves can be difficult to interpret. In this work, we propose a novel privacy notion, membership inference privacy (MIP), to address these challenges. We give a precise characterization of the relationship between MIP and DP, and show that MIP can be achieved using less randomness than is required to guarantee DP, leading to a smaller drop in utility. MIP guarantees are also easily interpretable in terms of the success rate of membership inference attacks. Our theoretical results further give rise to a simple algorithm for guaranteeing MIP, which can be used as a wrapper around any algorithm with a continuous output, including parametric model training.
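The following is a minimal sketch of what such a wrapper could look like, assuming the mechanism perturbs a continuous output (e.g., a trained parameter vector) with additive Gaussian noise. The names `mip_wrapper`, `train_fn`, and `noise_scale` are illustrative rather than taken from the paper, and the abstract does not specify how the noise scale must be calibrated to guarantee MIP, so a placeholder value is used here.

```python
import numpy as np


def mip_wrapper(train_fn, data, noise_scale, rng=None):
    """Hedged sketch: wrap a routine that returns a continuous output
    (e.g., model parameters) and perturb that output with Gaussian noise.

    `noise_scale` is a placeholder; the paper's contribution is showing that
    less randomness suffices for MIP than for DP, but the exact calibration
    is not given in the abstract.
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(train_fn(data), dtype=float)  # continuous output
    return theta + rng.normal(scale=noise_scale, size=theta.shape)


# Usage example: wrap an ordinary least-squares fit (hypothetical task).
def ols(data):
    X, y = data
    return np.linalg.lstsq(X, y, rcond=None)[0]


X = np.random.default_rng(0).normal(size=(100, 5))
y = X @ np.ones(5) + 0.1 * np.random.default_rng(1).normal(size=100)
theta_private = mip_wrapper(ols, (X, y), noise_scale=0.05)
```

Because the wrapper only touches the final continuous output, it can be applied to any training procedure without modifying its internals, which matches the abstract's description of the algorithm as a post-hoc wrapper.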