Paper Title
A Closer Look at the Calibration of Differentially Private Learners
Paper Authors
Paper Abstract
We systematically study the calibration of classifiers trained with differentially private stochastic gradient descent (DP-SGD) and observe miscalibration across a wide range of vision and language tasks. Our analysis identifies per-example gradient clipping in DP-SGD as a major cause of miscalibration, and we show that existing approaches for improving calibration with differential privacy provide only marginal improvements in calibration error while occasionally causing large degradations in accuracy. As a solution, we show that differentially private variants of post-processing calibration methods, such as temperature scaling and Platt scaling, are surprisingly effective and have negligible utility cost for the overall model. Across 7 tasks, temperature scaling and Platt scaling with DP-SGD result in an average 3.1-fold reduction in the in-domain expected calibration error and incur at most a minor percentage drop in accuracy.
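To make the post-processing idea concrete, here is a minimal, non-private sketch of temperature scaling: a single scalar T is fit on held-out logits to minimize negative log-likelihood, and predictions are then made with softmax(logits / T). The function names (`nll`, `fit_temperature`) and the grid-search fitting procedure are illustrative assumptions, not the paper's implementation; the paper's differentially private variant would additionally privatize this fitting step.

```python
import numpy as np

def nll(logits, labels, T):
    """Mean negative log-likelihood of labels under softmax(logits / T)."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def fit_temperature(logits, labels):
    """Pick the temperature on a coarse grid that minimizes held-out NLL
    (illustrative; a 1-D optimizer would also work)."""
    grid = np.linspace(0.05, 10.0, 400)
    losses = [nll(logits, labels, T) for T in grid]
    return float(grid[int(np.argmin(losses))])

# Toy usage: simulate an overconfident model whose logits are 3x too sharp.
rng = np.random.default_rng(0)
calibrated = rng.normal(size=(2000, 5))
probs = np.exp(calibrated) / np.exp(calibrated).sum(axis=1, keepdims=True)
labels = np.array([rng.choice(5, p=p) for p in probs])
overconfident = 3.0 * calibrated
T = fit_temperature(overconfident, labels)  # recovered T should be near 3
```

Because temperature scaling touches only a single scalar after training, its privacy and accuracy cost is tiny relative to the model itself, which is the intuition behind the paper's result.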