Paper Title
Backpropagation Clipping for Deep Learning with Differential Privacy
Paper Authors
Paper Abstract
We present backpropagation clipping, a novel variant of differentially private stochastic gradient descent (DP-SGD) for privacy-preserving deep learning. Our approach clips each trainable layer's inputs (during the forward pass) and its upstream gradients (during the backward pass) to ensure bounded global sensitivity for the layer's gradient; this combination replaces the gradient clipping step in existing DP-SGD variants. Our approach is simple to implement in existing deep learning frameworks. The results of our empirical evaluation demonstrate that backpropagation clipping provides higher accuracy at lower values for the privacy parameter $ε$ compared to previous work. We achieve 98.7% accuracy for MNIST with $ε = 0.07$ and 74% accuracy for CIFAR-10 with $ε = 3.64$.
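To make the mechanism in the abstract concrete, the sketch below uses PyTorch hooks to clip one layer's inputs on the forward pass and its upstream gradients on the backward pass. This is a minimal illustrative sketch, not the authors' implementation; the layer `fc`, the bounds `input_bound` and `grad_bound`, and the helper `clip_rows` are all assumptions introduced here.

```python
# Minimal sketch (not the paper's reference code): clip a layer's
# inputs in the forward pass and its upstream gradients in the
# backward pass, as described in the abstract. The names `fc`,
# `input_bound`, `grad_bound`, and `clip_rows` are illustrative.
import torch
import torch.nn as nn

def clip_rows(t: torch.Tensor, bound: float) -> torch.Tensor:
    """Rescale each per-example row of t to L2 norm at most `bound`."""
    norms = t.norm(dim=-1, keepdim=True).clamp(min=1e-12)
    return t * (bound / norms).clamp(max=1.0)

input_bound, grad_bound = 1.0, 1.0
fc = nn.Linear(784, 10)

# Forward pass: clip the layer's inputs before they are used.
fc.register_forward_pre_hook(
    lambda module, args: (clip_rows(args[0], input_bound),)
)

# Backward pass: clip the upstream gradients before the layer's
# weight gradient is computed from them (requires PyTorch >= 2.0).
fc.register_full_backward_pre_hook(
    lambda module, grad_output: (clip_rows(grad_output[0], grad_bound),)
)
```

With both clips in place, each example's contribution to the linear layer's weight gradient is the outer product of a clipped upstream gradient and a clipped input, so its Frobenius norm is at most `grad_bound * input_bound`. This is the bounded sensitivity the abstract refers to, which then permits adding calibrated Gaussian noise to the layer gradients in place of the per-example gradient clipping step of standard DP-SGD.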