Paper Title

Differentially Private Deep Learning with Direct Feedback Alignment

Paper Authors

Jaewoo Lee, Daniel Kifer

Paper Abstract

Standard methods for differentially private training of deep neural networks replace back-propagated mini-batch gradients with biased and noisy approximations to the gradient. These modifications to training often result in a privacy-preserving model that is significantly less accurate than its non-private counterpart. We hypothesize that alternative training algorithms may be more amenable to differential privacy. Specifically, we examine the suitability of direct feedback alignment (DFA). We propose the first differentially private method for training deep neural networks with DFA and show that it achieves significant gains in accuracy (often by 10-20%) compared to backprop-based differentially private training on a variety of architectures (fully connected, convolutional) and datasets.
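The abstract contrasts standard differentially private training, which replaces back-propagated mini-batch gradients with clipped, noise-perturbed approximations, with training via direct feedback alignment (DFA), where hidden-layer errors come from projecting the output error through fixed random feedback matrices instead of the transposed forward weights. The sketch below is only an illustration of how a privatized DFA step might look, assuming DP-SGD-style per-example clipping and Gaussian noise applied to DFA updates; the network shape, clipping norm `C`, noise multiplier `sigma`, and learning rate are hypothetical and not taken from the paper.

```python
# Minimal sketch: one DFA update privatized with per-example clipping and
# Gaussian noise (DP-SGD style). Illustrative only; the paper's exact
# clipping, noise calibration, and privacy accounting may differ.
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer network: x -> h (tanh) -> logits
d_in, d_hid, d_out = 20, 32, 10
W1 = rng.normal(0, 0.1, (d_hid, d_in))
W2 = rng.normal(0, 0.1, (d_out, d_hid))
B1 = rng.normal(0, 0.1, (d_hid, d_out))  # fixed random feedback matrix (DFA)

C, sigma, lr = 1.0, 1.0, 0.1  # clipping norm, noise multiplier, learning rate (hypothetical)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def per_example_dfa_grads(x, y_onehot):
    """Forward pass plus DFA error projection for a single example."""
    a1 = W1 @ x
    h1 = np.tanh(a1)
    p = softmax(W2 @ h1)
    e = p - y_onehot                    # output error (softmax cross-entropy)
    delta1 = (B1 @ e) * (1 - h1 ** 2)   # DFA: project error with fixed B1, not W2.T
    g1 = np.outer(delta1, x)            # per-example update for W1
    g2 = np.outer(e, h1)                # per-example update for W2
    return g1, g2

def dp_dfa_step(batch_x, batch_y):
    """One privatized DFA step over a mini-batch."""
    global W1, W2
    sum1, sum2 = np.zeros_like(W1), np.zeros_like(W2)
    for x, y in zip(batch_x, batch_y):
        g1, g2 = per_example_dfa_grads(x, y)
        # Clip the joint per-example update to L2 norm C.
        norm = np.sqrt((g1 ** 2).sum() + (g2 ** 2).sum())
        scale = min(1.0, C / (norm + 1e-12))
        sum1 += g1 * scale
        sum2 += g2 * scale
    n = len(batch_x)
    # Add Gaussian noise calibrated to the clipping norm, then average and apply.
    W1 -= lr * (sum1 + rng.normal(0, sigma * C, W1.shape)) / n
    W2 -= lr * (sum2 + rng.normal(0, sigma * C, W2.shape)) / n

# Usage with synthetic data.
X = rng.normal(size=(8, d_in))
Y = np.eye(d_out)[rng.integers(0, d_out, 8)]
dp_dfa_step(X, Y)
```

The point of contrast with backprop-based private training is that the hidden-layer error is obtained from a fixed random projection of the output error rather than from the transposed forward weights, so per-example updates can be formed without a full backward pass through the network.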
