Paper Title

POSEIDON: Privacy-Preserving Federated Neural Network Learning

Authors

Sinem Sav, Apostolos Pyrgelis, Juan R. Troncoso-Pastoriza, David Froelicher, Jean-Philippe Bossuat, Joao Sa Sousa, Jean-Pierre Hubaux

Abstract

In this paper, we address the problem of privacy-preserving training and evaluation of neural networks in an $N$-party, federated learning setting. We propose a novel system, POSEIDON, the first of its kind in the regime of privacy-preserving neural network training. It employs multiparty lattice-based cryptography to preserve the confidentiality of the training data, the model, and the evaluation data, under a passive-adversary model and collusions between up to $N-1$ parties. To efficiently execute the secure backpropagation algorithm for training neural networks, we provide a generic packing approach that enables Single Instruction, Multiple Data (SIMD) operations on encrypted data. We also introduce arbitrary linear transformations within the cryptographic bootstrapping operation, optimizing the costly cryptographic computations over the parties, and we define a constrained optimization problem for choosing the cryptographic parameters. Our experimental results show that POSEIDON achieves accuracy similar to centralized or decentralized non-private approaches and that its computation and communication overhead scales linearly with the number of parties. POSEIDON trains a 3-layer neural network on the MNIST dataset with 784 features and 60K samples distributed among 10 parties in less than 2 hours.
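
The packing approach the abstract refers to maps a vector into the slots of a single ciphertext so that one homomorphic operation acts on all slots at once. The snippet below is a minimal plaintext simulation of that idea, not POSEIDON's actual API: the `rotate` helper and the diagonal encoding are illustrative assumptions. It computes a matrix-vector product (the core of each neural-network layer) using only slot-wise additions, slot-wise multiplications, and cyclic rotations, which are the primitives available on CKKS-style packed ciphertexts.

```python
import numpy as np

def rotate(v, k):
    """Cyclic left rotation of the slot vector, mimicking a
    homomorphic rotation on a packed ciphertext."""
    return np.roll(v, -k)

def packed_matvec(W, x):
    """Matrix-vector product via the 'diagonal method': one
    slot-wise multiplication and one rotation per diagonal,
    the operation pattern SIMD packing exploits for encrypted
    layer evaluation. Plaintext stand-in for the encrypted case."""
    n = W.shape[0]
    acc = np.zeros(n)
    for k in range(n):
        # k-th generalized diagonal of W: diag_k[i] = W[i, (i + k) mod n]
        diag_k = np.array([W[i, (i + k) % n] for i in range(n)])
        acc += diag_k * rotate(x, k)
    return acc

W = np.arange(16, dtype=float).reshape(4, 4)
x = np.array([1.0, 2.0, 3.0, 4.0])
assert np.allclose(packed_matvec(W, x), W @ x)
```

Under encryption, `acc`, `diag_k * rotate(x, k)`, and the rotations would all be ciphertext operations, so an $n \times n$ layer costs $n$ rotations and multiplications regardless of how many samples are packed alongside each other in the slots.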
