Paper Title
FALCON: Honest-Majority Maliciously Secure Framework for Private Deep Learning
Paper Authors
Paper Abstract
We propose Falcon, an end-to-end 3-party protocol for efficient private training and inference of large machine learning models. Falcon presents four main advantages: (i) it is highly expressive, with support for high-capacity networks such as VGG16; (ii) it supports batch normalization, which is important for training complex networks such as AlexNet; (iii) it guarantees security with abort against malicious adversaries, assuming an honest majority; and (iv) it presents new theoretical insights for protocol design that make it highly efficient and allow it to outperform existing secure deep learning solutions. Compared to prior art for private inference, we are about 8x faster than SecureNN (PETS'19) on average and comparable to ABY3 (CCS'18), while being about 16-200x more communication-efficient than either. For private training, we are about 6x faster than SecureNN, 4.4x faster than ABY3, and about 2-60x more communication-efficient. Our experiments in the WAN setting show that over large networks and datasets, compute operations, rather than communication, dominate the overall latency of MPC.
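The honest-majority 3-party setting described in the abstract is typically built on replicated secret sharing over a ring, where each party holds two of three additive shares and linear operations are local. The sketch below is only an illustration of that general building block under assumed parameters (a 32-bit ring, toy helper names), not Falcon's actual protocol.

```python
import random

MOD = 2 ** 32  # assumed ring Z_{2^32}; the real protocol fixes its own ring size

def share(x):
    """Split x into 2-out-of-3 replicated shares.

    Party i receives the pair (r_i, r_{i+1 mod 3}), so any two parties
    together hold all three summands and can reconstruct x.
    """
    r0 = random.randrange(MOD)
    r1 = random.randrange(MOD)
    r2 = (x - r0 - r1) % MOD
    return [(r0, r1), (r1, r2), (r2, r0)]

def add(shares_a, shares_b):
    """Addition is local: each party adds its pairs component-wise."""
    return [((a0 + b0) % MOD, (a1 + b1) % MOD)
            for (a0, a1), (b0, b1) in zip(shares_a, shares_b)]

def reconstruct(shares):
    """Sum the first component held by each party to recover the secret."""
    return sum(pair[0] for pair in shares) % MOD
```

For example, `reconstruct(add(share(10), share(32)))` yields `42` without any single party ever seeing either input; non-linear operations (comparisons, ReLU, batch normalization) are where frameworks like Falcon spend their communication and their protocol-design effort.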