Paper Title
Towards Scalable and Privacy-Preserving Deep Neural Network via Algorithmic-Cryptographic Co-design
Paper Authors
Paper Abstract
Deep Neural Networks (DNNs) have achieved remarkable progress in various real-world applications, especially when abundant training data are available. However, data isolation has recently become a serious problem. Existing works build privacy-preserving DNN models from either an algorithmic or a cryptographic perspective. The former mainly splits the DNN computation graph between data holders, or between data holders and a server, which offers good scalability but suffers from accuracy loss and potential privacy risks. In contrast, the latter leverages time-consuming cryptographic techniques, which provide strong privacy guarantees but scale poorly. In this paper, we propose SPNN, a Scalable and Privacy-preserving deep Neural Network learning framework designed from an algorithmic-cryptographic co-design perspective. From the algorithmic perspective, we split the computation graph of DNN models into two parts: the private-data-related computations, which are performed by the data holders, and the remaining heavy computations, which are delegated to a server with high computational capacity. From the cryptographic perspective, we propose using two cryptographic techniques, secret sharing and homomorphic encryption, so that the isolated data holders can carry out the private-data-related computations privately and cooperatively. Furthermore, we implement SPNN in a decentralized setting and provide user-friendly APIs. Experimental results on real-world datasets demonstrate the superiority of SPNN.
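As a rough illustration of the secret-sharing primitive the abstract mentions (not SPNN's actual protocol, whose details are not given here), additive secret sharing over a prime field lets each data holder keep one share of an input while no single share reveals it, and linear operations can be done share-wise:

```python
import random

# Illustrative modulus choice; real systems pick the field to suit the protocol.
PRIME = 2**61 - 1

def share(value, n_holders=2):
    """Split a value into n additive shares that sum to it modulo PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_holders - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares modulo PRIME."""
    return sum(shares) % PRIME

# Each holder keeps one share; individually the shares look uniformly random.
assert reconstruct(share(12345)) == 12345

# Addition of two secret values is done share-wise, without revealing either input.
a_shares, b_shares = share(10), share(32)
sum_shares = [(a + b) % PRIME for a, b in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 42
```

Multiplication and non-linear DNN layers require additional interaction between the parties, which is where the paper's combination with homomorphic encryption and server delegation comes in.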