Paper Title


FLaPS: Federated Learning and Privately Scaling

Authors

Sudipta Paul, Poushali Sengupta, Subhankar Mishra

Abstract


Federated learning (FL) is a distributed learning process in which the model (weights and checkpoints) is transferred to the devices that possess the data, rather than the classical approach of transferring and aggregating the data centrally. In this way, sensitive data never leaves the user devices. FL uses the FedAvg algorithm, which trains by iterative model averaging on non-IID and unbalanced distributed data, without depending on the data quantity. Some issues with FL are: 1) lack of scalability, as the model is iteratively trained over all the devices, a problem amplified by device drop-outs; 2) the security and privacy trade-off of the learning process is still not robust enough; and 3) the overall communication efficiency and cost are high. To mitigate these challenges, we present the Federated Learning and Privately Scaling (FLaPS) architecture, which improves the scalability as well as the security and privacy of the system. The devices are grouped into clusters, which further gives a better privacy-scaled turn-around time to finish a round of training. Therefore, even if a device drops out in the middle of training, the whole process can be started again after a definite amount of time. Both the data and the model are communicated using differentially private reports with iterative shuffling, which provides a better privacy-utility trade-off. We evaluated FLaPS on the MNIST, CIFAR10, and TINY-IMAGENET-200 datasets using various CNN models. Experimental results show FLaPS to be an improved, time- and privacy-scaled environment with better or comparable after-learning parameters with respect to the central and FL models.
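The iterative model averaging at the heart of FedAvg, which the abstract refers to, can be sketched as follows. This is a minimal illustration of the weighted-averaging step only, not the authors' implementation; the toy weight vectors and client sample counts are hypothetical.

```python
# Sketch of one FedAvg aggregation round: each client trains locally,
# then the server averages the returned weight vectors, weighting each
# client by its number of local samples (handles unbalanced data).

def fed_avg(client_weights, client_sizes):
    """Sample-count-weighted average of per-client weight vectors."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Hypothetical round: three clients with unbalanced data (10/30/60 samples).
weights = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [10, 30, 60]
print(fed_avg(weights, sizes))  # [4.0, 5.0]
```

In full FL, this aggregation is repeated every round after local training; larger clients pull the global model more strongly, which is why FedAvg tolerates unbalanced data distributions.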
