Paper Title

Personalized Federated Learning with Multi-branch Architecture

Authors

Junki Mori, Tomoyuki Yoshiyama, Furukawa Ryo, Isamu Teranishi

Abstract

Federated learning (FL) is a decentralized machine learning technique that enables multiple clients to collaboratively train models without requiring clients to reveal their raw data to each other. Although traditional FL trains a single global model with average performance among clients, statistical data heterogeneity across clients has led to the development of personalized FL (PFL), which trains personalized models with good performance on each client's data. A key challenge in PFL is how to facilitate greater collaboration among clients with similar data when each client has data from complex distributions and cannot determine one another's distributions. In this paper, we propose a new PFL method (pFedMB) using a multi-branch architecture, which achieves personalization by splitting each layer of a neural network into multiple branches and assigning client-specific weights to each branch. We also design an aggregation method to improve communication efficiency and model performance, with which each branch is globally updated via weighted averaging using the client-specific weights assigned to that branch. pFedMB is simple but effective in enabling each client to share knowledge with similar clients by adjusting the weights assigned to each branch. We show experimentally that pFedMB performs better than state-of-the-art PFL methods on the CIFAR10 and CIFAR100 datasets.
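Based on the abstract alone, the sketch below illustrates the two ideas pFedMB describes: a layer split into shared branches combined by client-specific weights, and a server-side step that averages each branch across clients weighted by how much each client relies on it. This is a minimal PyTorch sketch under stated assumptions, not the authors' implementation; the names (MultiBranchLinear, aggregate_branches, alpha_logits) and details such as softmax normalization and the number of branches are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiBranchLinear(nn.Module):
    """A linear layer split into K shared branches, combined per client.

    Hypothetical sketch: each client keeps its own branch logits, so the
    effective layer is a client-specific convex combination of branches.
    """

    def __init__(self, in_features: int, out_features: int, num_branches: int = 3):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Linear(in_features, out_features) for _ in range(num_branches)]
        )
        # Client-specific branch weights (kept local, not averaged globally).
        self.alpha_logits = nn.Parameter(torch.zeros(num_branches))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        alpha = F.softmax(self.alpha_logits, dim=0)         # weights sum to 1
        outs = torch.stack([b(x) for b in self.branches])   # (K, batch, out)
        return torch.einsum("k,kbo->bo", alpha, outs)


def aggregate_branches(branch_params, client_alphas):
    """Server-side step: average each branch's parameters across clients,
    weighted by the clients' own weights on that branch (renormalized).

    branch_params: per client, a list of K tensors (one per branch)
    client_alphas: per client, a 1-D tensor of K branch weights
    """
    num_branches = client_alphas[0].numel()
    aggregated = []
    for k in range(num_branches):
        w = torch.stack([a[k] for a in client_alphas])
        w = w / w.sum()  # clients that rely more on branch k count more
        aggregated.append(sum(wi * p[k] for wi, p in zip(w, branch_params)))
    return aggregated
```

In this sketch only the branch parameters are shared and aggregated, while each client personalizes alpha_logits locally; clients with similar data would converge to similar branch weights and thus contribute more to one another's effective models, which matches the collaboration mechanism the abstract describes.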
