Paper Title
Federated Learning with Intermediate Representation Regularization
Paper Authors
Paper Abstract
In contrast to centralized model training, which involves data collection, federated learning (FL) enables remote clients to collaboratively train a model without exposing their private data. However, model performance usually degrades in FL due to the heterogeneous data generated by clients with diverse characteristics. One promising strategy for maintaining good performance is to keep local training from drifting far away from the global model. Previous studies accomplish this by regularizing the distance between the representations learned by the local and global models. However, they only consider representations from the early layers of a model or from the layer preceding the output layer. In this study, we introduce FedIntR, which provides a more fine-grained regularization by integrating the representations of intermediate layers into the local training process. Specifically, FedIntR computes a regularization term that encourages closeness between the intermediate-layer representations of the local and global models. Additionally, FedIntR automatically determines the contribution of each layer's representation to the regularization term based on the similarity between the local and global representations. We conduct extensive experiments on various datasets to show that FedIntR achieves equivalent or higher performance compared to state-of-the-art approaches. Our code is available at https://github.com/YLTun/FedIntR.
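To make the regularization described in the abstract concrete, the following is a minimal PyTorch-style sketch of a weighted intermediate-representation penalty. The function name fedintr_regularizer, the softmax-over-cosine-similarity weighting, and the coefficient mu are illustrative assumptions rather than the paper's exact formulation; refer to the linked repository for the authors' implementation.

import torch
import torch.nn.functional as F

def fedintr_regularizer(local_feats, global_feats):
    # local_feats / global_feats: lists of intermediate-layer activations
    # (one tensor per layer, computed on the same mini-batch) from the
    # local model and the frozen global model, respectively.
    sims, dists = [], []
    for h_local, h_global in zip(local_feats, global_feats):
        h_l = h_local.flatten(1)            # [batch, features]
        h_g = h_global.flatten(1).detach()  # global model provides targets only
        cos = F.cosine_similarity(h_l, h_g, dim=1).mean()
        sims.append(cos)
        dists.append(1.0 - cos)             # per-layer distance to shrink
    # Assumed weighting scheme: each layer's contribution is set
    # automatically via a softmax over its similarity score, so no
    # manual per-layer coefficients are needed.
    weights = torch.softmax(torch.stack(sims), dim=0)
    return (weights * torch.stack(dists)).sum()

# Hypothetical use inside a client's local training step:
#   loss = task_loss + mu * fedintr_regularizer(local_feats, global_feats)
# where mu balances the task loss against the regularization term.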