Paper Title

FedGradNorm: Personalized Federated Gradient-Normalized Multi-Task Learning

Paper Authors

Matin Mortaheb, Cemil Vahapoglu, Sennur Ulukus

Paper Abstract

Multi-task learning (MTL) is a novel framework for learning several tasks simultaneously with a single shared network, where each task has its own distinct personalized header network for fine-tuning. MTL can also be implemented in federated learning settings, in which tasks are distributed across clients. In federated settings, statistical heterogeneity due to different task complexities and data heterogeneity due to the non-IID nature of local datasets can both degrade the system's learning performance. In addition, tasks can negatively affect each other's learning performance due to negative transference effects. To cope with these challenges, we propose FedGradNorm, which uses a dynamic-weighting method to normalize gradient norms and thereby balance learning speeds across tasks. FedGradNorm improves the overall learning performance in a personalized federated learning setting. We provide a convergence analysis for FedGradNorm, showing that it has an exponential convergence rate. We also conduct experiments on the multi-task facial landmark (MTFL) dataset and a wireless communication system dataset (RadComDynamic). The experimental results show that our framework achieves faster training than an equal-weighting strategy. Beyond improving training speed, FedGradNorm also compensates for imbalanced datasets among clients.
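To make the dynamic-weighting mechanism concrete, below is a minimal sketch of a GradNorm-style task-weight update, the balancing step that FedGradNorm builds on. It is written in PyTorch under stated assumptions: the function name `gradnorm_update`, the `shared_layer` argument, and the hyperparameter defaults are illustrative, not the authors' reference implementation, and in FedGradNorm this balancing would operate on the shared network in the federated setting rather than in a single centralized model.

```python
# Sketch of a GradNorm-style dynamic task-weight update (assumption: this
# approximates the balancing step FedGradNorm builds on; names are illustrative).
import torch

def gradnorm_update(weights, task_losses, initial_losses, shared_layer,
                    alpha=1.5, lr=0.025):
    """One dynamic-weighting step: steer each task's gradient norm toward the
    mean norm, scaled by the task's relative inverse training rate."""
    # G_i = || d(w_i * L_i) / d(shared parameters) ||, kept differentiable
    # with respect to the task weights via create_graph=True.
    norms = torch.stack([
        torch.autograd.grad(weights[i] * task_losses[i], shared_layer,
                            retain_graph=True, create_graph=True)[0].norm()
        for i in range(len(task_losses))
    ])

    with torch.no_grad():
        # Relative inverse training rate r_i: tasks that have improved less
        # relative to their initial loss get a larger target gradient norm.
        ratios = torch.stack([l.detach() / l0
                              for l, l0 in zip(task_losses, initial_losses)])
        targets = norms.mean() * (ratios / ratios.mean()).pow(alpha)

    # Balancing loss L_grad = sum_i |G_i - target_i|; its gradient updates
    # only the task weights, not the network parameters.
    balance_loss = (norms - targets).abs().sum()
    weight_grad = torch.autograd.grad(balance_loss, weights)[0]

    with torch.no_grad():
        weights -= lr * weight_grad
        # Keep weights positive and renormalize them to sum to the task count.
        weights.clamp_(min=1e-4)
        weights *= len(task_losses) / weights.sum()
    return weights
```

In a training loop, `weights` would be initialized as `torch.ones(num_tasks, requires_grad=True)`, this update would run each iteration after computing the per-task losses, and the regular backward pass would then use the refreshed weights in the total loss `sum(w_i * L_i)`.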
