Title

Multi-Task and Transfer Learning for Federated Learning Applications

Authors

Cihat Keçeci, Mohammad Shaqfeh, Hayat Mbayed, Erchin Serpedin

Abstract


Federated learning enables many applications to benefit from the distributed and private datasets of a large number of potential data-holding clients. However, different clients usually have their own particular objectives in terms of the tasks to be learned from the data. Hence, supporting federated learning with meta-learning tools such as multi-task learning and transfer learning will help enlarge the set of potential applications of federated learning by letting clients with different but related tasks share task-agnostic models that can then be further updated and tailored by each individual client for its particular task. In a federated multi-task learning problem, the trained deep neural network model should be fine-tuned for the respective objective of each client while sharing some parameters for better generalizability. We propose to train a deep neural network model with more generalized layers closer to the input and more personalized layers closer to the output. We achieve this by introducing layer types such as pre-trained, common, task-specific, and personal layers. We provide simulation results highlighting particular scenarios in which meta-learning-based federated learning proves to be useful.
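The layer-role idea in the abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the authors' implementation): each client holds one weight vector per layer role, and a server aggregation round averages "common" layers across all clients, averages "task-specific" layers only among clients sharing the same task, and leaves "pre-trained" (frozen) and "personal" layers untouched. All names and shapes here are illustrative assumptions.

```python
import numpy as np

# Illustrative layer roles, following the abstract's taxonomy:
#   "pretrained"    - frozen, never updated or aggregated
#   "common"        - averaged across ALL clients
#   "task_specific" - averaged only among clients sharing the same task
#   "personal"      - kept local, never aggregated
ROLES = ("pretrained", "common", "task_specific", "personal")

def make_client(task, seed):
    """A toy client: a task label and one small weight vector per layer role."""
    rng = np.random.default_rng(seed)
    return {
        "task": task,
        "weights": {role: rng.normal(size=4) for role in ROLES},
    }

def aggregate(clients):
    """One hypothetical server round over the shared layer roles."""
    # Average the common layer over all clients.
    common = np.mean([c["weights"]["common"] for c in clients], axis=0)
    # Average task-specific layers separately within each task group.
    tasks = {c["task"] for c in clients}
    task_avg = {
        t: np.mean(
            [c["weights"]["task_specific"] for c in clients if c["task"] == t],
            axis=0,
        )
        for t in tasks
    }
    # Broadcast the aggregated layers back to the clients.
    for c in clients:
        c["weights"]["common"] = common.copy()
        c["weights"]["task_specific"] = task_avg[c["task"]].copy()
        # "pretrained" stays frozen and "personal" stays local by design.
    return clients

clients = [make_client("task_A", 0), make_client("task_A", 1), make_client("task_B", 2)]
clients = aggregate(clients)
```

After one round, all three clients share the same common layer, the two task_A clients additionally share a task-specific layer, and every client retains its own personal layer, mirroring the generalized-to-personalized ordering described in the abstract.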
