Paper Title
Contextual Model Aggregation for Fast and Robust Federated Learning in Edge Computing
Paper Authors
Paper Abstract
Federated learning is a prime candidate for distributed machine learning at the network edge due to its low communication complexity and privacy protection, among other attractive properties. However, existing algorithms suffer from slow convergence and/or a lack of robustness because of the considerable heterogeneity in data distribution, computation, and communication capability at the edge. In this work, we tackle both issues by focusing on the key component of model aggregation in federated learning systems and studying optimal algorithms for this task. In particular, we propose a contextual aggregation scheme that achieves the optimal context-dependent bound on loss reduction in each round of optimization. This context-dependent bound is derived from the particular set of devices participating in that round and a smoothness assumption on the overall loss function. We show that this aggregation yields a guaranteed reduction of the loss function at every round. Furthermore, our aggregation can be integrated with many existing algorithms to obtain their contextual versions. Our experimental results demonstrate significant improvements in convergence speed and robustness of the contextual versions over the original algorithms. We also consider different variants of contextual aggregation and show robust performance even in the most extreme settings.
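The abstract does not spell out the aggregation rule itself. The snippet below is a minimal, hypothetical sketch of how a context-dependent server-side aggregation step could look, assuming an L-smooth overall loss and treating the set of participating devices (and their data sizes) as the round's context; the function name `contextual_aggregate`, the weighting by sample counts, and the value of the smoothness constant are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def contextual_aggregate(global_w, client_updates, client_sizes, L=10.0):
    """Illustrative sketch of a context-dependent aggregation step (not the paper's exact rule).

    global_w:       current global model parameters (NumPy array).
    client_updates: list of pseudo-gradients, each (w_global - w_client_local),
                    computed by the devices participating in this round.
    client_sizes:   local sample counts of the participating devices; together with
                    which devices showed up, this forms the round's "context".
    L:              assumed smoothness constant of the overall loss (hypothetical value).
    """
    sizes = np.asarray(client_sizes, dtype=float)
    weights = sizes / sizes.sum()  # weight by data held by this round's participants

    # Aggregate the participating devices' updates into one descent direction.
    direction = sum(w * u for w, u in zip(weights, client_updates))

    # Smoothness-based step size: for an L-smooth loss, a sufficiently small step
    # (on the order of 1/L) along a descent direction does not increase the loss.
    eta = 1.0 / L
    return global_w - eta * direction
```

The point of the sketch is that both the aggregation weights and the step size depend only on quantities available at the server in that round, which is what allows a per-round, context-dependent guarantee rather than a worst-case one.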