Paper Title
A Unified Analysis of Federated Learning with Arbitrary Client Participation
Paper Authors
Paper Abstract
Federated learning (FL) faces challenges of intermittent client availability and computation/communication efficiency. As a result, only a small subset of clients can participate in FL at a given time. It is important to understand how partial client participation affects convergence, but most existing works have either considered idealized participation patterns or obtained results with non-zero optimality error for generic patterns. In this paper, we provide a unified convergence analysis for FL with arbitrary client participation. We first introduce a generalized version of federated averaging (FedAvg) that amplifies parameter updates at an interval of multiple FL rounds. Then, we present a novel analysis that captures the effect of client participation in a single term. By analyzing this term, we obtain convergence upper bounds for a wide range of participation patterns, including both non-stochastic and stochastic cases, which match either the lower bound of stochastic gradient descent (SGD) or the state-of-the-art results in specific settings. We also discuss various insights, recommendations, and experimental results.
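To make the amplification step concrete, below is a minimal NumPy sketch of the generalized FedAvg idea described in the abstract: within each interval the server averages participating clients' models as in plain FedAvg, and at the end of every `amplify_interval` rounds it scales the update accumulated since the interval start by a global factor `eta`. The quadratic toy objective, the random participation model, and names such as `amplify_interval`, `eta`, and `local_update` are illustrative assumptions, not the paper's exact algorithm or notation.

```python
import numpy as np

def local_update(model, client_data, lr=0.1, local_steps=5):
    """Run a few SGD steps on a least-squares loss (illustrative client objective)."""
    w = model.copy()
    X, y = client_data
    for _ in range(local_steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def generalized_fedavg(clients, dim, rounds=100, amplify_interval=5, eta=1.5,
                       participants_per_round=2, seed=0):
    """Sketch of FedAvg with update amplification every `amplify_interval` rounds.

    Within each interval, the server aggregates participating clients' models
    as in plain FedAvg; at the end of the interval, the accumulated change
    since the interval start is scaled by a global factor `eta`.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)        # global model
    x_anchor = x.copy()      # snapshot taken at the start of each interval
    for t in range(rounds):
        # Partial participation: only a small, here randomly chosen, subset
        # of clients takes part in each round.
        idx = rng.choice(len(clients), size=participants_per_round, replace=False)
        local_models = [local_update(x, clients[i]) for i in idx]
        x = np.mean(local_models, axis=0)          # plain FedAvg aggregation
        if (t + 1) % amplify_interval == 0:
            x = x_anchor + eta * (x - x_anchor)    # amplify accumulated update
            x_anchor = x.copy()
    return x

# Toy usage: all clients share (approximately) one optimum w*, so convergence
# is easy to check by the distance of the returned model to w*.
dim, n_clients = 3, 10
rng = np.random.default_rng(1)
w_star = rng.normal(size=dim)
clients = []
for _ in range(n_clients):
    X = rng.normal(size=(20, dim))
    clients.append((X, X @ w_star + 0.01 * rng.normal(size=20)))
print(np.linalg.norm(generalized_fedavg(clients, dim) - w_star))
```

With `eta = 1` this reduces to plain FedAvg under partial participation; `eta > 1` amplifies the per-interval progress, which is, per the abstract, the mechanism the generalized FedAvg uses across multiple FL rounds.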