Paper Title
Adaptive Stochastic Gradient Descent for Fast and Communication-Efficient Distributed Learning
Paper Authors
Paper Abstract
We consider the setting where a master wants to run a distributed stochastic gradient descent (SGD) algorithm on $n$ workers, each having a subset of the data. Distributed SGD may suffer from the effect of stragglers, i.e., slow or unresponsive workers who cause delays. One solution studied in the literature is to wait at each iteration for the responses of the fastest $k<n$ workers before updating the model, where $k$ is a fixed parameter. The choice of the value of $k$ presents a trade-off between the runtime (i.e., convergence rate) of SGD and the error of the model. Towards optimizing the error-runtime trade-off, we investigate distributed SGD with adaptive~$k$, i.e., varying $k$ throughout the runtime of the algorithm. We first design an adaptive policy for varying $k$ that optimizes this trade-off based on an upper bound on the error as a function of the wall-clock time that we derive. Then, we propose and implement an algorithm for adaptive distributed SGD that is based on a statistical heuristic. Our results show that the adaptive version of distributed SGD can reach lower error values in less time compared to non-adaptive implementations. Moreover, the results also show that the adaptive version is communication-efficient, where the amount of communication required between the master and the workers is less than that of non-adaptive versions.
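To make the fastest-$k$ mechanism concrete, the following is a minimal single-process simulation sketch, not the authors' implementation or policy: the data shards, the simulated exponential worker delays, the `adaptive_k` schedule, and all hyperparameters are illustrative assumptions. It shows a master that, at each iteration, waits only for the $k$ fastest of $n$ workers and gradually increases $k$ over the runtime.

```python
# Minimal sketch of fastest-k distributed SGD with a hypothetical adaptive-k schedule.
# Worker response times are simulated with exponential random delays.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem: minimize f(w) = (1/2m) * ||X w - y||^2.
m, d = 1000, 10
X = rng.normal(size=(m, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=m)

n_workers = 20                                   # n workers, each holding a data shard
shards = np.array_split(np.arange(m), n_workers)

def worker_gradient(w, shard):
    """Gradient computed from one worker's local data shard."""
    Xi, yi = X[shard], y[shard]
    return Xi.T @ (Xi @ w - yi) / len(shard)

def adaptive_k(t, T, n):
    """Hypothetical schedule: few workers early (fast, noisy updates),
    nearly all workers late (slow, accurate updates)."""
    frac = t / max(T - 1, 1)
    return max(1, min(n, int(round(n * (0.2 + 0.8 * frac)))))

w = np.zeros(d)
lr, T = 0.1, 200
wall_clock = 0.0
for t in range(T):
    k = adaptive_k(t, T, n_workers)
    delays = rng.exponential(scale=1.0, size=n_workers)  # simulated response times
    fastest = np.argsort(delays)[:k]                     # master waits for k fastest only
    wall_clock += delays[fastest[-1]]                    # time of the k-th fastest response
    grad = np.mean([worker_gradient(w, shards[i]) for i in fastest], axis=0)
    w -= lr * grad

print(f"final loss: {0.5 * np.mean((X @ w - y) ** 2):.4f}, "
      f"simulated wall-clock time: {wall_clock:.1f}")
```

Under this toy schedule, small $k$ early trades per-iteration accuracy for speed, while large $k$ late reduces the gradient noise near convergence, which is the error-runtime trade-off the abstract describes.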