Paper Title

Asynchronous Fully-Decentralized SGD in the Cluster-Based Model

Paper Authors

Hagit Attiya, Noa Schiller

Paper Abstract

This paper presents fault-tolerant asynchronous Stochastic Gradient Descent (SGD) algorithms. SGD is widely used for approximating the minimum of a cost function $Q$, as a core part of optimization and learning algorithms. Our algorithms are designed for the cluster-based model, which combines message-passing and shared-memory communication layers. Processes may fail by crashing, and the algorithm inside each cluster is wait-free, using only reads and writes. For a strongly convex function $Q$, our algorithm tolerates any number of failures, and provides a convergence rate that yields the maximal distributed acceleration over the optimal convergence rate of sequential SGD. For arbitrary functions, the convergence rate has an additional term that depends on the maximal difference between the parameters at the same iteration. (This holds under standard assumptions on $Q$.) In this case, the algorithm obtains the same convergence rate as sequential SGD, up to a logarithmic factor. This is achieved by using, at each iteration, a multidimensional approximate agreement algorithm, tailored for the cluster-based model. The algorithm for arbitrary functions requires that at least a majority of the clusters contain at least one non-faulty process. We prove that this condition is necessary when optimizing some non-convex functions.
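To make the high-level idea concrete, here is a minimal, hedged Python sketch (not the paper's algorithm and not its cluster-based model): several simulated processes run SGD on a simple strongly convex cost $Q(x) = \lVert x - x^* \rVert^2$, and once per iteration a toy "approximate agreement" step pulls their parameter vectors toward the coordinate-wise average, shrinking the maximal difference between parameters at the same iteration. All names (n_processes, approximate_agreement, the noise scale, the step-size schedule) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative sketch only: sequential simulation of several SGD "processes"
# whose parameters are periodically pulled together by a crude agreement step.
rng = np.random.default_rng(0)
x_star = np.array([1.0, -2.0, 0.5])          # minimizer of Q(x) = ||x - x*||^2
n_processes, dim, iterations = 5, 3, 200

def stochastic_gradient(x):
    """Gradient of Q plus zero-mean noise, standing in for a stochastic oracle."""
    return 2.0 * (x - x_star) + rng.normal(scale=0.1, size=dim)

def approximate_agreement(params):
    """Toy stand-in for multidimensional approximate agreement:
    move every parameter vector halfway toward the coordinate-wise average,
    which reduces the maximal pairwise difference each time it is called."""
    avg = params.mean(axis=0)
    return 0.5 * params + 0.5 * avg

params = rng.normal(size=(n_processes, dim))  # one parameter vector per process
for t in range(1, iterations + 1):
    lr = 1.0 / t                              # decaying step size (strong convexity)
    grads = np.array([stochastic_gradient(p) for p in params])
    params = params - lr * grads              # local SGD step at every process
    params = approximate_agreement(params)    # keep the processes' parameters close

print("max distance to minimizer:", np.abs(params - x_star).max())
```

In the paper's setting the agreement step is a genuine distributed protocol tolerating crash failures; the averaging above only mimics its contraction effect in a single-machine simulation.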
