Paper Title

Cost-Efficient Distributed Learning via Combinatorial Multi-Armed Bandits

Paper Authors

Maximilian Egger, Rawad Bitar, Antonia Wachter-Zeh, Deniz Gündüz

Paper Abstract

We consider the distributed SGD problem, where a main node distributes gradient calculations among $n$ workers. By assigning tasks to all the workers and waiting only for the $k$ fastest ones, the main node can trade off the algorithm's error with its runtime by gradually increasing $k$ as the algorithm evolves. However, this strategy, referred to as adaptive $k$-sync, neglects the cost of unused computations and of communicating models to workers that reveal a straggling behavior. We propose a cost-efficient scheme that assigns tasks only to $k$ workers and gradually increases $k$. We introduce the use of a combinatorial multi-armed bandit model to learn which workers are the fastest while assigning gradient calculations. Assuming workers with exponentially distributed response times parameterized by different means, we give empirical and theoretical guarantees on the regret of our strategy, i.e., the extra time spent to learn the mean response times of the workers. Furthermore, we propose and analyze a strategy applicable to a large class of response time distributions. Compared to adaptive $k$-sync, our scheme achieves significantly lower errors with the same computational effort and less downlink communication, while being inferior in terms of speed.
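
To make the bandit component of the abstract concrete, below is a minimal sketch of a CUCB-style lower-confidence-bound rule for selecting the $k$ (estimated) fastest of $n$ workers under exponentially distributed response times, with $k$ increased on a fixed schedule. All concrete values (worker count, the true means, the confidence radius, and the schedule for increasing $k$) are hypothetical assumptions for illustration; this is not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n workers whose response times are exponentially
# distributed with unknown, worker-specific means (values are illustrative).
n = 10
true_means = rng.uniform(0.5, 2.0, size=n)  # unknown mean response times

emp_mean = np.zeros(n)  # empirical mean response time per worker
counts = np.zeros(n)    # number of tasks each worker has received

def select_workers(k, t):
    # CUCB-style rule: be optimistic about speed by picking the k workers
    # with the smallest lower confidence bound on their mean response time;
    # untried workers get -inf so they are explored first.
    lcb = np.where(
        counts > 0,
        emp_mean - np.sqrt(2.0 * np.log(t) / np.maximum(counts, 1)),
        -np.inf,
    )
    return np.argsort(lcb)[:k]

k, T = 2, 2000
total_time = 0.0
for t in range(1, T + 1):
    workers = select_workers(k, t)
    times = rng.exponential(true_means[workers])  # simulated response times
    # Tasks go only to the k selected workers, so the round ends when the
    # slowest of those k returns.
    total_time += times.max()
    for w, rt in zip(workers, times):  # update running means
        counts[w] += 1
        emp_mean[w] += (rt - emp_mean[w]) / counts[w]
    if t % 500 == 0 and k < n:  # hypothetical schedule for increasing k
        k += 1

print("truly fastest workers:  ", np.argsort(true_means)[:3])
print("learned fastest workers:", np.argsort(emp_mean)[:3])
print("total simulated runtime:", round(total_time, 1))
```

The regret discussed in the abstract would correspond to the gap between `total_time` and the runtime of an oracle that always assigns tasks to the truly fastest $k$ workers.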
