Paper Title

StochaLM: a Stochastic alternate Linearization Method for distributed optimization

Authors

Inês Almeida, João Xavier

Abstract

We present the Stochastic alternate Linearization Method (StochaLM), a token-based method for distributed optimization. This algorithm finds the solution of a consensus optimization problem by solving a sequence of subproblems in which some components of the cost are linearized around specific anchor points. StochaLM can be interpreted as a dual block coordinate ascent method whose block components are selected using the state of an ergodic Markov chain. This sampling process is neither essentially cyclic nor independent over time, which prevents us from reusing the convergence proofs of dual block coordinate ascent methods from previous work; the convergence proof of our method is, therefore, also novel. We show that, if the cost is strongly convex and the network is fully connected, then, with probability one, the primal sequence generated by StochaLM converges to the solution. Our method is application-friendly, as it has no global hyperparameters to tune; any hyperparameters can be tuned locally by each agent, using information about its private cost and neighbourhood. Our method is, therefore, decentralized in the truest sense. Numerical experiments show that our method converges to the solution faster than other token-based methods, even when those methods' hyperparameters are tuned for optimal performance.
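To make the token-based pattern described in the abstract concrete, here is a minimal numerical sketch. It is not the authors' StochaLM update (which linearizes cost components around anchor points and needs no global step size); everything below, from the quadratic private costs to the diminishing proximal weight `alpha`, is an illustrative assumption. A token carrying the current estimate performs a random walk driven by an ergodic Markov chain over a fully connected network, and whichever agent holds the token refines the estimate using only its private cost.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical consensus problem (not taken from the paper): each agent i
# holds a private strongly convex quadratic cost
#     f_i(x) = 0.5 * a[i] * x**2 + b[i] * x,
# and the network minimizes the sum of all f_i.
n_agents = 5
a = rng.uniform(1.0, 3.0, n_agents)   # curvatures, all positive
b = rng.uniform(-2.0, 2.0, n_agents)  # linear coefficients
x_star = -b.sum() / a.sum()           # exact minimizer of sum_i f_i

# Ergodic Markov chain on a fully connected network: from any agent the
# token jumps to a uniformly random agent (row-stochastic matrix P).
P = np.full((n_agents, n_agents), 1.0 / n_agents)

x = 0.0   # estimate carried by the token
i = 0     # agent currently holding the token

for t in range(20000):
    # Local proximal step: the holder solves
    #     argmin_z f_i(z) + (1 / (2 * alpha)) * (z - x)**2,
    # which has the closed form below for a quadratic f_i. The diminishing
    # weight alpha stands in for a locally chosen step size; StochaLM
    # itself is reported to need no such global hyperparameter.
    alpha = 2.0 / (t + 2)
    x = (x - alpha * b[i]) / (1.0 + alpha * a[i])
    # The token then moves according to the Markov chain.
    i = rng.choice(n_agents, p=P[i])

print(f"token estimate: {x:.4f}   true minimizer: {x_star:.4f}")
```

The closed-form proximal step here is a generic incremental update chosen only so the sketch runs end to end; the key structural ingredients it shares with the abstract are the token random walk governed by an ergodic Markov chain and the fact that each update touches a single agent's private cost.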
