Paper Title

DIMIX: DIminishing MIXing for Sloppy Agents

Paper Authors

Hadi Reisizadeh, Behrouz Touri, Soheil Mohajer

Paper Abstract

We study non-convex distributed optimization problems where a set of agents collaboratively solve a separable optimization problem that is distributed over a time-varying network. The existing methods to solve these problems rely on (at most) one time-scale algorithms, where each agent performs a diminishing or constant step-size gradient descent at the average estimate of the agents in the network. However, if possible at all, exchanging the exact information required to evaluate these average estimates potentially introduces a massive communication overhead. Therefore, a reasonable practical assumption is that agents only receive a rough approximation of the neighboring agents' information. To address this, we introduce and study a \textit{two time-scale} decentralized algorithm with a broad class of \textit{lossy} information sharing methods (including noisy, quantized, and/or compressed information sharing) over \textit{time-varying} networks. In our method, one time-scale suppresses the (imperfect) incoming information from the neighboring agents, and one time-scale operates on local cost functions' gradients. We show that with a proper choice of the step-size parameters, the algorithm achieves a convergence rate of $\mathcal{O}(T^{-1/3 + \epsilon})$ for non-convex distributed optimization problems over time-varying networks, for any $\epsilon > 0$. Our simulation results support the theoretical results of the paper.
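
The abstract describes the method only at a high level, so the exact DIMIX update rule is not reproduced here. The Python sketch below is a hypothetical illustration of the two time-scale structure it describes: a diminishing consensus step-size beta_t that suppresses the imperfect (here, quantized) information arriving from neighbors over a time-varying mixing topology, and a separate step-size alpha_t that drives local gradient steps on non-convex costs. The quantizer, mixing matrices, step-size exponents, and local cost functions are all illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

# Illustrative sketch only: the abstract does not specify the DIMIX update rule,
# so the quantizer, mixing matrices, step-size decays, and local costs below are
# assumptions chosen to show the general two time-scale structure.

rng = np.random.default_rng(0)

def quantize(x, step=0.1):
    """Lossy information sharing: a uniform quantizer (one assumed instance of
    the noisy/quantized/compressed sharing class)."""
    return step * np.round(x / step)

def time_varying_mixing(n, t):
    """Doubly stochastic mixing matrix whose neighbor pattern changes with t,
    standing in for a time-varying network."""
    shift = 1 + t % (n - 1)
    P = np.roll(np.eye(n), shift, axis=1)   # permutation: agent i hears from agent i+shift
    return 0.5 * np.eye(n) + 0.25 * (P + P.T)

n, T = 10, 2000
c = rng.normal(size=n)                       # parameters of the assumed local costs

def local_grad(x_i, i):
    """Gradient of the assumed non-convex local cost f_i(x) = (x - c_i)^2 + sin(x)."""
    return 2.0 * (x_i - c[i]) + np.cos(x_i)

x = rng.normal(size=n)                       # each agent's local estimate
for t in range(1, T + 1):
    beta = t ** -0.5                         # diminishing consensus step-size (damps lossy neighbor info)
    alpha = t ** -0.75                       # gradient step-size (assumed faster decay)
    W = time_varying_mixing(n, t)
    x_shared = quantize(x)                   # agents only observe lossy versions of neighbor states
    x = x + beta * (W @ x_shared - x_shared) - alpha * np.array(
        [local_grad(x[i], i) for i in range(n)]
    )

print("spread of local estimates:", x.max() - x.min())
print("average estimate:", x.mean())
```

Running the sketch, the spread of the local estimates shrinks as the agents mix, while the gradient time-scale pulls the average estimate toward a stationary point of the summed local costs; the specific decay exponents used above are placeholders rather than the step-size choices analyzed in the paper.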
