Paper Title
Almost Sure Convergence of Distributed Optimization with Imperfect Information Sharing
Paper Authors
Paper Abstract
To design algorithms that reduce communication cost or meet rate constraints while remaining robust to communication noise, we study convex distributed optimization problems in which a set of agents collaboratively solves a separable optimization problem with imperfect information sharing over time-varying networks. We study the almost sure convergence of a two-time-scale decentralized gradient descent algorithm that drives the agents to consensus on an optimizer of the objective loss function. One time scale fades out the imperfect incoming information from neighboring agents, and the second scales the gradients of the local loss functions. We show that, under certain conditions on the connectivity of the underlying time-varying network and on the time-scale sequences, the dynamics converge almost surely to an optimal point in the optimizer set of the loss function.
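The abstract describes a two-time-scale update: a consensus step size that fades out noisy neighbor information, and a faster-decaying gradient step size. The sketch below is a minimal illustration of that idea, not the paper's exact algorithm: it assumes quadratic local losses f_i(x) = 0.5*(x - c_i)^2, a fixed ring network in place of the paper's time-varying graphs, additive link noise, and step-size exponents (0.55 and 0.9) chosen only for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                       # number of agents
c = rng.normal(size=n)      # private optima; the global optimizer is c.mean()
x = rng.normal(size=n)      # initial local iterates

# Doubly stochastic mixing matrix for a fixed ring network (a stand-in
# for the paper's time-varying connectivity assumptions).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

T = 20000
for t in range(1, T + 1):
    beta = t ** -0.55       # consensus step size: fades noisy neighbor info
    alpha = t ** -0.9       # gradient step size: decays faster, alpha/beta -> 0
    noise = 0.1 * rng.normal(size=n)           # imperfect information sharing
    mix = W @ (x + noise)                      # noisy consensus signal
    grad = x - c                               # local gradients of f_i
    x = x + beta * (mix - x) - alpha * grad    # two-time-scale update

print("consensus spread:", x.max() - x.min())
print("distance to optimizer:", abs(x.mean() - c.mean()))
```

Because beta multiplies the entire incoming signal, the communication noise is attenuated as beta decays, while the slower-diverging sum of the alpha sequence still lets the gradient term steer the consensus value toward the optimizer, mirroring the conditions on the time-scale sequences referenced in the abstract.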