Paper Title

Gradient tracking and variance reduction for decentralized optimization and machine learning

Paper Authors

Ran Xin, Soummya Kar, and Usman A. Khan

Paper Abstract

Decentralized methods to solve finite-sum minimization problems are important in many signal processing and machine learning tasks where the data is distributed over a network of nodes and raw data sharing is not permitted due to privacy and/or resource constraints. In this article, we review decentralized stochastic first-order methods and provide a unified algorithmic framework that combines variance-reduction with gradient tracking to achieve both robust performance and fast convergence. We provide explicit theoretical guarantees of the corresponding methods when the objective functions are smooth and strongly-convex, and show their applicability to non-convex problems via numerical experiments. Throughout the article, we provide intuitive illustrations of the main technical ideas by casting appropriate tradeoffs and comparisons among the methods of interest and by highlighting applications to decentralized training of machine learning models.
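The backbone of the framework the abstract describes is the gradient-tracking (GT) iteration, in which each node combines a consensus step over its neighbors with a local estimate of the network-wide average gradient. Below is a minimal NumPy sketch of deterministic gradient tracking on a toy decentralized least-squares problem; the problem data, ring topology, mixing weights, and step size are illustrative assumptions, not taken from the paper, and the paper's full framework further layers variance reduction on top of this skeleton for stochastic local gradients.

```python
import numpy as np

# Hypothetical toy setup (not from the paper): n nodes, each holding a
# local least-squares objective f_i(x) = 0.5 * ||A_i @ x - b_i||^2.
rng = np.random.default_rng(0)
n_nodes, dim = 5, 3
A = [rng.standard_normal((10, dim)) for _ in range(n_nodes)]
b = [rng.standard_normal(10) for _ in range(n_nodes)]

def local_grad(i, x):
    """Gradient of node i's local objective at x."""
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a ring graph (illustrative weights).
W = np.eye(n_nodes) * 0.5
for i in range(n_nodes):
    W[i, (i - 1) % n_nodes] = 0.25
    W[i, (i + 1) % n_nodes] = 0.25

alpha = 0.01                  # step size (assumed, not tuned from the paper)
x = np.zeros((n_nodes, dim))  # row i holds node i's iterate x_i
y = np.array([local_grad(i, x[i]) for i in range(n_nodes)])  # trackers y_i

for _ in range(500):
    g_old = np.array([local_grad(i, x[i]) for i in range(n_nodes)])
    x = W @ x - alpha * y              # consensus step plus local descent
    g_new = np.array([local_grad(i, x[i]) for i in range(n_nodes)])
    y = W @ y + g_new - g_old          # track the average of local gradients

# All rows of x converge to the minimizer of (1/n) * sum_i f_i.
print(np.round(x, 4))
```

The tracker update preserves the invariant that the average of the y_i equals the average of the local gradients, which is what lets every node descend along an estimate of the global gradient while only ever communicating with its neighbors.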
