Paper Title

Federated Learning via Inexact ADMM

Authors

Shenglong Zhou, Geoffrey Ye Li

Abstract

One of the crucial issues in federated learning is how to develop efficient optimization algorithms. Most of the current ones require full device participation and/or impose strong assumptions for convergence. Different from the widely-used gradient descent-based algorithms, in this paper, we develop an inexact alternating direction method of multipliers (ADMM), which is both computation- and communication-efficient, capable of combating the stragglers' effect, and convergent under mild conditions. Furthermore, it has a high numerical performance compared with several state-of-the-art algorithms for federated learning.
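To illustrate the general idea behind ADMM-based federated learning, the following is a minimal sketch of consensus ADMM with inexact local updates on a toy least-squares problem. This is an assumption-laden illustration, not the paper's actual algorithm: the local losses `f_i`, the penalty `rho`, the local step size `lr`, and the number of local gradient steps `local_steps` are all hypothetical choices made here for the example.

```python
import numpy as np

# Hypothetical toy setup: each client i holds a private least-squares loss
# f_i(x) = 0.5 * ||A_i x - b_i||^2; the consensus problem is
#   min  sum_i f_i(x_i)   s.t.  x_i = z  for all clients i.
rng = np.random.default_rng(0)
m, d, n_clients = 20, 5, 4
A = [rng.standard_normal((m, d)) for _ in range(n_clients)]
b = [rng.standard_normal(m) for _ in range(n_clients)]

rho, lr, local_steps = 10.0, 0.01, 10          # assumed hyper-parameters
x = [np.zeros(d) for _ in range(n_clients)]    # local primal variables
y = [np.zeros(d) for _ in range(n_clients)]    # dual variables
z = np.zeros(d)                                # global (server) model

for _ in range(300):
    # Local step on each device: a few gradient descent steps on the
    # augmented Lagrangian instead of an exact minimization ("inexact").
    for i in range(n_clients):
        for _ in range(local_steps):
            grad = A[i].T @ (A[i] @ x[i] - b[i]) + y[i] + rho * (x[i] - z)
            x[i] -= lr * grad
    # Server step: the z-minimizer of the augmented Lagrangian is the
    # average of x_i + y_i / rho over the participating clients.
    z = np.mean([x[i] + y[i] / rho for i in range(n_clients)], axis=0)
    # Dual ascent on each device.
    for i in range(n_clients):
        y[i] += rho * (x[i] - z)

# The global model z should approach the centralized least-squares solution.
x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
```

Replacing the exact local subproblem solve with a handful of gradient steps is what makes the scheme "inexact" and keeps per-round device computation cheap; only `x_i` (and implicitly `y_i`) need to be communicated each round, and clients that skip a round can simply keep their stale variables, which is how ADMM-style methods tolerate stragglers.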
