Paper Title
EF-BV: A Unified Theory of Error Feedback and Variance Reduction Mechanisms for Biased and Unbiased Compression in Distributed Optimization
Paper Authors
Paper Abstract
In distributed or federated optimization and learning, communication between the different computing units is often the bottleneck, and gradient compression is widely used to reduce the number of bits sent within each communication round of iterative methods. There are two classes of compression operators and separate algorithms making use of them. In the case of unbiased random compressors with bounded variance (e.g., rand-k), the DIANA algorithm of Mishchenko et al. (2019), which implements a variance-reduction technique to handle the variance introduced by compression, is the current state of the art. In the case of biased and contractive compressors (e.g., top-k), the EF21 algorithm of Richtárik et al. (2021), which instead implements an error-feedback mechanism, is the current state of the art. These two classes of compression schemes and algorithms are distinct, with different analyses and proof techniques. In this paper, we unify them into a single framework and propose a new algorithm that recovers DIANA and EF21 as particular cases. Our general approach works with a new, larger class of compressors, which has two parameters, the bias and the variance, and includes unbiased and biased compressors as particular cases. This allows us to inherit the best of both worlds: like EF21 and unlike DIANA, biased compressors such as top-k, whose good performance in practice is well recognized, can be used; and like DIANA and unlike EF21, independent randomness at the compressors makes it possible to mitigate the effects of compression, with the convergence rate improving as the number of parallel workers grows. This is the first time an algorithm with all these features has been proposed. We prove its linear convergence under certain conditions. Our approach takes a step towards a better understanding of the two so-far distinct worlds of communication-efficient distributed learning.
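For concreteness, here is a minimal Python/NumPy sketch (not part of the paper) of the two compressors named in the abstract: the unbiased rand-k operator handled by DIANA and the biased, contractive top-k operator handled by EF21. The function names and the d/k rescaling convention are illustrative assumptions; the paper's unified compressor class, parameterized by bias and variance, contains both as special cases.

```python
import numpy as np

def rand_k(x: np.ndarray, k: int, rng: np.random.Generator) -> np.ndarray:
    """Unbiased rand-k compressor: keep k coordinates chosen uniformly at random
    and rescale by d/k, so that E[C(x)] = x, with variance
    E||C(x) - x||^2 = (d/k - 1) ||x||^2."""
    d = x.size
    out = np.zeros_like(x)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = (d / k) * x[idx]
    return out

def top_k(x: np.ndarray, k: int) -> np.ndarray:
    """Biased, contractive top-k compressor: keep the k largest-magnitude
    coordinates and zero the rest; deterministically
    ||C(x) - x||^2 <= (1 - k/d) ||x||^2."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

# Minimal usage: compress a gradient-like vector with both operators.
rng = np.random.default_rng(0)
g = rng.standard_normal(10)
print(rand_k(g, k=3, rng=rng))
print(top_k(g, k=3))
```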