Paper Title
Byzantine-resilient Decentralized Stochastic Gradient Descent
Paper Authors
Paper Abstract
Decentralized learning has gained great popularity as a way to improve learning efficiency and preserve data privacy. Each computing node makes an equal contribution to collaboratively learning a Deep Learning model. Eliminating the centralized Parameter Server (PS) effectively addresses many issues, such as privacy risks, performance bottlenecks, and single points of failure. However, how to achieve Byzantine Fault Tolerance in decentralized learning systems is rarely explored, although this problem has been extensively studied in centralized systems. In this paper, we present an in-depth study of the Byzantine resilience of decentralized learning systems, with two contributions. First, from the adversarial perspective, we theoretically illustrate that Byzantine attacks are more dangerous and feasible in decentralized learning systems: even one malicious participant can arbitrarily alter the models of other participants by sending carefully crafted updates to its neighbors. Second, from the defense perspective, we propose UBAR, a novel algorithm to enhance decentralized learning with Byzantine Fault Tolerance. Specifically, UBAR provides a Uniform Byzantine-resilient Aggregation Rule for benign nodes to select the useful parameter updates and filter out the malicious ones in each training iteration. It guarantees that each benign node in a decentralized system can train a correct model under very strong Byzantine attacks with an arbitrary number of faulty nodes. We conduct extensive experiments on standard image classification tasks, and the results indicate that UBAR can effectively defeat both simple and sophisticated Byzantine attacks with higher performance efficiency than existing solutions.
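The abstract does not spell out UBAR's selection rule. The sketch below is a minimal, hypothetical illustration of where a Byzantine-resilient aggregation rule plugs into one iteration of decentralized SGD: a benign node filters its neighbors' parameter vectors with a generic distance-based rule before averaging and taking a local gradient step. The filtering rule, the function name `byzantine_resilient_step`, and all parameters are assumptions made for this example, not the paper's algorithm or API.

```python
# Hypothetical sketch (NumPy only) of one decentralized SGD iteration with a
# generic distance-based filter on neighbor updates. This is NOT the UBAR rule
# from the paper; it only shows where such an aggregation rule would sit.
import numpy as np


def local_gradient(params: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Placeholder stochastic gradient on the node's private data (toy quadratic loss + noise)."""
    return params + rng.normal(scale=0.1, size=params.shape)


def byzantine_resilient_step(own: np.ndarray,
                             neighbor_params: list[np.ndarray],
                             keep_ratio: float,
                             lr: float,
                             rng: np.random.Generator) -> np.ndarray:
    """One training iteration of a benign node.

    1. Rank neighbors by Euclidean distance to the node's own parameters and keep
       only the closest `keep_ratio` fraction (crafted Byzantine updates tend to
       lie far from the benign consensus).
    2. Average the surviving neighbor parameters with the node's own parameters.
    3. Take a local SGD step from the aggregated parameters.
    """
    dists = [np.linalg.norm(p - own) for p in neighbor_params]
    n_keep = max(1, int(keep_ratio * len(neighbor_params)))
    kept = [neighbor_params[i] for i in np.argsort(dists)[:n_keep]]
    aggregated = np.mean(np.stack([own] + kept), axis=0)
    return aggregated - lr * local_gradient(aggregated, rng)


# Toy usage: one benign node with three benign neighbors and one attacker
# broadcasting an arbitrary (huge) parameter vector.
rng = np.random.default_rng(0)
own = np.zeros(5)
neighbors = [rng.normal(size=5) * 0.01 for _ in range(3)] + [np.full(5, 1e6)]
own = byzantine_resilient_step(own, neighbors, keep_ratio=0.5, lr=0.1, rng=rng)
print(own)  # the 1e6 update is filtered out, so the parameters stay near the benign region
```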