Paper Title
FedNNNN: Norm-Normalized Neural Network Aggregation for Fast and Accurate Federated Learning
Paper Authors
Paper Abstract
Federated learning (FL) is a distributed learning protocol in which a server aggregates a set of models learned by independent clients in order to proceed with the learning process. At present, model averaging, known as FedAvg, is one of the most widely adopted aggregation techniques. However, it is known to yield models with degraded prediction accuracy and slow convergence. In this work, we find that averaging models from different clients significantly diminishes the norm of the update vectors, resulting in slow learning and low prediction accuracy. Therefore, we propose a new aggregation method called FedNNNN. Instead of simple model averaging, we adjust the norm of the update vector and introduce momentum control techniques to improve the aggregation effectiveness of FL. As a demonstration, we evaluate FedNNNN on multiple datasets and scenarios with different neural network models, and observe accuracy improvements of up to 5.4%.
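
To make the norm-shrinkage intuition concrete, below is a minimal NumPy sketch contrasting plain FedAvg with a norm-normalized aggregation step. It is an illustration under stated assumptions, not the paper's exact algorithm: the function names (`fedavg`, `norm_normalized_aggregate`) and the specific rescaling rule (scaling the averaged update back to the mean norm of the individual client updates) are hypothetical, and the paper's momentum control technique is omitted.

```python
import numpy as np

def fedavg(client_weights):
    """Plain FedAvg: element-wise average of the client models."""
    return np.mean(client_weights, axis=0)

def norm_normalized_aggregate(global_weights, client_weights):
    """Hypothetical sketch of norm-normalized aggregation.

    Averaging the per-client update vectors shrinks their norm when
    they point in different directions; here the averaged update is
    rescaled to the mean norm of the individual client updates before
    being applied. The paper's momentum control is omitted.
    """
    updates = [w - global_weights for w in client_weights]
    avg_update = np.mean(updates, axis=0)
    mean_norm = np.mean([np.linalg.norm(u) for u in updates])
    avg_norm = np.linalg.norm(avg_update)
    if avg_norm > 0:  # avoid division by zero when updates cancel out
        avg_update *= mean_norm / avg_norm
    return global_weights + avg_update

# Toy usage: three clients, each holding a 4-parameter model.
rng = np.random.default_rng(0)
g = np.zeros(4)
clients = [g + rng.normal(size=4) for _ in range(3)]
print("FedAvg:          ", fedavg(clients))
print("Norm-normalized: ", norm_normalized_aggregate(g, clients))
```

Running the toy example shows that the norm-normalized result points in the same direction as the FedAvg update but takes a longer step, which is the effect the abstract attributes to adjusting the update-vector norm.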