Paper Title

Decentralized Event-Triggered Federated Learning with Heterogeneous Communication Thresholds

Paper Authors

Shahryar Zehtabi, Seyyedali Hosseinalipour, Christopher G. Brinton

Paper Abstract

A recent emphasis of distributed learning research has been on federated learning (FL), in which model training is conducted by the data-collecting devices. Existing research on FL has mostly focused on a star topology learning architecture with synchronized (time-triggered) model training rounds, where the local models of the devices are periodically aggregated by a centralized coordinating node. However, in many settings, such a coordinating node may not exist, motivating efforts to fully decentralize FL. In this work, we propose a novel methodology for distributed model aggregations via asynchronous, event-triggered consensus iterations over the network graph topology. We consider heterogeneous communication event thresholds at each device that weigh the change in local model parameters against the available local resources in deciding the benefit of aggregations at each iteration. Through theoretical analysis, we demonstrate that our methodology achieves asymptotic convergence to the globally optimal learning model under standard assumptions in distributed learning and graph consensus literature, and without restrictive connectivity requirements on the underlying topology. Subsequent numerical results demonstrate that our methodology obtains substantial improvements in communication requirements compared with FL baselines.
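The event-triggered mechanism described in the abstract lends itself to a compact illustration. Below is a minimal Python sketch of one device's round under such a scheme, assuming a drift-versus-threshold trigger; the function names, the Euclidean drift norm, and the inverse resource scaling are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def local_sgd_step(theta, grad_fn, lr=0.1):
        # One gradient step on the device's own local data.
        return theta - lr * grad_fn(theta)

    def should_broadcast(theta, theta_last_sent, resource_budget, base_threshold=0.01):
        # Heterogeneous event trigger (illustrative): communicate only when the
        # model's drift since the last broadcast outweighs a threshold that
        # rises as local resources become scarcer.
        drift = np.linalg.norm(theta - theta_last_sent)
        return drift > base_threshold / max(resource_budget, 1e-8)

    def consensus_mix(theta, neighbor_models, mix_weight=0.5):
        # Consensus iteration: average the local model with whatever the
        # neighbors broadcast this round (possibly nothing, if their
        # triggers did not fire).
        if not neighbor_models:
            return theta
        return (1 - mix_weight) * theta + mix_weight * np.mean(neighbor_models, axis=0)

    # Hypothetical single-device round on a toy quadratic objective.
    theta = np.zeros(3)
    theta_sent = np.zeros(3)
    grad_fn = lambda t: 2.0 * (t - np.ones(3))
    theta = local_sgd_step(theta, grad_fn)
    if should_broadcast(theta, theta_sent, resource_budget=1.0):
        theta_sent = theta.copy()  # device would broadcast to its neighbors here

Because each device evaluates its own threshold, communication is asynchronous: well-resourced devices share updates often, while constrained devices stay silent until their model has drifted enough to justify the cost.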
