Paper Title
Federated Continual Learning through distillation in pervasive computing
Paper Authors
Paper Abstract
Federated Learning (FL) has been introduced as a new machine learning paradigm that enhances the use of local devices. At the server level, FL regularly aggregates models learned locally on distributed clients to obtain a more general model. Current solutions rely on the availability of large amounts of stored data on the client side in order to fine-tune the models sent by the server. Such a setting is not realistic in mobile pervasive computing, where data storage must be kept low and data characteristics can change dramatically. To account for this variability, a solution is to use the data regularly collected by the client to progressively adapt the received model. However, such a naive approach exposes clients to the well-known problem of catastrophic forgetting. To address this problem, we have defined a Federated Continual Learning approach that is mainly based on distillation. Our approach allows better use of resources, eliminating the need to retrain from scratch when new data arrives and reducing memory usage by limiting the amount of data to be stored. This proposal has been evaluated in the Human Activity Recognition (HAR) domain and has been shown to effectively reduce the catastrophic forgetting effect.
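The abstract describes a client-side distillation mechanism for limiting catastrophic forgetting during local adaptation. The following is a minimal sketch, assuming a PyTorch setup, of how such a distillation term can keep a locally adapted model close to the model previously received from the server; the function name, the temperature/alpha hyperparameters, and the exact loss weighting are illustrative assumptions and not the authors' published method.

```python
# Sketch only: local adaptation step that combines cross-entropy on newly
# collected samples with a distillation loss anchored to the frozen model
# previously received from the server (to limit catastrophic forgetting).
import copy
import torch
import torch.nn.functional as F

def local_distillation_update(model, new_data_loader, optimizer,
                              temperature=2.0, alpha=0.5):
    """One local adaptation pass (hyperparameters are illustrative)."""
    teacher = copy.deepcopy(model)  # snapshot of the model before adaptation
    teacher.eval()
    model.train()
    for x, y in new_data_loader:
        optimizer.zero_grad()
        student_logits = model(x)
        with torch.no_grad():
            teacher_logits = teacher(x)
        # Supervised loss on the freshly collected, labeled samples
        ce = F.cross_entropy(student_logits, y)
        # Soft-target distillation term (KL divergence at temperature T)
        kd = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=1),
            F.softmax(teacher_logits / temperature, dim=1),
            reduction="batchmean",
        ) * temperature ** 2
        # alpha balances learning from new data vs. preserving prior knowledge
        loss = alpha * ce + (1 - alpha) * kd
        loss.backward()
        optimizer.step()
```

In this sketch the teacher is simply a frozen copy of the model taken before local adaptation, so no extra data needs to be stored to compute the distillation term, which is consistent with the abstract's goal of keeping on-device memory usage low.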