Paper Title
Federated Unlearning for On-Device Recommendation
Paper Authors
Paper Abstract
The increasing data privacy concerns in recommendation systems have drawn growing attention to federated recommender systems (FedRecs). Existing FedRecs mainly focus on how to effectively and securely learn personal interests and preferences from users' on-device interaction data. However, none of them considers how to efficiently erase a user's contribution to the federated training process. We argue that such a dual setting, unlearning alongside learning, is necessary. First, from the privacy protection perspective, "the right to be forgotten" requires that users be able to withdraw their data contributions; without this reversibility, FedRecs risk violating data protection regulations. Second, enabling a FedRec to forget specific users can improve its robustness and resistance to attacks from malicious clients. To support user unlearning in FedRecs, we propose an efficient unlearning method, FRU (Federated Recommendation Unlearning), inspired by the log-based rollback mechanism of transactions in database management systems. FRU removes a user's contribution by rolling back and calibrating the historical parameter updates, and then uses the calibrated updates to speed up the reconstruction of the federated recommender. However, storing all historical parameter updates on resource-constrained personal devices is challenging and often infeasible. In light of this challenge, we propose a small-sized negative sampling method to reduce the number of item embedding updates, and an importance-based update selection mechanism to store only the important model updates. To evaluate the effectiveness of FRU, we design an attack method that disturbs FedRecs via a group of compromised users, and then use FRU to recover the recommenders by eliminating those users' influence. Finally, we conduct experiments on two real-world recommendation datasets with two widely used FedRecs to show the efficiency and effectiveness of the proposed approaches.
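The abstract only sketches FRU at a high level. Below is a minimal, hypothetical Python/NumPy illustration of the two ideas it mentions: importance-based selection of which update entries a device stores, and rolling back the logged training history while skipping a forgotten user's contributions. The function names (select_important_updates, unlearn_by_rollback), the magnitude-based notion of importance, the FedAvg-style mean aggregation, and the single calibration scaling factor are all assumptions made for illustration; they are not the paper's actual algorithm or code.

```python
# Hypothetical sketch (not the authors' implementation) of log-based
# federated unlearning: clients log compact ("important") updates, and
# the server rebuilds the model from the log without the forgotten user.
import numpy as np


def select_important_updates(update: np.ndarray, keep_ratio: float = 0.1) -> np.ndarray:
    """Keep only the largest-magnitude entries of a parameter update
    (a simple stand-in for importance-based update selection), so that
    resource-constrained devices store a sparse, compact log entry."""
    flat = np.abs(update).ravel()
    k = max(1, int(keep_ratio * flat.size))
    threshold = np.partition(flat, -k)[-k]
    return np.where(np.abs(update) >= threshold, update, 0.0)


def unlearn_by_rollback(initial_params: np.ndarray,
                        update_log: list[dict[int, np.ndarray]],
                        forget_users: set[int],
                        calibration: float = 1.0) -> np.ndarray:
    """Reconstruct the global model from the logged per-round updates,
    skipping the contributions of `forget_users` and rescaling the
    remaining updates by a single factor as a crude proxy for FRU's
    calibration step."""
    params = initial_params.copy()
    for round_updates in update_log:  # one dict {user_id: update} per round
        kept = [u for uid, u in round_updates.items() if uid not in forget_users]
        if not kept:
            continue
        params += calibration * np.mean(kept, axis=0)  # FedAvg-style aggregation
    return params


# Toy usage: 3 users, 2 training rounds, then unlearn user 2.
rng = np.random.default_rng(0)
theta0 = np.zeros(8)
log = [{uid: select_important_updates(rng.normal(size=8)) for uid in range(3)}
       for _ in range(2)]
theta_unlearned = unlearn_by_rollback(theta0, log, forget_users={2})
print(theta_unlearned)
```

In the paper, the calibration of historical updates is more involved than a single scaling factor; this sketch is only meant to convey the log-and-rollback structure that makes reconstruction faster than retraining from scratch.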