Paper Title

A Secure and Efficient Federated Learning Framework for NLP

Paper Authors

Jieren Deng, Chenghong Wang, Xianrui Meng, Yijue Wang, Ji Li, Sheng Lin, Shuo Han, Fei Miao, Sanguthevar Rajasekaran, Caiwen Ding

Paper Abstract

In this work, we consider the problem of designing secure and efficient federated learning (FL) frameworks. Existing solutions either involve a trusted aggregator or require heavyweight cryptographic primitives, which degrades performance significantly. Moreover, many existing secure FL designs work only under the restrictive assumption that none of the clients can drop out of the training protocol. To tackle these problems, we propose SEFL, a secure and efficient FL framework that (1) eliminates the need for trusted entities; (2) achieves model accuracy similar to, or even better than, existing FL designs; and (3) is resilient to client dropouts. Through extensive experimental studies on natural language processing (NLP) tasks, we demonstrate that SEFL achieves accuracy comparable to existing FL solutions, and that the proposed pruning technique can improve runtime performance by up to 13.7x.
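The abstract does not spell out the pruning technique, but the general idea of pruning client updates before aggregation to cut communication and computation cost can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the magnitude-based pruning rule, the helper names prune_update and federated_average, and the 90% sparsity level are hypothetical and are not SEFL's published method; in SEFL the aggregation step would additionally run under a secure protocol without a trusted aggregator, rather than in the clear as shown here.

```python
# Hypothetical sketch: magnitude-based pruning of client updates before
# plain federated averaging. Not SEFL's actual algorithm.
import numpy as np

def prune_update(update: np.ndarray, sparsity: float = 0.9) -> np.ndarray:
    """Keep only the largest-magnitude (1 - sparsity) fraction of entries."""
    k = int(update.size * (1.0 - sparsity))  # number of entries to keep
    if k <= 0:
        return np.zeros_like(update)
    # The k-th largest absolute value serves as the pruning threshold.
    threshold = np.partition(np.abs(update).ravel(), -k)[-k]
    return np.where(np.abs(update) >= threshold, update, 0.0)

def federated_average(client_updates):
    """Average the (pruned) client updates. In SEFL this aggregation would
    run under a secure protocol (no trusted aggregator), not in the clear."""
    return np.mean(client_updates, axis=0)

# One illustrative round with three simulated clients.
rng = np.random.default_rng(0)
updates = [rng.standard_normal(1000) for _ in range(3)]
pruned = [prune_update(u, sparsity=0.9) for u in updates]
global_update = federated_average(pruned)
print(f"nonzero fraction after pruning: "
      f"{np.count_nonzero(pruned[0]) / pruned[0].size:.2f}")
```

Sparse updates of this kind shrink the per-round payload each client sends, which is one plausible source of the runtime gains the abstract attributes to pruning.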
