Paper Title

FedGBF: An efficient vertical federated learning framework via gradient boosting and bagging

Paper Authors

Yujin Han, Pan Du, Kai Yang

Paper Abstract

Federated learning, which helps address data privacy and security concerns, has attracted increasing attention recently. However, existing federated boosting models sequentially build decision trees as weak base learners, resulting in redundant boosting rounds and high interactive communication costs. In contrast, federated bagging models save time by building multiple decision trees in parallel, but they suffer a performance loss. To obtain outstanding performance at a lower time cost, we propose a novel model for the vertically federated setting, termed Federated Gradient Boosting Forest (FedGBF). FedGBF integrates the advantages of both boosting and bagging by building decision trees in parallel as the base learner for each boosting round. With FedGBF, however, the problem of hyperparameter tuning arises. We therefore propose Dynamic FedGBF, which dynamically changes each forest's parameters and thus reduces complexity. Finally, experiments on benchmark datasets demonstrate the superiority of our method.
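
To make the core mechanism concrete, the sketch below is a minimal, centralized (non-federated) illustration of the idea the abstract describes: gradient boosting whose base learner at each round is a bagged forest of decision trees, with the trees inside a round trained in parallel. It uses scikit-learn's RandomForestRegressor as the forest base learner and logistic loss; the function names (fit_gbf, predict_gbf) and hyperparameter values are illustrative assumptions, and the paper's vertical federated protocol (parties holding disjoint feature sets, privacy-preserving split finding) is not reproduced here.

```python
# Minimal centralized sketch of the FedGBF idea: boosting over forest base
# learners. NOT the paper's federated implementation; names and
# hyperparameters below are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestRegressor

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_gbf(X, y, n_rounds=5, trees_per_forest=20, learning_rate=0.3):
    """Binary classification with logistic loss: each boosting round fits
    a whole forest (not a single tree) to the negative gradient."""
    forests = []
    f = np.zeros(len(y))                 # additive raw scores
    for _ in range(n_rounds):
        residual = y - sigmoid(f)        # negative gradient of logistic loss
        forest = RandomForestRegressor(
            n_estimators=trees_per_forest,
            max_depth=4,
            n_jobs=-1,                   # trees within a round built in parallel
        ).fit(X, residual)
        f += learning_rate * forest.predict(X)
        forests.append(forest)
    return forests

def predict_gbf(forests, X, learning_rate=0.3):
    f = sum(learning_rate * forest.predict(X) for forest in forests)
    return (sigmoid(f) > 0.5).astype(int)

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = fit_gbf(X, y)
print("train accuracy:", (predict_gbf(model, X) == y).mean())
```

Because each base learner is a forest rather than a single weak tree, far fewer boosting rounds are needed, which is what reduces the number of interactive communication rounds in the federated setting; Dynamic FedGBF additionally varies each round's forest hyperparameters (e.g., trees_per_forest above) rather than fixing them globally.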
