Paper Title
2CP: Decentralized Protocols to Transparently Evaluate Contributivity in Blockchain Federated Learning Environments
Paper Authors
Paper Abstract
Federated Learning harnesses data from multiple sources to build a single model. While the initial model might belong solely to the actor bringing it to the network for training, determining the ownership of the trained model resulting from Federated Learning remains an open question. In this paper, we explore how Blockchains (in particular Ethereum) can be used to determine the evolving ownership of a model trained with Federated Learning. Firstly, we use the step-by-step evaluation metric to assess the relative contributivities of participants in a Federated Learning process. Next, we introduce 2CP, a framework comprising two novel protocols for Blockchained Federated Learning, both of which reward contributors with shares in the final model based on their relative contributivity. The Crowdsource Protocol allows an actor to bring a model forward for training and to use their own data to evaluate the contributions made to it. Potential trainers are guaranteed a fair share of the resulting model, even in a trustless setting. The Consortium Protocol gives trainers the same guarantee even when no party owns the initial model and no evaluator is available. We conduct experiments with the MNIST dataset that reveal sound contributivity scores from both protocols, rewarding larger datasets with greater shares in the model. Our experiments also show the necessity of pairing 2CP with a robust model aggregation mechanism to discard low-quality inputs coming from model poisoning attacks.
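
To illustrate the kind of contributivity accounting the abstract describes, the following is a minimal Python sketch, not taken from the paper: it assumes the step-by-step evaluation metric credits each trainer with the reduction in the evaluator's holdout loss attributable to that trainer's update in each round, and that shares in the final model are the normalised cumulative credits. All function names and numbers are illustrative.

# Hypothetical sketch (not from the paper): step-by-step contributivity scoring.
# Assumption: the evaluator credits each trainer with the holdout-loss improvement
# its round update yields, and final shares are the normalised cumulative credits.

def evaluate_round(holdout_loss_before, holdout_loss_after_update):
    """Score each trainer's update by the holdout-loss reduction it produces."""
    return {trainer: max(holdout_loss_before - loss_after, 0.0)
            for trainer, loss_after in holdout_loss_after_update.items()}

def contributivity_shares(per_round_scores):
    """Accumulate per-round scores and normalise them into model shares."""
    totals = {}
    for round_scores in per_round_scores:
        for trainer, score in round_scores.items():
            totals[trainer] = totals.get(trainer, 0.0) + score
    grand_total = sum(totals.values()) or 1.0
    return {trainer: score / grand_total for trainer, score in totals.items()}

# Example: trainer "A" (larger dataset) reduces the holdout loss more each
# round, so it ends up with the larger share of the final model.
rounds = [
    evaluate_round(2.30, {"A": 1.90, "B": 2.10}),   # round 1 credits: A=0.40, B=0.20
    evaluate_round(1.95, {"A": 1.60, "B": 1.85}),   # round 2 credits: A=0.35, B=0.10
]
print(contributivity_shares(rounds))  # roughly {'A': 0.71, 'B': 0.29}

Under this assumption, a trainer holding a larger dataset tends to produce larger per-round improvements and therefore a larger share, which is consistent with the behaviour reported in the experiments.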