Paper Title
Collaborative Machine Learning with Incentive-Aware Model Rewards
Paper Authors
Paper Abstract
Collaborative machine learning (ML) is an appealing paradigm to build high-quality ML models by training on the aggregated data from many parties. However, these parties are only willing to share their data when given enough incentives, such as a guaranteed fair reward based on their contributions. This motivates the need for measuring a party's contribution and designing an incentive-aware reward scheme accordingly. This paper proposes to value a party's reward based on Shapley value and information gain on model parameters given its data. Subsequently, we give each party a model as a reward. To formally incentivize the collaboration, we define some desirable properties (e.g., fairness and stability) which are inspired by cooperative game theory but adapted for our model reward, which is uniquely freely replicable. Then, we propose a novel model reward scheme to satisfy fairness and trade off between the desirable properties via an adjustable parameter. The value of each party's model reward determined by our scheme is attained by injecting Gaussian noise into the aggregated training data with an optimized noise variance. We empirically demonstrate interesting properties of our scheme and evaluate its performance using synthetic and real-world datasets.
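To make the contribution-valuation idea concrete, below is a minimal sketch (not the authors' implementation) of computing each party's exact Shapley value under an illustrative coalition value function `v(C)`, standing in for the information gain on model parameters from training on the pooled data of coalition `C`. The `info_gain` surrogate (a log-determinant of accumulated data covariance) and the party names are hypothetical placeholders introduced only for illustration.

```python
from itertools import combinations
from math import factorial

import numpy as np


def shapley_values(parties, v):
    """Exact Shapley value of each party under a coalition value function v: set -> float."""
    n = len(parties)
    phi = {p: 0.0 for p in parties}
    for p in parties:
        others = [q for q in parties if q != p]
        for k in range(n):
            for coalition in combinations(others, k):
                s = set(coalition)
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of party p to coalition S
                phi[p] += weight * (v(s | {p}) - v(s))
    return phi


if __name__ == "__main__":
    # Hypothetical per-party datasets of different sizes.
    rng = np.random.default_rng(0)
    data = {p: rng.normal(size=(20 * (i + 1), 3)) for i, p in enumerate("ABC")}

    def info_gain(coalition):
        # Illustrative surrogate for information gain: log det(I + X^T X)
        # over the coalition's pooled data X (not the paper's exact formula).
        if not coalition:
            return 0.0
        X = np.vstack([data[p] for p in coalition])
        _, logdet = np.linalg.slogdet(np.eye(X.shape[1]) + X.T @ X)
        return logdet

    print(shapley_values(list("ABC"), info_gain))
```

Under this kind of valuation, parties contributing more informative data receive larger Shapley values; the paper's scheme then maps these values to model rewards, e.g., by adding Gaussian noise with a suitably optimized variance to the aggregated training data so that each party's reward matches its assigned value.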