Paper Title
Differentially Private ADMM for Convex Distributed Learning: Improved Accuracy via Multi-Step Approximation
Paper Authors
Paper Abstract
Alternating Direction Method of Multipliers (ADMM) is a popular algorithm for distributed learning, in which a network of nodes collaboratively solves a regularized empirical risk minimization problem via iterative local computation on distributed data and iterate exchanges. When the training data is sensitive, the exchanged iterates raise serious privacy concerns. In this paper, we propose a new differentially private distributed ADMM algorithm with improved accuracy for a wide range of convex learning problems. In the proposed algorithm, we adopt an approximation of the objective function in the local computation to robustly introduce calibrated noise into the iterate updates, and we allow multiple primal-variable updates per node in each iteration. Our theoretical results demonstrate that this approach attains higher utility through such multiple approximate updates, and achieves error bounds asymptotic to the state-of-the-art ones for differentially private empirical risk minimization.
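To make the abstract's scheme concrete, below is a minimal NumPy sketch of the general flavor of algorithm it describes: consensus ADMM for a ridge-regularized least-squares problem, where each node performs several approximate (gradient-based) primal updates per outer iteration and perturbs its iterate with Gaussian noise before exchanging it. This is an illustrative assumption, not the paper's actual method; the names (`dp_admm`, `sigma`, `inner`, `eta`) and the fixed noise scale are hypothetical, and a real implementation would calibrate `sigma` to the update's sensitivity and the privacy budget.

```python
import numpy as np

def local_grad(X, y, w, lam):
    """Gradient of a node's regularized least-squares objective."""
    return X.T @ (X @ w - y) / len(y) + lam * w

def dp_admm(data, lam=0.1, rho=1.0, sigma=0.05,
            outer=50, inner=5, eta=0.1, seed=0):
    """Sketch of consensus ADMM with noisy multi-step primal updates.

    data  : list of (X_i, y_i) pairs, one per node
    sigma : hypothetical noise scale standing in for privacy calibration
    inner : number of approximate primal updates per node per iteration
    """
    rng = np.random.default_rng(seed)
    d = data[0][0].shape[1]
    n = len(data)
    w = [np.zeros(d) for _ in range(n)]   # local primal variables
    u = [np.zeros(d) for _ in range(n)]   # scaled dual variables
    z = np.zeros(d)                       # global consensus variable
    for _ in range(outer):
        for i, (X, y) in enumerate(data):
            # multiple approximate primal updates per node per iteration
            for _ in range(inner):
                g = local_grad(X, y, w[i], lam) + rho * (w[i] - z + u[i])
                w[i] -= eta * g
            # Gaussian noise on the iterate before it is exchanged
            w[i] += rng.normal(0.0, sigma, size=d)
        # consensus (z) update, then dual ascent
        z = np.mean([w[i] + u[i] for i in range(n)], axis=0)
        for i in range(n):
            u[i] += w[i] - z
    return z

# Example usage on synthetic data split across 4 nodes:
rng = np.random.default_rng(1)
data = [(rng.normal(size=(40, 3)), rng.normal(size=40)) for _ in range(4)]
w_hat = dp_admm(data)
```

The inner loop is where the abstract's multi-step approximation enters: rather than solving each node's augmented-Lagrangian subproblem exactly, the node takes several cheap gradient steps, which is what allows noise to be injected into the iterate updates in a controlled way.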