Paper Title
Secure Distributed/Federated Learning: Prediction-Privacy Trade-Off for Multi-Agent System
Paper Authors
Paper Abstract
Decentralized learning is an efficient emerging paradigm for boosting the computing capability of multiple computation-bounded agents. In the big-data era, when inference is performed within distributed and federated learning (DL and FL) frameworks, a central server must process large amounts of data while relying on various agents to carry out multiple distributed training tasks. Given this decentralized computing topology, privacy becomes a first-class concern. Moreover, the limited information-processing capability of the agents calls for a sophisticated \textit{privacy-preserving decentralization} that still ensures efficient computation. To this end, we study the \textit{privacy-aware server-to-multi-agent assignment} problem subject to the information-processing constraints of each agent, preserving privacy while ensuring that the messages agents receive about a global terminal remain informative for learning, via a distributed private federated learning (DPFL) approach. To obtain a decentralized scheme for a two-agent system, we formulate an optimization problem that balances privacy against accuracy, accounting for the quality-of-compression constraint associated with each agent. We propose an iterative algorithm that converges by alternating over a set of self-consistent equations. We numerically evaluate the proposed solution, exhibiting the privacy-prediction trade-off and demonstrating the efficacy of the new approach in ensuring privacy in DL and FL.
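The abstract does not spell out the paper's DPFL formulation, so the following is only a loose, hypothetical illustration of what "alternating over self-consistent equations" to trace a prediction-privacy trade-off can look like: a minimal information-bottleneck-style alternating-minimization sketch on a toy discrete source, where a compressed message T must stay informative about a target variable Y while revealing little about the source X. The toy joint distribution `p_xy`, the bottleneck size `n_t`, and the trade-off parameter `beta` are all assumptions for illustration and are not taken from the paper.

```python
import numpy as np

def ib_alternating(p_xy, n_t, beta, n_iter=200, seed=0):
    """Toy information-bottleneck-style alternating minimization (an assumed
    stand-in for the paper's method).  Iterates self-consistent updates for
    the encoder p(t|x), the marginal p(t), and the decoder p(y|t), then
    returns the compression term I(X;T) and the relevance term I(T;Y)."""
    rng = np.random.default_rng(seed)
    n_x, n_y = p_xy.shape
    p_x = p_xy.sum(axis=1)                      # marginal p(x)
    p_y_given_x = p_xy / p_x[:, None]           # conditional p(y|x)

    # Random soft initialization of the encoder p(t|x).
    p_t_given_x = rng.random((n_x, n_t))
    p_t_given_x /= p_t_given_x.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # Self-consistent update 1: cluster marginal p(t).
        p_t = p_x @ p_t_given_x
        # Self-consistent update 2: decoder p(y|t) via Bayes' rule.
        p_ty = (p_t_given_x * p_x[:, None]).T @ p_y_given_x
        p_y_given_t = p_ty / np.maximum(p_t[:, None], 1e-12)
        # Self-consistent update 3: encoder p(t|x) from KL divergences.
        kl = np.array([[np.sum(p_y_given_x[x] *
                               np.log(np.maximum(p_y_given_x[x], 1e-12) /
                                      np.maximum(p_y_given_t[t], 1e-12)))
                        for t in range(n_t)] for x in range(n_x)])
        logits = np.log(np.maximum(p_t, 1e-12))[None, :] - beta * kl
        p_t_given_x = np.exp(logits - logits.max(axis=1, keepdims=True))
        p_t_given_x /= p_t_given_x.sum(axis=1, keepdims=True)

    # Mutual-information terms that trace the trade-off curve.
    p_t = p_x @ p_t_given_x
    p_xt = p_t_given_x * p_x[:, None]
    i_xt = np.sum(p_xt * np.log(np.maximum(p_t_given_x, 1e-12) /
                                np.maximum(p_t, 1e-12)[None, :]))
    p_ty = (p_t_given_x * p_x[:, None]).T @ p_y_given_x
    p_y = p_xy.sum(axis=0)
    i_ty = np.sum(p_ty * np.log(np.maximum(p_ty, 1e-12) /
                                np.maximum(np.outer(p_t, p_y), 1e-12)))
    return p_t_given_x, i_xt, i_ty

# Assumed toy joint distribution over (X, Y); sweeping beta traces the curve.
p_xy = np.array([[0.30, 0.05],
                 [0.25, 0.10],
                 [0.05, 0.25]])
p_xy /= p_xy.sum()
for beta in (0.5, 2.0, 10.0):
    _, i_xt, i_ty = ib_alternating(p_xy, n_t=2, beta=beta)
    print(f"beta={beta:5.1f}  I(X;T)={i_xt:.3f}  I(T;Y)={i_ty:.3f}")
```

Sweeping the assumed trade-off parameter `beta` moves along the curve between strong compression (small I(X;T), low leakage but low prediction quality) and high relevance (large I(T;Y), better prediction at the cost of revealing more), which is the qualitative shape of the prediction-privacy trade-off the abstract describes.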