Paper Title
Privately Learning Markov Random Fields
Paper Authors
Paper Abstract
We consider the problem of learning Markov Random Fields (including the prototypical example, the Ising model) under the constraint of differential privacy. Our learning goals include both structure learning, where we try to estimate the underlying graph structure of the model, as well as the harder goal of parameter learning, in which we additionally estimate the parameter on each edge. We provide algorithms and lower bounds for both problems under a variety of privacy constraints -- namely pure, concentrated, and approximate differential privacy. While non-privately, both learning goals enjoy roughly the same complexity, we show that this is not the case under differential privacy. In particular, only structure learning under approximate differential privacy maintains the non-private logarithmic dependence on the dimensionality of the data, while a change in either the learning goal or the privacy notion would necessitate a polynomial dependence. As a result, we show that the privacy constraint imposes a strong separation between these two learning problems in the high-dimensional data regime.