Paper Title
Learnability and Complexity of Quantum Samples
Paper Authors
Paper Abstract
Given a quantum circuit, a quantum computer can sample the output distribution exponentially faster in the number of bits than classical computers. A similar exponential separation has yet to be established in generative models through quantum sample learning: given samples from an n-qubit computation, can we learn the underlying quantum distribution using models whose number of training parameters scales polynomially in n under a fixed training time? We study four kinds of generative models: Deep Boltzmann Machines (DBMs), Generative Adversarial Networks (GANs), Long Short-Term Memory (LSTM) networks, and Autoregressive GANs, on learning quantum datasets generated by deep random circuits. We demonstrate the leading performance of the LSTM in learning quantum samples, and thus the autoregressive structure present in the underlying quantum distribution of random quantum circuits. Both numerical experiments and a theoretical proof in the case of the DBM show that the number of learning-agent parameters required to achieve a fixed accuracy grows exponentially with n. Finally, we establish a connection between learnability and the complexity of generative models by benchmarking learnability against sets of samples drawn from probability distributions with varying degrees of complexity in their quantum and classical representations.
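To make the quantum-sample-learning setup concrete, below is a minimal sketch (not the paper's implementation) of the autoregressive approach the abstract highlights: an LSTM that models an n-bit measurement distribution as p(x) = prod_i p(x_i | x_<i) and is trained by maximum likelihood on bitstring samples. The class name BitstringLSTM, all hyperparameters, and the random placeholder data are illustrative assumptions; in the paper's setting the samples would be measurement outcomes from a deep random circuit on n qubits.

# Minimal autoregressive LSTM over n-bit strings (illustrative sketch only).
import torch
import torch.nn as nn

class BitstringLSTM(nn.Module):
    """Models p(x) = prod_i p(x_i | x_<i) over bitstrings of length n_bits."""
    def __init__(self, n_bits, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(3, hidden)  # tokens: bit 0, bit 1, start symbol (2)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)      # logits for the next bit
        self.n_bits = n_bits

    def forward(self, bits):                  # bits: (batch, n_bits) in {0, 1}
        start = torch.full((bits.size(0), 1), 2, dtype=torch.long)
        inp = torch.cat([start, bits[:, :-1]], dim=1)  # shift right by one position
        h, _ = self.lstm(self.embed(inp))
        return self.head(h)                   # (batch, n_bits, 2)

n = 12
model = BitstringLSTM(n)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Placeholder training data (uniform random bits); the paper instead trains on
# samples from the output distribution of a deep random quantum circuit.
samples = torch.randint(0, 2, (256, n))

for step in range(100):
    logits = model(samples)
    loss = loss_fn(logits.reshape(-1, 2), samples.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

The autoregressive factorization is the design choice at issue: the LSTM's strong performance reported in the abstract is taken as evidence that such sequential structure is present in the output distributions of random quantum circuits.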