Title
Quantum neural networks
Authors
Abstract
This PhD thesis combines two of the most exciting research areas of the last decades: quantum computing and machine learning. We introduce dissipative quantum neural networks (DQNNs), which are designed for fully quantum learning tasks, are capable of universal quantum computation, and have low memory requirements during training. These networks are optimised with training data pairs in the form of input and desired output states and can therefore be used to characterise unknown or untrusted quantum devices. We not only demonstrate the generalisation behaviour of DQNNs using classical simulations, but also implement them successfully on actual quantum computers. To understand the ultimate limits of such quantum machine learning methods, we discuss the quantum no-free-lunch theorem, which bounds the probability that a quantum device, modelled as a unitary process and optimised with quantum examples, gives an incorrect output for a random input. Moreover, we extend the range of applications of DQNNs in two directions. In the first case, we include additional information beyond the training data pairs: since quantum devices are always structured, the resulting data is always structured as well. We modify the DQNN's training algorithm so that knowledge about the graph structure of the training data pairs is included in the training process, and show that this can lead to better generalisation behaviour. Both the original DQNN and the DQNN including graph structure are trained with data pairs in order to characterise an underlying relation. In the second extension of the algorithm, however, we aim to learn the characteristics of a set of quantum states in order to extend that set to quantum states with similar properties. To this end we build a generative adversarial model in which two DQNNs, called the generator and the discriminator, are trained in a competitive way.
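
For intuition, the following is a minimal, purely illustrative sketch of the kind of learning task described above: an unknown quantum process is characterised from training pairs of input and desired output states by maximising the average fidelity between the model's outputs and the target states. It is not the dissipative QNN architecture or training rule from the thesis; the single-qubit parametrisation, the finite-difference optimiser, and all names (make_unitary, cost, lr, ...) are assumptions made only for this example.

# Illustrative sketch only: characterising an unknown single-qubit unitary
# from (input state, desired output state) training pairs by maximising the
# average fidelity |<target|V(theta)|input>|^2 with finite-difference
# gradient descent. This is not the DQNN algorithm from the thesis.
import numpy as np

rng = np.random.default_rng(0)

def make_unitary(theta):
    """Parametrised single-qubit unitary V(theta) = Rz(a) Ry(b) Rz(c)."""
    a, b, c = theta
    rz = lambda t: np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])
    ry = lambda t: np.array([[np.cos(t / 2), -np.sin(t / 2)],
                             [np.sin(t / 2),  np.cos(t / 2)]])
    return rz(a) @ ry(b) @ rz(c)

# "Unknown device" to characterise, and training pairs (input, desired output).
U_unknown = make_unitary(rng.uniform(0, 2 * np.pi, size=3))
pairs = []
for _ in range(10):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi_in = v / np.linalg.norm(v)
    pairs.append((psi_in, U_unknown @ psi_in))

def cost(theta):
    """Negative mean fidelity of V(theta) over the training pairs."""
    V = make_unitary(theta)
    return -np.mean([abs(np.vdot(psi_out, V @ psi_in)) ** 2
                     for psi_in, psi_out in pairs])

theta = rng.uniform(0, 2 * np.pi, size=3)
eps, lr = 1e-4, 0.5
for _ in range(500):
    grad = np.array([(cost(theta + eps * e) - cost(theta - eps * e)) / (2 * eps)
                     for e in np.eye(3)])
    theta -= lr * grad

print("mean training fidelity:", -cost(theta))

A mean training fidelity close to one indicates that the parametrised model reproduces the unknown process on the training pairs; generalisation, as discussed in the abstract, concerns how well it then acts on states outside the training set.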