Title
Experimental quantum adversarial learning with programmable superconducting qubits
Authors
Abstract
Quantum computing promises to enhance machine learning and artificial intelligence. Different quantum algorithms have been proposed to improve a wide spectrum of machine learning tasks. Yet, recent theoretical works show that, similar to traditional classifiers based on deep classical neural networks, quantum classifiers would suffer from the vulnerability problem: adding tiny, carefully crafted perturbations to legitimate original data samples would facilitate incorrect predictions at a notably high confidence level. This poses serious problems for future quantum machine learning applications in safety- and security-critical scenarios. Here, we report the first experimental demonstration of quantum adversarial learning with programmable superconducting qubits. We train quantum classifiers, built upon variational quantum circuits consisting of ten transmon qubits featuring average lifetimes of 150 μs and average fidelities of simultaneous single- and two-qubit gates above 99.94% and 99.4% respectively, with both real-life images (e.g., medical magnetic resonance imaging scans) and quantum data. We demonstrate that these well-trained classifiers (with testing accuracy up to 99%) can be practically deceived by small adversarial perturbations, whereas an adversarial training process significantly enhances their robustness to such perturbations. Our results experimentally reveal a crucial vulnerability of quantum learning systems under adversarial scenarios and demonstrate an effective defense strategy against adversarial attacks, providing a valuable guide for quantum artificial intelligence applications with both near-term and future quantum devices.
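To make the abstract's notion of "small adversarial perturbations" concrete, below is a minimal sketch of the fast gradient sign method (FGSM), a standard gradient-based attack of the kind the paper alludes to. As assumptions: a toy logistic classifier stands in for the variational quantum classifier, and the weights, sample, and `eps` value are illustrative only, not from the experiment.

```python
import numpy as np

def sigmoid(z):
    """Logistic function mapping a score to a class-1 probability."""
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability that sample x belongs to class 1 under a linear model."""
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturb(w, b, x, y, eps):
    """Fast gradient sign method: step x along the sign of the loss gradient.

    For binary cross-entropy loss L with prediction p, dL/dx = (p - y) * w,
    so the adversarial sample is x + eps * sign((p - y) * w).
    """
    p = predict(w, b, x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy weights and a sample the classifier assigns to class 1 with
# high confidence (hypothetical values for illustration).
w = np.array([2.0, -1.0, 0.5])
b = 0.1
x = np.array([1.0, -1.0, 1.0])
y = 1.0

p_clean = predict(w, b, x)                    # ~0.97: confident, correct
x_adv = fgsm_perturb(w, b, x, y, eps=2.0)
p_adv = predict(w, b, x_adv)                  # ~0.03: prediction flipped
```

The same gradient also drives the defense the abstract mentions: adversarial training repeatedly generates such perturbed samples during training and fits the classifier on them, which is what improves robustness to this attack.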