Paper Title
Benchmarking Adversarially Robust Quantum Machine Learning at Scale
Paper Authors
Paper Abstract
Machine learning (ML) methods such as artificial neural networks are rapidly becoming ubiquitous in modern science, technology, and industry. Despite their accuracy and sophistication, neural networks can be easily fooled by carefully designed malicious inputs known as adversarial attacks. While such vulnerabilities remain a serious challenge for classical neural networks, the extent to which they exist in the quantum ML setting is not fully understood. In this work, we benchmark the robustness of quantum ML networks, such as quantum variational classifiers (QVCs), at scale by performing rigorous training on both simple and complex image datasets and subjecting the trained models to a variety of state-of-the-art adversarial attacks. Our results show that QVCs offer notably enhanced robustness against classical adversarial attacks by learning features that are not detected by classical neural networks, indicating a possible quantum advantage for ML tasks. Remarkably, the converse is not true: attacks on quantum networks are also capable of deceiving classical neural networks. By combining quantum and classical network outcomes, we propose a novel adversarial attack detection technique. Traditionally, quantum advantage in ML systems has been sought through increased accuracy or algorithmic speed-up, but our work reveals the potential for a new kind of quantum advantage through the superior robustness of ML models, whose practical realisation will address serious security concerns and reliability issues of ML algorithms employed in a myriad of applications, including autonomous vehicles, cybersecurity, and surveillance robotic systems.
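To make the two key ingredients of the abstract concrete, here is a minimal Python/PennyLane sketch, not the authors' implementation: a toy QVC (angle encoding plus `StronglyEntanglingLayers`), an FGSM-style perturbation of its input, and the disagreement-based detection rule suggested by "combining quantum and classical network outcomes". The circuit size, the fixed linear stand-in for the classical network, the loss, and the detection rule are all illustrative assumptions; the paper's actual models (deep convolutional networks and much larger QVCs) and detector are not reproduced here.

```python
import pennylane as qml
from pennylane import numpy as np  # autograd-aware NumPy shim

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qvc(weights, x):
    """Toy quantum variational classifier: angle-encode the input features,
    apply trainable entangling layers, and read out a single qubit."""
    qml.AngleEmbedding(x, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))  # expectation in [-1, 1]; its sign is the label

# Fixed (pretend pre-trained) variational weights: 2 layers, 3 angles per qubit.
weights = np.random.uniform(0, 2 * np.pi, size=(2, n_qubits, 3), requires_grad=False)

def qvc_loss(x, y):
    """Squared error between the QVC output and a +/-1 label y."""
    return (qvc(weights, x) - y) ** 2

def fgsm(x, y, eps=0.1):
    """FGSM-style attack: step along the sign of the input gradient of the loss."""
    grad_x = qml.grad(qvc_loss, argnum=0)(x, y)
    return x + eps * np.sign(grad_x)

# Hypothetical stand-in for the classical network: a fixed linear classifier.
w_classical = np.array([0.5, -0.3, 0.8, 0.1], requires_grad=False)

def classical_label(x):
    return np.sign(np.dot(w_classical, x))

def quantum_label(x):
    return np.sign(qvc(weights, x))

def flag_adversarial(x):
    """Disagreement rule: flag x if the classical and quantum labels differ."""
    return classical_label(x) != quantum_label(x)

# Usage: craft a perturbed input and run it through the detector.
x = np.array([0.1, 0.5, 0.9, 0.3], requires_grad=True)
x_adv = fgsm(x, y=1.0, eps=0.25)
print("flagged as adversarial:", flag_adversarial(x_adv))
```

The intuition behind the disagreement rule follows directly from the abstract's findings: since classical adversarial perturbations transfer poorly to QVCs, an attack that flips the classical network's label will often leave the quantum label unchanged, and that mismatch is the detection signal.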