Practical Application Improvements to Quantum SVM: Theory to Practice
Abstract
Quantum machine learning (QML) has emerged as an important area for quantum applications, although useful QML applications would require many qubits. Our paper therefore explores the successful application of the Quantum Support Vector Machine (QSVM) algorithm while balancing several practical and technical considerations under the Noisy Intermediate-Scale Quantum (NISQ) assumption. For the quantum SVM under NISQ, we use quantum feature maps to encode data into quantum states, build the SVM kernel from these quantum states, and compare the result against a classical SVM with radial basis function (RBF) kernels. As data sets become more complex or more abstract, a classical SVM with typical classical kernels attains lower accuracy than QSVM, because such kernels cannot easily separate data from different classes. At the same time, QSVM should provide competitive performance over a broader range of data sets, including ``simpler'' cases in which smoother decision boundaries are required to avoid model-variance issues (i.e., overfitting). To bridge the gap between ``classical-looking'' decision boundaries and complex quantum decision boundaries, we propose using general shallow unitary transformations to create feature maps with rotation factors, defining a tunable quantum kernel, and adding regularization to smooth the separating hyperplane. Our experiments show that this allows QSVM to perform on par with SVM regardless of data-set complexity, and to outperform SVM on some commonly used reference data sets.
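The pipeline the abstract describes, a shallow rotation feature map whose state overlaps define a tunable quantum kernel fed to a regularized SVM, can be sketched classically. The following is a minimal illustration, not the paper's implementation: it assumes a single-qubit `Ry(gamma * x_j)` rotation per feature (so the fidelity kernel has a closed form), an illustrative rotation factor `gamma`, and scikit-learn's precomputed-kernel SVC with `C` as the regularization term.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split

def quantum_kernel(X1, X2, gamma=1.0):
    """Kernel induced by the product feature map |psi(x)> = (x)_j Ry(gamma*x_j)|0>.
    The state fidelity factorizes: |<psi(x)|psi(x')>|^2
        = prod_j cos^2(gamma * (x_j - x'_j) / 2).
    `gamma` is the tunable rotation factor (an assumption for this sketch)."""
    diff = X1[:, None, :] - X2[None, :, :]  # pairwise feature differences
    return np.prod(np.cos(gamma * diff / 2.0) ** 2, axis=-1)

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# C is the usual SVM regularization strength: smaller C smooths the
# separating hyperplane, which is the role regularization plays above.
qsvm = SVC(kernel="precomputed", C=1.0)
qsvm.fit(quantum_kernel(Xtr, Xtr, gamma=2.0), ytr)
q_acc = qsvm.score(quantum_kernel(Xte, Xtr, gamma=2.0), yte)

rbf_acc = SVC(kernel="rbf", C=1.0).fit(Xtr, ytr).score(Xte, yte)
print(f"quantum-kernel SVM acc: {q_acc:.2f}, classical RBF SVM acc: {rbf_acc:.2f}")
```

Tuning `gamma` plays the same bandwidth role as the RBF width, which is what lets the quantum kernel interpolate between smooth, classical-looking boundaries and more complex ones.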