Paper Title
QuEst: Graph Transformer for Quantum Circuit Reliability Estimation
Paper Authors
Paper Abstract
Among different quantum algorithms, parameterized quantum circuits (PQCs) for quantum machine learning (QML) show promise on near-term devices. To facilitate QML and PQC research, a Python library called TorchQuantum has recently been released. It can construct, simulate, and train PQCs for machine learning tasks with high speed and convenient debugging support. Besides quantum for ML, we want to draw the community's attention to the reverse direction: ML for quantum. Specifically, the TorchQuantum library also supports using data-driven ML models to solve problems in quantum system research, such as predicting the impact of quantum noise on circuit fidelity and improving quantum circuit compilation efficiency. This paper presents a case study on the ML-for-quantum part. Since estimating the impact of noise on circuit reliability is an essential step toward understanding and mitigating noise, we propose to leverage classical ML to predict the noise impact on circuit fidelity. Inspired by the natural graph representation of quantum circuits, we propose a graph transformer model to predict noisy circuit fidelity. We first collect a large dataset of diverse quantum circuits and obtain their fidelity on noisy simulators and real machines. Then we embed each circuit into a graph with gate and noise properties as node features, and adopt a graph transformer to predict the fidelity. Evaluated on 5,000 random and algorithm circuits, the graph transformer predictor provides accurate fidelity estimates with an RMSE of 0.04, outperforming a simple neural-network-based model by 0.02 on average. It achieves R$^2$ scores of 0.99 and 0.95 on random and algorithm circuits, respectively. Compared with circuit simulators, the predictor offers over 200X speedup for fidelity estimation.
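To make the circuit-to-graph embedding concrete, below is a minimal sketch (not the paper's actual code or the TorchQuantum API) of the pipeline the abstract describes: each gate becomes a node whose feature vector concatenates a one-hot gate type with a noise property (a per-gate error rate), and a single self-attention layer with a pooled sigmoid head stands in for the graph transformer fidelity predictor. The gate vocabulary, feature layout, and all weights here are illustrative assumptions; a real predictor would be trained on the collected fidelity dataset.

```python
import numpy as np

GATE_TYPES = ["h", "x", "cx", "rz"]  # hypothetical gate vocabulary

def circuit_to_node_features(gates):
    """gates: list of (gate_name, error_rate) -> (n_nodes, d) feature matrix."""
    feats = []
    for name, err in gates:
        one_hot = [1.0 if name == g else 0.0 for g in GATE_TYPES]
        feats.append(one_hot + [err])  # noise property appended as a node feature
    return np.array(feats)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def predict_fidelity(x, rng):
    """One self-attention layer + mean pooling + sigmoid regression head."""
    d = x.shape[1]
    wq, wk, wv = (0.1 * rng.standard_normal((d, d)) for _ in range(3))
    q, k, v = x @ wq, x @ wk, x @ wv
    attn = softmax(q @ k.T / np.sqrt(d))   # all-to-all attention over gate nodes
    pooled = (attn @ v).mean(axis=0)       # graph-level readout
    w_out = 0.1 * rng.standard_normal(d)
    return 1.0 / (1.0 + np.exp(-(pooled @ w_out)))  # fidelity squashed into (0, 1)

rng = np.random.default_rng(0)
circuit = [("h", 0.001), ("cx", 0.01), ("rz", 0.0005), ("cx", 0.012)]
fidelity = predict_fidelity(circuit_to_node_features(circuit), rng)
print(fidelity)  # untrained weights, so the value is arbitrary but lies in (0, 1)
```

In a trained model the attention weights would be learned from the 5,000-circuit dataset; this sketch only shows the data flow from gate list to graph features to a scalar fidelity estimate.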