Paper Title

EP-PQM: Efficient Parametric Probabilistic Quantum Memory with Fewer Qubits and Gates

Paper Authors

Mushahid Khan, Jean Paul Latyr Faye, Udson C. Mendes, Andriy Miranskyy

Abstract


Machine learning (ML) classification tasks can be carried out on a quantum computer (QC) using Probabilistic Quantum Memory (PQM) and its extension, Parametric PQM (P-PQM), by calculating the Hamming distance between an input pattern and a database of $r$ patterns containing $z$ features with $a$ distinct attributes. For accurate computations, the features must be encoded using one-hot encoding, which is memory-intensive for multi-attribute datasets with $a>2$. We can easily represent multi-attribute data more compactly on a classical computer by replacing one-hot encoding with label encoding. However, replacing these encoding schemes on a QC is not straightforward, as PQM and P-PQM operate at the quantum bit level. We present an enhanced P-PQM, called EP-PQM, that allows label encoding of data stored in a PQM data structure and reduces the circuit depth of the data storage and retrieval procedures. We show implementations for an ideal QC and a noisy intermediate-scale quantum (NISQ) device. Our complexity analysis shows that the EP-PQM approach requires $O\left(z \log_2(a)\right)$ qubits as opposed to $O(za)$ qubits for P-PQM. EP-PQM also requires fewer gates, reducing gate count from $O\left(rza\right)$ to $O\left(rz\log_2(a)\right)$. For five datasets, we demonstrate that training an ML classification model using EP-PQM requires 48% to 77% fewer qubits than P-PQM for datasets with $a>2$. EP-PQM reduces circuit depth in the range of 60% to 96%, depending on the dataset. The depth decreases further with a decomposed circuit, ranging between 94% and 99%. EP-PQM requires less space; thus, it can train on and classify larger datasets than previous PQM implementations on NISQ devices. Furthermore, reducing the number of gates speeds up the classification and reduces the noise associated with deep quantum circuits. Thus, EP-PQM brings us closer to scalable ML on a NISQ device.
