Paper Title

Quantum Self-Attention Neural Networks for Text Classification

Authors

Guangxi Li, Xuanqiang Zhao, Xin Wang

Abstract

An emerging direction of quantum computing is to establish meaningful quantum applications in various fields of artificial intelligence, including natural language processing (NLP). Although some efforts based on syntactic analysis have opened the door to research in Quantum NLP (QNLP), limitations such as heavy syntactic preprocessing and syntax-dependent network architecture make them impracticable on larger and real-world data sets. In this paper, we propose a new simple network architecture, called the quantum self-attention neural network (QSANN), which can compensate for these limitations. Specifically, we introduce the self-attention mechanism into quantum neural networks and then utilize a Gaussian projected quantum self-attention serving as a sensible quantum version of self-attention. As a result, QSANN is effective and scalable on larger data sets and has the desirable property of being implementable on near-term quantum devices. In particular, our QSANN outperforms the best existing QNLP model based on syntactic analysis as well as a simple classical self-attention neural network in numerical experiments of text classification tasks on public data sets. We further show that our method exhibits robustness to low-level quantum noises and showcases resilience to quantum neural network architectures.
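To make the attention mechanism described above concrete, here is a minimal classical sketch of a Gaussian projected self-attention coefficient: attention weights come from a Gaussian kernel on the difference between query and key values, instead of the inner-product softmax of standard self-attention. This is an illustrative toy only; the function name, the scalar query/key outcomes, and the plain NumPy setting are assumptions, whereas in QSANN these quantities would be produced by measurements on parameterized quantum circuits applied to the encoded word states.

```python
import numpy as np

def gaussian_projected_self_attention(q, k, v):
    """Toy classical sketch of Gaussian projected self-attention.

    q, k : (n,) arrays -- stand-ins for scalar measurement outcomes of the
           query/key parameterized quantum circuits, one per input word
    v    : (n, d) array -- stand-ins for measured value vectors
    """
    # Attention logits from a Gaussian kernel on the query-key difference,
    # replacing the inner-product softmax of classical self-attention.
    logits = -(q[:, None] - k[None, :]) ** 2          # shape (n, n)
    alpha = np.exp(logits)
    alpha = alpha / alpha.sum(axis=1, keepdims=True)  # normalize per query
    return alpha @ v                                  # weighted sum of values

# Example: a 4-"word" sequence with 3-dimensional value vectors.
rng = np.random.default_rng(0)
q, k = rng.normal(size=4), rng.normal(size=4)
v = rng.normal(size=(4, 3))
print(gaussian_projected_self_attention(q, k, v))
```

One motivation for the Gaussian form is that it depends only on the difference of measured query and key values, which pairs naturally with the scalar outcomes obtained from quantum measurements.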
