Paper Title
Self-Correcting Quantum Many-Body Control using Reinforcement Learning with Tensor Networks
Paper Authors
Paper Abstract
Quantum many-body control is a central milestone en route to harnessing quantum technologies. However, the exponential growth of the Hilbert space dimension with the number of qubits makes it challenging to classically simulate quantum many-body systems and, consequently, to devise reliable and robust optimal control protocols. Here, we present a novel framework for efficiently controlling quantum many-body systems based on reinforcement learning (RL). We tackle the quantum control problem by leveraging matrix product states (i) for representing the many-body state and (ii) as part of the trainable machine learning architecture for our RL agent. The framework is applied to prepare ground states of the quantum Ising chain, including states in the critical region. It allows us to control systems far larger than neural-network-only architectures permit, while retaining the advantages of deep learning algorithms, such as generalizability and trainable robustness to noise. In particular, we demonstrate that RL agents are capable of finding universal controls, of learning how to optimally steer previously unseen many-body states, and of adapting control protocols on-the-fly when the quantum dynamics is subject to stochastic perturbations. Furthermore, we map the QMPS framework to a hybrid quantum-classical algorithm that can be performed on noisy intermediate-scale quantum devices, and we test it in the presence of experimentally relevant sources of noise.
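The abstract's key ingredient is that matrix product states (MPS) do double duty: they compactly represent the many-body state, and they serve as trainable tensors in the RL agent's architecture, so that evaluating the policy reduces to tensor contractions whose cost is linear in the number of qubits. A minimal NumPy sketch of such a contraction is given below; all names, shapes, and the overlap-based action scoring are illustrative assumptions, not the paper's actual QMPS implementation:

```python
import numpy as np

def random_mps(n_sites, phys_dim=2, bond_dim=4, seed=0):
    """Random MPS: one tensor per site with shape (left bond, physical, right bond)."""
    rng = np.random.default_rng(seed)
    dims = [1] + [bond_dim] * (n_sites - 1) + [1]  # open boundary conditions
    return [rng.normal(size=(dims[i], phys_dim, dims[i + 1]))
            for i in range(n_sites)]

def mps_inner(bra, ket):
    """<bra|ket> by sweeping left to right; cost is linear in n_sites,
    avoiding the exponentially large Hilbert-space vector."""
    env = np.ones((1, 1))  # boundary environment (bra bond, ket bond)
    for A, B in zip(bra, ket):
        # Contract the environment with one conjugated bra tensor and one ket tensor.
        env = np.einsum('ab,asc,bsd->cd', env, A.conj(), B)
    return env[0, 0]

# A 16-qubit state kept in MPS form; its norm is obtained without ever
# forming the 2**16-dimensional state vector.
psi = random_mps(16)
norm_sq = mps_inner(psi, psi)

# Toy policy readout (illustrative only): score a few candidate control
# actions, each parameterized by its own trainable MPS, via overlaps.
actions = [random_mps(16, seed=s) for s in (1, 2, 3)]
scores = [mps_inner(w, psi) for w in actions]
best_action = int(np.argmax(scores))
```

In a trainable version, the action tensors would be updated by gradient descent on an RL objective; the sketch only shows why the contraction scales to system sizes out of reach for dense neural-network inputs.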