arXiv Analytics

arXiv:2403.00940 [quant-ph]

Scalable Quantum Algorithms for Noisy Quantum Computers

Julien Gacon

Published 2024-03-01 (Version 1)

Quantum computing holds the potential not only to solve long-standing problems in quantum physics, but also to offer speed-ups across a broad spectrum of other fields. However, due to the noise and the limited scale of current quantum computers, many prominent quantum algorithms are currently infeasible to run for problem sizes of practical interest. This doctoral thesis develops two main techniques to reduce the quantum computational resource requirements, with the goal of scaling up application sizes on current quantum processors. The first approach is based on stochastic approximations of computationally costly quantities, such as quantum circuit gradients or the quantum geometric tensor (QGT). The second method takes a different perspective on the QGT, leading to a potentially more efficient description of time evolution on current quantum computers. While the main focus of application for our algorithms is the simulation of quantum systems, the developed subroutines can also be applied in optimization and machine learning. Our algorithms are benchmarked on a range of representative models, such as Ising and Heisenberg spin models, both in numerical simulations and in experiments on quantum hardware. In combination with error mitigation techniques, the hardware experiments are scaled up to 27 qubits, a regime that is challenging for variational quantum algorithms to reach on noisy quantum computers without our techniques.
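
For orientation, the standard definitions behind these two ingredients can be stated compactly; this is a hedged summary in common notation, not an excerpt from the thesis. For a variational state |\psi(\theta)\rangle with energy E(\theta) = \langle \psi(\theta) | H | \psi(\theta) \rangle, the QGT and the metric it induces are

  G_{ij}(\theta) = \langle \partial_i \psi(\theta) | \partial_j \psi(\theta) \rangle
                 - \langle \partial_i \psi(\theta) | \psi(\theta) \rangle \langle \psi(\theta) | \partial_j \psi(\theta) \rangle,
  \qquad g(\theta) = \operatorname{Re} G(\theta),

and McLachlan-type variational imaginary-time evolution (equivalently, quantum natural gradient flow) propagates the parameters via

  g(\theta)\, \dot{\theta} = -\tfrac{1}{2} \nabla_\theta E(\theta).

Evaluating g(\theta) and \nabla_\theta E(\theta) exactly requires a number of circuit evaluations that grows with the parameter count, which is the cost that stochastic approximations of the kind mentioned in the abstract aim to avoid. As an illustrative sketch of such an estimator (an SPSA-style gradient approximation in Python; the function names and the toy cost are hypothetical and not taken from the thesis):

import numpy as np

def spsa_gradient(cost, theta, eps=0.1, num_samples=1, rng=None):
    """SPSA-style stochastic estimate of the gradient of `cost` at `theta`.

    Each sample needs only two cost evaluations, independent of the number
    of parameters, in contrast to parameter-shift gradients, which need two
    evaluations per parameter.
    """
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(theta, dtype=float)
    for _ in range(num_samples):
        # Random simultaneous perturbation direction with entries +/-1.
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        diff = cost(theta + eps * delta) - cost(theta - eps * delta)
        grad += diff / (2 * eps) * delta  # 1/delta_i == delta_i for +/-1 entries
    return grad / num_samples

if __name__ == "__main__":
    # Toy stand-in for <psi(theta)|H|psi(theta)>; on hardware this would be
    # estimated from measured circuits.
    energy = lambda theta: float(np.sum(np.cos(theta)))
    theta = np.array([0.3, 1.2, -0.7])
    print(spsa_gradient(energy, theta, num_samples=10))

Averaging more samples trades additional circuit evaluations for lower variance in the estimate.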

Related articles:
arXiv:2009.04417 [quant-ph] (Published 2020-09-09)
Mitiq: A software package for error mitigation on noisy quantum computers
arXiv:2102.05566 [quant-ph] (Published 2021-02-10)
Layer VQE: A Variational Approach for Combinatorial Optimization on Noisy Quantum Computers
arXiv:2404.07802 [quant-ph] (Published 2024-04-11)
Synergy between noisy quantum computers and scalable classical deep learning