{
"id": "2009.07268",
"version": "v1",
"published": "2020-09-15T17:58:25.000Z",
"updated": "2020-09-15T17:58:25.000Z",
"title": "An improved quantum-inspired algorithm for linear regression",
"authors": [
"András Gilyén",
"Zhao Song",
"Ewin Tang"
],
"comment": "18 pages",
"categories": [
"cs.DS",
"quant-ph"
],
"abstract": "We give a classical algorithm for linear regression analogous to the quantum matrix inversion algorithm [Harrow, Hassidim, and Lloyd, Physical Review Letters '09] for low-rank matrices [Chakraborty et al., ICALP'19], when the input matrix $A$ is stored in a data structure suitable for QRAM-based state preparation. Namely, if the input model supports efficient $\\ell_2$-norm importance sampling, and given $A \\in \\mathbb{C}^{m\\times n}$ with minimum singular value $\\sigma$ and $b \\in \\mathbb{C}^m$ as input, we can output a description of an $x$ such that $\\|x - A^+b\\| \\leq \\varepsilon\\|A^+b\\|$ in $\\tilde{\\mathcal{O}}\\left(\\frac{\\|A\\|_{\\mathrm{F}}^6\\|A\\|^2}{\\sigma^8\\varepsilon^4}\\right)$ time, improving on previous \"quantum-inspired\" algorithms in this line of research by a factor of $\\frac{\\|A\\|^{14}}{\\sigma^{14}\\varepsilon^2}$ [Chia et al., STOC'20]. The algorithm is stochastic gradient descent, and the analysis bears similarities to results of [Gupta and Sidford, NeurIPS'18]. Unlike earlier works, this approach is a promising avenue that could lead to feasible implementations of classical regression in a quantum-inspired setting, for comparison against future quantum computers.",
"revisions": [
{
"version": "v1",
"updated": "2020-09-15T17:58:25.000Z"
}
],
"analyses": {
"keywords": [
"linear regression",
"quantum-inspired algorithm",
"importance sampling",
"quantum matrix inversion algorithm",
"minimum singular value"
],
"note": {
"typesetting": "TeX",
"pages": 18,
"language": "en",
"license": "arXiv",
"status": "editable"
}
}
}