New publication: Making the Cut

Just last week, Duarte Magano, Yasser Omar, and I published “Making the cut: two methods for breaking down a quantum algorithm” on the arXiv. You can find the paper here.

Since a picture is worth a thousand words, here’s an (informal) visual explanation for our proposal:

Visual summary of the publication.

A non-technical explanation follows:

Quantum computers can be shown, in theory, to solve problems that are not expected to be classically tractable. However, physical effects disturb the operation of current-generation quantum computers, which have not yet reached a sufficient degree of fidelity to robustly correct for these disturbances. So, effectively, practical quantum computations must run for a limited time before a measurement is made. As a result, a quantum algorithm may be known for a particular problem, with provable computational advantage over an analogous classical approach, and yet this algorithm cannot be run in practice because of the aforementioned real-world limitations.

However, note that nothing prevents running multiple such quantum computations. In fact, since the measurements yield classical data, we may operate in this limited regime by performing a limited quantum computation, reading out some data, processing it classically, and preparing a new limited quantum computation based on the previous results, repeating this procedure for as many rounds as necessary to produce a final answer. Such an approach would typically be described as a “(quantum-classical) hybrid algorithm”.

The question, then, is how much more efficient this procedure is than a purely classical approach, and how much less efficient it is than a purely quantum approach. If a purely quantum algorithm is known to have an advantage over any purely classical algorithm, we expect the performance of some hybrid algorithm to sit somewhere in between the purely quantum and the purely classical algorithms. Nonetheless, certifying this in full generality is challenging, and so we adopt a stricter (but nonetheless expressive) model, the “query model”: we consider a limited quantum computation to be one which can read the input at most a fixed number of times before being forced to yield a measurement.
But, even under this model, it is not clear how to practically derive a “good” hybrid algorithm: one whose performance sits between the purely quantum and the purely classical limits.
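To make the query model concrete, here is a small toy sketch of my own (not a construction from the paper, and purely classical): each run of a computation may read the input only through a query-counting oracle with a fixed budget, and a hybrid strategy chains budget-limited runs together, carrying classical data from one run to the next. The `BudgetedOracle` and `hybrid_search` names are illustrative inventions.

```python
# Toy illustration of the query model: each run may read the input at most
# `budget` times before it must stop and report a classical answer.

class BudgetedOracle:
    """Wraps the input; counts reads and enforces a per-run query budget."""

    def __init__(self, data, budget):
        self.data = data
        self.budget = budget
        self.queries = 0

    def query(self, i):
        if self.queries >= self.budget:
            raise RuntimeError("query budget exhausted: must measure now")
        self.queries += 1
        return self.data[i]

def hybrid_search(data, target, budget):
    """Chain budget-limited runs, feeding each run's classical outcome
    (here, simply where to resume) into the preparation of the next."""
    start = 0   # classical state carried between runs
    rounds = 0
    while start < len(data):
        oracle = BudgetedOracle(data, budget)  # a fresh limited computation
        rounds += 1
        for i in range(start, min(start + budget, len(data))):
            if oracle.query(i) == target:
                return i, rounds
        start += budget  # classical post-processing between runs
    return None, rounds

print(hybrid_search(list("abcdefgh"), "f", budget=3))  # -> (5, 2)
```

Each "round" here stands in for one limited quantum computation followed by a measurement; the interesting question in the paper is how the total number of queries across rounds compares to the purely quantum and purely classical extremes.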

In this paper, we give a concrete answer to the latter point by proposing two different design methods to go from a purely quantum algorithm to a quantum-classical hybrid algorithm. We dub these methods “parallelization” and “interpolation”. When doing “parallelization”, one seeks to divide the input data into disjoint subsets, such that each subset can be handled separately. Afterwards, the resulting answers are combined to produce the final answer. Because each of the considered input subsets is of smaller size, our limited quantum computer is expected to be able to handle each subproblem. Conversely, one may divide the input data as much as needed to accommodate the limitations of the available quantum computer.
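The divide-solve-combine shape of “parallelization” can be sketched as follows (a minimal classical stand-in of my own, not the paper's algorithm; `solve_chunk` here plays the role of a limited quantum subroutine on a small input):

```python
# Sketch of "parallelization": split the input into disjoint chunks small
# enough for the limited machine, solve each independently, then combine.

def solve_chunk(chunk, target):
    """Stand-in for a limited quantum subroutine on a small input,
    e.g. an unstructured-search subproblem."""
    return target in chunk

def parallelized_search(data, target, max_chunk):
    # Divide: disjoint subsets, each within the machine's capacity.
    chunks = [data[i:i + max_chunk] for i in range(0, len(data), max_chunk)]
    # Solve: each subproblem is handled by a separate limited run.
    partial_answers = [solve_chunk(c, target) for c in chunks]
    # Combine: classical post-processing of the per-chunk answers.
    return any(partial_answers)

print(parallelized_search(list(range(100)), 42, max_chunk=8))  # -> True
```

The chunk size `max_chunk` is the tunable knob: the more limited the quantum computer, the smaller the chunks one must use, at the cost of more runs to combine.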

When doing “interpolation”, the input data is always considered in its entirety, but one now seeks to divide the original quantum algorithm into repetitions of a shorter routine. We argue that this is a particularly straightforward process if the original quantum algorithm can be phrased as a so-called “Quantum Singular Value Transformation” (QSVT). Under this framework, we can connect the “interpolation” procedure with the ability to replace an evaluation of a high-degree polynomial with multiple evaluations of a low-degree polynomial. This is because QSVT connects higher-degree polynomials with deeper circuits.
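A toy numerical picture of this idea (my own illustration, not the paper's construction): a high-degree polynomial transformation can sometimes be realized as repeated applications of a low-degree one. Since in QSVT the polynomial degree tracks circuit depth, composing short (low-degree) routines stands in for one long (high-degree) circuit.

```python
# Composing a degree-2 polynomial with itself three times yields a
# degree-8 polynomial: repetitions of a "short" routine in place of
# one "long" routine.

def low_degree(x):
    """A degree-2 polynomial: one short routine."""
    return x * x

def composed(x, rounds):
    """`rounds` repetitions of the short routine."""
    for _ in range(rounds):
        x = low_degree(x)
    return x

x = 0.9
print(x ** 8)          # one direct high-degree evaluation
print(composed(x, 3))  # the same value from repeated low-degree steps
```

Of course, which polynomials can be decomposed this way (and at what cost in accuracy) is exactly where the analysis becomes nontrivial; the toy above only conveys the shape of the trade-off.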

Although, to the best of our knowledge, these two methods had not been presented and analysed side-by-side before, we give examples of hybrid algorithms in the literature that conform to the proposed descriptions. Furthermore, we show that the two methods are significantly different. We do so by presenting two problems for which a quantum advantage (in the query model) is known, and such that: for the first problem, “parallelizing” yields a more performant algorithm than “interpolating”; and, for the second problem, “interpolating” yields a more performant algorithm than “parallelizing”.

This paper followed naturally from our previous work, when we started looking at the result of interpolating certain algorithms and comparing it with other approaches we could find (such as parallelizing).

I am excited to see further theoretical guarantees being developed for hybrid algorithms, and hope that this work will aid in future designs of hybrid quantum algorithms.