Quantum computers promise to complete certain calculations in seconds where a regular supercomputer might take billions of years. But keeping up with a quantum computer is not easy. These machines create vast amounts of data that must be decoded in real time. It’s a serious problem. As we scale, it could stop quantum computers in their tracks and prevent them from ever achieving anything useful.
The Riverlane team has developed a new method to address this issue by showing that the decoders for quantum error correction can be parallelised. This parallelisation removes a key scaling bottleneck and enables efficient universal quantum computers.
Full details are available in the Nature Comms paper: Parallel window decoding enables scalable fault tolerant quantum computation.
Quantum error correction and decoding
The field of quantum computing is maturing as the community continues to identify and address the engineering challenges that must be solved to build a large-scale, error-corrected quantum computer.
The crux of the issue is that the physical qubits within a quantum computer are prone to noise and decoherence – and these errors must be corrected to unlock the potential of quantum computers.
Quantum Error Correction (QEC) provides the path to useful quantum computers. It is a set of techniques used to protect the information stored in qubits from errors and decoherence caused by noise.
Quantum error correction generates a continuous stream of data, and we use a sophisticated algorithmic process called “decoding” to process this data. We’re not there yet with our decoding solutions. But we are getting there: Riverlane recently released the world’s most powerful decoder, and you can find out more about how we’re developing this technology here.
A little-known fact of quantum error correction is that if the decoder infrastructure cannot keep up, a data backlog builds up and the quantum computer runs exponentially slower.
Today's leading approaches to quantum error correction are not scalable: existing decoders typically run slower as the problem size increases, so they inevitably hit the backlog problem. In other words, the current leading proposal for fault-tolerant quantum computation is not scalable as it stands.
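To see why a too-slow decoder causes an exponential, not just linear, slowdown, consider a toy model (a sketch, not the paper's analysis): before each logical operation that depends on a decoded result, the decoder must clear its backlog, but while it clears B rounds of data, the quantum computer generates f × B new rounds, where f is the ratio of the data generation rate to the decoding rate. If f > 1, the backlog multiplies at every step.

```python
def backlog_growth(f, steps, initial=1.0):
    """Backlog (in syndrome rounds) waiting before each successive
    decode-dependent logical operation, in a toy model where clearing
    B rounds takes long enough for f*B new rounds to accumulate.
    f = (syndrome generation rate) / (decoding rate)."""
    backlog = initial
    history = []
    for _ in range(steps):
        history.append(backlog)
        backlog *= f  # while clearing the queue, f times as much data arrives
    return history

# A decoder only 2x too slow sees its backlog double at every step:
print(backlog_growth(f=2.0, steps=6))  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
```

With f ≤ 1 the backlog stays bounded and the machine runs at full speed; with f > 1 the wait times grow geometrically, which is the exponential slowdown described above.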
The data backlog problem
The problem of decoding in real time, and fast enough, has become a recent focal point in the quantum computing community. Indeed, shortly after we first put out a pre-print of the parallel window paper, a similar pre-print appeared almost simultaneously from the Alibaba team. This year also saw two new implementations of fast decoders using large, easily configurable electronic devices called 'FPGAs', with papers from Riverlane and Yale University.
In our Nature Comms paper, we tackle the specific issue where a continuous stream of data must be processed at the rate it is received, which can be as fast as 1 MHz in superconducting quantum computers: one million measurement results per second, per qubit. The method works across every qubit type, but we focused on superconducting qubits in the paper because they are the most challenging (fastest) systems for real-time decoding.
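The 1 MHz figure translates into a simple throughput requirement (back-of-envelope arithmetic only; real syndrome data volumes depend on the code and readout scheme): each qubit contributes roughly one measurement bit per microsecond, so the decoding infrastructure must sustain about n Mbit/s for n qubits, with a per-round latency budget of one microsecond.

```python
SYNDROME_RATE_HZ = 1_000_000  # one measurement round per microsecond

def syndrome_throughput_mbit_s(num_qubits, cycle_rate_hz=SYNDROME_RATE_HZ):
    """Aggregate data rate, assuming one bit per qubit per cycle."""
    return num_qubits * cycle_rate_hz / 1e6

# Per-round time budget and the data rate for a 100-qubit device:
print(1 / SYNDROME_RATE_HZ)           # 1e-06 seconds per round
print(syndrome_throughput_mbit_s(100))  # 100.0 Mbit/s
```

The point is not the exact numbers but that the budget is fixed by the hardware clock: a decoder whose per-round time grows with device size will eventually exceed it, which is exactly the backlog problem.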
The backlog problem says that if you don't process quantum error correction data fast enough, you are forced to exponentially slow down your quantum computation. Our work provides a solution: by using parallelisation, we find that we can always process the data fast enough, showing that efficient quantum computation is possible at any scale.
To carry out parallel window decoding, we break up the decoding task into chunks of work that we call “windows”. But we don’t wait for the first window to finish processing the data before moving to the next window. Instead, we decode multiple non-overlapping windows in parallel.
The reason that we use non-overlapping windows is to avoid the data processing bottlenecks that occur between adjacent windows. This is known as parallelisation in time, meaning that we don’t have to decode measurement results in the order that they were measured.
It’s analogous to parallel computing in classical computers: we are breaking down larger problems into smaller, independent parts that can be executed simultaneously by multiple processors communicating via shared memory with the goal of reducing the overall computation time.
At each decoding step, a number of syndrome rounds (a window) is selected for decoding (orange region in left columns in the diagram below), and tentative corrections are acquired. The corrections in the older part of the window (green region in right columns below) are of high confidence and are committed to. The window is then moved up to the edge of the commit region and the process is repeated. Finally, we commit everything to complete the calculation.
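The scheme above can be sketched in a few lines of code. This is a minimal illustration only: `decode_window` is a stand-in for a real matching decoder, the window and commit sizes are arbitrary, and the seam handling between committed regions is done serially here rather than with the layered parallel passes used in the paper.

```python
from concurrent.futures import ThreadPoolExecutor

WINDOW = 4  # syndrome rounds per window
COMMIT = 2  # older rounds of each window whose corrections we commit to

def decode_window(rounds):
    """Stand-in decoder: returns one 'correction' per syndrome round."""
    return [f"corr({r})" for r in rounds]

def parallel_window_decode(syndrome_rounds):
    # Pass 1: decode non-overlapping windows simultaneously, keeping only
    # the high-confidence (older) part of each tentative result.
    windows = [syndrome_rounds[i:i + WINDOW]
               for i in range(0, len(syndrome_rounds), WINDOW)]
    with ThreadPoolExecutor() as pool:
        tentative = list(pool.map(decode_window, windows))
    committed = [c for win in tentative for c in win[:COMMIT]]
    # Pass 2: the remaining rounds sit between committed regions; once
    # their neighbours are fixed they can be decoded too (serially here).
    leftover = [r for win in windows for r in win[COMMIT:]]
    committed += decode_window(leftover)
    return committed

# Eight rounds of syndrome data yield eight committed corrections:
print(len(parallel_window_decode(list(range(8)))))  # 8
```

The key property, mirroring the text, is that the windows handed to the thread pool in pass 1 share no data, so they can be decoded in any order and the total decode time stops growing with the length of the computation.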
This paper is a significant step forward towards useful quantum computing. In previous years, there was genuine concern in the field that even if we could build the qubits for a quantum computer, the sheer volume of real-time data that needs to be processed in a large quantum computer would prove too challenging, and so qubits would remain noisy and of limited value.
A few years ago, I remember being in the audience at an international conference on Quantum Error Correction, QEC 2017, when Google’s Austin Fowler reported that he’d optimised an FPGA decoder as much as he could - and that it was still 10x too slow. Many experts left that conference with the impression that real-time decoding was a serious, potentially impossible, problem.
With the better understanding of parallelisation demonstrated in this paper, and our related work this year on FPGA and ASIC decoding, that concern is giving way to a new sense of optimism: we can decode fast enough to scale quantum computers to the size needed to do something useful for society.
While I share this optimism, there is still a mountain to climb.
Current decoders support a single logical qubit (where a logical qubit is a group of physical qubits that have the collective processing power of one, error-free qubit) and that qubit is logically idle (not involved in the computation).
But we need an integrated network of decoders working in concert to decode multiple logical qubits while they are performing computations.
Building this will be another huge engineering leap – but one that I’m confident the team at Riverlane will make.