Q2B Paris 2023 - Decoding Fault-Tolerant Quantum Computers
20 June, 2023

Head of Architecture Earl Campbell explains why we need fast, real-time decoders and dives into the scientific methods behind Riverlane's Nature Communications paper, "Parallel window decoding enables scalable fault tolerant quantum computation", and how these findings help us build our quantum decoder, Deltaflow.Decode.

Quantum computers promise to complete certain calculations in seconds where a regular supercomputer might take billions of years. But keeping up with a quantum computer is not easy. These machines create vast amounts of data that must be decoded in real time. It’s a serious problem. As we scale, it could stop quantum computers in their tracks and prevent them from ever achieving anything useful. 

The Riverlane team has developed a new method to address this issue by showing that the decoders for quantum error correction can be parallelised. This parallelisation essentially enables efficient universal quantum computers.  

Full details are available in the Nature Comms paper: Parallel window decoding enables scalable fault tolerant quantum computation.

Today's leading approaches to quantum error correction are not scalable: existing decoders typically run slower as the problem size increases, so they inevitably hit the backlog problem. In other words, the current leading proposal for fault-tolerant quantum computation is not scalable as it stands. 

The data backlog problem 

The problem of decoding in real time and fast enough has become a recent focus point in the quantum computing community. Indeed, after we first put out a pre-print of the parallel window paper, nearly simultaneously there was a similar pre-print paper from the Alibaba team. This year also saw two new implementations of fast decoders using large, easily configurable electronic devices called ‘FPGAs’ - with papers from Riverlane and Yale University. 

In our Nature Comms paper, we tackle the specific issue where a continuous stream of data must be processed at the rate it is received, which can be as fast as 1 MHz in superconducting quantum computers (one syndrome measurement every microsecond, or roughly one million bits of information per second, per qubit). The method works across every qubit type, but we focused on superconducting qubits in the paper because they are the most challenging (fastest) systems for real-time decoding. 
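To get a feel for these numbers, here is a back-of-envelope estimate of the aggregate syndrome data rate. The 1 MHz figure is from the discussion above; the qubit count is a purely hypothetical example, not a number from the paper.

```python
# Back-of-envelope syndrome data-rate estimate for real-time decoding.
# The 1 MHz round rate matches the superconducting case discussed above;
# the qubit count is an illustrative assumption.

SYNDROME_RATE_HZ = 1_000_000   # one measurement round per microsecond
N_QUBITS = 1_000               # hypothetical physical qubit count

bits_per_second = SYNDROME_RATE_HZ * N_QUBITS
print(f"{bits_per_second / 1e9:.1f} Gbit/s of syndrome data")
```

At this (modest) device size, the decoder already has to keep up with a gigabit-per-second stream, which is why decoding throughput matters so much.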

Our work provides a solution to the backlog problem and shows that efficient quantum computation is possible at any scale. The backlog problem says that if you don't process quantum error correction data fast enough, you are forced to exponentially slow down your quantum computation. By using parallelisation, we find that we can always process fast enough. 
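The core of the backlog problem can be illustrated with a toy queue model (a sketch, not the paper's analysis): if syndrome rounds arrive faster than a single decoder consumes them, the unprocessed backlog grows without bound, and the computation must stall ever longer waiting for decoding to catch up.

```python
# Toy illustration of the backlog problem. The rates are illustrative
# assumptions: 5 rounds of syndrome data arrive per time step, but the
# decoder only clears 4 per step.

def backlog_after(steps, arrivals_per_step=5, decodes_per_step=4):
    """Unprocessed syndrome rounds queued after `steps` time steps."""
    backlog = 0
    for _ in range(steps):
        backlog = max(0, backlog + arrivals_per_step - decodes_per_step)
    return backlog

print(backlog_after(100))    # backlog grows linearly with run time
print(backlog_after(1000))   # ten times longer run, ten times the queue
```

Any fixed shortfall in decoding speed compounds over the run, which is why speeding up one decoder is not enough: the processing rate itself has to scale, and parallelisation is what makes that possible.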

Explaining parallelisation 

To carry out parallel window decoding, we break up the decoding task into chunks of work that we call “windows”. But we don’t wait for the first window to finish processing the data before moving to the next window. Instead, we decode multiple non-overlapping windows in parallel. 

The reason that we use non-overlapping windows is to avoid the data processing bottlenecks that occur between adjacent windows. This is known as parallelisation in time, meaning that we don’t have to decode measurement results in the order that they were measured.   

It’s analogous to parallel computing in classical computers: we are breaking down larger problems into smaller, independent parts that can be executed simultaneously by multiple processors communicating via shared memory with the goal of reducing the overall computation time. 

At each decoding step, a number of syndrome rounds (a window) is selected for decoding (orange region in the left columns of the diagram below), and tentative corrections are acquired. The corrections in the older part of the window (green region in the right columns below) are of high confidence and are committed to. The window is then moved up to the edge of the commit region and the process repeated. Finally, we commit everything to complete the calculation. 

Looking forward 

With better understanding of the parallelisation potential demonstrated in this paper, and our related work this year on FPGA and ASIC decoding, this past concern is giving way to a new sense of optimism that we can decode fast enough to scale quantum computers to the size needed to do something useful for society. 

While I share this optimism, there is still a mountain to climb.
Current decoders support only a single logical qubit (a group of physical qubits with the collective processing power of one error-free qubit), and even then only while that qubit is logically idle, i.e. not involved in the computation.

But we need an integrated network of decoders working in concert to decode multiple logical qubits while they are performing computations.

Building this will be another huge engineering leap – but one that I’m confident the team at Riverlane will make. 

You can find out more about our quantum error decoder here and access the paper Parallel window decoding enables scalable fault tolerant quantum computation here.  
