A qubit's strength is its ability to exist in a superposition of 0 and 1 – but this also leaves it in an extremely fragile state, where the slightest amount of noise causes it to break down. This is where quantum error correction (QEC) comes in – and a new paper demonstrates how using all the data coming from a qubit's measurement can enhance its QEC protection.

The arXiv paper, *Reducing the error rate of a superconducting logical qubit using analog readout information*, demonstrates the positive impact of soft-information-aware decoding on a superconducting quantum computer run by our partners at Delft University of Technology.

Let me start by explaining what this soft information is – and then how it’s used to improve our quantum error correction capabilities.

**Soft information decoding**

Quantum error correction works by using multiple physical qubits on the device to represent a single logical qubit, thereby introducing redundancy into the system. This redundancy provides protection against errors. To learn about the errors that have occurred so that we can correct for them, we must make measurements. However, if you directly observe a qubit, this act of observation destroys its quantum state and renders the qubit useless.

That’s why you need large numbers of extra ‘syndrome’ qubits, which you can safely observe to infer – and then correct – errors on the data qubits.

The process of inferring the errors based on the syndrome qubit measurements is known as decoding and is performed by a decoder. The decoder takes a model of the possible errors that can occur on the device and their effect on the measurement outcomes, and uses this model to work out the most likely error to have occurred. Large-scale quantum computers will generate terabytes of measurement data every second that must be decoded as fast as it’s acquired to stop errors propagating and rendering calculations useless.
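To make the idea of decoding concrete, here is a deliberately tiny sketch – not the surface-code decoder used in the paper – showing how a lookup-table decoder maps the syndrome bits of a 3-qubit bit-flip repetition code to the most likely error:

```python
# Illustrative sketch: decoding a 3-qubit bit-flip repetition code.
# Syndrome bit s1 compares data qubits 0 and 1; s2 compares qubits 1 and 2.
# A mismatch (bit = 1) signals a flip on one of the compared qubits.

# Lookup table: syndrome -> most likely single-qubit error to correct.
SYNDROME_TABLE = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # flip on data qubit 0
    (1, 1): 1,     # flip on data qubit 1
    (0, 1): 2,     # flip on data qubit 2
}

def decode(syndrome):
    """Return the index of the data qubit to correct, or None."""
    return SYNDROME_TABLE[tuple(syndrome)]
```

Real decoders for surface codes work on the same principle – syndrome in, most likely error out – but must search exponentially larger error spaces, which is why fast matching or neural-network decoders are needed.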

Typically, quantum decoders rely only on ‘hard’ digitised measurement outcomes – 0s and 1s – and ignore the valuable information embedded in the analogue ‘soft’ measurement signal. This information provides insight into the likelihood of a measurement error having occurred and thus can be used by the decoder to improve its performance.

*Figure 1: Measurement response of |0⟩ and |1⟩ states in IQ space for an example qubit.*

When we measure a qubit, we do not directly obtain a value of 0 or 1. Instead, in the case of superconducting qubits, we obtain the IQ voltages shown in the figure above, which plots the measurement response of |0⟩ and |1⟩ states in IQ space for an example qubit.

These IQ voltages form the ‘soft’ measurement signal. Each dot corresponds to data obtained when measuring a qubit during calibration, with the colour indicating the state the qubit was in before measurement. We use these calibration results to decide how to classify a measurement during the real error correction experiment. If the values fall to the left of the vertical dashed line, we say the measurement outcome is 0; if they fall to the right, we say it is 1.
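That classification step amounts to a simple threshold on the measured voltage. A minimal sketch, with a made-up threshold value standing in for one obtained from calibration:

```python
# Illustrative sketch: hard classification of a readout voltage against a
# calibrated threshold (the vertical dashed line in Figure 1).
# THRESHOLD is a hypothetical calibration value, not one from the paper.

THRESHOLD = 0.0  # midpoint between the |0> and |1> calibration clouds

def classify_hard(i_voltage):
    """Digitise an in-phase voltage into a hard 0 or 1 outcome."""
    return 0 if i_voltage < THRESHOLD else 1
```

Note that this digitisation throws away *where* the voltage fell relative to the threshold – exactly the information that soft decoding recovers.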

Clearly, if the values are far over to the left or right, we can be pretty confident that we have classified the measurement correctly. However, if the values are close to the dashed line, we will be more uncertain in our classification. This is useful information for the decoder. Therefore, a method for using the soft data in the decoding process was proposed, and it has previously been observed to improve decoding of quantum error correction experiments run on a real quantum computer.
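One simple way to quantify that uncertainty – an illustrative sketch, not the method from the paper – is to model each state’s calibration cloud as a Gaussian and compute the posterior probability that the hard-assigned outcome was wrong. The means and standard deviation below are hypothetical calibration values:

```python
import math

# Illustrative sketch: turning an analogue readout voltage into a 'soft'
# probability that the hard measurement outcome was misassigned.
# MU_0, MU_1 and SIGMA are made-up calibration values for the |0> and
# |1> clouds along the in-phase axis.

MU_0, MU_1, SIGMA = -1.0, 1.0, 0.5

def gaussian(x, mu, sigma):
    """1-D Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def misassignment_probability(i_voltage):
    """Posterior probability that the hard-assigned outcome is wrong,
    assuming equal priors on |0> and |1>."""
    p0 = gaussian(i_voltage, MU_0, SIGMA)
    p1 = gaussian(i_voltage, MU_1, SIGMA)
    return min(p0, p1) / (p0 + p1)
```

A voltage landing exactly on the threshold gives a misassignment probability of 0.5 (a coin flip), while a voltage deep inside one cloud gives a probability near zero – and it is this per-measurement confidence that a soft-information-aware decoder can exploit.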

In our work, we analysed data from QEC experiments run by collaborators at Delft University of Technology using their 17-qubit superconducting quantum computer. The group had previously demonstrated high-fidelity logical operations with a distance-2 surface code.

We focused on decoding methods where the model of the errors that is passed to the decoder is directly learned from the experimental data, with our collaborators using a neural-network decoder while we used a graph-based decoder. Our results show a reduction of up to 6.8% in the extracted logical error rate, a measure of the error correction performance, with the use of soft information.

While this improvement is modest, we anticipate that with faster measurement times, larger code distances or improved physical error rates, the benefits of using soft information on logical performance will become more pronounced.

You can read the full paper here.