
Today’s quantum computers have high error rates – around one error in every few hundred operations. These errors arise primarily from the fragile nature of qubits, whose quantum states are disturbed by environmental noise and decoherence.
Once we reduce this error rate to one in a million (referred to as the MegaQuOp regime), truly useful applications will start being unlocked, with larger algorithms requiring error rates of one in a billion or even one in a trillion. However, it is unlikely that qubit and/or quantum algorithm improvements alone will be enough to run algorithms with billions of operations reliably. For that, we will need quantum error correction (QEC).
QEC is a set of techniques used to protect the information stored in qubits from errors and decoherence caused by noise.
It combines many unreliable physical qubits so that, if one qubit in the pack throws an error, the others can help detect and fix it. In this way, many physical qubits act together like a smaller number of logical qubits that are strongly resistant to noise.

Figure 1: Unreliable physical qubits are turned into a useful logical qubit.
We need to be careful because, in quantum mechanics, a measurement collapses the state of a qubit. Therefore, we need to receive information about what errors have occurred without directly measuring the logical qubit. Instead, we measure the collective properties of groups of qubits that give us clues about where errors could have occurred. By analysing these clues, sophisticated algorithms called decoders can identify and correct errors that occur during computation.
This is incredibly challenging but also an essential technology that needs to be developed before the quantum computing revolution can start. Let’s dive into why.
Improving qubits
Different qubit types have different strengths and weaknesses. Some have fast response times but operate at extremely low temperatures, near 0 K. Others have unmatched stability and high gate fidelities but long gate operation times. Some qubits have limited connectivity, where each qubit can talk to only a few neighbours, while others are easily reconfigurable, allowing any qubit to interact with any other with little overhead.
Nevertheless, every qubit type has seen impressive improvements in error rates and qubit numbers over the last two decades, as Figures 2 and 3 show.

Figure 2: Number of physical qubits for trapped ions (Ions), neutral atoms, superconducting (SC) and silicon qubit technologies. The green and orange dashed lines for neutral atoms and SC (fixed) quantum computers show the public roadmaps of companies developing the respective technology.
These improved error rates and qubit numbers are predominantly driven by improvements in fabrication methods for quantum hardware, the precision of qubit control and the scalability of quantum-enabling technologies such as electronic components, readout cabling and cryogenics.
But for quantum computers to start outperforming classical supercomputers on useful problems, we will likely need error rates of one in a million (10^-6) or lower. Today’s best machines have error rates of around one in a thousand (10^-3), leaving a daunting gap of at least three orders of magnitude.
There are many methods to help close this significant gap – and some will take us further than others.
Suppression, mitigation and correction
Quantum error suppression, quantum error mitigation and quantum error correction are different schemes to deal with noise in quantum computers.
Quantum Error Suppression (QES) refers to a set of techniques that aim to make qubits less noisy by improving the way they are controlled, anticipating likely errors and adjusting the qubit operations accordingly.
But as algorithms demand more resilient qubits, these schemes become increasingly complex and deliver diminishing returns. QES only takes us so far; beyond that point, quantum error mitigation and quantum error correction become important.
Quantum Error Mitigation (QEM) attempts to reduce errors by adjusting the algorithms to noise and using many trials of the noisy computation to extract a useful signal.
While effective for smaller circuits, the number of trials grows exponentially with the circuit depth and the number of qubits. This limits the impact QEM can have, especially for the bigger algorithms we ultimately want to run on quantum computers.
Therefore, if we want to reach errors below one in a million and unlock the transformative power of quantum computers, we need a way of suppressing errors far more strongly, and one that scales with the size of the system. This is what QEC does for us.
To make QEC possible, we need qubits and operations between those qubits that are good enough, i.e. with noise below a certain threshold. Most qubit modalities are now approaching this threshold (see Figure 3).
Once this condition is satisfied, QEC allows us to suppress errors exponentially as we increase the size of the code.
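To get a feel for what this exponential suppression means in numbers, here is a minimal Python sketch of the heuristic scaling law often quoted for surface-code-style codes. The threshold value and prefactor used below are illustrative assumptions, not measured figures.

```python
# Minimal sketch of the often-quoted heuristic for code-based error suppression:
#   p_logical ~ A * (p / p_th) ** ((d + 1) / 2)
# where p is the physical error rate, p_th the threshold, d the code distance
# and A a fitting constant. All constants here are illustrative assumptions.

def logical_error_rate(p, d, p_th=1e-2, A=0.1):
    """Heuristic logical error rate per QEC round (toy model)."""
    return A * (p / p_th) ** ((d + 1) / 2)

# With physical errors at 1e-3 (below the assumed threshold), increasing the
# code distance suppresses the logical error rate exponentially.
for d in (3, 5, 7, 11, 15):
    print(f"distance {d:2d}: p_logical ~ {logical_error_rate(1e-3, d):.0e}")
```

The key point is that, below threshold, every increase in code distance buys roughly a constant extra factor of suppression, which is what makes error rates of 10^-6, 10^-9 or beyond conceivable.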

Figure 3: Two-qubit gate error rates (best or average) for trapped ions (Ions), neutral atoms, superconducting (SC) and silicon qubit technologies. The green dashed line for neutral atoms shows the public roadmaps of companies developing that technology. The practical QEC threshold is widely regarded as about 99.9% physical two-qubit gate fidelity (i.e. an error rate of 10^-3), which is when quantum hardware companies can start to implement additional QEC techniques. This threshold is examined further in The QEC Report 2024.
Correcting errors
In a classical computer, a repetition code is used to correct errors. This method is not suitable for a quantum computer, but it does introduce some core concepts.
The classical repetition code encodes information by repeating it. For example, with a ‘distance 3’ repetition code: a 0 becomes 000, and a 1 becomes 111. If any of these three bits are corrupted, we can detect the error by comparing the bits with one another.
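As a quick aside, here is a minimal Python sketch of this distance-3 repetition code (names and structure are ours, purely for illustration): encoding copies the bit three times, and decoding takes a majority vote, which corrects any single bit-flip.

```python
# Toy distance-3 classical repetition code: copy the bit, then majority-vote.

def encode(bit):
    return [bit, bit, bit]          # 0 -> 000, 1 -> 111

def decode(bits):
    return int(sum(bits) >= 2)      # majority vote corrects any single flip

codeword = encode(1)                # [1, 1, 1]
codeword[2] ^= 1                    # a single bit-flip error: [1, 1, 0]
print(decode(codeword))             # prints 1 - the error has been corrected
```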
In a quantum computer, we cannot simply read out a qubit's state without destroying it. So we use a workaround: we add extra qubits, called auxiliary qubits, which compare the states of data qubits without ever directly measuring them.
This can be understood through the parity of a set of bits, where the parity of a subset tells us whether it contains an even or odd number of ‘1’ bits. This technique allows us to correct errors without learning the protected logical information.
The semi-circles in the figure below represent the parity of a pair of bits.

Figure 4: A bit-flip error in one of the bits. The numbers below the panels represent the information redundantly stored in the yellow bits, either 0 or 1. Green boxes highlight the bit values affected.
Regardless of the stored information, we have an even parity (00) when there are no errors. However, if there is a bit-flip on the third bit (a 0 becomes a 1), the parity check returns an odd parity (01).
Measuring parity checks enables us to identify the presence of errors without needing to know the stored information. This approach is particularly well-suited to quantum systems, as we do not need to detect errors by directly measuring the data qubits, which would collapse their state.
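The same idea can be sketched in a few lines of Python (a toy illustration of the classical analogue, not of how a quantum device performs these checks). Two parity checks on neighbouring pairs of bits flag a flip on the third bit, while revealing nothing about whether the stored word was 000 or 111.

```python
# Toy parity-check readout on three data bits, matching the example above.

def syndrome(bits):
    # Each check reports the parity (XOR) of an adjacent pair of data bits.
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

data = [1, 1, 1]                    # the stored information (unknown to us)
print(syndrome(data))               # (0, 0): even parity, no error detected

data[2] ^= 1                        # bit-flip on the third bit
print(syndrome(data))               # (0, 1): the second check flags the error
```

Note that the syndrome (0, 1) is the same whether the protected word was 000 or 111: the checks locate the error without reading the information itself.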
In this way, QEC codes are, essentially, lists of checks designed to identify errors in quantum computers.
Arguably, the most mature and well-studied QEC code today is the surface code, represented in Figure 5 and defined by the parity checks it measures. It has strong resistance to errors and only needs qubits arranged in a grid, interacting with their nearest neighbours, making it well-suited to most qubit types.

Figure 5: Numerous physical qubits make up a logical qubit.
The coloured fields in Figure 5 represent a parity check on a set of two or four yellow data qubits adjacent to the field, with the green auxiliary qubits responsible for performing the parity check.
The two types of checks identify ‘phase-flip’ (green) and ‘bit-flip’ (orange) errors. These are the two most common error types on a quantum computer:
- Bit-flip errors (also known as X errors) occur when the qubit state is flipped, e.g. from a |0> to a |1> or vice versa.
- Phase-flip errors (known as Z errors) involve a change to the qubit’s phase sign, i.e. a |1> changes to a -|1>, but a |0> stays as a |0>.
So, we need two types of auxiliary qubits: some which will measure the bit-flip errors and others that will measure the phase-flip errors.
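For readers who like to see this concretely, below is a small NumPy sketch (an illustrative aside; the variable names are ours) of the X and Z operators acting on a single-qubit state vector.

```python
# Bit-flip (X) and phase-flip (Z) errors acting on a single-qubit state
# a|0> + b|1>, represented as the vector [a, b].

import numpy as np

X = np.array([[0, 1], [1, 0]])      # bit-flip: swaps |0> and |1>
Z = np.array([[1, 0], [0, -1]])     # phase-flip: |1> -> -|1>, |0> unchanged

zero = np.array([1.0, 0.0])                 # |0>
plus = np.array([1.0, 1.0]) / np.sqrt(2)    # |+> = (|0> + |1>) / sqrt(2)

print(X @ zero)   # [0, 1]: |0> has been flipped to |1>
print(Z @ zero)   # [1, 0]: a phase-flip leaves |0> untouched...
print(Z @ plus)   # [0.707, -0.707]: ...but turns |+> into |->
```

A phase-flip is invisible on |0> and |1> individually but shows up in superpositions, which is why the surface code needs both types of parity check.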
If we ignore the green (phase-flip) checks in Figure 5 for now, we can examine how errors affect the orange (bit-flip) parity checks.
A single bit-flip error, shown in Figure 6 with an X, causes two parity checks to report an ‘odd’ outcome, represented as the large green balls.

Figure 6: A single bit-flip error ‘X’ causes two parity checks to report the “odd” outcome. Every place where an error can flip a pair of parity checks in this way is represented with an edge (connection) between the checks. These connections form a decoding graph.
The surface code is designed in a way that any single bit (or phase) flip error will be reported by a pair of nearby auxiliary qubits. We can visualise this by connecting all such pairs into a network called the “decoding graph” where the nodes represent the auxiliary qubits detecting the errors and the edges connecting them represent the potential errors on data qubits.
So, when an error occurs in a data qubit, it triggers the connected auxiliary qubits, generating an 'odd' parity outcome.
The decoder then analyses these signals and uses the graph to identify the most probable locations and types of errors. As fewer errors are more likely than many, this can be done by finding the shortest paths along the graph that connect all triggered nodes.
What this means is that we can think of decoding the surface code as a ‘graph matching problem’. This approach simplifies the complex task of error identification and correction in QEC by abstracting away qubit-specific details.
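As a toy illustration of this matching picture, the sketch below uses the networkx library (an assumed dependency; the node and qubit labels are made up) to build the decoding graph of a small repetition-style code, boundaries omitted for brevity, and to pair up two triggered checks via the shortest path between them.

```python
# Toy decoding graph: nodes are parity checks, edges are possible data-qubit
# errors. Two 'odd' checks are paired up by the shortest path between them.

import networkx as nx

G = nx.Graph()
# Checks c0..c3 sit between neighbouring data qubits q0..q4 (boundaries omitted).
for u, v, qubit in [("c0", "c1", "q1"), ("c1", "c2", "q2"), ("c2", "c3", "q3")]:
    G.add_edge(u, v, qubit=qubit, weight=1)

triggered = ["c0", "c2"]                      # checks reporting odd parity
path = nx.shortest_path(G, triggered[0], triggered[1], weight="weight")
correction = [G.edges[u, v]["qubit"] for u, v in zip(path, path[1:])]
print(correction)                             # ['q1', 'q2']: flip these back
```

Real decoders solve the same kind of problem at far larger scale, typically with minimum-weight perfect matching or related algorithms, but the underlying idea is this pairing of triggered checks.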
For instance, in Figure 7 below, a pair of bit-flips next to each other (along a ‘string of errors’) causes a pair of parity checks to become odd. Here, we can again easily identify the shortest path, which now consists of two edges.

Figure 7: The odd-valued parity checks appear at the end of the error string.
However, these figures are simplified versions of the decoding problems encountered in QEC.
As quantum computers scale, so must the surface code – and the number of errors that need correcting increases!
Now, try to spot the solution on the much bigger graph in Figure 8, pairing up the ‘odd parity vertices’ to find the most likely (shortest) path.

Figure 8: As qubit numbers scale and more errors appear, spotting a solution to ’the graph problem’ becomes increasingly difficult.
As we can see, this is extremely difficult. Unfortunately, things are significantly more complicated because we must consider more than just bit-flip errors.
Measuring parity checks can itself give the wrong answer, because errors can also happen on the auxiliary qubits used to read them out. The readout can be wrong due to noise in the electronics, and errors can even happen in the middle of the quantum operations used to measure the parity checks.
When we add all these errors and continuously repeat the parity checks, the decoding takes place on a massive 3D graph, which looks more like this:

Figure 9: A representation of the 3D graph used for decoding.
Here, each slice of the graph is a round of checks; these rounds must be performed continuously so that the qubits never have time to accumulate errors. Remember: nodes are checks, and edges represent all the possible errors we need to be able to correct.
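Continuing the toy networkx sketch from earlier (again purely illustrative), stacking the check nodes round after round and adding ‘time-like’ edges for faulty measurements is what turns the 2D picture into this 3D space-time graph.

```python
# Toy space-time decoding graph: copies of the checks for every round,
# 'space-like' edges for data-qubit errors within a round and 'time-like'
# edges for measurement errors between consecutive rounds.

import networkx as nx

checks, rounds = ["c0", "c1", "c2", "c3"], 3
G = nx.Graph()
for t in range(rounds):
    for a, b in zip(checks, checks[1:]):
        G.add_edge((a, t), (b, t), kind="data error")      # within round t
    if t > 0:
        for c in checks:
            G.add_edge((c, t - 1), (c, t), kind="measurement error")

print(G.number_of_nodes(), G.number_of_edges())             # 12 nodes, 17 edges
```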
Even this 3D graph is an oversimplification: using logical qubits in computation and accounting for more complicated errors requires ever more complex decoding problems to be solved.
In practice, on large-scale devices, the data will be streaming continuously at an extremely high bandwidth, scaling up to 100 TB/s. We also have a limited time to solve the graph problem, as each QEC round must be decoded faster than the next round of QEC data is generated – less than 1 μs for the fastest superconducting qubits. And we must respond promptly at well-determined times to avoid delays that would slow down the computation and lead to an accumulation of errors.
There are four broad challenges that QEC needs to address to be practical:
- Having high-fidelity qubits: Once we reach a qubit fidelity of 99.9% (a 10^-3 error rate), we can introduce a set of classical QEC technologies to solve this complex, vast, real-time data processing problem.
- Developing ever-better decoders: Next, we need to pair high-quality qubits with high-accuracy decoders. QEC involves sophisticated inference algorithms to determine the most likely error that might have occurred, given the measured checks.
- Performing at high speeds and low latencies: Next, to do the computation, each QEC round must be fast (<1μs) and deterministic (respond promptly at well-determined times) to avoid delays that lead to uncorrectable errors. This will likely require dedicated decoding hardware closely coupled to the system controlling the quantum processor.
- Dealing with massive data volumes: Finally, when the system reaches full scale, we need to tackle the data volume problem. These algorithms require an extremely high instruction bandwidth, scaling up to 100 TB/s – a single quantum computer processing and correcting the equivalent of Netflix’s total global streaming data every second.

Figure 10: An overview of quantum error correction in action with a QPU.
In conclusion, the need for quantum error correction in quantum computers is clear. Today's machines suffer from error rates far too high for practical applications.
While improvements in qubit technology and algorithms will help, they alone won't achieve the one-in-a-million error rates needed to surpass supercomputers.
Crossing into the million error-free quantum operations (MegaQuOp) regime marks a pivotal moment in quantum computing, where the power of quantum computers is expected to go beyond the reach of any classical supercomputer. The MegaQuOp is a landmark goal that the whole quantum community is now aiming for.
We can only reach it with QEC.