
2024's Quantum Error Correction Highlights (aka the 12 Days of QEChristmas)

19 December, 2024

I couldn’t let what was, arguably, quantum error correction’s biggest year slip by without celebrating the progress the community has made in 2024.  

So, I hope you can indulge a little festive cheer, as I’d like to count down 12 significant announcements from the world of quantum error correction (QEC), based on one of my favourite Christmas carols... 

All together now...

On the twelfth day of QEChristmas, my quantum computer gave to me...

Twelve logical qubits... 

...achieving error correction in a September demonstration from Microsoft and Quantinuum, the first to beneficially combine computation and error correction. 

Quantinuum and Microsoft also claimed logical error rates 800 times lower than physical error rates on a four-logical-qubit trapped-ion device in April, although this relied on post-selection techniques that do not scale as well as standard error correction.  

Eleven roadmaps mapping... 

...towards the same goal. Eleven quantum hardware companies (at last count) have now released public roadmaps showing their planned approach to scaling up quantum computing with QEC at their heart. These roadmaps include: Alice & Bob, Google, IBM, Infleqtion, IonQ, IQM, Pasqal, Quandela, Quantinuum, QuEra and Rigetti.  

2024 was a turning point: these companies shifted from targeting physical qubits to logical qubits, with every roadmap now including quantum error correction. These companies predict deploying real-time QEC capabilities by 2028 at the latest. 

Ten(th) fewer qubits... 

...needed thanks to a qLDPC error-correcting code from IBM that, for quantum memory, performed as well as established error correction protocols (but crucially needed only about one-tenth of the qubits). This work saw QEC on the cover of Nature.  

Ben Barber, a staff quantum scientist at Riverlane and one of the authors behind our Ambiguity Clustering qLDPC decoder (see the third highlight), said: “This really raised the profile of qLDPC codes and their potential to reduce qubit overheads – but lots of details of decoding and computation still need to be worked out.” 

Nines never-ending... 

The world's quantum hardware companies have been working to improve the quality of qubits for decades, and the figure below shows how this improves over time. An interesting point to note is that this progress is happening across all the major qubit types – a few years ago we expected one to race ahead of the competition, but this is not currently the case. 

We're now at a point where qubits are crossing the 'practical QEC threshold', beyond which there are diminishing returns for reducing physical error rates further. This threshold sits at a physical two-qubit gate fidelity of 99.9% - the 'three nines' - and it is here that an additional layer of classical QEC technologies is required to detect and correct errors and allow quantum computers to scale. 

However, many engineering challenges lie ahead to demonstrate such performance at scale and enable effective QEC on large-scale quantum computers.  
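To see why crossing this threshold matters, consider the standard below-threshold scaling heuristic for surface-code-style QEC: the logical error rate falls roughly as p_L ≈ A(p/p_th)^((d+1)/2), so once the physical error rate p is below the threshold p_th, increasing the code distance d suppresses logical errors exponentially. A minimal sketch (the constants A and p_th here are illustrative placeholders, not measured values):

```python
# Illustrative below-threshold scaling for surface-code-style QEC:
#   p_L ~ A * (p / p_th) ** ((d + 1) / 2)
# A and p_th are placeholder values for illustration, not measured constants.
def logical_error_rate(p, d, p_th=0.01, A=0.1):
    """Approximate logical error rate for physical error rate p and distance d."""
    return A * (p / p_th) ** ((d + 1) // 2)

# Below threshold (p < p_th), each step up in distance multiplies the suppression:
for d in (3, 5, 7):
    print(d, logical_error_rate(p=0.001, d=d))  # shrinks rapidly with d
```

Note the flip side: above threshold (p > p_th) the same formula grows with d, so adding qubits makes things worse. That is why pushing physical fidelities past the three nines, then switching effort to the classical QEC layer, is the economical path to scale.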

Figure 1: Two-qubit gate error rates (best or average) for trapped ions (Ions), neutral atoms, superconducting (SC) and silicon qubit technologies. The green dashed line for neutral atoms shows the public roadmaps of companies developing that technology.

Eight qubits stabilising... 

In October, Riverlane demonstrated fast feedback and real-time decoding with a scalable FPGA decoder integrated into Rigetti’s superconducting QPU’s (quantum processing unit’s) control system. This was the world’s first low-latency QEC experiment.  

We performed an 8-qubit stability experiment with up to 25 decoding rounds and a mean decoding time per round below 1μs, showing that we avoided the backlog problem even on superconducting hardware with the strictest speed requirements. We also observed logical error suppression as the number of decoding rounds increased, which is the hallmark feature of stability experiments. 
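The backlog problem mentioned above can be illustrated with a toy queue model (the numbers below are made up for illustration, not the experiment's figures): syndrome data arrives once per measurement round, and if the decoder's mean time per round exceeds the round duration, undecoded data piles up without bound, stalling any logic that depends on decoder feedback.

```python
# Toy illustration of the "backlog problem" (assumed numbers, not measured data).
# Syndrome rounds arrive every round_time_us; the decoder clears rounds at a
# rate of round_time_us / decode_time_us rounds per arrival interval.
def backlog_after(n_rounds, round_time_us, decode_time_us):
    """Rounds still waiting to be decoded after n_rounds of data have arrived."""
    backlog = 0.0
    for _ in range(n_rounds):
        backlog += 1  # one new round of syndrome data arrives
        backlog = max(0.0, backlog - round_time_us / decode_time_us)
    return backlog

# Decoder faster than the arrival rate: the queue stays empty.
print(backlog_after(1000, round_time_us=1.0, decode_time_us=0.8))
# Decoder slower: the queue grows linearly with experiment length.
print(backlog_after(1000, round_time_us=1.0, decode_time_us=1.25))
```

Superconducting qubits have the shortest round times of any platform (around 1μs), which is why sub-microsecond mean decode times are the relevant bar there.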

Finally, we implemented and timed a fast-feedback experiment demonstrating a decoding response time of 9.6μs for a total of nine measurement rounds. This response time included all contributions such as control system latencies. 

Seven septillion times faster... 

I couldn’t not mention Google’s Willow chip, a fantastic result for the team at Google that they demonstrated through two achievements. 

For me, the biggest achievement was the clear evidence of operating well below the QEC threshold, though the second achievement - a 10-septillion-times speedup - was the highlight that caught the media headlines. 

A point to note, though: the chip isn’t actually 10 septillion times faster than the best supercomputer. It performed a computation that, by Google's estimation, would have taken the most advanced supercomputer 10 septillion years to complete. But the computational task was literally to run a random sequence of operations, and no such dramatic speedup has been demonstrated for a practical computation. 

Google’s been very active in the QEC space this year, and I also wanted to mention a result that hasn’t had as much press as Willow. 

T-state cultivation, as presented in the paper Magic State Cultivation: Growing T States as Cheap as CNOT Gates, is a process for efficiently preparing high-fidelity |T> magic states, which are crucial for fault-tolerant quantum computing. It refines existing ideas to make magic state preparation cost-effective and scalable within surface code frameworks.  

Compared to traditional methods like magic state distillation, cultivation requires fewer resources and achieves logical error rates as low as 10^-9 to 10^-11 under typical noise conditions. This efficiency could potentially eliminate the need for additional distillation steps in practice. 

Six(ish) syndromes extracting... 

This year, there were several papers (not six...and I feel the song link is getting tenuous now) on the topic ‘how many rounds of syndrome extraction do you need per logical operation when performing transversal logic?’.  This has been a topic of discussion during conference coffee breaks for many years, and it is exciting to see concrete results and papers start to come through. 

Harvard and QuEra sparked a lot of conversation in the field (and at Riverlane) claiming that, indeed, transversal logic could be implemented at speed with similar conclusions coming from a Universal Quantum and Google team. A team at Yale and Duke took a different stance.  

This is one of those rare cases where there is a fair bit of disagreement on how to interpret simulation results, and I had an interesting panel discussion on this topic at the Simons Institute for the Theory of Computing in October. Watch this space! 

Five cats coding! 

In September, Amazon Web Services (AWS) implemented a distance-5 repetition code just below the threshold with five cat qubits and four auxiliary transmon qubits.  

Having previously worked at AWS when this experiment was at an early stage, I’m super happy to see my ex-colleagues' efforts come to fruition. Congrats! 

This scheme broadly follows a 2019 proposal by Jérémie Guillaud (Alice & Bob’s chief of theory) and Mazyar Mirrahimi (director of research at Inria Paris), and was also inspired by work on bosonic codes at Yale University. The idea is to use a carefully engineered cat qubit that is naturally robust against bit-flip errors and then handle phase errors using conventional QEC codes.  

The current AWS approach diverges from the original vision of cat qubits in that it uses a mixture of cats (as data qubits) and conventional transmons (as auxiliary qubits). The transmons make it easier to realise two-qubit gates, enabling this landmark result, though at the expense of limiting how far bit-flips can be suppressed.   

It will be interesting to see how this approach plays out against pure cat or pure transmon architectures. 
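The division of labour described above can be illustrated with a toy model (my sketch, not AWS's actual decoder): if bit-flips are suppressed by the cat qubits themselves, the repetition code only has to correct one error type, and a distance-5 code can be decoded by simple majority vote, correcting up to two errors.

```python
# Toy distance-5 repetition code correcting a single error type (illustrative
# only): cat qubits suppress bit-flips in hardware, so the code layer handles
# just the remaining flips. Majority vote corrects up to (d - 1) // 2 = 2 errors.
def encode(bit, distance=5):
    """Encode one logical bit as `distance` copies."""
    return [bit] * distance

def flip(codeword, positions):
    """Apply errors at the given qubit positions."""
    return [b ^ (i in positions) for i, b in enumerate(codeword)]

def majority_vote(codeword):
    """Decode by taking the majority value across the codeword."""
    return int(sum(codeword) > len(codeword) // 2)

word = encode(0)
print(majority_vote(flip(word, {1, 3})))      # 2 errors: corrected back to 0
print(majority_vote(flip(word, {0, 2, 4})))   # 3 errors: a logical failure
```

A repetition code cannot correct both error types at once, which is exactly why the hardware-level bit-flip suppression of the cat qubits is what makes this architecture so qubit-efficient.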

Four X reducing... 

In a new arXiv paper, Local Clustering Decoder: a fast and adaptive hardware decoder for the surface code, the team at Riverlane presented an FPGA implementation of our Local Clustering Decoder (LCD). This solution balances both the accuracy and speed required to create a real-time decoder, paving the way for a million error-free operations (aka the MegaQuOp - more on that shortly). 

Essentially, the LCD reduces the number of physical qubits required to support a logical qubit by four times (4x) under a leakage-dominated noise model, while decoding in under 1μs on real hardware. 

The LCD will form the heart of Deltaflow 2, representing a major step forward on Riverlane’s roadmap. We are now integrating the LCD into Deltaflow 2 at our existing partner labs, and it will be available in new installations in early 2025. 

(Zero point) Three% error rates... 

OK, I’m starting to think this 12 Days of QEChristmas link is getting even more tenuous now. But let’s carry on... 

The Riverlane team presented the world’s best qLDPC decoder this year. Ambiguity Clustering is a drop-in replacement for BP-OSD. In tests on IBM’s recent bivariate bicycle codes, we saw up to a 27x speedup at a realistic 0.3% circuit-level error rate. We can decode IBM’s Gross code on a laptop in 135μs per round of measurements - already fast enough to keep up with neutral atom and ion trap systems. 

In fact, forgive a moment of indulgence (it is Christmas, after all) but the Riverlane team really has published some brilliant papers on decoding over the last 12 months.  

In May, for example, teams from Riverlane and TU Delft tackled leakage-aware decoding. Leakage is a type of noise where a qubit stops being a qubit and jumps to a higher energy state. This decoder set new records in handling this type of noise, yielding a big reduction in the estimated QEC overhead. 

I also wanted to highlight a recent arXiv paper To reset, or not to reset – that is the question, which reveals that, as we move from memory to logical operations, our previous assumptions around resetting qubits are incorrect. 

While this post is a retrospective, the Riverlane team is always looking forward to a QEC future where we move from memory to logic experiments - not to mention making QEC work in hardware, not just software, which is vital to reach the MegaQuOp. Speaking of which... 

Two MegaQuOp moments... 

...from the quantum community. The QuOp has, for a long time, been how Riverlane has benchmarked the Deltaflow QEC Stack.  

In 2024, the QuOp (error-free quantum operation) and, more specifically, the MegaQuOp (one million error-free quantum operations) have gained momentum within the quantum community. To give one example, the MegaQuOp is the over-arching goal of Riverlane’s roadmap, released in July. 

During December’s Q2B conference, John Preskill, Professor of Theoretical Physics at Caltech, also highlighted its significance. In his talk, he described a MegaQuOp machine as "a compelling challenge for the quantum community," emphasising that progress will require innovation across all levels of the stack and offering an opportunity for co-design. 

A few years ago, John coined the term NISQ, reshaping the entire quantum computing field and its research direction. We can expect MegaQuOp-scale QEC and applications (aka early fault-tolerant quantum computing) to become massive in 2025. 

And a ‘QEC Era’ under the tree!

In October, Riverlane announced ‘The QEC Era is here’ in its inaugural industry report on its specialism, quantum error correction.  

It’s a contentious question - in our recent webinar, two of the report’s 12 expert interviewees gave conflicting views on what we’ll see in the next year, alongside my own predictions. (Disclaimer: this was filmed in November.) 

Are we in the QEC Era? I would say yes. And whether you agree or disagree that The QEC Era is here, it’s clear that 2024 has put error correction into the spotlight.  

You’ll have to read the full QEC Report 2024 to decide whether you agree with this proposal – or maybe find out more about what will, I predict, be quantum’s biggest talking point not just for 2025 but for the next few years: quantum error correction. 

Merry QEChristmas everyone! 

