QEC23: Six key takeaways on the state of quantum error correction

10 November, 2023

I’ve just got back from QEC23 in Sydney, the leading global conference on Quantum Error Correction (QEC). Phew, what a week. Whenever the conference came up in conversation, the response was uniformly: "It was great!".

If you missed out (or just want some more insights into quantum error correction), I’m going to give a summary of the themes and emerging trends from the conference.  

It’s quite an extensive list of insights. Here are some of the highlights: 

  1. QEC has arrived: quantum error correction has cemented itself as the core challenge for fault-tolerant quantum computing; 
  2. Neutral atom shock: progress with these qubits is fast and opening up many options for future QEC experiments; 
  3. The unexpected rise of erasure qubits: with three talks on this emerging topic; 
  4. Real progress in real-time decoding: with our head of decoding, Neil Gillespie, explaining our parallel decoding paper and the details of our FPGA and ASIC implementations, plus PsiQuantum’s impressive simulations; 
  5. Circuits not codes: QEC thinking has pivoted to a circuit-centric way of thinking with Floquet codes leading the way; 
  6. qLDPC codes move to the mainstream: but there’s still a long way to go to realise practical experiments. 

Figure 1: QEC23 conference attendees, just outside the conference venue at Darling Harbour, Sydney. 

Location + vibes = An amazing experience 

Before I dive into the science, I just want to mention QEC23's location. Sometimes when you attend a conference, the website showcases beautiful vistas of the local area but the venue itself turns out to be a dull, grey conference centre. 

Doltone House was the exception. Located right on Darling Harbour, it was spectacular and featured an abundant supply of restaurants for those all-important post-conference meals and conversations.  

Figure 2: Doltone House on Darling Harbour, the main venue for QEC23. 

The most striking part of QEC23 was the excitement and joy from everyone I spoke with. It was fantastic to see the community together again and at such an exciting time in the field.    

QEC has been a semi-regular conference, with the last one organised by UCL’s Dan Browne and myself in London in 2019 (recordings are available here).  

Originally, QEC was meant to happen at least once every two years, and Sydney was agreed as the venue for QEC in 2021.    

Of course, we all remember what happened in 2020. Australia’s lockdowns meant a significant delay. But QEC23 was an overwhelming success. One especially enthusiastic attendee told me, with real fire in his eyes, how this was his first conference since QEC19 and that he was blown away by the experience. 

I couldn't agree more - and here are my scientific highlights.

1. QEC has arrived 

The passage of time wasn’t the only factor behind the anticipation surrounding QEC23. The shift from ideas to reality in the field of QEC over the last four years has been monumental, as reflected in the selection of talks and attendees. More than half the invited talks presented results from new QEC experiments.   

QEC has finally arrived. 

The first day included talks from Natalie Brown and Ciaran Ryan-Anderson on the plethora of QEC experiments that they’ve been running on the impressive Quantinuum racetrack device, with a teaser of unpublished/in-progress surface code results.

Figure 3: Natalie Brown presents QEC results from Quantinuum and explores what improvements in physical error rates would be needed to reach break-even. 

On the last day, Mike Newman gave a wonderful talk on the Google "milestone 2" result on suppressing logical errors by making the surface code bigger. The presentation included some neat videos of rare cosmic ray events and an appropriate sprinkling of memes (see my video tweet below).   

One insightful remark from Mike: Google’s qubits are actually not that good in terms of coherence (T1) times, with other superconducting teams routinely achieving 10x-100x better coherence times.   

How, then, is Google leading in superconducting QEC experiments? Google’s researchers have very tuneable qubits, with lots of control parameters, and are extremely proficient at optimising these control knobs. This leaves me hopeful that combining QEC experiment design skills with high-quality device fabrication will lead to much better QEC demonstrations in the next few years. These and the other experimental talks reviewing published results were fantastic, though familiar.   

2. Neutral atoms

The biggest surprise/buzz/(to some degree) shock was sparked by the neutral atom experimental results presented by Dolev Bluvstein of Harvard University.  

Partly, these neutral atom talks were surprising because many of the results have not yet appeared on the arXiv, unlike most of the other experimental talks at QEC23.

The other striking aspect is the pace of progress in neutral atoms, which were barely visible as a platform at QEC19. This progress is grounded in several technological breakthroughs, including advances in atom rearrangement and a new approach to performing two-qubit gates.   

Atom shuffling, in particular, opens up many options for QEC experiments that are unavailable to solid-state devices. There is even hope that neutral atom platforms will realise qLDPC (quantum Low-Density Parity Check) codes. 

On the incoming flight to Sydney, I read the recent Harvard team proposal paper for realising qLDPC codes. To be honest, before the conference I was a bit sceptical. For 20 years I’ve read various claims that architectures would be able to support long-range, high-fidelity gates, without much progress (except in ion traps). 

What I got from QEC23 (that I didn’t get from reading the papers) was video footage of atoms being rearranged to execute QEC codes. These videos are compelling. They can really shuffle tens or hundreds of atoms with incredible precision. 

Of course, every platform has its drawbacks. In these experiments, every qubit was only measured once and destroyed in the process. So, continual atom reloading needs to be developed to achieve repeated QEC, and the readout process could also benefit from being faster. 

To pick a personal favourite, it was cool to see neutral atoms implement "the smallest, interesting colour code" (aka the [[8,3,2]] code), which I wrote a blog post about – one that seems to have been more widely read and cited than some of my papers.   

What’s neat about [[8,3,2]]? It is the smallest code with an exotic property: you can fault-tolerantly implement a Toffoli gate (the reversible building block of one-bit addition) using the code.    

The smallest, interesting colour code was also recently implemented by the Quantinuum team. Some larger codes with similar exotic properties appear in my synthillation paper, and I plan to look back and see which of these might make an exciting demo. Watch this space. 
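If you're curious what this code actually looks like, here is a small numpy sketch (my own illustration, not from the talks): the eight qubits sit on the vertices of a cube, and the stabiliser group is generated by one weight-8 X operator plus weight-4 Z operators on four independent faces. The check at the end verifies that all generators commute under the binary symplectic form.

```python
import numpy as np

# A small sketch of the [[8,3,2]] colour code (my own illustration).
# Qubits live on the vertices of a cube, labelled 0-7 so that the three
# bits of each label are the vertex's coordinates.

def bit(v, i):
    return (v >> i) & 1

# X-type generator: X on all eight vertices (the whole cube).
x_gen = (np.ones(8, dtype=int), np.zeros(8, dtype=int))  # (X part, Z part)

# Z-type generators: weight-4 Z operators on four independent faces, where
# face (i, b) is the set of vertices whose coordinate i equals b.
z_gens = []
for i, b in [(0, 0), (1, 0), (2, 0), (0, 1)]:
    z = np.array([int(bit(v, i) == b) for v in range(8)])
    z_gens.append((np.zeros(8, dtype=int), z))

def commute(p, q):
    # Two Paulis commute iff the binary symplectic form vanishes:
    # x_p . z_q + z_p . x_q = 0 (mod 2).
    return (p[0] @ q[1] + p[1] @ q[0]) % 2 == 0

gens = [x_gen] + z_gens
assert all(commute(p, q) for p in gens for q in gens)
print(f"{len(gens)} generators on 8 qubits -> k = {8 - len(gens)} logical qubits")
```

Five independent generators on eight qubits leave k = 3 logical qubits, and the fault-tolerant Toffoli comes from the code's transversal CCZ gate (CCZ being Toffoli up to Hadamards).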

3. Erasure qubits 

Erasure qubits have the potential to reduce the overheads associated with fault tolerance. The idea is to engineer qubits where the dominant noise is an erasure error – a type of error that takes the qubit out of the computational space and whose occurrence and location can be detected. 

There were three talks on erasure qubits, which is a big increase relative to previous QECs. These included talks from Aleksander Kubica (representing Amazon Quantum) and two talks from Yale University. 

Figure 4: Here is Aleksander Kubica presenting the Amazon Quantum team's work on erasure qubits. On the last day, we moved to this high-ceilinged space, which was also used for the conference banquet. 

Erasure is a type of heralded error, so called because a herald (some form of measurement result) makes it clear which qubits have been erased. This bonus information makes erasure errors easier to decode: a distance-d code can correct up to d-1 located erasures but only ⌊(d-1)/2⌋ unlocated Pauli errors, so we can usually handle twice as many erasure errors as conventional Pauli errors.  

An erasure qubit is any qubit that has been engineered so that erasure errors dominate over non-erasure errors. Some architectures are naturally biased towards erasure errors, such as optical photons, where photon loss causes an erasure error.

Other systems like superconducting qubits are not naturally biased towards erasure but do suffer amplitude damping (T1) processes that convert the |1> state to the |0> state. By using two qubits suffering amplitude damping, one can encode into the |10>, |01> subspace. Amplitude damping events map these states to the |00> state, which can be heralded given a suitable measurement gadget, as the sketch below illustrates.   
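Here is a toy numpy sketch of why this works (my own illustration; the labelling of the logical states is an assumption for concreteness). Starting from a logical state in the {|01>, |10>} subspace, independent amplitude damping on the two rails leaks population into |00>, and that leaked population is exactly the heralded erasure probability.

```python
import numpy as np

# Dual-rail erasure qubit from two amplitude-damping qubits (toy sketch).
# Logical states live in the {|01>, |10>} subspace; a damping event sends
# either state to |00>, which a herald measurement can flag as an erasure.
gamma = 0.1  # amplitude-damping probability per qubit

# Single-qubit amplitude-damping Kraus operators: K0 (no decay), K1 (|1> -> |0>).
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]])

# Logical |0>_L = |01> (a labelling assumption for this sketch).
ket01 = np.kron([1.0, 0.0], [0.0, 1.0])
rho = np.outer(ket01, ket01)  # density matrix of logical |0>_L

# Apply independent damping to each rail.
kraus_pairs = [np.kron(A, B) for A in (K0, K1) for B in (K0, K1)]
rho_out = sum(K @ rho @ K.conj().T for K in kraus_pairs)

# Herald: project onto |00><00|. Its population is the erasure probability.
P00 = np.zeros((4, 4)); P00[0, 0] = 1
print("heralded erasure probability:", np.trace(P00 @ rho_out).real)  # ~ gamma
```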

4. Real-time decoding 

At Riverlane, real-time decoding is a core focus and (I’d argue, of course, that) we’re leading the field here. So, it was great to see a range of talks on real-time decoding, including a talk by our own head of decoding, Neil Gillespie.   

Neil gave an overview of the backlog problem that needs to be solved, mentioning our parallel decoding paper, and then dived into the detail of our FPGA and ASIC implementations. 

During the talk, Neil had a resin-encased version of an ASIC from our older decoder design. You can see it on the podium below; this tiny chip generated a lot of interest and discussion throughout QEC23.  

Figure 5: Neil Gillespie presenting the Riverlane real-time decoding paper. An encased ASIC decoder sits on the podium just in front of Neil. 

The backlog problem and real-time decoding appeared in other talks too: the Alibaba team presented an invited talk on their sandwich decoder, which is very similar to Riverlane’s parallel window approach.   

Sam Roberts of PsiQuantum presented their work on modular decoding that builds on the ideas of parallel/sandwich decoding. The parallel and sandwich decoders both give an explicit schema for parallelisation in time, and Sam sketched how this can be extended to spatial directions.    

What made the PsiQuantum work incredible was that they gave an explicit schema for modularising the decoding problem (cutting it into suitable windows), and they used this technique to simulate decoding of an entire magic state distillation factory. That is a big simulation and very impressive!    

Though the PsiQuantum simulations were purely in software and not in real-time, they help signpost a path towards real-time decoding of large complex algorithms. And I’m confident there is still a lot of room for improvement in the choice of windows and buffer regions. 

Figure 6: A slide from the PsiQuantum talk by Sam Roberts on modular decoding. The slide features a space-time diagram broken up into logical blocks such as the GHZ and GHX blocks, which then form the basis for their windowing and parallelised simulations. 
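To give a flavour of how these windowing schemes fit together, here is a schematic Python sketch (my own toy code, not Riverlane's, Alibaba's or PsiQuantum's actual implementation; `inner_decode` is a hypothetical stand-in for a decoder such as minimum-weight matching run on a single window):

```python
# Schematic sketch of windowed decoding. The syndrome history is cut into
# windows; each window is decoded independently, corrections are committed
# only in the window's core, and the overlapping buffers absorb artefacts
# at window boundaries.

def windowed_decode(syndrome_rounds, inner_decode, core=10, buffer=5):
    n = len(syndrome_rounds)
    committed = []
    # Each iteration is independent of the others, so in a real system the
    # windows can be decoded in parallel across many decoder instances.
    for start in range(0, n, core):
        lo = max(0, start - buffer)          # left buffer
        hi = min(n, start + core + buffer)   # right buffer
        correction = inner_decode(syndrome_rounds[lo:hi])
        # Commit only the core region of this window's correction.
        committed.append(correction[start - lo : start - lo + core])
    return committed
```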

The need for real-time processing was also highlighted in some of the experiments discussed. For instance, the first talk of the conference by Riddhi Gupta presented work on magic state preparation on IBM devices. The magic state preparation protocol requires a classical feedback decision to be performed mid-circuit.   

I learned from this talk that the IBM control system is not ideally suited to this sort of experiment: instead of being handled by fast, deterministic FPGA logic, the feedback instructions had to pass through a slow, non-deterministic CPU. The result was that qubits de-phased slightly while waiting an unpredictable length of time for the classical feedback.  

In this case, this de-phasing was mild enough that it did not significantly degrade the experiment.   

For real-time QEC, we are going to need many more deterministic feedback operations. So, this talk also highlighted that even large companies have a long way to go in making their control systems QEC ready. 

5. Circuits not codes   

Much recent progress in QEC has been driven by a new way of thinking where circuits (and not codes) form the foundation of current research directions. This thinking focuses on circuits that are fault-tolerant and protect logical information, rather than simply measuring the stabilisers of some fixed code. In other words, we’re now thinking in terms of "physical circuits with noisy qubits" being used to simulate "logical circuits with more reliable logical qubits".    

QEC codes are a useful tool for designing good circuits to achieve this simulation. But it is evident that QEC codes have been relied on too much as a pedagogical tool, and I’m excited to see the community turning to a circuit-centric way of thinking. 
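To make the circuit-centric mindset concrete, here is a minimal sketch using stim, Craig Gidney's open-source stabiliser circuit simulator (the layout and noise rate are illustrative choices of mine, not from any talk). A distance-3 repetition-code memory is written directly as a noisy physical circuit, with detectors defined as properties of the circuit rather than of an abstract code:

```python
import stim

# A distance-3 repetition-code memory as a circuit. Qubits 0, 2, 4 are data;
# 1 and 3 are ancillas measuring the parities (0,2) and (2,4).
circuit = stim.Circuit("""
    R 0 1 2 3 4
    X_ERROR(0.01) 0 2 4
    CX 0 1 2 3
    CX 2 1 4 3
    MR 1 3
    DETECTOR rec[-2]
    DETECTOR rec[-1]
    M 0 2 4
    OBSERVABLE_INCLUDE(0) rec[-1]
""")

# Sample the detector outcomes a real-time decoder would consume.
sampler = circuit.compile_detector_sampler()
print(sampler.sample(shots=5))
```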

So-called Floquet codes are leading this new approach. Despite the name, they are dynamical codes whose logical operators evolve over time. Hastings and Haah provided a theoretical framework for Floquet codes, and that framework is proving to be fertile ground for new research directions: we saw this reflected in no fewer than four talks on Floquet codes!   

The main advantage of Floquet codes is that they can be realised on devices with relaxed hardware constraints such as hexagonal or heavy-hex lattices, which is especially appealing for superconducting qubits, spin qubits and (possibly) Majorana qubits. 

We also heard from Matthew McEwen about Google’s work on relaxed hardware constraints, which presented modified circuits inspired by the surface code but improved through a trial-and-error tinkering process. These modifications can be realised with less connectivity. There are even versions that natively use iSWAP gates instead of CZs as the two-qubit gates, which is preferable as iSWAP gates suffer less leakage than their CZ counterparts.   

Figure 7: Closing slide from Matthew McEwen's (Google) great talk on relaxing hardware requirements for QEC. Do circuits, not just codes! 

McEwen highlighted the role of Crumble (another Gidney-produced SW tool) when trying to get the circuits right. Unlike the Floquet codes, these circuits don’t yet have a good theoretical underpinning; rather, McEwen took a ‘tinker and see what happens’ approach. Open questions remain: why do these circuits work? And is there a more systematic approach to finding other and, potentially, better fault-tolerant circuits? 

I hope we’ll see an experimental realisation soon from the Google team and I’m eager to see if it will outperform the usual surface code approach! 

6. Quantum Low-Density Parity Check codes 

The words qLDPC hung in the air throughout the whole conference in a way they never have before.  The idea of qLDPC codes is almost as old as the field of QEC itself.  But since QEC19, there have been a number of breakthrough results for qLDPC codes.  

Several speakers began their talks by introducing the surface code, followed by remarks such as: "And here is the surface code, everyone’s favourite code, but maybe not anymore...".  Why not anymore? The speakers barely needed to explain to this audience – we all know that there are many contenders trying to steal surface code’s crown (qLDPC included). 

Notable qLDPC talks included Pavel Panteleev on their breakthrough "good qLDPC" code results, Nikolas Breuckmann on connections between qLDPC codes and complexity theory, and several talks on how to do logic with qLDPC codes. 
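For a taste of how qLDPC codes are constructed, here is a short numpy sketch (my own illustration) of the hypergraph product of Tillich and Zémor, an early qLDPC construction and a precursor of the lifted-product codes behind the "good qLDPC" breakthrough. Any two classical parity-check matrices yield commuting CSS check matrices; feeding in two 3-bit repetition codes recovers the 13-qubit surface code:

```python
import numpy as np

# Hypergraph product: build quantum CSS check matrices HX, HZ from two
# classical parity-check matrices H1, H2. The X and Z checks commute by
# construction, since HX @ HZ.T = 2 * (H1 kron H2.T) = 0 (mod 2).

def hypergraph_product(H1, H2):
    r1, n1 = H1.shape
    r2, n2 = H2.shape
    HX = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(r1, dtype=int), H2.T)])
    HZ = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                    np.kron(H1.T, np.eye(r2, dtype=int))])
    return HX % 2, HZ % 2

# Two copies of the 3-bit repetition code give the [[13,1,3]] surface code.
H_rep = np.array([[1, 1, 0],
                  [0, 1, 1]])
HX, HZ = hypergraph_product(H_rep, H_rep)
assert not ((HX @ HZ.T) % 2).any()  # every X check commutes with every Z check
print(HX.shape, HZ.shape)           # (6, 13) (6, 13): 12 checks on 13 qubits
```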

While there was an energy around qLDPC codes like never before, they are still quite far behind on the experimental front. I predict that it will be some years before we see a demonstration.   

Many talks also ended with a list of open problems, and they always included bullet points like: "How to do logic in qLDPC codes" and "How to implement in a realistic architecture". 

In Pablo Bonilla’s talk (Harvard), we saw a proposal for qLDPC codes in which logic is performed using our old friend the surface code. Looks like it’ll be clinging onto that crown for a few more years. 

qLDPC codes have been progressing solidly for the last 20 years, and it is exciting to see them move into the mainstream. However, there are many super-interesting academic questions to be answered before we fully understand how qLDPC codes would work in practice. 

What next? 

Fate allowing, QEC will happen every one or two years and the next venue is not (yet) decided.  Given that it takes more than a year to plan one of these events, the next QEC will likely be in 2025.    

The steering committee is looking for expressions of interest to host the next QEC. If interested, you should get in contact with Stephen Bartlett. Given the big North American presence at QEC, and that the last American QEC was in 2017 in Maryland, I expect the next event will be in North America. 

If you can’t wait two years, I suggest checking out the FTQT 2024 workshop: https://www.benasque.org/2024ftqt/ - I’ll be there and expect to see many other familiar QEC faces attending too. 

Lastly, you might be interested to follow me on Twitter/X. For QEC23, I tweeted like I’d never tweeted before, and I did my best to give all-round coverage. I will confess that my tweet energy decreased through the week and coverage got a bit patchy. That’s why I wanted to condense all of my insights into this blog post. 

There’s also the Riverlane Twitter account (with other researchers on the team now tweeting regularly about their work and other quantum events) — and if you’re interested in joining the team and building the world’s leading Quantum Error Correction Stack, you can find out more about working at Riverlane here. 
