The technical hurdles are significant, but there is a growing consensus that no fundamental obstacle stands in the way of fault-tolerant quantum computers. That was the overarching theme of a world-expert panel discussion on quantum error correction, the formidable challenge that must be solved to unlock the full power of quantum computing, organised by Riverlane as part of IEEE Quantum Week 2021.
By harnessing the principles of quantum physics to encode and manipulate information, quantum computers have the potential to run powerful algorithms that are not possible on even the fastest classical computers. But implementing these algorithms involves setting up delicate quantum states which are exceptionally sensitive to interference from other parts of the computer and its environment. This means that today’s quantum computers are highly error-prone, a problem which places strict limits on the number of operations that can be performed. For quantum computers to be reliable enough to tackle society’s biggest challenges, from drug discovery to clean energy, they need to be able to detect errors as they occur and correct them, a process called quantum error correction.
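To make the idea concrete, here is a deliberately simplified sketch: a classical three-bit repetition code, where a bit is copied three times and errors are corrected by majority vote. Real quantum error correction is considerably subtler (qubits cannot simply be copied and read out, so codes infer errors from indirect "syndrome" measurements instead), but the copy-and-vote picture captures the basic intuition of detecting and correcting errors as they occur.

```python
import random

def encode(bit):
    # Repetition code: copy the logical bit into three physical bits.
    return [bit, bit, bit]

def apply_noise(bits, p):
    # Flip each physical bit independently with probability p.
    return [b ^ (random.random() < p) for b in bits]

def decode(bits):
    # Majority vote: any single bit flip is corrected.
    return int(sum(bits) >= 2)

# A single flipped bit is caught and corrected by the majority vote.
noisy = apply_noise(encode(1), p=0.1)
print(noisy, "->", decode(noisy))
```

Quantum codes such as the surface code generalise this kind of redundancy so that it protects against both bit-flip and phase-flip errors without ever reading out the encoded information directly.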
At Riverlane, we know that error correction is the grand challenge in quantum computing. We are building the best error correction team in the world; we are tackling error correction across the quantum computing stack as part of our quantum operating system Deltaflow.OS®; and, working with our partners, we are obsessed with implementing quantum error correction across various hardware platforms. When we decided to submit a panel proposal for IEEE Quantum Week 2021, it was a no-brainer: it should be about error correction! Recognising that input from all these communities is needed, we wanted to hear and learn from top academics and industry researchers working on different qubit platforms, as well as on classical computer architectures. Indeed, we had experts on ion traps, superconducting qubits, photonics and topological qubits, with viewpoints spread across hardware, software and error correction code development.
A major theme of the discussion was the challenge of taking error correction from theory to practice. The past twenty years have seen the development of a rich theory of quantum error-correcting codes: methods for encoding quantum information in a way that protects it from interference and noise. But implementing error correction on a real machine, building a so-called fault-tolerant quantum computer, is much harder. “How do you operate [error correction] on a real system? You need to [consider] all the imperfections that come along with actually executing the protocols,” said Naomi Nickerson, who leads the fault tolerance team at quantum computer manufacturer PsiQuantum. “Every gate has error; every measurement has error. We need … a protocol for doing error correction which tolerates faults in every gate.”
Krysta Svore, who leads the quantum software and systems engineering work at Microsoft, agreed, noting similarities with her early work on artificial intelligence, when algorithms that worked beautifully on paper did not hold up so well on real hardware.
But trying things out in practice also leads to opportunities. “We are on the brink of being able to try these [machine learning] heuristics [approaches that work in practice but have not yet been proven theoretically] in quantum error correction,” Svore said, hinting that a combination of deep knowledge of error correction theory and practical engineering ingenuity will lead to the best solutions.
Every hardware implementation also brings its own advantages and challenges for implementing error correction. “For [trapped ion quantum computing] the errors are really from the gates. Most error correcting codes are built assuming your errors come from memory, but that [doesn’t apply here],” said Ken Brown, Professor of Physics at Duke University and a pioneer in ion trap quantum computing.
Paul Gleichauf, Senior Principal Research Engineer at chip design company Arm, might have seemed somewhat of an outsider, coming from the classical computing world. But decoding processes are mainly carried out by classical hardware and must be done fast: it is estimated that a fault-tolerant computer using superconducting qubits would need to complete over 2 million error correction cycles per second. Designing chips for control systems, the component that diagnoses and corrects the errors, is also challenging because more complex chips increase the likelihood of interference with qubits. “With a 5-year lead time on processor development, people generally want Arm to add more features to the processor, but with [error correction] we ideally want to take away features so that we minimise the impact on the qubits,” Gleichauf said.
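A quick back-of-the-envelope sketch shows why that figure is so demanding. The cycle rate below is the one quoted above; the number of syndrome bits handled per cycle is purely an illustrative assumption, not a specification from any real system.

```python
# Rough decoder latency budget, assuming the ~2 million error correction
# cycles per second quoted above (all other numbers are illustrative).
cycles_per_second = 2_000_000
budget_ns = 1e9 / cycles_per_second          # time available per cycle
print(f"Per-cycle budget: {budget_ns:.0f} ns")

# If a hypothetical decoder had to process, say, 1,000 syndrome bits per
# cycle, it would have well under a nanosecond per bit on average.
syndrome_bits_per_cycle = 1_000
print(f"Time per syndrome bit: {budget_ns / syndrome_bits_per_cycle:.2f} ns")
```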
Marco Ghibaudi, VP of Engineering at Riverlane, built on this theme, noting that we need to optimise classical computing whilst considering different hardware requirements to implement error correction at scale: “Even if we find a great … approach for a decoder, we still need to think about feasibility. It might require more classical power than we can squeeze into the fridge!” he said.
The panelists certainly agreed on one thing: many big questions remain unanswered. Which constraints imposed by the various hardware architectures do we need to understand in order to implement error correction efficiently? What are the best codes, and will the answer still hold five years from now? Should we prioritise developing better codes or engineering architectures that can sustain the ones available now?
Will we ever get to a large-scale fault-tolerant quantum computer? Ken Brown is optimistic: “What we see from all of these experiments is that there are many technical challenges but not fundamental ones.”
And there are exciting signs of progress too. John Martinis is a leader in superconducting qubit architectures and was a key part of the team behind Google’s Sycamore quantum chip. A recent experiment from the Google team impressively demonstrated an example of how to do error correction as it would be done on real hardware: doing measurements repeatedly and with lots of qubits. “This is really hard, so they could only actually implement it on a linear chain and only looking at bit flip and phase flip errors separately,” Martinis said. “But as they went to higher order error correction the error rate did go down [exponentially]. That’s what you want from the [error correcting] codes. Not just making it a little better, you need [really low] error rates.”
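A toy calculation illustrates the kind of scaling Martinis describes. It uses the common rule of thumb that, below threshold, the logical error rate falls roughly as (p/p_th)^((d+1)/2) with code distance d; the physical error rate and threshold below are illustrative assumptions, not numbers from the Google experiment.

```python
# Toy model of exponential error suppression with code distance d,
# using the rule of thumb p_logical ~ (p / p_th) ** ((d + 1) / 2).
# The physical error rate and threshold are illustrative assumptions.
p_physical = 1e-3   # assumed physical error rate per operation
p_threshold = 1e-2  # assumed code threshold

for d in (3, 5, 7, 9, 11):
    p_logical = (p_physical / p_threshold) ** ((d + 1) / 2)
    print(f"distance {d:2d}: logical error rate ~ {p_logical:.1e}")
```

The point is the trend, not the particular numbers: once physical error rates sit comfortably below the code’s threshold, each increase in code distance multiplies the protection, which is exactly the “not just a little better” behaviour Martinis highlights.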
One thing that will help us solve error correction faster is the collaboration and cooperation that the quantum computing industry has built into its DNA: delivering functional fault-tolerant quantum machines is not a one-man show; it really will take a village.
With thanks to all of our panelists, and Sophia Economou (Professor of Physics at Virginia Tech) for deftly facilitating the discussion – we look forward to seeing where this exciting field goes next!
Panelists
Sophia Economou, moderator (Virginia Tech)
Krysta Svore (Microsoft)
John Martinis (UCSB)
Ken Brown (Duke University)
Marco Ghibaudi (Riverlane)
Naomi Nickerson (PsiQuantum)
Paul Gleichauf (Arm)