There are two very real requirements for quantum error correction (QEC) at scale: it needs to run in real time, and it needs to run on real hardware.
I'm proud to say Riverlane and Rigetti have taken significant steps towards both in our latest arXiv paper: Demonstrating real-time and low-latency quantum error correction with superconducting qubits.
The paper marks the world’s first low-latency QEC experiment on hardware. We demonstrated low-latency feedback with a scalable FPGA decoder integrated into the control system of one of Rigetti’s superconducting quantum processors.
A slice of lattice surgery
When you want to start applying logical gates and full algorithms, you enter a new era of so-called ‘fast logic’. This has a dual meaning, depending on the type of qubits. It can mean either:
- Lattice surgery in a solid-state 2D architecture (superconducting) using two logical qubits and a Hadamard (logic) gate.
- Transversal CZ gates in a reconfigurable AMO (atomic, molecular, optical) system, between four logical surface code patches, transversal H gates, and logical qubit shuffling.
Both enable the movement of logical information between two or more separated logical qubits.
Superconducting qubits use lattice surgery, shown in the figure below. Here, one logical qubit is represented by a patch of physical qubits. Two logical qubits can interact when you can perform two-qubit gates between the physical qubits along the edges of their patches.
Lattice surgery involves merging logical qubits into a larger logical qubit and then splitting them apart again.
In this paper we focused, essentially, on running experiments on the connecting bar between the qubits (the ‘merge’ stage). This helped us understand a vital portion of lattice surgery and how our decoder responds on real hardware.
(a) 2 × 2 stability patch. White dots correspond to data qubits; black dots correspond to ancilla qubits used to measure the stabilisers. Green-coloured half-disks are weight-2 Z-stabilisers, and the orange square is a weight-4 X-stabiliser. The dotted lines indicate the hardware connectivity needed to execute a stability experiment. (b–d): Spacetime diagrams for (b) a stability experiment, (c) lattice surgery and (d) logical patch moving. Orange sheets represent locations where errors can flip the value of a logical observable; green sheets represent places where an error can cause an isolated defect, therefore allowing error strings to terminate; the dashed line shows an example string of undetectable errors that flip the logical observable.
The stability experiment (b) is a small slice of the full lattice surgery experiment (c), and so can serve as a testing ground for what happens during lattice surgery.
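If you want a hands-on feel for the kind of decoding problem involved, the open-source tools stim and PyMatching make it easy to explore offline. The sketch below is only a rough stand-in: stim has no built-in generator for stability experiments, so it uses a distance-3 surface-code memory circuit instead, and it decodes simulated data in software after the fact rather than live syndrome data on an FPGA as in our experiment. The noise parameters are illustrative, not figures from Rigetti's device.

```python
# Offline decoding sketch with stim + PyMatching (not the paper's FPGA decoder).
# Assumes: pip install stim pymatching numpy
import numpy as np
import stim
import pymatching

# Generate a noisy distance-3 rotated surface-code memory circuit.
# (A stand-in: stim has no built-in generator for stability experiments.)
circuit = stim.Circuit.generated(
    "surface_code:rotated_memory_z",
    distance=3,
    rounds=9,
    after_clifford_depolarization=0.005,
    before_measure_flip_probability=0.005,
    after_reset_flip_probability=0.005,
    before_round_data_depolarization=0.005,
)

# Sample detection events (syndrome data) and the true logical observable flips.
sampler = circuit.compile_detector_sampler()
detection_events, observable_flips = sampler.sample(
    shots=10_000, separate_observables=True
)

# Build a matching decoder from the circuit's detector error model and decode.
matcher = pymatching.Matching.from_detector_error_model(
    circuit.detector_error_model(decompose_errors=True)
)
predictions = matcher.decode_batch(detection_events)

# Count shots where the predicted correction disagrees with the true observable.
logical_errors = np.sum(np.any(predictions != observable_flips, axis=1))
print(f"logical error rate: {logical_errors / len(detection_events):.4f}")
```

In a real-time setting, the same matching problem has to be solved round by round, within the time budget discussed next.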
We performed an 8-qubit stability experiment with up to 25 decoding rounds and a mean decoding time per round below 1μs, showing that we avoid the backlog problem even on superconducting hardware with the strictest speed requirements. We also observed logical error suppression as the number of decoding rounds increased. Finally, we implemented and timed a fast-feedback experiment demonstrating a decoding response time of 9.6μs for a total of 9 measurement rounds. This response time includes all contributions such as control system latencies.
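To see why sub-microsecond decoding per round matters, consider the backlog problem in numbers: if decoding a round takes longer, on average, than the hardware takes to produce a round of syndrome data, undecoded rounds pile up and any feedback has to wait for the queue to drain. A back-of-the-envelope sketch (the round and decode times below are illustrative placeholders, not the paper's measured values):

```python
# Back-of-the-envelope backlog check: does the decoder keep up with the QEC clock?
# The numbers below are illustrative placeholders, not measured values.

def backlog_after(rounds: int, round_time_us: float, decode_time_us: float) -> float:
    """Pending decode work (in microseconds) left after `rounds` QEC rounds,
    assuming a new syndrome round arrives every `round_time_us` and each round
    takes `decode_time_us` to decode."""
    pending = 0.0
    for _ in range(rounds):
        pending += decode_time_us                      # new round of work arrives
        pending = max(0.0, pending - round_time_us)    # time available to work it off
    return pending

# A decoder slightly faster than the round time never builds a backlog...
print(backlog_after(rounds=25, round_time_us=1.1, decode_time_us=0.9))   # -> 0.0
# ...while one slightly slower falls further behind every round.
print(backlog_after(rounds=25, round_time_us=1.1, decode_time_us=1.3))   # -> ~5.0
```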
For Riverlane, this pushes us further along our roadmap, helping us understand how to keep qubits alive indefinitely, i.e. achieve streaming high-fidelity memory, which is the key requirement for our QEC Stack, Deltaflow. We have built a streaming decoder. The next step is demonstrating a decoder with real qubits that both streams and achieves a fast response time.
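To make 'streaming' concrete: instead of collecting all the syndrome data and decoding it once the experiment ends, a streaming decoder consumes rounds as they arrive and keeps a running correction, so the result is available shortly after the final round. The sketch below shows only that control flow; `decode_round` and the syndrome stream are hypothetical placeholders, not Deltaflow's actual interfaces or algorithm.

```python
# Simplified streaming-decoder control flow (placeholder interfaces, not Deltaflow's API).
from typing import Iterable, List


def decode_round(syndrome_round: List[int], state: dict) -> None:
    """Placeholder: fold one round of detection events into the decoder's
    running state (a real decoder would, e.g., grow and merge clusters here)."""
    state["rounds_seen"] += 1
    state["defects"] += sum(syndrome_round)


def stream_decode(syndrome_stream: Iterable[List[int]]) -> int:
    """Consume syndrome rounds as they arrive, instead of decoding after the fact.

    Returns a toy final correction bit to feed back to the control system."""
    state = {"rounds_seen": 0, "defects": 0}
    for syndrome_round in syndrome_stream:   # rounds arrive one at a time from hardware
        decode_round(syndrome_round, state)  # work is done while the experiment runs
    # Toy commit step: a real decoder would return the accumulated logical correction.
    return state["defects"] % 2


if __name__ == "__main__":
    # Stand-in for live hardware: a fixed list of 9 rounds of detection events.
    fake_rounds = [[0, 1, 0, 0], [0, 0, 0, 0], [1, 0, 0, 1]] * 3
    print(stream_decode(fake_rounds))
```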
It’s another exciting step forward and will help unlock the next generation of quantum experiments that go beyond purely keeping logical qubits alive and into demonstrating building blocks of fault-tolerant computation, such as lattice surgery and magic state teleportation.
QEC’s recent surge
It’s been an exciting few weeks in the world of QEC. For example, AWS, Quantinuum and Microsoft have all shared results that push the field further forward.
With reference to our paper, I have to mention Google’s too: Quantum error correction below the surface code threshold. It showed that QEC works in practice, not just in theory - a landmark achievement, and a clear demonstration of sub-threshold quantum error correction with logical error rates below physical error rates.
Google’s paper used a software QEC decoder rather than dedicated decoding hardware, running it in a streaming fashion but with significant latency. This is where the low-latency findings in our paper complement Google’s work.
I look forward to seeing what happens next in the world of QEC.
You can read our full arXiv paper here.