by Earl Campbell and Maria Maragkou
Quantum computing is at an inflection point. Innovation and investment are accelerating, driven by the pivot from the first successful generation of noisy, intermediate-scale quantum computers to a new generation of machines that integrate error correction technology.
Collectively, the quantum computing community is setting a path to large ‘fault tolerant’ systems – and making faster progress than most predicted.
This optimism is spurred by a series of breakthroughs in qubit quality, algorithms and quantum error correction, as well as bold roadmaps from our many quantum computing hardware partners and the world’s governments.
Most coalesce on the year 2035 for unlocking fully fault-tolerant quantum computing. Increasingly, this is defined by a single metric: the TeraQuOp, i.e. a trillion (tera) reliable Quantum Operations (QuOps). At that scale, we begin unlocking a range of applications across science and engineering, opening a new age of human progress.
There are critical milestones along the way. The next major one is a MegaQuOp quantum computer, i.e. one that performs one million reliable quantum operations.
At a MegaQuOp, a quantum computer can run simulations that no supercomputer can. Will this unlock McKinsey’s predicted $1 trillion+ economic value from quantum computing? No, not yet.
It’s hard to say for sure what applications we’ll unlock at the MegaQuOp until such a system is in the hands of innovators. But what’s undeniable is the industry can’t push toward Giga and, ultimately, TeraQuOp systems without first reaching the MegaQuOp threshold.
We’ve set ourselves the goal of building the first prototype of a MegaQuOp-scale QEC stack by the end of 2026. We are calling the final product Deltaflow Mega, with intermediate releases every year.
There are many reasons for our rational exuberance. Recently, we’ve seen a series of fast-paced and impressive demonstrations of quantum error correction. Breakthroughs from Quantinuum, ETH Zürich, Google, Harvard University, Yale University, IBM, Microsoft, Alice & Bob, and Riverlane (to name a few) have pushed the quantum error correction field further forward than anyone anticipated.
We’ve seen quantum error correction starting to extend the time a single logical qubit can stay alive. And we’ve seen impressive demonstrations of logical operations on small collections of qubits. These demonstrations (and others) are technical masterpieces, achieving an exquisite control of nature coupled with a scale and precision that some people doubted would ever be possible.
Yet, none of these demonstrations has integrated the fast, scalable, real-time decoding needed for error-corrected quantum computation. Indeed, it was only recently that Riverlane cracked the fast-decoding problem, meeting the MHz speed requirements of the fastest qubit types.
When will such technology be demonstrated with real qubits?
Soon. (Watch this space.)
Enter the MegaQuOp
To understand the MegaQuOp fully, let’s introduce the notion of quantum operations, or QuOps, where QuOps = number of logical qubits (N) x depth of logical operations (D).
QuOps serve as a proxy for a quantum algorithm’s complexity, and a machine’s maximum QuOp capacity is a useful measure of its power. In this, the QuOp plays a similar role to the FLOPs commonly used to rank the relative processing power of supercomputers.
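As a worked example (illustrative arithmetic only, using the N x D definition above), a QuOp count is just the product of these two numbers:

```python
def quops(logical_qubits: int, logical_depth: int) -> int:
    """QuOps = number of logical qubits (N) x depth of logical operations (D)."""
    return logical_qubits * logical_depth

# One MegaQuOp workload: 100 logical qubits through 10,000 logical steps.
assert quops(100, 10_000) == 1_000_000
# The 2035 TeraQuOp target is a million times larger, e.g.:
assert quops(1_000, 1_000_000_000) == 10**12
```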
In a similar vein, we can rank quantum computers by the number of QuOps they can achieve. This levels the playing field compared with metrics such as quantum volume, because QuOps take into account the range of factors needed to achieve fault tolerance, building in the inherent need for every quantum computer to correct errors at both speed and scale.
The MegaQuOp scale is up to one million operations, with 50–100 logical qubits and 1,000 to 10,000 steps of logical depth. To push logical error rates low enough, we’ll likely need a few hundred physical qubits per logical qubit. So, a MegaQuOp device would need on the order of tens of thousands of physical qubits.
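A back-of-the-envelope check on these numbers, using the textbook surface-code scaling p_L ~ 0.1 (p/p_th)^((d+1)/2) and roughly 2d² physical qubits per distance-d patch. Both are standard assumptions made here for illustration, not Riverlane's internal resource model:

```python
def distance_needed(suppression_exponent: int) -> int:
    # Assume a physical error rate p = 1e-3 against a threshold p_th = 1e-2,
    # so p/p_th = 0.1 and each +2 in code distance d suppresses the logical
    # error rate p_L ~ 0.1 * (p/p_th)**((d+1)/2) by another factor of ten.
    # We want p_L <= 10**-suppression_exponent.
    halves = suppression_exponent - 1  # extra factors of ten the code must supply
    return 2 * halves - 1              # smallest odd d with (d + 1) / 2 >= halves

# A MegaQuOp needs each logical operation to fail with probability ~1e-7
# or better, so that a million of them together succeed with good probability.
d = distance_needed(7)
per_logical = 2 * d * d        # data + measurement qubits of a distance-d patch
total = 100 * per_logical      # 100 logical qubits

assert d == 11                 # a modest code distance
assert per_logical == 242      # "a few hundred physical qubits per logical qubit"
assert total == 24_200         # "tens of thousands of physical qubits"
```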
These qubit counts also line up with the projected roadmaps of many of our hardware partners. It’s an ambitious but feasible goal for the whole quantum ecosystem, and a landmark that the whole community can aim for. Simulators tracking the full wavefunction of deep circuits have not exceeded roughly 40 qubits. Higher qubit numbers can be simulated on supercomputers only in very special settings: shallow circuits with a structure amenable to tensor-network simulation, or circuits with very few (at most hundreds of) non-Clifford gates.
However, the combination of over 50 logical qubits and over a thousand non-Clifford gates would put a MegaQuOp-scale device comfortably outside the supercomputing regime.
In other words, crossing into the MegaQuOp regime presents a pivotal moment in quantum computing, where the power of quantum computers goes beyond the reach of any classical supercomputer.
So, how are we going to get there?
From memory to universal logic
Deltaflow will catalyse the transition from small-scale noisy quantum machines to error-corrected quantum computers that can sustain logical operations with universal gate sets.
As the QPUs scale and mature, Deltaflow 2 will keep logical qubits alive for an indefinite amount of time, supporting streaming quantum memory.
In 2025, Deltaflow 3 will cross the barrier to streaming logic, supporting a broad set of operations from the Clifford gate set. It will support Clifford operations on a few (2–4) logical qubits, using up to 1,000 physical qubits.
Deltaflow Mega will unlock the first operations that cannot be efficiently simulated with classical supercomputers, specifically by supporting non-Clifford logic for the first time.
This is an important step. In many quantum error correction schemes, Clifford gates are cheap: they require few resources to implement fault-tolerantly. Non-Clifford gates, by contrast, are quite costly (in both qubits and operations) when fault tolerance is required.
In other words, achieving non-Clifford logic is one of the keys to unlocking fault tolerance.
The decoder sits at the heart of Deltaflow
In 2023, Deltaflow 1 solved the backlog problem: if you don’t process quantum error correction data fast enough, you are forced to exponentially slow down your quantum computation.
While solving the backlog problem is an important step forward, we also need to extend the stability and memory functionalities in time. In other words, the decoding process must happen continuously and in tandem with the qubits performing QEC.
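A toy queue model (not Deltaflow's actual scheduler) makes the failure mode concrete: if the decoder clears syndrome rounds even slightly slower than the qubits produce them, unprocessed data piles up without bound, and once logic gates must wait on decode results this lag snowballs into the exponential slowdown of the backlog problem.

```python
def backlog_after(rounds: int, decode_per_round: float) -> float:
    """Rounds of syndrome data still waiting after `rounds` QEC cycles.

    `decode_per_round` is how many rounds the decoder can clear in the
    time the qubits generate one new round.
    """
    backlog = 0.0
    for _ in range(rounds):
        backlog += 1.0                             # one new round arrives
        backlog -= min(backlog, decode_per_round)  # decoder drains what it can
    return backlog

slow = backlog_after(1_000, decode_per_round=0.8)  # falls ~200 rounds behind
fast = backlog_after(1_000, decode_per_round=1.0)  # keeps pace: backlog stays 0
```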
To achieve this, a streaming decoder must break the syndrome data into batches, called windows.
To ensure smooth real-time operation, these windows cannot be completely independent: they must overlap, and the decoder needs extra functionality to reconcile corrections across the overlaps.
Our sliding-window approach allows the decoder to keep pace with the syndrome data while using only one decoder instance at a time. Deltaflow 2 will include our next-generation Local Cluster Decoder (LCD), which will be more accurate, scalable and flexible than its predecessor, Collision Clustering, included in Deltaflow 1.
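The batching of syndrome data into overlapping windows can be sketched in a few lines. This is illustrative slicing only; the function name, window sizes and the note on committing/deferring corrections are assumptions, not Deltaflow's implementation:

```python
def sliding_windows(stream, window, overlap):
    """Split a stream of syndrome rounds into overlapping windows.

    Each window shares `overlap` rounds with the next so that error chains
    crossing a window boundary can be matched consistently; a real streaming
    decoder commits corrections in the older part of each window and defers
    the overlap region to the next one.
    """
    step = window - overlap
    for start in range(0, len(stream) - overlap, step):
        yield stream[start:start + window]

rounds = list(range(10))  # ten rounds of syndrome data, labelled 0..9
wins = list(sliding_windows(rounds, window=4, overlap=2))
# wins == [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```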
Our sliding-window decoding method is currently being implemented on FPGAs to enable streaming decoding, along with an additional feature to support the movement of QEC data.
The Riverlane team is always investigating ways to make decoders faster, more accurate and more reliable for subsequent releases. We will continue to improve decoder accuracy to achieve higher rates of error suppression as we scale up the number of physical qubits used per logical qubit.
Our goal is to achieve a MegaQuOp using fewer physical qubits. Several features will contribute to these improvements, including correlated decoding (accounting for so-called correlated errors); adaptive decoding in response to additional data (such as leakage); and tailoring to the special noise types of differing qubit modalities.
As mentioned, leakage is another type of noise that we must account for. This is where a qubit stops being a qubit and jumps to a higher energy state. It’s not a 0 or a 1; it’s a 2.
A recent paper from Riverlane exploits some of the additional information a qubit returns to reduce leakage. As a result, the next generation of our decoders will set new records in handling leakage, leading to a significant reduction in the estimated overhead for QEC.
We are also exploring qLDPC codes which, although less mature, require fewer physical qubits than the surface code.
We recently developed our proprietary Ambiguity Clustering decoder for qLDPC codes. Early results show it is 150x faster than the industry-standard BP-OSD method. Ambiguity Clustering is now available in our analytics tool, QEC Explorer, and further exciting work on qLDPC codes continues. You can find out more about how we’re developing Deltaflow across every qubit type here.
Tight integration with the control systems
Deltaflow is a modular solution, working across different qubit modalities and adaptable to the needs (and hardware maturity) of individual systems. A certain degree of bespoke interfacing may therefore be required between the control system and the decoder, down to the trigger pulses and digital readout signals that carry the QEC data.
Decoding requires that the QEC data (syndromes and, for example, any required leakage information) is transferred from the qubits to the decoder. Streaming decoding will require tight integration between the decoder and components of the control system. To achieve this, we have worked with our partners to model the noise characteristics of their quantum machines, so that stability and memory experiments can be simulated. We can also interface Deltaflow 1 with their control systems.
Going further, we believe that it is crucial to define a clear border between the control system and Deltaflow, while meticulously designing the communication channels and digital readout channels between the two. Our goal is to increase interoperability as well as the speed of innovation for stakeholders across the whole ecosystem.
Computation beyond simulation with supercomputers
With the release of Deltaflow 3, our customers and partners can demonstrate perpetual logical operations with Clifford gates.
In other words, we will enter the regime of streaming logic for the first time.
This so-called ‘fast logic’ means different things for different qubit types. It can mean either:
 Lattice surgery in a solid-state 2D architecture (superconducting), using two logical qubits and a Hadamard (logic) gate; or
 Transversal CZ gates in a reconfigurable AMO (atomic, molecular and optical) system between four logical surface-code patches, with transversal H gates and logical qubit shuffling.
We will deliver the world’s first real-time demonstration of lattice surgery with two logical qubits on solid-state qubit architectures, and the first demonstration of transversal gates and logical qubit shuffling in AMO systems.
Both milestones enable the movement of logical information between two or more separated logical qubits, which has never been shown before. It will constitute the most advanced demonstration of the potential of quantum computers to overtake classical supercomputers.
To support this extremely complicated functionality, we will develop a new programming language and level of abstraction to orchestrate and execute such logical operations. Fast logic with either lattice surgery or transversal gates will include the sliding-window feature, and so will run in real time.
However, we need one more magic ingredient to increase the power of quantum computers: as a minimal requirement, we must support a noisy T-gate to achieve universality.
Our current work on software implementations of lattice surgery and Hadamard logic (without windowing) will be extended and included in the Deltaflow Mega release, introducing universal quantum computation based on the surface code and deployed on a 2D architecture.
Precise orchestration of massive data loads
As quantum computers scale and we enter the era of logic, implementing complicated processes, such as lattice surgery for 2D architectures, and supporting all relevant operations will introduce new challenges in synchronisation, connectivity and data flow, to name a few.
To tackle these issues, we are currently developing the methods and software tools to:
 Translate logical-level unitary circuits into a fault-tolerant instruction set;
 Translate that instruction set into physical-level operations, given a particular error correction scheme and qubit topology;
 Distribute these instructions efficiently over the control systems.
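To make these stages concrete, here is a deliberately hypothetical sketch. Every instruction name is invented for illustration and is not Deltaflow's instruction set; the lattice-surgery expansion and round counts are standard textbook structure, simplified:

```python
# Stage 1: logical circuit -> fault-tolerant instruction set.
# In a lattice-surgery scheme, a logical CNOT becomes a merge/split sequence
# and a T gate consumes a magic state (instruction names are made up here).
def to_ft_instructions(circuit):
    out = []
    for gate, *qubits in circuit:
        if gate == "CNOT":
            control, target = qubits
            out += [("MERGE_ZZ", control, "anc"),
                    ("MERGE_XX", "anc", target),
                    ("SPLIT", "anc")]
        elif gate == "T":
            out.append(("CONSUME_MAGIC_STATE", qubits[0]))
    return out

# Stage 2: each instruction expands to ~d rounds of syndrome extraction on
# the patches it touches, given the error correction scheme and topology.
def to_physical_rounds(instructions, code_distance=11):
    return [(instr[0], r) for instr in instructions for r in range(code_distance)]

logical = [("CNOT", 0, 1), ("T", 1)]
ft = to_ft_instructions(logical)   # 4 fault-tolerant instructions
rounds = to_physical_rounds(ft)    # 4 * 11 = 44 rounds to schedule
# Stage 3 (not sketched): distribute these rounds over the control systems.
```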
Deltaflow 3 will implement our proprietary techniques to parallelise decoding over multiple components, realised on FPGAs or ASICs.
Of course, more logical qubits mean more complexity for both the user and the quantum computer. So, we will introduce a higher level of abstraction (up to the circuit level), with the capacity to handle large volumes of commands, as we implement dynamic large-scale data orchestration in Deltaflow Mega to keep up with the expected computational demand.
What next?
The MegaQuOp is a critical milestone, but it's just the first step on the journey to fully fault-tolerant quantum computing. Some projections estimate we’ll need a TeraQuOp, and millions of physical qubits, to unlock the highest-value applications of quantum computing.
So, the fundamentals of Deltaflow are founded on scalable principles to take us there in the long run. As the ecosystem continues to pull together and quantum computers mature, so will Deltaflow.
But for the next few years, we’ll sprint with our eyes firmly fixed on one prize: building the Quantum Error Correction Stack with MegaQuOp error-corrected applications in mind.
We hope you’ll join us on this exciting journey.
If you’d like to find out more about Deltaflow, click here.
Riverlane's Quantum Error Correction roadmap
2023, 1,000 QuOps: Fast decoding (solving the backlog problem). Capability: stability and memory.
2024, 10,000 QuOps: Streaming high-fidelity memory (keeping the qubits alive forever). Capability: quantum memory.
2025, 100,000 QuOps: Streaming logic (enabling perpetual operations). Capability: quantum gates.
2026, 1,000,000 QuOps: Logic at scale (first fully error-corrected quantum applications). Capability: universal gate set.