
Engineering Error Correction: Riverlane patent reduces quantum’s power bills

12 March, 2024

In the next decade, quantum computers will exceed the computational capacity of any future supercomputer. They will solve previously intractable problems in fields including quantum chemistry, materials science and cryptography.

Before that, they will reach a more humble but still amazing feat: allowing us to reduce the energy bills associated with some important computations.  

Power consumption of quantum computers is a complex challenge – and one I would like to tackle in this blog, highlighting a recent patent from Riverlane. 

Powerful thinking 

At Riverlane, we are currently building the Quantum Error Correction Stack, called Deltaflow, to allow quantum computers to run one million (Mega) error-free Quantum (Qu) Operations (Ops) by 2026. Today’s machines are only capable of a few hundred error-free operations. 

Once we reach this so-called MegaQuOp threshold, quantum computers will start to tackle real-world problems that are impractical for a classical machine. Simply put, these calculations would take too long, the results would be inaccurate, and the supercomputers running them would use too much power. 

On the latter point, a significant efficiency boost is approaching: researchers predict that classical computations running at the terajoule scale (equivalent to roughly 4 tonnes of CO2) could instead run at the gigajoule scale (roughly 4 kg of CO2) on quantum computers, using orders of magnitude less energy. This comparison is described more fully in this work.  

This is a complex issue. Broadly speaking, the breakeven point where quantum computers become more power efficient than classical computers depends on three factors: 

  1. Quality of the computation: quantum computers must perform sufficiently long sequences of operations in an error-free (or low-error-probability) manner to even allow a comparison between the two technologies. 
  2. Duration of the computation: the longer it takes to complete a computation, the more power we use. 
  3. Consumption per unit of time: if we can improve the efficiency of the hardware, electronics and software running on a quantum computer, then we consume less power per second. (A toy model combining these three factors is sketched below.) 
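
To make this balance concrete, here is a toy Python model of the trade-off. The run_energy_kwh helper and every power and duration figure in it are illustrative assumptions, chosen only to land in the terajoule-versus-gigajoule range discussed above; they are not measured numbers.

```python
# A toy model combining the three factors above. Total energy is
# (power per unit time) x (duration), and the comparison only makes
# sense if the quantum run's error probability is low enough.
# All figures are illustrative assumptions.

def run_energy_kwh(power_kw: float, duration_hours: float) -> float:
    """Energy consumed by one computation, in kWh."""
    return power_kw * duration_hours

# Assumed example figures for one large computation:
classical = run_energy_kwh(power_kw=20_000, duration_hours=240)  # supercomputer
quantum = run_energy_kwh(power_kw=25, duration_hours=100)        # QC incl. fridge

print(f"classical: {classical:,.0f} kWh (~{classical * 3.6 / 1e6:.1f} TJ)")
print(f"quantum:   {quantum:,.0f} kWh (~{quantum * 3.6 / 1e3:.0f} GJ)")
```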

Riverlane’s Deltaflow, balances all three factors. Our latest quantum decoder, Deltaflow.Decode, is one technology in the QEC Stack. It balances the speed, accuracy, cost, hardware and power requirements to provide a practical route to error-corrected quantum computing.   

The team is now developing a streaming decoder, which can process continuous streams of measurement results as they arrive, rather than after the experiment has finished. This reduces the duration of the computation.  
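
As a rough illustration of the streaming idea, here is a minimal sliding-window sketch in Python. The window and commit sizes and the decode_window placeholder are assumptions for illustration only, not Deltaflow's actual decoding algorithm.

```python
# A minimal sketch of streaming decoding: syndrome measurements are
# decoded in sliding windows as they arrive, rather than buffering the
# whole experiment. Sizes and the inner decoder are illustrative.
from collections import deque

WINDOW = 10   # rounds held in the decoding window (assumption)
COMMIT = 5    # oldest rounds committed once the window is full (assumption)

def decode_window(rounds):
    """Placeholder for a real inner decoder (e.g. matching on the window)."""
    # Toy rule: flag a correction whenever any syndrome bit is set.
    return [any(r) for r in rounds]

def streaming_decode(syndrome_stream):
    """Consume syndrome rounds as they arrive; yield committed corrections."""
    window = deque()
    for round_bits in syndrome_stream:
        window.append(round_bits)
        if len(window) == WINDOW:
            corrections = decode_window(list(window))
            # Commit only the oldest rounds; newer ones may still change.
            for _ in range(COMMIT):
                window.popleft()
            yield corrections[:COMMIT]

# Example: a fake stream of 30 rounds of 4 syndrome bits, all zero.
stream = ([0, 0, 0, 0] for _ in range(30))
for committed in streaming_decode(stream):
    print(committed)
```

The reason for committing only the oldest rounds in each window is that corrections near the front are already stable, so they can be acted on while the experiment is still running.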

Our recent parallelisation paper, for example, demonstrates how we can always process QEC data fast enough without slowing down the quantum computation. 

When it comes to power consumption per unit time, we’re currently developing both ASIC and FPGA decoding solutions. We regard ASICs as the future of our decoder chips thanks, in part, to their low power consumption rates. Our current ASIC decoder chip is carefully crafted for size and speed and operates at just 8 mW, outperforming commercial FPGAs by several orders of magnitude.  
 
In this blog, I’d like to focus on a more recent example: our recently granted patent for in-memory generation of control pulses, which centres around another technology in the QEC Stack: Deltaflow.Control. 

Deltaflow.Control is a customisable solution for generating high-accuracy, high-speed pulse sequences to control qubits using affordable off-the-shelf hardware. Essentially, it generates low-level electrical pulses that dynamically change millions of times a second. These pulses are carefully crafted (via calibration routines) to maximise the performance of each qubit. The number of these pulses is expected to grow linearly with the number of qubits, at least in the near future, up to the 10,000 to 50,000 qubit scale.  
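
To give a feel for what one such calibrated pulse looks like in digital form, here is a minimal sketch that builds a single Gaussian-envelope pulse and quantises it into DAC codes. The 1 GS/s sample rate, 14-bit resolution and pulse parameters are illustrative assumptions, not Deltaflow.Control's actual settings.

```python
# A minimal sketch of a qubit control pulse as stored digitally:
# a Gaussian-envelope tone, sampled and quantised to DAC codes.
import numpy as np

SAMPLE_RATE = 1e9   # 1 GS/s DAC (assumption)
DAC_BITS = 14       # 14-bit DAC resolution (assumption)

def gaussian_pulse(duration_ns, sigma_ns, freq_mhz):
    """Return quantised samples of a Gaussian pulse at an IF frequency."""
    n = int(duration_ns * 1e-9 * SAMPLE_RATE)
    t = np.arange(n) / SAMPLE_RATE
    envelope = np.exp(-0.5 * ((t - t.mean()) / (sigma_ns * 1e-9)) ** 2)
    wave = envelope * np.cos(2 * np.pi * freq_mhz * 1e6 * t)
    # Quantise to signed DAC codes; these integers are what memory stores.
    max_code = 2 ** (DAC_BITS - 1) - 1
    return np.round(wave * max_code).astype(np.int16)

samples = gaussian_pulse(duration_ns=40, sigma_ns=8, freq_mhz=100)
print(len(samples), "samples,", samples.nbytes, "bytes per pulse")
```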

This will have a significant impact on the power consumption of future quantum computers. In today’s quantum computers (superconducting and silicon), most of the power is consumed by the dilution fridge that keeps the qubits in their quantum state. But as we scale these systems, this reality might change (Figure 1).

Figure 1: Power consumption as a function of the number of qubits for a superconducting qubit system. The general expectation is that large-scale dilution fridges will support up to 10,000-20,000 qubits (power consumption ~15 kW). The control stack electronics consumption grows linearly with the number of qubits, and with FPGA-based solutions (in blue) it becomes the largest contributor to the total consumption. Our invention (in green) can lead to a massive power optimisation and halve the consumption of the system. 

Whilst looking at ways to reduce the total cost of ownership of the electronics used to control a quantum computer, we came up with an interesting thought: what is the minimum set of components that we need to generate the above-mentioned electrical signals?  

Usually, you need memory to store the digital version of the electrical pulses, plus a Digital-to-Analog Converter (DAC): the component that converts those digital samples into the analog signals carried by the wires that control the qubits. 

This chain is typically implemented by an FPGA (optionally connected to a commercial DRAM) that either integrates or interfaces with multiple DACs. But if we look at the market landscape, an alternative option exists: in-memory pulse generation.  

Here, an optimised high-capacity memory could be augmented with control logic to transmit pulses to external Digital to Analog Converters via high-speed serial links (Figure 2).  

Figure 2: A different take on the challenge of generating electrical pulses for qubit control: (a) the state-of-the-art approach vs (b) an in-memory compute approach. 

This idea is inspired by approaches seen in two different fields. First, in the AI market, in-memory processing is a promising way to cut the power spent moving data between memory and compute.  

Second, in the GPU market, HBM (High Bandwidth Memory) is another promising route, leveraging high-speed serial links to move more pixels with the same power footprint, as seen in the NVIDIA ‘Hopper’ H100 GPU. 

Our solution provides internal logic to allow either reconfiguration of the memory contents (for example, after qubit calibration) or runtime playback of different pulses. As Figure 2 shows, these operations are driven by a lower-speed interface that can be coupled to a low-complexity, low-energy-footprint FPGA. 
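
Here is a minimal sketch of this split between the two paths, with hypothetical load_pulse (slow reconfiguration) and play (runtime playback) operations standing in for the real control logic; the class and its interface are illustrative, not the patented design.

```python
# A minimal sketch of the in-memory pulse-generation idea in Figure 2(b):
# a pulse memory with just enough control logic to (a) be rewritten over
# a slow interface after calibration and (b) stream stored samples to a
# DAC serial link at runtime. Names and structure are illustrative.

class InMemoryPulseGenerator:
    def __init__(self):
        self.memory = {}  # pulse name -> list of DAC codes

    def load_pulse(self, name, samples):
        """Slow-path reconfiguration, e.g. after a qubit calibration run."""
        self.memory[name] = list(samples)

    def play(self, name):
        """Fast-path playback: stream samples straight to the serial link."""
        for code in self.memory[name]:
            yield code  # stands in for a high-speed serial transfer to a DAC

gen = InMemoryPulseGenerator()
gen.load_pulse("x_gate_q0", [0, 512, 1023, 512, 0])  # toy 5-sample pulse
print(list(gen.play("x_gate_q0")))
```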

The advantage of this idea is that it lowers the total power footprint of the solution, as Table 1 shows, providing a route towards a 2x reduction in total power consumption for the MegaQuOp era and saving roughly 1 kg of CO2 per day for each quantum computer. 

 

                              Memory   Logic   Transmission   DAC 
State-of-the-art               0.5 
In-memory pulse generation     0.05     0.01      0.1 

Table 1: Normalised power consumption [W/qubit]. State-of-the-art solution (FPGA with integrated DACs and external DRAM) vs the in-memory pulse generation scheme. Representative numbers from state-of-the-art devices; HBM memories generally consume around one order of magnitude less. 
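
To put these per-qubit figures in context, here is a back-of-the-envelope scaling to a 20,000-qubit machine, the fridge scale quoted in Figure 1. The qubit count, and the assumption that the DAC term is identical in both schemes and can be dropped from the comparison, are mine rather than numbers from the patent.

```python
# A back-of-the-envelope reading of Table 1, scaled to a 20,000-qubit
# machine (the scale Figure 1 associates with a ~15 kW dilution fridge).
# Only per-qubit cells given in the table are used; the DAC term,
# assumed identical in both schemes, is omitted.

QUBITS = 20_000

dram_memory_w = 0.5                   # W/qubit: state-of-the-art DRAM path
in_memory_path_w = 0.05 + 0.01 + 0.1  # W/qubit: memory + logic + transmission

print(f"State-of-the-art memory alone: {QUBITS * dram_memory_w / 1000:.1f} kW")
print(f"Full in-memory pulse path:     {QUBITS * in_memory_path_w / 1000:.1f} kW")
```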

Something I would like to emphasise is the complexity of engineering a commercially viable quantum computer. We must balance many (often conflicting) requirements to give these machines the best speed, accuracy, cost, hardware and power profile. 

At Riverlane, we have a tagline that we are building the Quantum Error Correction Stack to unlock useful quantum computing, faster. It’s a nice line but, in reality, we are building the Quantum Error Correction Stack to unlock useful quantum computing, faster, with the best accuracy and with the lowest cost, hardware and power requirements.  

You can find out more about our work here. 

