Riverlane has come second in the Fujitsu Quantum Simulator Challenge for simulating a little-understood quantum noise phenomenon called leakage. Understanding every aspect of quantum noise is critical to achieving quantum error correction and unlocking useful quantum computing sooner.

Qubits (quantum bits) are the building blocks of quantum computers, but they are prone to noise: the slightest environmental disturbance can render them unable to run any useful quantum calculation.

This is where a set of techniques called quantum error correction can help, essentially correcting these errors before they have a chance to destroy the calculation. We need quantum computers to run approximately a trillion reliable quantum operations – a TeraQuop – to unlock the true potential of these machines. This will not be achievable without quantum error correction.

The project objective was to use the world’s fastest quantum circuits simulator from Fujitsu to simulate several quantum error correction codes of interest. These quantum error correction codes use many noisy qubits to build more reliable logical qubits.

Classical simulators are vital tools to aid the development of quantum computers. While classical simulators can never provide the computational advantage that a quantum computer will, they do allow us to test small-scale quantum algorithms and simulate quantum error correction protocols.

There are a series of benchmarks that test readiness for quantum error correction. Memory experiments, which check how well logical observables are preserved through time, are a well-established benchmark. Essentially, a memory experiment measures how long you can keep a qubit alive.

Stability experiments are another benchmark; they check how well logical observables are preserved through space. A stability experiment is more closely related to logical computation than to logical memory. While in memory experiments performance improves as you increase the size of the experiment, in stability experiments performance improves as you increase its duration. This means you can achieve good performance with fewer qubits. For this project, the team ran simulations of the stability experiment.

Riverlane’s simulation used a fully quantum-mechanical noise model. Fully quantum-mechanical simulations are expensive – otherwise quantum computers would not be worth building.

Riverlane's noise model features leakage – a pernicious source of noise that takes qubits out of the computational space: the qubit's state is no longer a combination of the |0⟩ and |1⟩ states and can instead leak into a |2⟩ state. This means the team had to simulate qutrits (quantum trits): units of quantum information whose state can be |0⟩, |1⟩ or |2⟩, or any superposition of these three states.

This was the first fully quantum-mechanical simulation of a stability experiment. The Riverlane team tested a family of noise models, all inspired by superconducting qubits, and studied leakage-reduction methods that have been previously proposed in the literature.

**The damage of leakage**

Leakage is a very damaging type of noise, so people have invented methods to remove it from the system efficiently. The Riverlane team found some interesting and surprising results, including that some leakage-reduction methods are in fact counter-productive under certain noise models where leakage moves efficiently between qubits.

When a qubit has leaked, the behaviour of the quantum circuit that was meant to run changes radically. This is potentially disastrous for a quantum computer. Although leakage typically happens less frequently than other errors, a leaked qubit can remain leaked for a long time, introducing a lot of errors. For this reason, one usually wants to use specific devices to remove leakage frequently. These are called leakage-reduction units (LRUs).

The physics of leakage and its mitigation is platform dependent. In this work, the Riverlane team used a noise model inspired by several types of superconducting qubits.

The team studied a particular LRU called “wiggling”, introduced last year by Google. In this LRU, all qubits in the quantum computer are reset frequently: half the qubits are measured at each round of error correction, and a reset then removes leakage from the system by taking each measured qubit (whether in state |0⟩, |1⟩ or |2⟩) and returning it to the |0⟩ state. In the next round, the other half of the qubits are measured and reset.

To facilitate this study, the team extended the capabilities of the Fujitsu simulator from qubits to qutrits. The simulator natively deals with qubits, which have only two basis states: |0⟩ and |1⟩. To include the possibility of leakage, qubits were grouped in pairs, with each pair able to represent up to four states: |0⟩, |1⟩, |2⟩ and |3⟩. This was a significant technical expansion of the simulator's capability.
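The pairing idea can be sketched in a few lines of Python. This is an illustrative toy (the function name and encoding details are assumptions, not Fujitsu's actual implementation): each qutrit's basis states |0⟩, |1⟩, |2⟩ are mapped onto the two-qubit states |00⟩, |01⟩, |10⟩, leaving the fourth state |11⟩ unused.

```python
import math

def embed_qutrits_in_qubits(psi3, num_qutrits):
    """Map a 3**M-amplitude qutrit state vector to a 4**M-amplitude
    qubit-pair state vector. Illustrative sketch only."""
    psi4 = [0j] * (4 ** num_qutrits)
    for idx3, amp in enumerate(psi3):
        # Re-express the base-3 index, digit by digit, as a base-4 index:
        # each qutrit digit (0, 1 or 2) becomes one qubit-pair digit.
        idx4, rem = 0, idx3
        for k in range(num_qutrits):
            idx4 += (rem % 3) * 4 ** k
            rem //= 3
        psi4[idx4] = amp
    return psi4

# One qutrit in an equal superposition of |0>, |1> and |2>:
psi = [1 / math.sqrt(3)] * 3
embedded = embed_qutrits_in_qubits(psi, 1)
# The amplitude of the unused |11> pair state stays zero.
```

Note the cost of this encoding: M qutrits need 2M qubits, which is why the 17-qutrit simulations mentioned later required 34 qubits.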

To make maximal use of the supercomputer, the team combined several techniques from the literature, including “recycling qubits”. When you recycle a qubit, you are essentially trading space for time: one qubit consecutively plays the role of several others, which lets you simulate a larger system at the cost of a longer run time.

The main finding of this work is that the wiggling LRU does enhance the performance of the error-correction protocol, except when leakage moves efficiently through the system.

Quite surprisingly, the simulation also suggested that high leakage mobility itself plays the role of an LRU – one that can be more effective than wiggling.

“Leakage is an important source of noise in several platforms, one that is often neglected in the standard simulations that check performance of error-correction protocols. One reason why it is neglected is that it is harder to simulate than other sources of error,” explained Joan Camps, senior quantum scientist at Riverlane.

**Classical versus quantum**

It's incredibly hard to simulate quantum computers using their conventional ("classical") counterparts. One of the reasons is that you generally need 2^N complex numbers to fully describe a quantum state of N qubits.

This means that the amount of memory needed to just store the state of the system grows exponentially with the number of qubits. Typically, you need 16 bytes (or 128 bits) to store a single complex number. Then, for a system with N qubits you need (2^N) x 16 = 2^(N+4) bytes of memory.

For example, for 30 qubits you need approximately 16 GB of random-access memory (RAM), which is a typical amount for a modern laptop.

“This doesn't sound too terrible, but keep in mind that you need to double the number of laptops every time you add a single qubit to the system,” explained Alexander Gramolin, a research scientist at Riverlane. “Therefore, for 40 qubits you need about a thousand laptops, for 50 qubits – a million, and for 60 – a billion. This is a back-of-the-envelope estimate that accounts for only the memory requirements, but it provides pretty accurate predictions.”
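The back-of-the-envelope estimate above can be checked in a few lines of Python (illustrative names; each amplitude is a 16-byte double-precision complex number):

```python
def statevector_bytes(num_qubits):
    # 2**N amplitudes at 16 bytes each = 2**(N + 4) bytes in total.
    return 2 ** (num_qubits + 4)

LAPTOP_RAM = 16 * 2 ** 30  # one 16 GiB laptop

for n in (30, 40, 50, 60):
    laptops = statevector_bytes(n) // LAPTOP_RAM
    print(f"{n} qubits -> {laptops:,} laptop(s) of RAM")
# 30 qubits -> 1, 40 -> 1,024, 50 -> 1,048,576, 60 -> 1,073,741,824
```

Each extra qubit doubles the memory, so every ten qubits multiply the laptop count by 2^10 = 1,024 – roughly the "thousand laptops per ten qubits" scaling quoted above.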

The Fujitsu simulator contains 512 Fujitsu A64FX processors, which are also used in one of the most powerful supercomputers, Fugaku (although Fugaku has more of them, by a factor of 300). Each processor features 32 GB of RAM, and the full 512-processor system can simulate up to 39 qubits.

“In our experiments, we didn't go beyond 17 qutrits (which required 34 qubits), but we repeated our simulations several hundred thousand times, in order to collect better statistics,” Gramolin added.

The project ran over summer 2023 and relied on the work of a range of Riverlane’s multidisciplinary team – not to mention the computational resources of Fujitsu, who developed and gave participants access to its 39-qubit CPU-based quantum simulator for the duration of the project to test its performance with real-world applications.

“What I enjoyed about this project is that it felt interdisciplinary – it was done with a tight deadline and a number of us pitched in,” Camps concluded. “Some of the team generated the circuits, some investigated the physics of leakage, some wrote the code and performed the simulations, and others analysed the data. After all the code was written and tested, we ran the simulations and had to analyse the data in real time under the pressure of the deadline – and bugs and insights were found in the process.”

“It was a fun project to work on and provided us with valuable insights, which we will share more details of soon. Everyone was hugely grateful to Fujitsu for providing us with time on its world-leading quantum simulator.”

The Fujitsu Quantum Simulator Challenge ran from February to September 2023. During this global competition, Fujitsu invited members of industry and academia to test its 39-qubit quantum simulator on novel problems and applications. Fujitsu officially announced four winning teams during a ceremony held at the Fujitsu Quantum Day on 25th January 2024 at De Oude Bibliotheek Academy in Delft, the Netherlands.

If you’d like to find out more about the quantum error correction technologies that every quantum computer will need, click here.