
Technical update
Riverlane’s latest research into quantum’s first useful use cases
2 April, 2025

by Nick Blunt, Marius Bothe and Aleksei Ivanov 

In quantum computing, two questions come up again and again: what will these machines do, and when? They are fair questions, but not simple ones to answer.

In this update, we will share some insights into four recent papers from the Riverlane team, focusing on what quantum computers can achieve in the near future and once we reach the TeraQuOp regime (and beyond).

To reach the TeraQuOp, we need quantum error correction (QEC). This is Riverlane’s main focus: building our QEC Stack, Deltaflow, to unlock useful quantum computing sooner.

For context, today’s quantum computers are limited in the number of quantum operations (QuOps) that can be performed before errors overwhelm their calculations. We estimate that the truly transformative applications in quantum will be unlocked in the TeraQuOp regime, representing trillions of error-free operations, which is significantly beyond today’s most powerful devices. 

Recently, our team has made significant progress in understanding what quantum computers could do in the future, with four papers appearing within a few weeks of one another, one on the arXiv, and three in peer-reviewed publications. 

The challenge of chemical simulations 

Chemistry is expected to be one of the first fields where quantum has a significant impact. Our team is investigating this area in parallel with the other teams developing Deltaflow.

However, simulating how molecules behave at the quantum level is complicated, and scientists use two main methods: first and second quantization.  

In quantum mechanics, electrons are described using a wavefunction, leading to complex calculations. This is especially true when we have many electrons because we need to consider interactions and specific properties like the Pauli exclusion principle. The Pauli exclusion principle states that no two identical fermions, such as electrons, can occupy the same quantum state simultaneously.  

In first quantization, the Pauli exclusion principle is directly encoded in the wavefunction, resulting in intricate descriptions of molecules or materials.

Second quantization simplifies the manipulation of the wavefunction by using mathematical entities called ‘field operators’ that can create or annihilate particles. The Pauli exclusion principle defines the properties of these field operators.
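These properties can be seen in a minimal single-mode sketch (an illustrative NumPy toy, not taken from the papers discussed): the creation and annihilation operators anticommute, and trying to put two fermions in the same mode gives zero.

```python
import numpy as np

# Single-mode fermionic field operators in the occupation-number basis
# {|0>, |1>}. Purely illustrative; real calculations combine many modes.
a = np.array([[0.0, 1.0],
              [0.0, 0.0]])      # annihilation operator: removes a particle
a_dag = a.T                     # creation operator: adds a particle

# Pauli exclusion: creating two particles in the same mode gives zero.
assert np.allclose(a_dag @ a_dag, 0)

# Canonical anticommutation relation {a, a†} = a a† + a† a = I.
assert np.allclose(a @ a_dag + a_dag @ a, np.eye(2))

# The number operator n = a† a counts occupation: eigenvalues 0 and 1 only,
# so no mode can ever hold more than one fermion.
n = a_dag @ a
print(np.linalg.eigvalsh(n))   # -> [0. 1.]
```

Encoding the exclusion principle in the operators, rather than in the wavefunction itself, is exactly what makes second quantization easier to manipulate.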

The choice between first and second quantization affects the number of QuOps needed to run useful calculations, and in practice one can choose the option that provides the lowest number of QuOps. 

Introducing large basis sets 

In materials simulations, using ‘large basis sets’ is important for achieving accurate results.  

Basis sets are sets of functions used to describe the behaviour of an electron, and a larger set allows for a more detailed representation of the system. This detail is crucial because it leads to more trustworthy calculations of properties like energy and structure. 

Common types of basis sets include plane waves, which are simple functions that can represent electrons over a wide area, and atomic orbitals, which more accurately mimic the specific shapes of electron clouds around atoms. Each type of basis set has its strengths and weaknesses, depending on the particular system being studied. 
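To make the contrast concrete, here is a minimal 1D sketch of the two families (purely illustrative; real basis sets are three-dimensional and carefully parameterised per atom):

```python
import numpy as np

# Two common basis-function families, sketched in 1D for illustration.
x = np.linspace(-10.0, 10.0, 2001)

def plane_wave(x, k):
    """Delocalized: constant magnitude everywhere in space."""
    return np.exp(1j * k * x)

def gaussian_orbital(x, alpha, centre=0.0):
    """Atomic-orbital-like: decays rapidly away from its centre (an atom)."""
    return np.exp(-alpha * (x - centre) ** 2)

pw = plane_wave(x, k=2.0)
ao = gaussian_orbital(x, alpha=1.0)

# The plane wave has the same magnitude at every point...
assert np.allclose(np.abs(pw), 1.0)
# ...while the Gaussian is negligible far from its centre.
assert ao[0] < 1e-6 and ao.max() == 1.0
```

The constant-magnitude plane wave naturally suits electrons spread across a solid, while the sharply peaked Gaussian suits electrons bound to a particular atom, which is the trade-off described next.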

On the one hand, plane waves are efficient for describing delocalized electrons in solids, but they can be less effective for localized electrons in molecules.

On the other hand, atomic orbitals can provide better detail for individual atoms but may require more computational resources to capture interactions in materials with certain properties.

The balance between accuracy and computational efficiency is a constant challenge in building effective basis sets for simulations. Previously, the state-of-the-art work on quantum algorithms in first quantization was specialised to plane wave basis sets only, and could therefore not take advantage of the many other basis sets developed for quantum chemistry calculations.

This is where two of our papers come in: they present new solutions using a technique called linear-combination-of-unitaries (LCU) decomposition that works with any basis set.

LCU is used in quantum computing to express complex operations, including non-unitary ones, as a combination of unitary ones. In essence, LCU takes a complicated operation, which cannot be represented directly as a quantum circuit, and breaks it down into a sum of individual unitary operations which can be easier to implement on a quantum computer.
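As a minimal sketch of the idea (a standard textbook construction, not the papers’ method): any Hermitian matrix with spectral norm at most one can be written as an equal-weight sum of just two unitaries.

```python
import numpy as np

# LCU sketch: a Hermitian H with ||H|| <= 1 satisfies H = (U + U†)/2,
# where U = H + i*sqrt(I - H^2) is unitary. Illustrative values only.
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
H = (M + M.T) / 2
H /= np.linalg.norm(H, ord=2)      # rescale so the spectrum lies in [-1, 1]

# Build U from the eigendecomposition: each eigenvalue lam maps to
# lam + i*sqrt(1 - lam^2), a point on the unit circle, so U is unitary.
lam, V = np.linalg.eigh(H)
phases = lam + 1j * np.sqrt(np.clip(1 - lam**2, 0.0, None))
U = V @ np.diag(phases) @ V.conj().T

assert np.allclose(U @ U.conj().T, np.eye(4))   # U is unitary
assert np.allclose((U + U.conj().T) / 2, H)     # H is an LCU of U and U†
```

A non-unitary matrix that cannot be run directly as a quantum circuit thus becomes a sum of operations that can; practical algorithms then implement such sums with ancilla qubits and controlled operations.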

This technique is a crucial component of many efficient quantum algorithms developed to calculate the energy of chemical systems.  

In our recent papers, Quantum Simulations of Chemistry in First Quantization with any Basis Set, and Pauli decomposition via the fast Walsh-Hadamard transform, published in npj Quantum Information and in the New Journal of Physics, respectively, we leverage the LCU technique to develop methods that work with any basis set in quantum simulations.  

We derived a formula for LCU of generic matrices and applied it to chemical Hamiltonians in first quantization, allowing us to break down complex quantum operations into simpler, more manageable parts. This new approach not only makes it easier to conduct calculations but also enhances the efficiency of quantum algorithms. 
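For a flavour of what such a decomposition looks like, here is the standard trace formula for decomposing a generic two-qubit matrix into Pauli terms. This naive version is for illustration only; our paper develops a much faster route via the Walsh-Hadamard transform.

```python
import numpy as np
from itertools import product

# Naive Pauli decomposition of a generic 4x4 matrix, using the textbook
# formula c_P = Tr(P A) / 2^n for n = 2 qubits. Illustrative only.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = {"I": I, "X": X, "Y": Y, "Z": Z}

def pauli_decompose(A):
    """Return {label: coefficient} so that A = sum_P coeff * P."""
    coeffs = {}
    for (la, Pa), (lb, Pb) in product(paulis.items(), repeat=2):
        P = np.kron(Pa, Pb)
        coeffs[la + lb] = np.trace(P.conj().T @ A) / 4
    return coeffs

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))  # generic matrix
coeffs = pauli_decompose(A)

# Reconstruct A from its 16 Pauli coefficients.
recon = sum(c * np.kron(paulis[l[0]], paulis[l[1]]) for l, c in coeffs.items())
assert np.allclose(recon, A)
```

Since every Pauli string is unitary, the coefficients above are exactly an LCU of the matrix, which is why decompositions like this feed directly into the algorithms discussed here.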

As a result, in some instances, we achieved significant reductions in the number of QuOps, meaning calculations can be done more quickly and with less energy.  

This approach opens up exciting possibilities for further reductions in the number of QuOps by exploiting more intricate basis sets or incorporating techniques that aim to reduce computational resources, such as the projector augmented-wave method.

Introducing the Projector Augmented-Wave (PAW) method for quantum computation 

Our next paper takes an established classical technique, the Projector Augmented-Wave (PAW) method, and adapts it for use in quantum computers.

PAW is a classical computational technique used in simulations to effectively manage the interactions between electrons and nuclei in a material or molecule.  

In quantum mechanics, electrons are often treated as waves that surround the dense core of positively charged nuclei (the protons and neutrons). Accurately calculating how these electrons interact with the nuclei is crucial for understanding the properties of materials. The PAW method simplifies this complex task by allowing researchers to use a simpler representation of the electronic structure while still accurately capturing the essential effects of these interactions.

By incorporating the PAW technique, simulations can achieve high precision with reduced computational complexity.  

This approach allows scientists to efficiently calculate forces and energies in a variety of systems, leading to better predictions of how materials will behave under different conditions.  

Translating the PAW method directly to quantum computing presents several challenges, primarily due to the differences in how these two computational frameworks handle quantum mechanics.

One major issue is that the PAW method involves some non-unitary transformations that aren't easily compatible with quantum computing. Quantum computers rely on unitary operations, making it difficult to directly apply techniques like PAW that don't conform to these requirements. 
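The non-unitarity is easy to see in a schematic form of the PAW transformation, which maps smooth pseudo-wavefunctions to all-electron ones. The sketch below uses random illustrative vectors in place of real atom-centred partial waves and projectors.

```python
import numpy as np

# Schematic check that the PAW transformation is generally non-unitary.
# PAW uses T = 1 + sum_i (|phi_i> - |phi_tilde_i>) <p_tilde_i|, mapping
# smooth pseudo-wavefunctions to all-electron ones. We build T with one
# random partial-wave pair and projector (illustrative values only).
rng = np.random.default_rng(2)
dim = 6
phi = rng.normal(size=dim)     # all-electron partial wave
phi_t = rng.normal(size=dim)   # smooth pseudo partial wave
p_t = rng.normal(size=dim)     # projector function

T = np.eye(dim) + np.outer(phi - phi_t, p_t)

# T†T differs from the identity, so T is not unitary: it cannot be applied
# directly as a quantum circuit without reformulation.
assert not np.allclose(T.T @ T, np.eye(dim))
```

This is precisely the obstruction that a unitary reformulation, like the UPAW method described below, has to remove.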

As a result, finding ways to adapt and reformulate the PAW method for quantum settings is a significant challenge for researchers seeking to leverage the power of quantum computers for complex simulations. 

In a recent arXiv paper, Quantum Computation of Electronic Structure with Projector Augmented-Wave Method and Plane Wave Basis Set, we developed the unitary projector augmented-wave (UPAW) method to adapt the well-established PAW technique for use in quantum computers, enhancing the ability to simulate materials accurately.  

By generalising the PAW approach to many-body wavefunctions, we created a unitary version that maintains the necessary mathematical properties of orbitals while simplifying the computational requirements. This new method reduces the resources required by quantum computers while maintaining the accuracy of the results.

To ensure the effectiveness of the UPAW method, we conducted classical simulations to estimate errors associated with the approach. By down-sampling and assessing the energy estimates within a chemical accuracy limit, we could evaluate how well this new approach performed compared to traditional methods. This work included calculations for challenging systems, such as nitrogen-vacancy defect centres in diamonds, which are difficult for classical algorithms to address.  

Overall, our UPAW method promises to leverage quantum computing's potential by achieving higher efficiency and precision in material simulations. While the current work addressed quantum simulations in second quantization, the implementation of UPAW in first quantization is an ongoing project. 

Combining Quantum Monte Carlo and Quantum Computing 

Quantum Monte Carlo (QMC) methods are advanced computational techniques used to perform quantum chemistry calculations. They are among the state-of-the-art approaches for performing quantum chemistry on conventional computers. 

QMC methods harness the principles of randomness to solve complex problems. By using random sampling to explore possible configurations of particles—such as electrons in an atom—QMC can provide highly accurate estimates of energies and other important properties.  

This approach is especially valuable because it can often handle the strong interactions among particles that many other methods struggle with, allowing researchers to gain deeper insights into chemical systems and phenomena. 

The accuracy of QMC methods largely depends on the quality of the trial wave function, which serves as a ‘best guess’ of the actual solution to a quantum problem. Just like making predictions based on a hypothesis, if the trial wave function isn’t close to the truth, the results can be less reliable.  
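A classic toy example shows how trial-wavefunction quality drives accuracy: variational Monte Carlo (the simplest QMC flavour, not the fixed-node or auxiliary-field methods discussed here) on the 1D harmonic oscillator. The parameter values are purely illustrative.

```python
import numpy as np

# Variational Monte Carlo sketch for the 1D harmonic oscillator
# (hbar = m = omega = 1). Trial state psi_a(x) = exp(-a x^2); a = 0.5 is
# the exact ground state, with energy 0.5.
rng = np.random.default_rng(3)

def vmc_energy(a, n_samples=200_000):
    # |psi_a|^2 ~ exp(-2a x^2) is Gaussian, so we can sample it directly.
    x = rng.normal(scale=np.sqrt(1 / (4 * a)), size=n_samples)
    # Local energy E_L = -(1/2) psi''/psi + x^2/2 = a + x^2 (1/2 - 2 a^2).
    e_local = a + x**2 * (0.5 - 2 * a**2)
    return e_local.mean(), e_local.std()

e_good, s_good = vmc_energy(0.5)   # exact trial state
e_poor, s_poor = vmc_energy(0.3)   # poor trial state

print(f"a=0.5: E = {e_good:.4f} (std {s_good:.4f})")
print(f"a=0.3: E = {e_poor:.4f} (std {s_poor:.4f})")
# The exact trial state gives E = 0.5 with zero statistical noise; the poor
# one overestimates the energy and fluctuates from sample to sample.
assert s_good < 1e-12 and e_poor > e_good
```

The pattern generalises: the closer the trial wavefunction is to the truth, the lower both the bias and the statistical noise of the QMC estimate, which is why quantum-computed trial states are attractive.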

One promising direction is using quantum computers to refine these trial wave functions. By leveraging the unique capabilities of quantum technology, there is potential to develop more accurate wave functions that better represent the true quantum state of a system. 

If such wave functions could be accurately utilized by QMC methods, it would allow for more precise simulations of complex molecules and materials, ultimately leading to a deeper understanding of chemical processes and improved predictions in various applications. 

Our paper, A quantum computing approach to fixed-node Monte Carlo using classical shadows, published recently in the Journal of Chemical Theory and Computation, introduces such a method, combining QMC methods and quantum computing.

Our paper extends an approach first developed in [1], which also seeks to use quantum computers to improve QMC methods. The approach of [1] is based on a particular type of QMC called auxiliary-field quantum Monte Carlo (AFQMC).

In these methods, information about the trial wave function must be passed from the quantum computer to the conventional computer to perform QMC. This information is obtained using the classical shadows procedure. However, [1] identified an exponential scaling bottleneck in this step, depending on how the shadows are measured. Subsequent papers have investigated approaches to avoid this scaling bottleneck.
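The classical shadows procedure itself can be sketched for a single qubit (an illustrative convention; the papers apply the idea to many-qubit trial wavefunctions). Measuring in a uniformly random Pauli basis and recording outcome b yields a “shadow” whose average reproduces the state exactly.

```python
import numpy as np

# Single-qubit classical-shadows sketch. A random Pauli-basis measurement
# with outcome |b> yields the shadow  rho_hat = 3 U† |b><b| U - I,  whose
# expectation equals rho. Here we take the exact average over bases and
# outcomes (weighted by Born probabilities) rather than sampling.
U_X = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard: X basis
U_Y = np.array([[1, -1j], [1, 1j]]) / np.sqrt(2)   # H S†: Y basis
U_Z = np.eye(2)                                    # Z basis
I2 = np.eye(2)

rho = np.array([[0.7, 0.2 - 0.1j],
                [0.2 + 0.1j, 0.3]])                # an example valid state

avg = np.zeros((2, 2), dtype=complex)
for U in (U_X, U_Y, U_Z):
    for b in range(2):
        proj = np.outer(I2[b], I2[b])              # |b><b|
        p = np.real(np.trace(U @ rho @ U.conj().T @ proj))  # Born rule
        shadow = 3 * U.conj().T @ proj @ U - I2
        avg += (1 / 3) * p * shadow                # uniform basis choice

assert np.allclose(avg, rho)   # the shadow estimator is unbiased
```

In practice the average is taken over finitely many random measurements, and the cost of estimating the required overlaps to fixed precision is exactly where the scaling bottleneck mentioned above appears.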

Our paper investigates another such approach to overcome this exponential scaling step. In particular, we developed a new procedure based on fixed-node Monte Carlo, which removes this step while maintaining many benefits of the original method. We performed numerical calculations to assess its performance and demonstrated good accuracy for small model systems.

Although there are some remaining scaling issues to solve, there are several benefits: suitability for nearer-term devices; ready methods to include dynamical correlation, which is often ignored; and the ability to calculate observables beyond the energy, which are more expensive to calculate by some other fault-tolerant algorithms.  

This paper is an alternative to our usual research direction (covered in the previous three papers), looking at nearer-term applications and non-conventional methods. Such alternative and novel approaches are important to investigate as quantum computing continues to scale in both size and complexity across all layers of the stack, from algorithms to QEC stacks, qubits and everything in between. 

In conclusion, these papers demonstrate that efficient quantum algorithms are essential for unlocking the full potential of quantum computing in chemistry, especially as we look toward the transformative capabilities expected at the TeraQuOp regime. 
