From Von Neumann to systems engineering: what quantum can learn from classical computing
Building a large-scale error-corrected quantum computer, capable of computational tasks that even the most powerful classical computers find impossible, is one of the greatest scientific and engineering challenges in human history. Fortunately, we are building on firm foundations: 70 years of designing, testing and thinking about classical computers.
From the earliest work in theoretical computer science, pioneered by Alan Turing here in Cambridge where Riverlane is based, to the latest developments in modern hardware and software, we have a rich history that we can draw upon to get us to useful quantum computers sooner. I believe the work we’re doing at Riverlane on error correction, quantum computing’s defining challenge, can accelerate useful quantum computing by a decade.
Why do we need quantum computers as soon as possible? The answer comes from thinking about the classical computers we have now – what they can do and what they can’t.
Computers can perform complex calculations to simulate and predict the world around us. In the aerospace industry, for example, we use simulations to design new aircraft: the Wright brothers discovered flight by painful trial and error, but now we design aircraft by simulating them on a computer.
These simulations are possible because we have an accurate mathematical model from classical Newtonian physics that describes how air flows, the Navier-Stokes equations, and the computational power (provided by a classical supercomputer) to solve them.
When trying to design better drugs, vaccines, batteries or catalysts, by contrast, classical physics cannot accurately explain the way these complex molecular systems behave. To simulate quantum-scale systems we need a different set of equations, those which emerge from the beautiful mathematical theory of quantum mechanics developed in the last century. We also need the computational power to solve these equations, something that is impossible to achieve with any classical computer. This is why we need quantum computers, devices that naturally operate according to quantum mechanical principles. They will enable mankind to accurately simulate the molecular world and thus rapidly accelerate our ability to design transformational new products.
Scientists have been thinking about how quantum computers work for decades, since long before it was possible to build even a single working qubit. One crucial lesson classical computing has taught us is that getting the right ‘model of computation’ – the way we think about how computers work – is vitally important. The Von Neumann model of computing, a design developed in the 1940s in which programs are stored in the same memory as the data they operate on, revolutionised how people thought about computers. Combined with the invention of the transistor, this conceptual model kick-started an exponential growth in computing power, famously described by Moore’s Law, which has continued ever since.
In quantum computing we often think about the circuit model of computation, where qubits are represented by wires and operations by gates. This model has helped us devise many of the seminal quantum algorithms. But when you look at a real quantum computer, it’s not actually made of wires that we apply gates to. The real picture looks more like qubits in a dilution refrigerator with layers of classical computer electronics on top. It’s not enough just to think about qubits and algorithms; we need to optimise the layers in between too!
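To make the circuit model concrete: mathematically, a qubit’s state is a two-component complex vector and each gate is a small unitary matrix applied along the “wire”. A minimal sketch (the gate choices here are just an illustrative example, not a real algorithm):

```python
import numpy as np

# A single qubit's state is a 2-component complex vector; gates are
# 2x2 unitary matrices applied by matrix multiplication.
ket0 = np.array([1, 0], dtype=complex)                        # the |0> state
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate
X = np.array([[0, 1], [1, 0]], dtype=complex)                 # bit-flip (NOT) gate

# The "wire" carries the state left to right; each gate transforms it in turn.
state = X @ (H @ ket0)             # apply H, then X
probabilities = np.abs(state) ** 2  # Born rule: measurement probabilities
print(probabilities)                # both outcomes equally likely: [0.5, 0.5]
```

The abstraction is powerful precisely because it hides everything below it: the control pulses, calibration and classical electronics that physically realise each matrix.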
To run programs, you need a control system to send signals to the qubits and read back the response; you need to precisely tune thousands of system parameters and ensure they stay calibrated; and you need to orchestrate all of these lower-level components so that they interact in perfect synchrony. In short, quantum computers need an operating system, and that’s what we are building at Riverlane.
The most important challenge a quantum operating system must address is dealing with errors. Qubits are highly error-prone, a problem that means today’s devices can only perform around 1,000 operations before the system is overwhelmed by noise. To put this in context, transformationally powerful quantum computers would need to run billions of operations reliably!
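The arithmetic behind this is stark. If each operation fails independently with probability p, the chance a whole computation of n operations runs error-free is (1 - p)^n. A quick sketch, using an illustrative per-operation error rate of 0.1% (an assumption for the example, not a measured figure):

```python
# Chance that n operations all succeed when each fails with probability p.
# The 0.1% error rate is an illustrative assumption.
p = 0.001

for n in (1_000, 1_000_000, 1_000_000_000):
    success = (1 - p) ** n
    print(f"{n:>13,} operations -> {success:.2e} chance of no errors")
```

At a thousand operations you still succeed roughly a third of the time; at a billion operations the success probability is indistinguishable from zero. Better hardware alone cannot close that gap, which is why error correction is essential.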
The problem of system noise was identified early in the history of quantum computing, because of the similarity of qubits in superposition to bits in classical analog computers, devices that do not scale because errors accumulate. Fortunately, researchers have developed a rich theory of quantum error correction, inspired by classical error-correcting codes, which allows us to reduce error rates by spreading information across many redundant physical qubits.
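The simplest illustration of this redundancy idea is the three-bit repetition code, simulated classically below. This is a deliberately stripped-down sketch: it protects against bit-flips only and ignores the genuinely quantum parts of the problem (superposition, phase errors, measuring without disturbing the data), but it shows how majority voting over redundant copies suppresses errors:

```python
import random

def encode(bit):
    """Spread one logical bit across three redundant physical copies."""
    return [bit, bit, bit]

def apply_noise(bits, p):
    """Flip each physical copy independently with probability p."""
    return [b ^ (random.random() < p) for b in bits]

def decode(bits):
    """Majority vote recovers the logical bit if at most one copy flipped."""
    return int(sum(bits) >= 2)

random.seed(0)
p = 0.05          # illustrative physical error rate
trials = 100_000

raw_errors = sum(random.random() < p for _ in range(trials))
coded_errors = sum(decode(apply_noise(encode(0), p)) != 0 for _ in range(trials))

print(f"unprotected error rate: {raw_errors / trials:.4f}")   # close to p = 0.05
print(f"encoded error rate:     {coded_errors / trials:.4f}")  # roughly 3*p^2
```

The encoded error rate scales as roughly 3p², so the lower the physical error rate, the more dramatic the improvement. Real quantum codes such as the surface code follow the same logic, but with carefully designed measurements that diagnose errors without destroying the quantum information.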
The true challenge comes in practically implementing error correction on a real quantum computer. We need to scale up our hardware 10,000 times, whilst also reducing errors 10,000-fold! Just as an SSD joins up many noisy flash cells and uses error correction to deliver reliable storage, we need to join up lots of noisy components and use error correction to suppress errors at a systems level. This requires a systems engineering approach, starting with optimisation and unit testing of individual components before joining them together to form a larger system.
We are already starting to see progress across the industry in demonstrating some of these unit tests, from the ability to read out a pattern of errors to examples of successful error suppression on very small numbers of qubits. What we need now is to scale: to carry out a hugely complex error correction calculation millions of times per second; to coordinate the millions of components required to do this; and to implement error correction efficiently without requiring too many qubits.
That’s our goal at Riverlane: to build an operating system for a large-scale error-corrected quantum computer, getting us far sooner than previously imaginable to useful quantum computers that usher in an era of human progress as important as the digital and industrial revolutions.