Quantum Moore’s Law

2 April, 2018

Since the invention of the transistor in 1947, progress in transistor design and manufacturing has been exponential. The well-known Moore's law observes that the number of transistors that can be squeezed onto a silicon chip doubles roughly every two years. Is it reasonable to expect the same from quantum computers?

In the realm of classical computing, there are many metrics from which one can draw a Moore-like relationship. From internet bandwidth to the storage capacity of our hard drives, we observe and expect exponential progress. Likewise, there are many metrics by which one could measure progress in quantum computers, such as the number of addressable qubits or the quality of the interactions between those qubits. On its own, each of these measures is insufficient: a perfectly controllable single-qubit register can be simulated on a desktop calculator, and a large array of poorly interacting qubits cannot perform useful calculations above the noise of its own errors. IBM recently proposed a more suitable metric, the quantum volume. This measure takes both error rates and qubit numbers into account to evaluate the quantum computational usefulness of a system (a rough sketch of this figure of merit is given below).

For the last decade, the most mature paradigms of quantum computation, superconducting and trapped-ion qubits, faced bottlenecks in the quality of interactions between qubits. The last two years, however, have seen order-of-magnitude improvements in the engineering of those interactions. Scaling processors to incorporate many qubits with high-fidelity interactions is now a target of companies and research groups alike.

Riverlane maintains data sets of historic results from the past twenty years of academic experimental research. By analysing this data within each hardware paradigm, we can extrapolate the exponential growth of quantum computing and predict its future capabilities, costs and impacts; a toy example of such an extrapolation is sketched at the end of this post.
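As a rough sketch of the quantum volume idea: IBM's original proposal combines the number of qubits with the depth of circuit that can be run before errors dominate. In one early formulation (notation assumed here, so treat this as indicative rather than definitive), the achievable depth on n qubits with effective error rate ε is of order 1/(nε), and the volume is the square of whichever is smaller, width or depth:

```latex
% Sketch of an early quantum-volume formulation (notation assumed):
%   n        -- number of qubits used
%   eps(n)   -- effective error rate per step, so achievable depth ~ 1/(n * eps(n))
\tilde{V}_Q \;=\; \max_{n \le N}\, \left[\, \min\!\left( n,\; \frac{1}{n\,\epsilon_{\mathrm{eff}}(n)} \right) \right]^{2}
```

The point is simply that adding qubits without improving error rates (or vice versa) does not increase the volume; both must improve together.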
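To illustrate what a Moore-like extrapolation looks like in practice, here is a minimal sketch that fits an exponential trend to a handful of (year, metric) points and reads off a doubling time. The numbers below are hypothetical placeholders, not Riverlane's dataset, and the metric could be a qubit count, a fidelity-derived figure, or a volume-like quantity.

```python
# Toy extrapolation of a Moore-like trend (illustrative only;
# the data points below are hypothetical, not Riverlane's dataset).
import numpy as np

# Hypothetical (year, metric) pairs for some figure of merit.
years = np.array([2000, 2004, 2008, 2012, 2016])
metric = np.array([2.0, 5.0, 12.0, 30.0, 70.0])

# Fit log2(metric) = a * year + b; the doubling time is then 1 / a years.
a, b = np.polyfit(years, np.log2(metric), 1)
doubling_time = 1.0 / a

def predict(year):
    """Extrapolate the fitted exponential trend to a given year."""
    return 2.0 ** (a * year + b)

print(f"Doubling time: {doubling_time:.1f} years")
print(f"Extrapolated metric in 2028: {predict(2028):.0f}")
```

Fitting in log space keeps the regression linear and makes the doubling time fall straight out of the slope, which is the same trick used to quote Moore's law for transistors.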
