However promising, today’s quantum computers are still in their infancy, unable yet to handle high-complexity problems. As with rearing a child, it takes a whole ‘village’ to raise a quantum computer – an entire community of quantum hardware and software experts, mathematicians, electronic engineers, quantum physicists and quantum chemists, to name just a few. Hardware developers are working tirelessly to increase the number and quality of qubits (the physical units of a quantum computer that store quantum information) in their quantum processors. This means not just improving the physical devices, but also improving the classical hardware and software that interface with them, and managing qubits and algorithms to mitigate or correct computational and memory errors. Unleashing quantum computers’ full computational potential depends on all these factors.
But how do we make improvements? We first have to be able to measure how well we’re doing on the things that matter for performance. By assessing each component across different quantum computers, we can innovate more rapidly and engineer towards greater success.
Calibrating quantum computing’s ecological pyramid
To monitor and quantify progress, the quantum computing community is busy developing benchmarking algorithms and metrics across multiple quantum platforms. Such metrics are invaluable to the entire quantum-computing ecosystem, supporting all levels of its ecological pyramid.
At a foundational level, they enable the primary producers – the hardware manufacturers who make qubits – to refine their hardware-development roadmaps, compare their hardware’s performance against other technologies, and validate and showcase their systems’ capabilities to interested users. At the top level, they allow consumers such as enterprise companies to weigh proposed quantum solutions against realistic measures of each platform’s technological readiness, helping them predict when integrating quantum computers into their workflows will translate into beneficial returns.
Measuring a quantum computer’s performance
Different benchmarks measure a quantum computer’s performance at different levels. For example, quantum process tomography fully characterises a quantum operation – including the noise it introduces – by preparing a complete set of input states and measuring all the corresponding outputs. It’s akin to estimating an athlete’s performance in a sport by carefully monitoring all of their individual characteristics and actions.
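To make this concrete, here is a minimal, self-contained sketch of single-qubit process tomography in Python. Everything in it – the simulated noisy gate, the function names and the noise level – is an illustrative assumption rather than any vendor’s API: we probe a noisy X gate with a complete set of input states and reconstruct its Pauli transfer matrix by linear inversion.

```python
# Single-qubit process tomography sketch (illustrative assumptions throughout).
# We simulate a noisy X gate, probe it with a complete set of input states,
# and reconstruct its Pauli transfer matrix R_ij = Tr[P_i E(P_j)] / 2.
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I, X, Y, Z]

def noisy_x_gate(rho, p=0.05):
    """The channel under test: an X gate followed by depolarising noise."""
    rho = X @ rho @ X.conj().T
    return (1 - p) * rho + p * I / 2

# Complete set of input states: |0>, |1>, |+>, |+i>
kets = [np.array([1, 0]), np.array([0, 1]),
        np.array([1, 1]) / np.sqrt(2), np.array([1, 1j]) / np.sqrt(2)]
inputs = [np.outer(k, k.conj()) for k in kets]
outs = [noisy_x_gate(rho) for rho in inputs]

# Each Pauli is a linear combination of the input states, so E(P_j)
# follows from the measured outputs by linearity:
# I = rho0 + rho1, X = 2*rho+ - I, Y = 2*rho+i - I, Z = rho0 - rho1.
E = [outs[0] + outs[1],
     2 * outs[2] - (outs[0] + outs[1]),
     2 * outs[3] - (outs[0] + outs[1]),
     outs[0] - outs[1]]

R = np.real(np.array([[np.trace(P @ Ej) / 2 for Ej in E] for P in paulis]))
print(np.round(R, 3))  # ideal X gate: diag(1, 1, -1, -1)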
Randomised benchmarking is another widely used protocol: it executes sequences of randomly chosen quantum operations on a quantum system and measures how the fidelity decays as a function of sequence length. The average error rate can then be estimated by averaging the results over many random sequences and fitting them to a known decay model. This is like estimating an athlete’s capabilities by seeing how well they perform over a whole season, where many specific factors (who they face, what they ate that morning, etc.) get averaged out and a simpler result remains. While providing a less complete characterisation of the quantum system’s noise, randomised benchmarking is more scalable, as it requires far fewer measurements than tomography as the number of qubits grows.
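The fitting step at the heart of randomised benchmarking is simple enough to show directly. The sketch below uses synthetic survival probabilities (an assumption, not hardware data) and the standard decay model F(m) = A·p^m + B, from which the average error per Clifford follows as r = (1 − p)(d − 1)/d, with d = 2 for a single qubit.

```python
# Randomised-benchmarking decay fit (synthetic data, not real hardware).
import numpy as np
from scipy.optimize import curve_fit

def decay(m, A, p, B):
    """Standard RB model: survival probability after m random Cliffords."""
    return A * p**m + B

lengths = np.array([1, 5, 10, 20, 50, 100, 200])
rng = np.random.default_rng(0)
# Synthetic survival probabilities for a device with p ~ 0.99, plus shot noise
survival = 0.5 * 0.99**lengths + 0.5 + rng.normal(0, 0.005, lengths.size)

(A, p, B), _ = curve_fit(decay, lengths, survival, p0=[0.5, 0.98, 0.5])
r = (1 - p) * (2 - 1) / 2  # average error per Clifford, single qubit (d = 2)
print(f"decay rate p = {p:.4f}, error per Clifford r = {r:.2e}")
```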
Riverlane’s integrated software suite
To help quantum-hardware manufacturers and academic labs progress towards the error-corrected quantum computers of the future, Riverlane has developed a software suite that brings together implementations of key benchmarking protocols. These include industry-standard protocols such as Clifford randomised benchmarking, direct randomised benchmarking, state tomography, and gate-set tomography.
Solutions for mitigating or correcting quantum-processing errors depend on understanding and quantifying system noise. Riverlane’s software enables users to quantify how architectural improvements affect a specific platform’s performance metrics, and to run benchmarking circuits on any quantum hardware back-end (or on manufacturer-specific or generic emulators running on classical hardware) for testing and cross-technology comparisons.
Scale, speed and quality
Other metrics quantify three key parameters that are more directly interpretable by end-users: scale, speed and quality. Scale, largely quantified by a quantum machine’s qubit count, is typically the most advertised metric in hardware developers’ roadmaps. However, it provides only a limited picture of a quantum computer’s actual capabilities.
An intuitive metric for a quantum computer’s speed is the number of quantum operations each qubit can execute per second. This parameter can give a more complete picture of system performance, accounting not only for the speed of an individual quantum gate but also for the time required for instructions and results to move between the quantum and classical components.
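A back-of-the-envelope calculation shows why this matters. In the sketch below, every number (gate time, circuit depth, classical round-trip overhead) is an illustrative assumption; the point is that the effective operations-per-second figure can sit orders of magnitude below the naive one once classical I/O is counted.

```python
# Operations-per-second speed metric, naive vs effective (illustrative numbers).
gate_time_s = 100e-9    # assumed duration of one quantum gate
layers_per_job = 1_000  # assumed circuit depth executed per job
round_trip_s = 5e-3     # assumed classical I/O + compile + readout per job

naive_ops_per_sec = 1 / gate_time_s
job_time_s = layers_per_job * gate_time_s + round_trip_s
effective_ops_per_sec = layers_per_job / job_time_s

print(f"naive:     {naive_ops_per_sec:,.0f} ops/s")   # gate speed alone
print(f"effective: {effective_ops_per_sec:,.0f} ops/s")  # dominated by I/O overhead
```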
To quantify a quantum computer’s quality, IBM introduced the Quantum Volume metric, defined by the largest ‘square’ circuit (one whose depth equals its number of qubits) that the machine can execute while keeping the probability of a correct answer above a fixed threshold. This parameter depends not only on the qubits’ performance but also on other elements of the quantum stack, e.g. those responsible for qubit calibration, error mitigation and circuit compilation. IonQ has proposed an alternative quality metric, Algorithmic Qubits, defined as the maximum number of qubits N that can be used to successfully execute a quantum circuit containing N^2 two-qubit gates. In this case, the quantum circuits do not comprise random gates but represent algorithms pertinent to key end-user applications.
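The scoring rules for both metrics are easy to express in code. In the sketch below, passes_heavy_output and runs_successfully are hypothetical stand-ins for the full benchmark runs (which require executing and statistically verifying many circuits on the device); only the scoring logic – 2^n for the largest passing square circuit, and the largest N supporting N^2 two-qubit gates – follows the published definitions.

```python
# Scoring sketches for Quantum Volume and Algorithmic Qubits. The predicate
# functions are hypothetical placeholders for real benchmark campaigns.

def quantum_volume(passes_heavy_output, max_n=20):
    """IBM's Quantum Volume: 2**n for the largest 'square' circuit
    (n qubits, depth n) whose heavy-output probability exceeds 2/3."""
    best = 0
    for n in range(1, max_n + 1):
        if passes_heavy_output(n):  # width == depth == n
            best = n
    return 2**best

def algorithmic_qubits(runs_successfully, max_n=64):
    """IonQ's Algorithmic Qubits: the largest N such that an application
    circuit on N qubits with N**2 two-qubit gates still succeeds."""
    best = 0
    for n in range(1, max_n + 1):
        if runs_successfully(n, n * n):  # N qubits, N^2 two-qubit gates
            best = n
    return best

# Toy example: a device that passes square circuits up to n = 5
print(quantum_volume(lambda n: n <= 5))           # -> 32
print(algorithmic_qubits(lambda n, g: g <= 400))  # -> 20
```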
Quantifying a quantum computer’s utility
Testing quantum circuits that represent practically relevant tasks helps clarify a quantum computer’s ultimate usefulness. Thus, while industry-standard benchmarks provide the diagnostics needed to improve today’s hardware, newer application-oriented benchmarks equip enterprises with a reliable tool for assessing the potential benefits to their businesses. Application-benchmark suites such as those developed by the US Quantum Economic Development Consortium or the Super.tech start-up (recently acquired by ColdQuanta) offer circuit sets of increasing size, reproducing increasingly complex algorithms in chemistry, materials, finance and other key application areas. However, more work is needed to expand these algorithms and their application areas, and to define benchmarks that can follow quantum computing’s journey as it evolves towards full-scale fault-tolerant systems.
Ultimately, no classical computer will be able to confirm the most important quantum computing outputs once quantum computers reach quantum computational advantage. At that point, metrics and benchmarks that validate quantum results via classical simulations will become redundant, and new benchmarks will need to be designed – possibly pitting different quantum platforms against each other. Hopefully, this technology’s reliability will be evidenced by its real-life applications, with qubits delivering on their promise via the more efficient batteries or drug designs that quantum computers enable.