
IEEE23: 3 Key Themes from the Riverlane Team

28 September, 2023

by Kenton Barnes and Alex Moylett

The Riverlane team went out in force to IEEE Quantum Week 2023, which is always a key event for the quantum computing community.

Our decoder announcements were at the centre of many of the conversations we had – just before IEEE23, we introduced the world’s most powerful decoder chip, published our decoder IP, launched our roadmap for decoder success, and posted an arXiv pre-print that dives deep into our current-generation decoder (DD1) for quantum error correction.

We hosted a one-day workshop ‘Towards Controlling Fault-Tolerant Quantum Computers’. Here, speakers from the world’s leading quantum hardware companies, academic groups and national laboratories explored the current successes in controlling small error-corrected devices, whilst uncovering the early challenges that have arisen when scaling up to larger experiments. 

Alongside our decoder chip, we presented two more posters during the week. First, we introduced our tangled syndrome extraction circuit, which allows us to measure long stabilisers without extra connectivity. Second, we estimated the resources required to run even a simple chemistry application on a fault-tolerant quantum computer.

There were a lot of interesting takeaways from the five-day event. It’s by no means an exhaustive list, but here are some of the main insights from the team: 

1. Quantum error correction moves to the mainstream  

Riverlane is building the quantum error correction stack. This, essentially, connects the qubits to the human interface, processing the output of the quantum computer and predicting errors so that corrections can be made with speed and accuracy. 
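
To make that concrete, here is a minimal, illustrative sketch in Python of what a decoder does at its core: take the syndrome measurements coming off the device and predict which correction to apply. It uses a toy three-qubit repetition code and a simple lookup table; a real decoder such as DD1 works on far larger codes and in real time, but the input-to-output shape of the problem is the same.

```python
# Illustrative only: a lookup-table decoder for the 3-qubit repetition code.
# Real decoders handle far larger codes under tight timing constraints; this
# sketch just shows the core idea: syndrome in, predicted correction out.

# Syndrome bits come from parity checks on qubit pairs (0, 1) and (1, 2).
SYNDROME_TO_CORRECTION = {
    (0, 0): (0, 0, 0),  # no error detected
    (1, 0): (1, 0, 0),  # most likely: bit-flip on qubit 0
    (1, 1): (0, 1, 0),  # most likely: bit-flip on qubit 1
    (0, 1): (0, 0, 1),  # most likely: bit-flip on qubit 2
}

def measure_syndrome(data_bits):
    """Parity checks between neighbouring qubits (the stabiliser measurements)."""
    return (data_bits[0] ^ data_bits[1], data_bits[1] ^ data_bits[2])

def decode(data_bits):
    """Predict the error from the syndrome and return the corrected bits."""
    correction = SYNDROME_TO_CORRECTION[measure_syndrome(data_bits)]
    return tuple(b ^ c for b, c in zip(data_bits, correction))

# A single bit-flip on qubit 1 is detected and corrected.
assert decode((0, 1, 0)) == (0, 0, 0)
```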

Our CEO and founder, Steve Brierley, has long stated that the key to unlocking ‘useful’ quantum computing at scale is quantum error correction.  

This used to be a radical view. First, because quantum error correction is an incredibly hard problem. Second, because many believed today’s quantum computers (called NISQ, meaning noisy intermediate-scale quantum computers) had great potential.

But our longstanding belief that quantum computers must include quantum error correction to live up to the technology’s full promise is now becoming more prevalent in the industry. There was an increased presence of quantum error correction at the conference compared to last year – with attendees remaining realistic about the challenges that lie ahead.  

Robin Blume-Kohout from Sandia National Labs, for example, stated that fault-tolerant applications (like a 2048-bit Shor's algorithm) require around 100,000 times more qubits than we currently have. Robin said this was the size equivalent of starting with a rowboat and scaling up to an aircraft carrier!

This is a startling analogy – but it’s also important to remember that reaching ‘useful’ quantum computing does not just rely on the number of qubits you have. It’s also about the quality of those qubits – how stable and interconnected they are – and this is where quantum error correction is vital. 

Our senior quantum scientist, Ophelia Crawford, sat on the quantum resource estimation panel. The session highlighted that, when estimating the resources needed in the fault-tolerant regime, we must remember that the hardware does not exist yet. We need to make assumptions about how it can be represented and communicate those assumptions clearly.

What’s more, when doing estimation tasks, it's important to keep the audience of the final results in mind. Different metrics and ways of communicating the data make sense for different audiences.  

At Riverlane, we use the TeraQuop as our key metric (for more details, see our roadmap). A Quop is short for a reliable quantum operation: one logical qubit doing one logical (useful) thing. A TeraQuop is the point at which a quantum computer can perform a trillion reliable operations.

We believe the TeraQuop is a solid metric because it takes into account a broad range of factors to represent the scale at which quantum computers start to solve problems that are intractable for any supercomputer.  
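
As a rough illustration of what the TeraQuop target implies (using textbook surface-code scaling and error rates we have picked for the example, not measured figures from any device), the sketch below asks: if a trillion logical operations must all succeed, what logical error rate, code distance and physical-qubit overhead does that suggest?

```python
# Back-of-envelope TeraQuop sketch under simplified, textbook assumptions:
# the standard surface-code scaling p_L ≈ A * (p / p_th) ** ((d + 1) / 2).
# A, p_phys and p_threshold below are illustrative values chosen for this
# example, not measured numbers from any particular hardware.

TARGET_OPS = 1e12     # a TeraQuop: a trillion reliable logical operations
p_phys = 1e-3         # assumed physical error rate
p_threshold = 1e-2    # assumed error-correction threshold
A = 0.1               # assumed prefactor

# Keep the expected number of logical failures over 10^12 operations at or
# below one: the per-operation logical error rate must reach ~1e-12.
target_p_logical = 1 / TARGET_OPS

d = 3
while A * (p_phys / p_threshold) ** ((d + 1) / 2) > target_p_logical:
    d += 2  # surface-code distances are usually taken to be odd

physical_per_logical = 2 * d * d  # rough count of data plus ancilla qubits
print(f"code distance d = {d}")
print(f"~{physical_per_logical} physical qubits per logical qubit")
```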

2. Real-time decoding solutions are needed, now 

Real-time decoding is now a well-recognised problem within the community, with more quantum hardware companies understanding it's something that needs to be addressed.  

At Riverlane, we don’t develop qubits, so our partnerships are key. Our quantum decoder, for example, continues to be developed and validated on Rigetti’s superconducting hardware. But error correction is a problem every hardware company will need to solve, regardless of the physics of its qubits, which is why we’re partnering with companies across different qubit types. And whatever the qubit modality, every company faces the same challenge on the road to useful quantum computing: scale.

Our head of silicon, Kauser Johar, took part in the real-time decoding panel and noted that the quantum control systems in use today will not scale to fault-tolerant quantum computation. One possible direction is to co-locate the decoder with the rest of the control system inside the cryostat. However, that requires a very low-power design, and we are still years away from developing one.

Many companies also talked about how 1,000-qubit control is the real benchmark and challenge for the industry right now. But, for real-time decoding to work, it’s not just about hitting metrics. It’s about building a decoder that can be integrated into a partner’s quantum computing stack.
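
A simple way to see why “real time” is the operative phrase: syndrome data arrives at a fixed rate (of order a microsecond per round on superconducting hardware), and a decoder that processes each round even slightly slower than that accumulates an ever-growing backlog. The toy model below, with made-up throughput numbers, shows how quickly that backlog grows.

```python
# Simplified backlog model for real-time decoding (illustrative numbers only).
# If syndrome rounds arrive faster than the decoder consumes them, unprocessed
# rounds pile up and logical operations that depend on the decoder's output
# have to wait.

ROUND_TIME_US = 1.0  # assumed time between syndrome rounds, in microseconds

def backlog_after(rounds, decode_time_us):
    """Rounds still waiting to be decoded after `rounds` QEC cycles."""
    backlog = 0.0
    for _ in range(rounds):
        backlog += 1.0                                   # a new round arrives
        backlog = max(0.0, backlog - ROUND_TIME_US / decode_time_us)
    return backlog

print(backlog_after(1_000_000, decode_time_us=0.8))  # keeps up: backlog near zero
print(backlog_after(1_000_000, decode_time_us=1.2))  # falls behind: ~167,000 rounds queued
```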

3. For any of this to work, collaboration is key 

The need for collaboration across the quantum computing community was a core topic for many. It was also a trend identified from the start of the event, in IEEE23’s first keynote by David Awschalom from Q-Next and the University of Chicago.

When asked about the interplay between industry and academia, Awschalom said he believes a non-competitive, information-sharing environment should be created around staggered near-term challenges, and that this would then allow competitors to advance their own corner and build their competitive edge.

Our senior product manager, Rossy Nguyen, sat on the software for fault tolerance panel and spoke about how co-design is an essential part of our process for achieving fault tolerance. At Riverlane, we work with hardware partners to understand their device, from the topology to detailed noise characterisation, and all this feeds into the development of the best code and decoder for their system. 

The panel discussed how the software we build now should focus on enabling exploration rather than standardisation. Standardisation will be needed eventually, especially on the API side as integration between HPC and quantum computers increases. But, for now, it’s important for all stakeholders in the quantum computing space to keep talking and keep working out how to make quantum computing useful, sooner.

It’s not an easy task. Building an error-corrected quantum computer is incredibly hard, a real moon-landing effort. But through partnership and collaboration, we can get there. 

We’ll be in London on November 2nd at the National Quantum Technologies Showcase (NQTS) to carry on meeting with the quantum community and showcase our work in quantum error correction.  

But you don’t have to wait until then – please reach out to [email protected] and we’ll put you in contact with the best person in the Riverlane team. 
