Quantum Error Correction via Noise Guessing Decoding

Authors: Diogo Cruz, Francisco A. Monteiro, Bruno C. Coutinho

arXiv: 2208.02744v1 [quant-ph]

Abstract: Quantum error correction codes (QECCs) play a central role both in quantum communications and in quantum computation, given how error-prone quantum technologies are. Practical quantum error correction codes, such as stabilizer codes, are generally structured to suit a specific use, and present rigid code lengths and code rates, limiting their adaptability to changing requirements. This paper shows that it is possible to both construct and decode QECCs that can attain the maximum performance of the finite blocklength regime, for any chosen code length, provided the code rate is sufficiently high. A recently proposed strategy for decoding classical codes called GRAND (guessing random additive noise decoding) opened the door to decoding classical random linear codes (RLCs) that perform near the capacity of the finite blocklength regime. By making use of the noise statistics, GRAND is a noise-centric, efficient, universal decoder for classical codes, provided there is a simple code membership test. These conditions are particularly suitable for quantum systems, and therefore the paper extends these concepts to quantum random linear codes (QRLCs), which were known to be possible to construct but whose decoding was not yet feasible. By combining QRLCs with a newly proposed quantum GRAND, this paper shows that versatile decoding of quantum error correction codes is possible, allowing for QECCs that are simple to adapt on the fly to changing conditions. The paper starts by assessing the minimum number of gates in the coding circuit needed to reach the QRLCs' asymptotic performance, and subsequently proposes a quantum GRAND algorithm that makes use of quantum noise statistics, not only to build an adaptive code membership test, but also to efficiently implement syndrome decoding.
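To make the noise-guessing idea described in the abstract concrete, the sketch below illustrates classical GRAND for a binary linear code over a binary symmetric channel: candidate noise patterns are queried in decreasing order of likelihood (increasing Hamming weight when the bit-flip probability is below 1/2), and a parity-check syndrome serves as the simple code membership test. The Hamming-code parity-check matrix, function names, and ordering are illustrative assumptions, not taken from the paper; the quantum GRAND the authors propose works instead with stabilizer syndromes and quantum noise statistics.

```python
# Minimal, hypothetical sketch of classical GRAND decoding (assumed setup, not the paper's code).
import itertools
import numpy as np

def is_codeword(H, word):
    """Code membership test: word is a codeword iff H @ word = 0 (mod 2)."""
    return not np.any(H @ word % 2)

def grand_decode(H, y, max_weight=None):
    """Guess noise patterns in increasing Hamming weight (decreasing likelihood
    on a BSC with flip probability < 1/2); return the first candidate codeword."""
    n = len(y)
    if max_weight is None:
        max_weight = n
    for w in range(max_weight + 1):
        for positions in itertools.combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(positions)] = 1        # candidate noise pattern
            candidate = (y + e) % 2       # undo the guessed noise
            if is_codeword(H, candidate):
                return candidate, e       # first hit = most likely explanation
    return None, None                     # abandon: no codeword within max_weight

# Toy usage with a [7,4] Hamming code parity-check matrix (illustrative only).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
y = np.array([1, 0, 1, 1, 0, 1, 1])      # received word with a single bit flip
codeword, noise = grand_decode(H, y)
print("decoded codeword:", codeword, "guessed noise:", noise)
```

In this toy run the decoder recovers the single flipped bit after at most eight membership queries; the paper's contribution is to carry this noise-centric querying over to QRLCs, where the membership test and the noise ordering are derived from the quantum channel's statistics.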

Submitted to arXiv on 04 Aug. 2022
