
Decoding algorithms for topological codes


Formal Metadata

Title
Decoding algorithms for topological codes
Series Title
Number of Parts
48
Author
License
CC Attribution - NonCommercial - NoDerivatives 3.0 Germany:
You may use, copy, distribute, and make the work or its content publicly available in unaltered form for any legal and non-commercial purpose, provided you credit the author/rights holder in the way they specify.
Identifiers
Publisher
Publication Year
Language

Content Metadata

Subject Area
Genre
Abstract
I will talk about the problem of decoding a topological code, which consists of identifying the optimal recovery operation given the syndrome of an error, or equivalently of inferring the most likely world-line homology given a defect configuration. I will describe a new decoding algorithm [Phys. Rev. Lett. 104, 050504, arXiv:0911.0581 and arXiv:1006.1362] for Kitaev's toric code (KTC) that runs in a time proportional to the log of the number of particles, an improvement over the previously known polynomial-time decoding algorithm. This algorithm also achieves a higher threshold on the depolarizing channel. Moreover, we have recently shown that all two-dimensional topological stabilizer codes can be mapped onto each other by local transformations [arXiv:1103.4606, arXiv:1107.2707]. This local mapping enables us to use any decoding algorithm suitable for one of these codes to decode other codes in the same topological phase. We illustrate this idea with the topological color code, which is found to be locally equivalent to two copies of the KTC, and we extend it to decode the topological subsystem color code.
Transcript: English (automatically generated)
David Poulin at the Université de Sherbrooke. So David was invited to give this talk, but unfortunately he couldn't be here. So I would like to thank the organizers for giving me the opportunity to replace him today and give this talk about the results we obtained in the last couple of years regarding the decoding problem for topological codes.
Much of it, or a big part of it, has been done in collaboration with Héctor Bombín from the Perimeter Institute, who gave a talk here last Tuesday about this idea of local equivalence of topological codes, so we're going to discuss an application of this mapping to quantum error correction.
So my title was quite simple and explicit, I guess. And given all that we've heard up until now this week, I don't think I need to make too much effort in convincing you that topological codes are interesting. Let's just summarize this by saying that these are candidates for fault-tolerant quantum
computation. But I'm a physicist, and as a physicist, these codes also have another interest, and I would like to emphasize it. And the idea is that, first of all, they are physically relevant because they have local interaction. And this seems to be physically realistic. If two systems are far from one another, you don't expect them to couple or to interact.
So this is the first thing to consider. Then there is this idea of topological order, which can get very abstract very fast. So these systems are pretty simple. We can solve them exactly, and they give us examples of topological order. And just to say the same thing in another manner, they give us examples of anyonic excitations
and anyonic models. So we can study them in an easier way. And moreover, what is also interesting is that the toolbox we have, all the concepts introduced regarding topological codes, for instance twists or boundaries, actually have a very natural interpretation in terms of physics: twists are actually permutations
of the anyons. So they are symmetries of the anyon model. And boundaries actually have to do with particle or string condensation. So this being said, there are lots of things that I would like to cover, but I would like to focus more on points three to five. The first two points we've already heard pretty much about. I'll cover this quite fast.
And the last two, time permitting, I would like to discuss new ideas we have regarding this problem. And the main part of the talk is actually introducing a method to decode the toric code, which was inspired by our physics background, which is some kind of RG flow that we use to decode. I will discuss the results we obtained for Kitaev's toric code.
And I would like to give this application of Héctor's mapping to quantum error correction to extend our decoder to decode other codes, for instance, color codes. So like we already saw, we consider some stabilizer code in the physical picture,
if you will, where we take some qubit system, some Hamiltonian on this qubit system, which is just minus the sum of all your stabilizer generators, which are of two kinds. These A's are sigma-x loop operators, which are located at each vertex of the lattice. And on each plaquette, you have a similar operator,
which is a loop of sigma-z operators. So since we consider the ground space as our code space, we see that we actually have a stabilizer code, meaning we are in the plus 1 eigenstate of each of these operators. Again, there's a nice link between homology or topology and the actual logical meaning of operators.
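Written out, assuming the usual convention of one qubit per edge of the square lattice, the setup just described is

```latex
H \;=\; -\sum_{v} A_v \;-\; \sum_{p} B_p ,
\qquad
A_v \;=\; \prod_{e \in \mathrm{star}(v)} \sigma^x_e ,
\qquad
B_p \;=\; \prod_{e \in \partial p} \sigma^z_e ,
```

and the code space is the simultaneous +1 eigenspace of all the A_v and B_p.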
And simply put, the idea is that if you have some string operator which is a trivial loop, which means you can contract it to a point, then it has a trivial effect on the code. So every kind of operator that looks like a string which is closed into a loop, and that is contractible, is actually a stabilizer element.
And if you consider non-trivial cycles, they actually are non-trivial logical operators, because you can't shrink them down to a point. They wrap around the lattice if you consider, for instance, periodic boundary conditions and you consider the torus. And you can also take the transverse cycle. You have two logical qubits encoded in this structure.
So there is this nice link between homology and the logical effect of string operators. Then we can consider what happens when I start adding, for instance, errors, or if I, let's just say, flip some qubit. Then this sigma-x operator actually
anti-commutes with the neighboring plaquettes, so it creates two syndrome bits. And as we're in this physical picture, let's just say these are kind of quasi-particles, because they're actual excitations of my Hamiltonian, because I flip the eigenvalues of the operators from plus 1 to minus 1. So errors can create particles. And yes, so syndrome bits will be considered as particles or defects.
And so we said errors could create particles. They can also diffuse them. So they can move them around. And the way this happens, consider this very simple case and add another x error. So the defect has actually moved from left to right, and the number of defects didn't change either. So now, by this very simple example,
we can convince ourselves errors can actually move the defects around. So they can create and diffuse the defects. And we can consider error chains more generally. And just by inspection, you can convince yourself that the only place you will have defects will be on the endpoints of your string.
So yes, the syndrome configuration on the endpoints does not depend on the geometry of the path. It doesn't depend on its length or its geometry. So in some sense, these error chains can grow, because you only have to pay a constant energy
cost, which is the energy necessary to create two particles. But the number of particles does not change as you increase the size of the chain. And this is why you actually need active error correction in these systems, because you have, in some sense, to confine these defects. OK. So we said particles could be created, diffused, and they can also be annihilated back to the vacuum.
So if, for instance, this is just an example of a chain with defects on its endpoints, if you add an extra sigma x here, it actually fuses the defects back together. And we're back in the code space. And the only thing that is left behind is actually the world line. So the history of how the particles were created, moved around, and then annihilated.
OK. So if we leave the diffusion unchecked, particles could drift or diffuse over very long distances, because it's free. And then if they fuse back to the vacuum after having, for instance, wrapped around the horizontal direction as in this example, then they will have implemented some logical operation.
And this will corrupt our memory if we're not aware of this. So we have to keep defects localized, and we have to prevent this kind of event. So this would result in memory corruption if you were to use your code as a memory, for instance. You can have something similar with sigma-z strings,
but you create defects on the site operators instead of the plaquette operators. So it's exactly the same thing, but on the lattice instead of the dual lattice. So now, given what I've said about the code, I would like to discuss the decoding problem in this setting.
OK. So we've heard about this earlier this week. We're going to consider very simple, practical noise models so that our analysis is easier. So for instance, depolarizing or bit-flip error models, which are quite standard and probably the simplest. So this is, for instance, the depolarizing channel acting on our code.
We take some noise rate, let's say 15%. So this means there is a 5% chance each of having an x, y, or z error on every qubit. So the error acts, some defects are created. We come and we measure the position of the particles by measuring all of the plaquette and site operators. So this is the information we have. And in a sentence, the decoding problem is trying to infer this, given that.
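To make the syndrome picture concrete, here is a minimal sketch of extracting plaquette syndromes from a sigma-x error chain on a periodic lattice; the edge and plaquette labeling convention is my own assumption, not taken from the talk.

```python
import numpy as np

L = 8  # linear size; qubits live on the edges of an L x L periodic lattice

# Convention (an assumption): horizontal edge h[x, y] is the bottom side of
# plaquette (x, y); vertical edge v[x, y] is its left side.  A 1 marks a sigma-x error.
h_err = np.zeros((L, L), dtype=int)
v_err = np.zeros((L, L), dtype=int)

# A chain of three sigma-x errors on consecutive vertical edges: it moves a
# plaquette defect three steps to the right.
v_err[3, 3] = v_err[4, 3] = v_err[5, 3] = 1

def plaquette_syndrome(h_err, v_err):
    """Parity of sigma-x errors on the four edges bounding each plaquette."""
    L = h_err.shape[0]
    syn = np.zeros((L, L), dtype=int)
    for x in range(L):
        for y in range(L):
            syn[x, y] = (h_err[x, y] + h_err[x, (y + 1) % L]        # bottom, top
                         + v_err[x, y] + v_err[(x + 1) % L, y]) % 2  # left, right
    return syn

# Only the two endpoint plaquettes light up, here (2, 3) and (5, 3):
print(np.argwhere(plaquette_syndrome(h_err, v_err) == 1))
```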
So there are many ways we could fuse all the particles back to the vacuum, because we want to go back to our code space. This is what decoding is about. So we have to fuse all these defects together so that they all disappear. So this would be a possible way. This is another possible way. So are they the same or are they different?
Well, one way to see that is to actually take the sum of the two. So if you sum this configuration plus this one, what you're left with is something like that. What this means is that if you add the two, you actually implement a non-trivial logical operation, which would be this horizontal logical operator here. So what this means is that these two corrections
are actually not equivalent. There's a difference between doing one or the other. So you have to do a correction which has the same logical effect as your error. And if you choose this improperly, you will implement some logical error, and you will lose your information. So you have to infer some world line or some correction for the defects, and you have to be careful the way you do this.
So we've heard about an existing method, which is matching. So the idea is to find the shortest paths connecting all the defects in pairs. Remember, we assume some independent noise. So the shortest connection should be the one with highest probability. So this seems to be a pretty reasonable thing to do.
And you can map this problem, as we heard earlier this week, to a problem from statistical mechanics, which is a random-bond Ising model. And in this case, this decoder would be equivalent to minimizing the energy of this system, given the excitations. Again, this is called the perfect matching algorithm, and it has a complexity which is polynomial in the linear size of the system.
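The pairing idea can be sketched as follows; this is a brute-force toy stand-in for the polynomial-time blossom matching (PMA), and using toroidal Manhattan distance as the path weight is my own simplifying assumption.

```python
def toroidal_distance(a, b, L):
    """Manhattan distance between two defects on an L x L torus."""
    dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
    return min(dx, L - dx) + min(dy, L - dy)

def min_weight_pairing(defects, L):
    """Brute force over all ways of pairing up an even list of defects.
    Exponential in the number of defects: only a toy illustration of what
    the perfect matching algorithm computes in polynomial time."""
    if not defects:
        return 0, []
    a, rest = defects[0], defects[1:]
    best_w, best_pairs = float("inf"), None
    for i, b in enumerate(rest):
        w, pairs = min_weight_pairing(rest[:i] + rest[i + 1:], L)
        w += toroidal_distance(a, b, L)
        if w < best_w:
            best_w, best_pairs = w, [(a, b)] + pairs
    return best_w, best_pairs

defects = [(2, 3), (5, 3), (1, 7), (1, 0)]
print(min_weight_pairing(defects, L=8))   # pairs (1, 7) with (1, 0) across the boundary
```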
So this has been studied by Jim Harrington in his PhD thesis. It has polynomial complexity, but it's still quite prohibitive. But fortunately, very recently, and we heard about that two days ago if I remember correctly, this method was improved very much by Fowler and collaborators to actually be able to parallelize it
and simplify it, such that you can do average constant-time decoding with marginal losses of performance, actually. So this is pretty good news for this method. But still, we can think about what we put aside when we did this decoding. First of all, remember I told you many different strings
or many different corrections are actually equivalent. So by choosing one, the shortest one, we actually forget about the fact that they are highly degenerate, that many different strings are actually equivalent. So maybe we should take that into account. And moreover, if there is some correlation between, for instance, x and z errors, which is the case for the depolarizing channel, which introduces this correlation through the y
operator, then usually in these methods, you decode x and z defects individually, and you don't take into account these correlations. So these are the things that motivated us to actually search for another way to decode this code. So I would like to talk about these two points, the degeneracy and the correlations.
So imagine you have this configuration of defects. You could say, OK, let's pair them that way. There is only one minimal weight option, because if you take any other, it would be longer. But if you take something which is homologically different, for instance, this one, then you
can see that many have the same weight and are actually equivalent. For instance, if I just change this check to this, then I didn't do anything. So I should count these different strings, which have the same weight and which actually are in the same class, which is not done, for instance, in PMA. So this is just saying that there is lots of degeneracy in the possible corrections.
And again, if we go to this idea of mapping it to some physical model, it corresponds to minimizing, actually, the free energy. So it's minimizing free energy versus energy. And the difference is this term, which is actually taking into account the degeneracy, which we call entropy. OK. So this is one point.
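In formulas, the point is that the optimal decoder weighs whole equivalence classes of corrections rather than single chains; schematically,

```latex
P(\bar{C} \mid s) \;=\; \sum_{E \in \bar{C}} P(E \mid s)
\quad\Longleftrightarrow\quad
\text{pick the class } \bar{C} \text{ minimizing } F_{\bar{C}} \;=\; E_{\bar{C}} - T\, S_{\bar{C}} ,
```

with the temperature set by the noise rate, whereas minimum-weight matching only minimizes the energy term (the chain length) and drops the entropy term counting how many equally likely chains sit in the same class.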
The other thing, remember, I talked about correlation. So for instance, if we have site and plaquette defects arranged in such a manner, this would be the minimum weight matching string. And for instance, this one would have higher weight, because the green strings went from weight three to weight five, actually. But the thing is, that's not true if this is the depolarizing channel, right?
Because a y error is actually a z and an x error. So where the lines cross, you don't have to count twice for having an error occurring. OK, so this is how, for instance, you can change the effective weight of the chains by taking into account correlations. So this is, in general, not done by the PM algorithm
either. OK, so given what I've said and given what we knew from our physics background, we thought of approximating this minimization of free energy, which would be optimal decoding, using a renormalization group (RG) algorithm. So the idea is that if you do the optimal thing,
the minimization of free energy, it's actually intractable. The number of strings you have to consider is exponential in the size of the system. So our approach is to say, let's try and approximate this optimal decoding, which is intractable. And this leads us to this RG decoder. OK, so when we talk about RG, what we would like to see
is, in some sense, some scale invariance. So let's say we start with our plaquette stabilizers. So this is half of the stabilizers. And we would like to make a change of basis such that we have something which is scale invariant. And we do this as follows. So consider every four operators, and just remove from your generators one of the four plaquettes.
And replace it, actually, by it times the other three. So this gives you that, right? OK, so we started with this generator, and we replace this one by it times the other three. And this is what we get. And then on the red scale, you can actually do the same thing exactly.
So each one of these, you actually take it out and replace it with the one twice as big. And you can do this recursively until you get to the trivial one by, let's say, just having one cell. So you have this kind of hierarchy of stabilizer generators. And actually, you didn't do anything in the sense that you just changed your generator,
but the stabilizer group is the same. So now we have this scale invariance. And what is interesting is that this structure is very similar to what we've seen in concatenated codes. So you really have this idea of encoding qubits into qubits such that you create this hierarchy. So we're going to use what we know about concatenated codes.
We're going to do some decoding on each block of the first layer, let's say all the red blocks, and then the information that comes out of each of these decoders will be used as an input for the decoders which run, let's say, on the blue scale, and so on until we get to our logical information. Just to make this a little more precise, let me tell you about how we decode concatenated codes.
So the way this goes is, this is, for instance, a concatenated code built from a very simple 3-qubit code. The way we decode this, if we want to do it optimally and efficiently, is by having a decoder, which could be just a brute-force decoder, for one code.
You actually just run the decoding algorithm brute force on one small block. So this will give you a marginal probability on the logical qubits of this block. Once this is done, it can be used as the input of the blocks on the layers just above it. So the logical output of one layer becomes the physical input of the layer just above it.
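As a rough sketch of what one such block decoder does, assuming a 3-qubit repetition code protecting against bit flips (my own toy choice, not the actual block used in the talk):

```python
from itertools import product

def block_posteriors(p, syndrome):
    """Posterior weights of the error patterns of one 3-qubit repetition block.

    p        -- independent flip probabilities (p1, p2, p3) of the block's qubits
    syndrome -- measured parities (q1 XOR q2, q2 XOR q3)
    The two patterns consistent with a given syndrome differ exactly by the
    logical flip, so their relative weights are the soft information the next
    layer needs about this block's logical qubit.
    """
    post = {}
    for e in product((0, 1), repeat=3):              # brute force over error patterns
        if (e[0] ^ e[1], e[1] ^ e[2]) != syndrome:
            continue                                 # inconsistent with the syndrome
        w = 1.0
        for flipped, pi in zip(e, p):
            w *= pi if flipped else 1.0 - pi
        post[e] = w
    total = sum(post.values())
    return {e: w / total for e, w in post.items()}

# One level of the hierarchy: the weight of the less likely class is the residual
# logical flip probability, which becomes the physical flip probability seen by
# the block one layer above.
posteriors = block_posteriors((0.1, 0.1, 0.1), syndrome=(1, 0))
print(posteriors, min(posteriors.values()))
```

The real decoder passes the full conditional distribution up the hierarchy rather than hard-committing at each level, but the flow of information is the same: logical marginals of one layer become the noise model of the next.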
And this is how you can actually efficiently and optimally decode concatenated codes. And this is the structure we're going to use for our decoder. So just think of the theory code as a concatenation of this very small open boundary surface code. So you have 8 qubits. You have two smooth boundaries, two rough boundaries. And if you do a brute force decoding on it,
you will actually end up with a logical marginal probability on your two logical qubits implemented in this very small surface code. So the logical operators are just strings going vertically or horizontally through the surface code. So given some syndrome pattern inside the cell,
you can run the brute-force decoder. It will spit out 16 probabilities, which correspond to these two qubits, meaning, for instance, is there a string going horizontally through our block or not? And you have x or z. And you can generate everything else, these 16 possibilities.
And as I said, this is done by brute force. You just sum over all possible errors which are consistent with your error syndrome. But there is a catch. The thing is that the toric code is not a concatenated code, right? So this is where the approximation part comes into play.
So for instance, if you consider the small block code I just told you about, you can see that some of the stabilizers are actually incomplete. If you compare these stabilizers to the stabilizers of the toric code, you can see that there are some qubits missing, actually. So when you do your syndrome measurements,
you do them on the toric code. So you don't have the value of these partial stabilizers, if you will. So as I said, you cannot break the toric code into such small blocks exactly. So one solution would be to just use overlapping cells and close the gaps. So everywhere there's a qubit missing, let's just add it.
This is what is represented by these dashed lines. We just say some of the qubits are going to be shared between many cells. And then we can run the decoding because we have all the syndrome information necessary for this cell, because now the stabilizers are closed. But again, this implies some approximation.
Because remember, for instance, if I zoom in on my toric code, on only two such blocks, let's say I focus on this qubit, which I label by a green circle here, and I have some syndrome configuration. When I come and decode, I will cut the block. But this qubit and this one are actually only one qubit. So every time I suppose there is an x error here,
I should do the same on the other one. So if you look, for instance, at only the left part, this correction seems completely reasonable. It's a string connecting this qubit with the syndrome. If you concentrate on the right part, it also seems completely reasonable, since it would correct for the syndrome. But there is an inconsistency in the fact
that I have two different errors for the same qubit, which doesn't make sense. So the simplest thing to do is just let's forget about these correlations totally. If you do this, the algorithm doesn't work. You can find constant weight errors, which will make your decoder fail independent of the size of your lattice.
So this wouldn't work. So we want to find some kind of a compromise. On the one hand, we have forgetting about all these correlations completely. And the other one is taking them into account, but making our calculation intractable, because then it would correlate all the decoders together. So one thing we can note is that, first of all,
if we were to compute, given the syndrome inside this cell, the marginal on this qubit, the marginal error probability on this qubit, in general, it wouldn't match the marginal probability of error for the same qubit in another cell, because the syndrome would be different. It involves different qubits. So if you look at only this qubit, the marginal error
probability, it actually doesn't match. So the compromise we want to come up with is: let's change the error model of these two cells such that these marginals fit. So we won't ask that every time you put an x error on this side, you actually put one here. But we're going to ask that it has the same probability. So let's say this one says, 40% of the time,
I think there's an error here. We require that this also says, 40% of the time, there should be an error here. So the way we do that is just by implementing belief propagation. So each of these blocks computes the marginal probabilities on these shared qubits. And then they just exchange this information. And this information is used to update the noise model such
that, at some point, it should converge to having these marginals agree. We don't actually go all the way down to that point, because you have some problems with your belief propagation. For instance, you have 4-cycles in your decoding graph, and this is a problem. But still, it works.
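Here is a tiny toy version of that marginal-matching idea, assuming two overlapping cells that share a single qubit and each contain one extra qubit and one parity check; this is my own illustration, not the actual RG cells.

```python
# Cell A holds qubits (q, a) with check q XOR a; cell B holds (q, b) with check q XOR b.
p_q, p_a, p_b = 0.10, 0.05, 0.20    # independent flip priors
s_A, s_B = 1, 1                     # observed syndromes of the two cells

def cell_marginal(prior_q, p_other, s):
    """P(q flipped | this cell's syndrome), using this cell's current prior on q."""
    w1 = prior_q * (p_other if s == 0 else 1 - p_other)         # q flipped
    w0 = (1 - prior_q) * (p_other if s == 1 else 1 - p_other)   # q not flipped
    return w1 / (w0 + w1)

# Before any exchange the two cells disagree about the shared qubit:
print(cell_marginal(p_q, p_a, s_A), cell_marginal(p_q, p_b, s_B))

# Belief-propagation step: each cell sends the likelihood of its own syndrome
# given q, and the other cell folds that message into its noise model for q.
msg_A = (p_a if s_A else 1 - p_a, 1 - p_a if s_A else p_a)   # (P(s_A|q=0), P(s_A|q=1))
msg_B = (p_b if s_B else 1 - p_b, 1 - p_b if s_B else p_b)

def updated_prior(prior_q, msg):
    w1, w0 = prior_q * msg[1], (1 - prior_q) * msg[0]
    return w1 / (w0 + w1)

# After the exchange both cells report the same marginal for the shared qubit:
print(cell_marginal(updated_prior(p_q, msg_B), p_a, s_A),
      cell_marginal(updated_prior(p_q, msg_A), p_b, s_B))
```

In this two-cell toy the factor graph has no loops, so one exchange already makes the marginals exact; on the full lattice, the short cycles mentioned above are what keeps the agreement only approximate.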
Even if it's not perfect, the simple fact of doing it for some rounds, this belief propagation algorithm, even if this agreement is not perfect, the algorithm still works. So we have L squared. Given the layer, let's say the physical layer, the first one, we have L squared such blocks. But you can see that you can very easily and naturally
parallelize it, because each block is actually decoded independently. Every time you compute one of these marginals, it's computed independently from one block to another. So the only thing you're left with is the number of levels in your hierarchy. This you cannot parallelize. And this is where this log L time comes from, actually.
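As a quick accounting, assuming 2 by 2 cells so that each RG step halves the linear size:

```latex
\text{number of levels} \;=\; \log_2 L ,
\qquad
\text{parallel time} \;=\; O(\log L) ,
\qquad
\text{total work} \;\propto\; \sum_{k=0}^{\log_2 L} \Bigl(\tfrac{L}{2^k}\Bigr)^{2} \;=\; O(L^2) ,
```

since the blocks within one level can all be decoded at the same time, and only the depth of the hierarchy is inherently serial.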
So I'm going to tell you the results we obtained for Kitaev's toric code. So what do we have here? We do three rounds of belief propagation, so we don't have perfect agreement in our consistency relations, but it seems to be enough. On the x-axis, you have the depolarizing channel strength. On the y-axis, on a logarithmic scale,
you have the decoder error probability. And you can see here different curves for different linear sizes of the lattice, going from L equals 8 to L equals 64. We didn't do a sophisticated analysis, but I think this is relatively convincing that there exists some threshold, which is around, let's say, 15%.
And if you look back at, for instance, Jim Harrington's thesis, you will find that using PMA, you obtain 15.5% if you decode x and z separately. So we have lost a bit of performance, but the decoding speed was improved quite dramatically.
And let me say, this data, we obtained it like one and a half years ago. So we didn't know about the results of Austin at that time, and this runs in log L time, which was quite an improvement over PMA. Also, I would like to emphasize the fact that this RG cell I showed you, we had this 2 by 2 block. But this is quite flexible.
You could use, for instance, 2 by 1 blocks, which are smaller. And this drops the constants in your big-O complexity. So how do you do that? If you have a 2 by 1 cell, it actually shrinks the size of the lattice in one direction, for instance the horizontal one. So first, you do an RG step in one direction, let's say the x direction.
And then you use the output of this, and you use a transposition of your unit cell to actually shrink the size of your lattice in the other direction, let's say vertical direction. So by implementing two RG steps with two different cells, which are one the transpose of the other, you can have an effective renormalization in both directions.
And this enabled us to decode codes of linear size of up to 1,000, which is like millions of qubits. And again, there is a mistake here, it should be bit flip, excuse me. So on the bit-flip channel, PMA had roughly a 10.3% threshold, and we had something like 8.2%. So we lose a bit in performance again,
but we gained a lot in runtime by dropping the constants. And we were able to decode a million qubits. So there is this trade-off between the speed of the algorithm and the precision, or the height of the threshold. And where it gets very interesting
is if we add some pre-processing. This is a bit tricky. But remember, by doing this RG change of basis in our generators to have scale invariance, there are some syndrome bits which we don't use
until the very last steps. And this seems to be problematic in some cases. So one compromise we can come up with is using very basic qubit-wise belief propagation between every check and every qubit, and between every qubit and every check, and so on, to just update the noise model to take into account all of the stabilizers, all
of the syndrome bits, without committing to them. So this is only updating the noise model. And by doing this, we're actually taking into account earlier information that would otherwise have been taken into account only at the end of the computation. So this seems to improve the performance quite a bit. And by doing this, we were able to beat this 15.5%
threshold. So we can go over what was done by PMA using this technique. OK, so I've covered these two points. Now let me tell you about an application of the results
Héctor told us about last Tuesday. So we have this RG decoder for the toric code; let's use it to actually decode other codes. So for example, we have the topological color code. I think we heard about this in Andrew's talk yesterday. So we have one qubit per vertex.
And very similarly to the toric code, we have loop operators for each of the faces, x and z, so for each octagon and each diamond. And the question was, can we decode this? Because in this case, PMA does not work in a straightforward fashion, and we do not know how to do the equivalent of the matching efficiently on this lattice or on this
code. Very recently, Pradeep Sarvepalli and Robert Raussendorf came up with an adaptation of the RG scheme to the 666 color code, so the hexagonal lattice. It's not straightforward; it's not just running the algorithm on this lattice. There are many subtleties. For instance, you have many shared qubits, so you have to do some kind of coarse graining
of what is shared to optimize the runtime. And also, it's better to work on the dual lattice, because the dual of the hexagonal lattice is much easier to work with. But before that, actually, we want to propose something a bit different, which is, instead of inventing a new decoder, let's just bring the decoding problem back to that of decoding a toric code.
So remember what Héctor told us. Every 2D translation-invariant stabilizer code, so let's just call those topological stabilizer codes, with local generators and macroscopic minimal distance, is locally equivalent to the others. So there exists this local unitary map that brings us to a number of copies of Kitaev's toric code. The number of copies has to do with the anyon model
of your code. So one can show that the topological color code is actually equivalent to two copies of Kitaev's toric code, the one I showed you, the 488 color code. So what we propose is to do the mapping on your syndrome bits and on your noise model, such that you can actually run your toric code decoder at the end
and decode the color code using it. So the two ingredients we need are an update of the noise model from the color code to the toric code, and a map for the syndrome bits so we can run it on the toric code. So this is just part of the mapping; there are two columns missing. So what is pictured here is: blue means
sigma x, red means sigma z. So the first column here, labeled TCC, is the qubits on the topological color code lattice. So this would mean sigma x on the qubit living on top of a diamond in the color code. Here we have two copies of Kitaev's toric code, and the qubits are on the edges. So this says that, for instance, this one here
says that the sigma x on top of some diamond actually gets mapped to a sigma x on this qubit of the first toric code, and to these two sigma-z operators on the second toric code. And you can go on and check every one. So what is interesting, first, is that this mapping is local. This is very important for when
we want to map the noise model. This means that, for instance, if there was a probability of having an error here, it translates into a correlated error on these three operators. And everything is local, at most three-body. So this is encouraging. This is required, actually. And you can check by inspection that it also preserves
commutation relations. So using this, we can actually do the mapping. So I think the mapping of the noise is quite straightforward. The first thing you can do is just forget about correlations. So you just get an effective noise strength for each of these operators once you do the mapping of the noise model.
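Schematically, pushing the single-qubit color-code noise through such a local dictionary while forgetting correlations could look like the sketch below. The dictionary here is a placeholder: only the sigma-x line mirrors the entry quoted from the slide; the other lines are made up purely so the sketch runs and are not the actual table from Héctor's mapping.

```python
from collections import defaultdict

p = 0.10          # depolarizing rate on one color-code qubit
p_each = p / 3    # probability of X, Y and Z individually

# Image of each single-qubit Pauli as a list of (toric-code copy, edge, Pauli) factors.
local_map = {
    "X": [(1, "e1", "X"), (2, "e2", "Z"), (2, "e3", "Z")],                  # as on the slide
    "Z": [(1, "e4", "Z"), (2, "e2", "X")],                                  # placeholder
    "Y": [(1, "e1", "X"), (1, "e4", "Z"), (2, "e2", "Y"), (2, "e3", "Z")],  # placeholder
}

# Correlation-forgetting marginal rates induced on each toric-code edge, to lowest
# order: an edge picks up p/3 from every color-code Pauli whose image touches it.
effective = defaultdict(lambda: defaultdict(float))
for pauli, image in local_map.items():
    for copy, edge, target in image:
        effective[(copy, edge)][target] += p_each

for edge, rates in sorted(effective.items()):
    print(edge, dict(rates))
```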
And the other thing, which is a bit trickier, but is quite simple once you have the mapping, is that, for instance, the operators that move this z defect on the color code are strings of x's on the red sublattice. And using the mapping I just showed you on the previous slide, you can actually see that this gets mapped to these very simple sigma-z
operators. So this operator here moves these defects. So from that, we can deduce that this z defect is actually equivalent to an electric defect, a site defect, on the first toric code. This is just a direct conclusion
drawn from this mapping here. And you can do the same thing for the other moving operators; I call them hopping operators. So they are the operators that move the particles around on your color code. And this directly implies that the z defect on the blue sublattice is actually just a magnetic defect, because this is the hopping operator for the magnetic defect on the toric code. So we have a way of mapping the syndromes.
We have a way of mapping the noise model. Then the only thing left to do is to run our algorithm on each of these toric codes. So if you do this very crudely, you get a 5% noise threshold. If, moreover, you run some pre-processing, the belief propagation pre-processing I told you about for the toric code,
we can actually increase the performance quite dramatically. So by using this technique, again, we have the channel strength, the decoding error probability, different sizes, and we got something like 8.7%. And it is known that this code should have the same threshold as the toric code, which is around 11%. So the good news is that, obviously, we
have some suboptimal decoder, because we just lost some information about the correlations between the two toric codes and so on. But it's not that bad. Actually, we were quite surprised that the performance losses are not dramatic at all. So using this technique, actually, if you have the mapping
for any topological code, which is ensured by Héctor's construction, you can actually decode any topological stabilizer code using this technique, by mapping it to a number of copies of Kitaev's code and then running your decoder. And it doesn't have to be the RG decoder. If you prefer PMA or the new decoder proposed by Austin, then you could run that algorithm instead.
So you would get something suboptimal, but it's systematic. I would like to point out, with the remaining time, that we were also able to decode the topological subsystem color code, which was also introduced by Héctor. And what is interesting is that this is not equivalent to a number of copies of Kitaev's toric code. You do not need to have this complete local Clifford
map to do the decoding. What is needed is just having every particle be its own antiparticle, and the particles being moved around by string operators or quasi-string operators. And if you look at the details of this code, well, first of all, it's interesting because it's a gauge code which only involves two-body operators.
And moreover, you can show that it has three fermions. So it's not equivalent to toric codes, but the fermions are their own antiparticles in this setting, and they are moved around by string operators. So you could actually come up with a linear map for the noise, a linear map for the syndromes, and actually run the decoder. So Suchara, Bravyi, and Terhal came up
with a decoder for a similar code, the five-squares code, for the depolarizing channel. And this particular decoder was tailored for this particular code. Using this very general technique on this code, we obtained a threshold which is also around 2%. So this is quite encouraging, actually, that we didn't lose very much by doing this mapping
in this case. How much time do I have? A while. OK, so I guess I'll just say one thing. If you go to this phenomenological model where, let's say, you take the toric code and you want to do fault-tolerant storage with the toric code,
and you have this probability of error for your syndrome measurements, you can adapt the RG method to work in three dimensions. Because in this case, it just becomes a quantum error correction problem in 3D. This was shown in the paper by Dennis et al., Topological Quantum Memory. And you can adapt the RG to work in three dimensions. And if you do so, and I implemented it
with the simplest cell, a two-by-one cell that you can run in the x, y, and z directions, one after the other, to implement an effective renormalization in the whole space. So if you do this fault-tolerant storage, in this very basic phenomenological model, what we got was something like just above 1.8%.
And this has to be compared with the number reported in Jim Harrington's thesis, again using PMA, which was 2.9%. So again, we have some loss. But we were able to decode L equals 64 in 3D, so 3 times 64 cubed qubits. And we again have this log L complexity.
So I guess I will skip to the end, since I'm running out of time. So just to conclude, I presented the decoding problem, which is inferring world lines. I presented the RG decoder, which is very fast, log L time steps. And this is worst-case fidelity; this is also an interesting point. It is versatile, with a higher threshold
under some very specific conditions, like depolarizing noise. It is heuristic, but we proved that there exists a threshold. It's not that big, but at least we know a threshold exists. And we showed that we can extend it beyond 2D. And I also gave this application of Héctor's mapping to quantum error correction.
And on that, I would like to thank you for your attention. Thanks, Guillaume, for a great talk. Questions?
Just one comment about the threshold. There is a rigorous proof that there is a universal threshold for all d-dimensional topological codes, which is 30 to the power of minus 2 times the dimension. So in two dimensions, it's 30 to the minus 4.
So it has been improved, that's what you're saying. Thanks. So did you improve it, or is it something that still needs work? OK, thanks. So can you say a bit about how
you decode when the syndromes are erroneous in the 3D model? How are they erroneous? Yeah, when the syndrome measurements have errors; you quickly went over the 3D error correction model. You said you could adapt it to the 3D model. Your question is about having syndrome measurement errors, or about how we adapted the RG decoder in this case?
Yeah, the RG decoder when the syndrome measurements have errors. OK. Are you aware of the paper Topological Quantum Memory, by Dennis et al.? Yeah. So the idea is, in this case, when you consider, for instance, only the bit-flip channel, your check operators in this 2 plus 1 dimensional structure
actually become star operators. So the legs in the plane correspond to the physical errors, and the legs in the other direction correspond to syndrome measurement errors. Once this is set up, we still have just a lattice of all these stars, right? So we can use the exact same trick of scale invariance. So you do this change of basis in your stars such that,
or instead of thinking about stars, you could think of cubes, right? So we can use the same trick of saying, take a bunch of cubes and replace one of them by the product of it times the others. And you can rebuild the scale-invariant structure for the 3D case. Then you can do this very simple brute-force decoding for a small cell.
Again, it's the same thing. You have some stabilizer code. You sum over all strings which have the right syndrome. And you just do this layer by layer, and it goes through exactly in the same fashion. Conceptually, it's the same thing. Just one last question. Do you plan to consider the case when you only have two-qubit gates?
You mean more like in the circuit model? Sure. So I'm not sure yet, because I couldn't talk about this. But we have another idea, which involves simulating confinement to do the decoding. I don't know. I have a picture which maybe will give you an idea of what I mean.
So we want to have, so this is the new idea. The idea we have is to have a unit cell, a controlled unit cell for each check. It's only allowed to speak with its neighbors and its neighboring qubits. And the idea would be to simulate, in some sense, a confining potential. So this is inspired by physics. So for instance, if each defect
was assumed to have some gravity, these unit cells would be used to implement the Maxwell equation evolution of some electromagnetic field or something like that. And then we would move the defect according to the force created by this potential. So this idea of simulating confinement
is where we want to, after this, we could have the circuit base. But we want to do this intermediate step before. Let's thank Guillaume again. Can we have our next speaker?