
Fault-tolerant quantum computing with color codes


Formal Metadata

Title
Fault-tolerant quantum computing with color codes
Title of Series
Number of Parts
48
Author
License
CC Attribution - NonCommercial - NoDerivatives 3.0 Germany:
You are free to use, copy, distribute and transmit the work or content in unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
We present and analyze protocols for fault-tolerant quantum computing using color codes. To process these codes, no qubit movement is necessary; nearest-neighbor gates in two spatial dimensions suffice. Our focus is on the color codes defined by the 4.8.8 semiregular lattice, as they provide the best error protection per physical qubit among color codes. We present circuit-level schemes for extracting the error syndrome of these codes fault-tolerantly. We further present an integer-program-based decoding algorithm for identifying the most likely error given the (possibly faulty) syndrome. We simulated our syndrome extraction and decoding algorithms against three physically motivated noise models using Monte Carlo methods, and used the simulations to estimate the corresponding accuracy thresholds for fault-tolerant quantum error correction. We also used a self-avoiding walk analysis to lower-bound the accuracy threshold for two of these noise models. We present two methods for fault-tolerantly computing with these codes. In the first, many of the operations are transversal and therefore spatially local if two-dimensional arrays of qubits are stacked atop each other. In the second, code deformation techniques are used so that all quantum processing is spatially local in just two dimensions. In both cases, the accuracy threshold for computation is comparable to that for error correction. Our analysis demonstrates that color codes perform slightly better than Kitaev's surface codes when circuit details are ignored. When these details are considered, we estimate that color codes achieve a threshold of 0.082(3)%, which is higher than the threshold of 1.3 × 10⁻⁵ achieved by concatenated coding schemes restricted to nearest-neighbor gates in two dimensions [Spedalieri and Roychowdhury, Quant. Inf. Comp. 9, 666 (2009)] but lower than the threshold of 0.75% to 1.1% reported for the Kitaev codes subject to the same restrictions [Raussendorf and Harrington, Phys. Rev. Lett. 98, 190504 (2007); Wang et al., Phys. Rev. A 83, 020302(R) (2011)]. Finally, because the behavior of our decoder's performance for two of the noise models we consider maps onto an order-disorder phase transition in the three-body random-bond Ising model in 2D and the corresponding random-plaquette gauge model in 3D, our results also answer the Nishimori conjecture for these models in the negative: the statistical-mechanical classical spin systems associated to the 4.8.8 color codes are counterintuitively more ordered at positive temperature than at zero temperature.
Transcript: English(auto-generated)
At this conference, Andrew Landahl will speak about fault-tolerant quantum computing with color codes. OK, so for this talk, I'd like to take us all on a magical journey. You guys ready? Let's imagine that a giant twister has come and sucked us up and taken us to the wonderful world of Oz. As everybody knows, the wonderful world of Oz
is defined by color. So we're now in the world of Oz. And what I'm going to be talking to you about is how color can make fault-tolerant quantum computing wonderful and delightful. So this is work based on a paper that we have on the arXiv. And it's co-written with my grad students Jonas Anderson and Patrick Rice.
And sort of a shameless plug, I'll mention that Jonas is graduating this year. So if you're interested in the topics I'm talking about now, and you're looking for a postdoc who's got training and experience in this area, please see Jonas. He also has a poster on homological quantum computing upstairs. He's here at the conference. OK. So color codes are defined by a trivalent lattice embedded
in a surface, such that the embedding is face-three-colorable. If you look for semi-regular lattices that have this property, there are only three. By semi-regular, I mean that it's vertex transitive,
so that every vertex locally looks the same as every other vertex. So we can define that in terms of what's called vertex notation. So for example, 4.8.8 here means that at every vertex, you see a square and two octagons around that vertex. So we've only got three choices. Mathematicians are wonderful. They've gone and classified all these things, so I know for sure there are just three.
And of these three choices, which one is the most interesting? Well, I don't know. They're all kind of interesting. The one on the left, I think, might be more interesting because it uses the fewest qubits to achieve the same code distance. It also has a nice feature that the S-gate, or phase gate, is transversal for these codes. Not true for the other two color codes.
The way the codes are defined relative to this lattice is that we imagine there's a qubit at each vertex in the lattice. And the stabilizer generators are defined by the simultaneous product of sigma-x around all the qubits on a face and sigma-z around every qubit on a face. And so this funny criterion of being trivalent and
face-three-colorable is ensuring that these stabilizer generators commute with one another. So I wanted to also just mention as an aside that these codes are not the topological subsystem color codes that Robert Raussendorf spoke about in his tutorial. Those are derived from color codes, and they have the additional feature that all of the stabilizer generators can be measured by using weight-two operators.
So that's another nice feature. And it turns out that they also enable any color code to have the S-gate transversal, as Robert talked about as well. So the focus of this talk, however, is going to be on these traditional 4.8.8 color codes. Now, to be a bit more practically minded, I'm not going to imagine arbitrary surfaces,
Klein bottles and whatnot. Let's just imagine the plane, a bounded plane. So if you look at planar color codes, it turns out that they have to have a multiple of three corners in defining them if they have a boundary. And so the simplest such object is a triangle. So I'm going to be focusing on triangular 4.8.8 color codes.
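As a concrete illustration of these face stabilizers and why they commute, here is a minimal sketch using the seven-qubit code that, as discussed next, is the smallest triangular 4.8.8 color code. An X-type and a Z-type face operator commute exactly when their supports overlap on an even number of qubits; the qubit labeling below is my own illustrative choice, not from the paper.

    from itertools import combinations

    # Qubits of the smallest triangular 4.8.8 color code (the 7-qubit Steane
    # code), one set per face.  Each face carries both a weight-4 X stabilizer
    # and a weight-4 Z stabilizer.  The labeling is illustrative.
    faces = [
        {3, 4, 5, 6},
        {1, 2, 5, 6},
        {0, 2, 4, 6},
    ]

    # An X-type face operator and a Z-type face operator commute exactly when
    # their supports overlap on an even number of qubits.
    for i, j in combinations(range(len(faces)), 2):
        overlap = faces[i] & faces[j]
        assert len(overlap) % 2 == 0, (i, j, overlap)

    # A face overlaps itself on its whole (even-weight) support, so the X and
    # Z checks on the same face commute as well.
    for face in faces:
        assert len(face) % 2 == 0

    print("All X-type and Z-type face stabilizers commute.")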
And so the smallest triangular 4.8.8 color code turns out to be our familiar friend, the Steane code. And these other codes are kind of a way of growing the Steane code in an organic, lattice-like way, as opposed to via concatenation. Now, these codes, of course, are naturally suited to these 2D technologies that we've heard about, where long-distance transport is infeasible or impractical
for one reason or another. And we've heard multiple times already at this conference that the 2D surface codes are amazing. In our wonderful Wizard of Oz land, they're wonderful and delightful. And we'd like to see whether or not the color codes share some of these same delights. In particular, we were excited about the fact that they're local in two dimensions.
They also give us very high accuracy thresholds. And they don't need syndrome ancilla distillation, which is a huge overhead cost that we know is incurred in traditional concatenated coding schemes. So how do the 2D color codes compare to this? Well, as I said in my tutorial, whenever we want to answer a question like this,
we have to be very clear about what our control and noise models are. So the control and noise models that I'll be considering here are the control model B1, where the gate basis is basically the standard gate basis, but with the magic state for the T gate. And I'll make the standard assumptions that we have maximally parallel operation. We have refreshable ancilla. We can compute as fast as we like.
And all gates take the same amount of time. Then, of course, we have 2D layout and local quantum processing restrictions, which really aren't a restriction at all for the 2D color codes. Before describing the noise model, it's helpful to introduce a couple of noise channels that are used to define those noise models.
So the first is the BP channel, which is the bit-flip channel followed by the phase-flip channel. Each of these channels causes the Bloch sphere on a qubit to contract either around the z- or the x-axis, like the ellipsoidal picture up above. We'll also be interested in the DP channel, which is a sort of double Pauli channel that applies every one of the possible 16 two-qubit Pauli products
with probability p over 16. So these channels are used to define our noise model. We'll assume that the qubits don't leak. Sean talked about how the surface codes can handle very high leakage rates. The color codes might be able to as well. I think that's an interesting open question. And we'll assume that the classical computing
is reliable. So there are three levels of noise models that we're going to consider here. The first is the most fine-grained: it's the circuit-based noise model. So we'll model the syndrome extraction process in detail using this faulty gate basis. And we'll model each faulty one-qubit gate as being ideal, followed by this BP channel.
We'll model each faulty CNOT gate as being ideal, followed by the DP channel. And we'll model each measurement as being preceded by the BP channel and having its result flipped with probability p. I should mention that that second aspect, the result being flipped with probability p, is important and often overlooked. If our noise model had been that the bit gets corrupted
in some way and then we measure it ideally, that wouldn't be a measurement error at all. The measurement would be consistent with what had happened to the data. A measurement error is when the measurement is inconsistent with what's happened to the data. And so we add that extra bit on as well. Now, we can have a more coarse-grained model,
this so-called phenomenological model, where we don't worry about the exact details of the syndrome extraction circuit at all. We just lump its probability of error into one number we call p, and we say that whole measurement process just fails with probability p. So this has the advantage of informing us about the threshold in the event of new technology that can give us new gates.
The threshold of the circuit model assumed a very specific set of gates. But if some technologist comes along and says, well, now I've got a great new set of gates, maybe I have the syndrome extraction gate, it's one gate and it does it all for you, then the phenomenological noise model kind of gives you an upper bound on what any threshold will be, no matter what your gate basis is.
So that noise model is exactly the same as above, but just as I said. And because the only time gates appear in syndrome extraction is in getting that ancilla out, the only other place gates will appear is when we do the encoded computation. And finally, the third model, which is just a property of the code itself,
is the code capacity noise model. So if you just want to know the capacity of this code for sending quantum information over a channel, that's the capacity of the quantum error-correcting code. And this would be sort of the single-shot, one-letter kind of capacity. And it's the same as the phenomenological model, except that the syndrome measurements are perfect. OK, so those are three noise models.
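To make the circuit-based model above concrete, here is a minimal sketch of how one might sample faults from it (my own illustration, not the paper's simulation code): each faulty one-qubit gate is followed by the BP channel, each faulty CNOT is followed by the DP channel, and each measurement is preceded by the BP channel and has its outcome flipped with probability p. The circuit size in the example is a made-up placeholder.

    import random
    from itertools import product

    def bp_channel(p):
        """Bit flip then phase flip, each independently with probability p.
        Returns the net single-qubit Pauli as a string."""
        x = random.random() < p
        z = random.random() < p
        return {(False, False): "I", (True, False): "X",
                (False, True): "Z", (True, True): "Y"}[(x, z)]

    def dp_channel(p):
        """'Double Pauli' channel: with probability p, apply one of the 16
        two-qubit Pauli products chosen uniformly at random."""
        if random.random() < p:
            return random.choice([a + b for a, b in product("IXYZ", repeat=2)])
        return "II"

    def faulty_measurement(p):
        """A BP channel before the measurement, plus an outcome flip with
        probability p.  Returns (pre-measurement Pauli, outcome_flipped)."""
        return bp_channel(p), random.random() < p

    # Example: sample the faults for one round of a made-up circuit with
    # 3 one-qubit gates, 8 CNOTs, and 1 measurement (placeholder counts).
    p = 0.001
    faults = {
        "one_qubit_gates": [bp_channel(p) for _ in range(3)],
        "cnots": [dp_channel(p) for _ in range(8)],
        "measurement": faulty_measurement(p),
    }
    print(faults)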
So now we need to figure out how we're going to decode this and what the thresholds are going to be. So I talked in my tutorial about optimal and most-likely-error decoding. And here's a table in our paper that lists all of the known topological codes that had thresholds that we were aware of. In fact, one of them doesn't have anything at all, the 4.6.12 code.
So the first three are the three color codes that I mentioned. Then we've got Kitaev's code on the square lattice. It also can only be on three different lattices if you want to demand this semi-regularity condition. There's the topological subsystem color codes. And another code, which is defined on hypergraphs. I don't have a great name for it.
So I'm just calling it the Suchara-Bravyi-Terhal code. I don't know what else to call it. So we can look at the code capacity of these. And we see that, well, if you optimally decode these things, they're all about 11%. And that's actually close to the best you could ever hope for for any CSS code. You can dither about the decimal points, but they're all basically the same.
We know that for the surface codes, it dropped a little bit, but not significantly. So again, basically the same. So 10.3%, I believe Sean quoted that in his talk. And you can use some suboptimal decoders. And I should mention that these colored numbers are numbers that appeared after we created the table. So Pradeep Sarvepalli and Robert Raussendorf
calculated the 7.8% threshold number using a heuristic decoder. And as you heard Austin say in his talk, he's improved his estimate of 0.9% up here. So our first result, in the sort of striptease way of presenting talks, is that we find that it's 10.56%
if we use the most-likely-error decoder for the color codes. So again, not a major drop, all basically the same. In the phenomenological noise model, the threshold number drops to 3.3% for the surface codes, and then drops to 2.93% when we do the suboptimal but most-likely-error decoding.
And previously it had been shown that there's a higher upper bound at 4.5%. So this is starting to sound good, like the color codes might be doing a little bit better when we think about fault tolerance relative to the surface codes. Maybe their threshold will be higher when we go to the MLE decoding. And in fact it is. Even when we drop to that, it's 3.05%, plus or minus 0.04%.
So it's a little bit higher, so that's good. But now we want to get to the closest model to reality that we have here, where we actually work out the details of all the circuitry. And in this case, we know that for the surface codes, the threshold is close to 1%. Using the MLE decoder, you'll notice
that the optimal decoder is missing here. We don't even know what the threshold is for optimal decoding. The decoding for that is very complicated. But we don't know what it is for any of the color codes. There had been a heuristic decoder used by Fowler and others, where they found a threshold of 0.1%. In order to achieve such a high threshold,
they also added ancilla distillation. They used Shor states for the ancillas. So that's sort of anathema to the whole spirit of using these 2D codes. But it does boost the threshold some, and so it gets you to 0.1%. So we were curious to see, well, what do we get if we don't do that? And our result is, voila, we get 0.082%.
So again, it's nearly 0.1%, call it 0.1%. So unfortunately, while things started to look promising at the code capacity and the phenomenological level, the threshold is looking about a factor of 10, give or take, worse than what we find for surface codes.
Okay, so how do we actually perform this syndrome extraction? So if we're gonna go at the circuit level, we have to be very detailed about what the actual circuits we use are. And I mentioned in my tutorial that there are certain rules that you wanna obey if you wanna keep errors from propagating catastrophically on a surface. The schedule we thought of was: let's imagine there's one syndrome qubit at the center of each face, and we'll just do our CNOTs
either into or out of that syndrome bit in sequence. So if it's measuring a Z-check, we'll just do the CNOTs all around. I guess we have it in book order here. I apologize, it's a little hard to read, but one, two, three, four, five, six, seven, eight. So we can have them go in sequence into the bit in the center to measure the Z-check, say. And then to measure the X-check, we follow it up with one, two, three, four,
five, six, seven, eight, with the CNOTs going the other direction. So the total schedule, it takes 16 CNOTs to measure both the X and the Z-checks. And we thought, well, gee, that takes a long time. If you're trying to do total error correction, you're waiting around a long while while the X errors are building up before you check them. You're waiting eight time steps. We know from studies of concatenated coding
that by using more compact syndrome extraction circuits, for example, with the Bacon-Shor codes, we get improvements in the threshold. So we thought maybe we should think a little hard about how we can compress this schedule. So we came up with an interleaved schedule that takes just eight total coherent time steps. We call it the XZ interleaved schedule, where the X-checks are measured in book order,
one, two, three, four, five, six, seven, eight. And the Z-checks are measured in Hebrew book order. So you read from right to left. So one, two, three, four, five, six, seven, eight. And if you do that, you can show that the errors don't propagate badly. It satisfies all the right criteria. And if you were to examine errors happening at different points in the schedule,
they propagate out, but they propagate to a bounded distance. So this is an example of something like that. And since we're looking at these triangular codes, there are these finite-size effects and boundary effects. So we have to be a little careful on the boundaries as well when we define this schedule. Okay, now we've explained how we extract the syndrome.
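As a toy illustration of why the interleaving can fit into eight steps (my own sketch, not the paper's actual schedule, and ignoring neighboring faces and boundaries): on a single weight-8 face, if the X-check visits the data qubits in book order 1..8 while the Z-check visits them in the reverse order 8..1, the two ancillas never need the same data qubit in the same time step.

    # Data qubits of a single octagonal face, in "book order".
    qubits = list(range(1, 9))

    x_order = qubits          # X-check CNOTs: book order, 1..8
    z_order = qubits[::-1]    # Z-check CNOTs: reverse ("Hebrew book") order, 8..1

    # In each of the eight time steps the two ancillas act on different data
    # qubits, so the X and Z extractions can share the same eight steps.
    for step, (qx, qz) in enumerate(zip(x_order, z_order), start=1):
        assert qx != qz, f"conflict at step {step}"
        print(f"step {step}: X-check touches qubit {qx}, Z-check touches qubit {qz}")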
Now we need to process it and figure out what we think has gone wrong given the syndrome. So we developed a decoder. We reformulated the most-likely-error decoding problem, and the reformulation actually works for every CSS code. The problem you want to solve is you want to minimize the total number of flips
that you think have happened. So x(v) is a possible flip for each vertex v where a qubit is. And because it's a CSS code, you can do bit-flip and phase-flip decoding independently of one another. And the fact that it's the most likely error is really a function of the fact that we use this bit-flip channel followed by the phase-flip channel. If we instead used a depolarizing channel,
there would be subtle correlations between those two noise processes, because a Y flip is more probable than an X followed by a Z. So that's why we can use this word "most likely" in this case. So we want to minimize this, and we want these flips to be subject to the constraint that they're consistent with all the syndrome values we see for every face f.
So that's the optimization problem. We can linear-algebraize it, if you will, and this parity constraint can be written in terms of the face-vertex incidence matrix H, which is just the parity-check matrix for the code. And we want to have this linear-algebra version
of the optimization problem satisfied. A challenge with formulating it this way is that the syndromes have to be equal modulo two, and doing integer programs over GF(2) is difficult. So it's more convenient to add on auxiliary slack variables that allow you to do the integer program over the reals.
And so you can reformulate it to do that. And so this is the decoding algorithm that we use to identify errors. In three dimensions, we don't want to just respond to the syndromes because, for example, if a data error happened and we repeat that syndrome measurement, that syndrome flip will persist. And we don't want to imagine that a data flip has
happened each time step. It just meant that it happened at the first time step here. So the more correct thing to do is to respond to syndrome differences. So changes in the syndrome are telling us that something has happened. And so we can reformulate it. And I won't tell you, but there's a way to reformulate this with the syndrome changes to do this all in a nice way with slack variables. And that's what we did.
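For concreteness, here is a minimal sketch of the two-dimensional (perfect-syndrome) version of that integer program, using the smallest triangular 4.8.8 color code, the Steane code, as the example and the PuLP library as a generic integer-program solver. The formulation follows the description above (minimize the number of flips subject to H·x = s mod 2, with a non-negative integer slack variable per face turning the mod-2 constraint into one over the integers), but the code itself and all names in it are my own illustrative choices; the syndrome-difference version for the 3D, faulty-measurement case is analogous.

    import numpy as np
    import pulp

    # Z-type parity-check matrix of the smallest triangular 4.8.8 color code
    # (the Steane code): one row per face, one column per qubit.
    H = np.array([[0, 0, 0, 1, 1, 1, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [1, 0, 1, 0, 1, 0, 1]])
    n_faces, n_qubits = H.shape

    def mle_decode(syndrome):
        """Most-likely-error decoding: minimize the number of flips x_v subject
        to H.x = syndrome (mod 2), written over the integers by adding a
        non-negative integer slack variable k_f per face."""
        prob = pulp.LpProblem("mle_decoding", pulp.LpMinimize)
        x = [pulp.LpVariable(f"x_{v}", cat="Binary") for v in range(n_qubits)]
        k = [pulp.LpVariable(f"k_{f}", lowBound=0, cat="Integer")
             for f in range(n_faces)]
        prob += pulp.lpSum(x)  # objective: fewest flips
        for f in range(n_faces):
            prob += (pulp.lpSum(int(H[f, v]) * x[v] for v in range(n_qubits))
                     - 2 * k[f] == int(syndrome[f]))
        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        return np.array([int(pulp.value(xv)) for xv in x])

    # Example: a single flip on the last qubit lights up all three faces,
    # and the decoder finds the weight-one correction.
    error = np.array([0, 0, 0, 0, 0, 0, 1])
    syndrome = H.dot(error) % 2
    print("syndrome:", syndrome, "-> correction:", mle_decode(syndrome))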
So what is the threshold? So let's look at this code capacity threshold first. The probability of failure of decoding is the sum over all the error patterns that cause failure, times the probability of each one happening. And for small-distance codes, we can actually iterate over every possible error, look at its syndrome, decode it, and see what happened.
We can get exact curves up to distance 7 using our decoder. And so that's what these nice curves are, p fail versus p here. When we got up a little higher, like distance 9, then we had to resort to Monte Carlo estimations. So what we would do is we'd throw down an error pattern and ask whether or not our decoding failed or succeeded and repeat this many times, and build up enough statistics such that the error bars were small.
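A minimal sketch of that Monte Carlo loop for the bit-flip, code-capacity case (again my own illustration, reusing the hypothetical mle_decode() from the sketch above): throw down an i.i.d. bit-flip error at rate p, decode, and count a failure whenever the residual error is a logical operator. The odd-weight test for a logical error is specific to this seven-qubit example, where every X-type stabilizer has even weight and the logical X has odd weight; larger codes need an explicit check against logical representatives.

    import numpy as np

    rng = np.random.default_rng(0)

    # Steane-code Z-check matrix again (code capacity, bit-flip errors only).
    H = np.array([[0, 0, 0, 1, 1, 1, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [1, 0, 1, 0, 1, 0, 1]])
    n = H.shape[1]

    def logical_failure_rate(p, decode, trials=10_000):
        """Throw down i.i.d. bit-flip errors at rate p, decode, and count
        failures.  For this code every X-type stabilizer has even weight and
        the logical X has odd weight, so the residual error is a logical
        error exactly when its weight is odd."""
        failures = 0
        for _ in range(trials):
            error = (rng.random(n) < p).astype(int)
            syndrome = H.dot(error) % 2
            correction = decode(syndrome)
            residual = (error + correction) % 2
            assert not (H.dot(residual) % 2).any()  # syndrome is cleared
            failures += residual.sum() % 2          # odd weight = logical X
        p_fail = failures / trials
        err_bar = np.sqrt(p_fail * (1 - p_fail) / trials)
        return p_fail, err_bar

    # Usage, assuming the mle_decode() sketch above:
    # print(logical_failure_rate(0.05, mle_decode))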
And so there are a couple of such data points overlaid on here to indicate that. And as I mentioned in my talk, these curves are expected to mutually intersect, all of them at the same point. There will be some finite-size effects, but eventually, asymptotically, they should all have the same inflection point.
And to find the threshold, what we don't do, what you might want to do, is to just look at the crossing of all these curves. But that's not as stable a thing to do, especially at these small sizes. A more robust thing to do is to take advantage of the fact that there's this beautiful mapping between the threshold of fault-tolerant quantum computing in these lattice codes and an associated order-disorder phase
transition in an Ising model, a classical Ising model, on a certain kind of lattice. And because of that, we can use our understanding in physics of these scaling laws, the universality parameters in these Ising models, to do curve fitting, to say that near where all these lines cross, we expect this to be linear. And we understand how these slopes are supposed
to scale because of this universality class. And this allows us to do curve fitting. This was described in a paper by Wang et al. We can do this curve fitting near this crossing and get precision on our reported value. So this is why we're able to say it's 10.56(1)%. Now, that curve fitting is assuming,
though, that some of the finite-size effects may actually enter into this, maybe not to the degree they would if you were to just do the crossing, but I think they would be present. So there's, I think, a little bit of artifice in this small number here. The phenomenological threshold we do the same way. In the interest of time, I'll skip the algorithm here.
It's maybe of interest only to specialists. But we do a Monte Carlo algorithm for that as well. And we get our data. We go up to distance nine again here, and we infer what the threshold value is. In the circuit threshold, we have an additional challenge that the gates in our model can cause single errors
to propagate out to multiple errors. These were called hooks in the surface code paper that I worked on a long time ago. So we just call these generalized hooks for the color codes. They could be of weight two, three, up to four, because we have weight-eight checks. And so that means that the behavior near the crossing point here isn't necessarily linear; we have to consider the next order, a quadratic term, as well.
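For readers who want to see the fitting step concretely, here is a minimal sketch of a finite-size-scaling fit of this kind, using the quadratic ansatz p_fail ≈ A + B·x + C·x², with x = (p − p_th)·d^(1/ν), in the spirit of the Wang et al. procedure described above. The synthetic data and all variable names are illustrative assumptions, not the paper's actual analysis.

    import numpy as np
    from scipy.optimize import curve_fit

    def scaling_ansatz(pd, p_th, nu, A, B, C):
        """Finite-size-scaling form p_fail = A + B*x + C*x**2,
        with rescaled variable x = (p - p_th) * d**(1/nu)."""
        p, d = pd
        x = (p - p_th) * d ** (1.0 / nu)
        return A + B * x + C * x * x

    # Synthetic demonstration data generated from the ansatz itself; in the
    # real analysis these would be the Monte Carlo failure-rate estimates.
    p_data = np.array([0.07, 0.08, 0.09, 0.10, 0.11, 0.12] * 3)
    d_data = np.array([5] * 6 + [7] * 6 + [9] * 6)
    rng = np.random.default_rng(1)
    pfail_data = (scaling_ansatz((p_data, d_data), 0.10, 1.5, 0.10, 1.0, 3.0)
                  + rng.normal(0.0, 0.003, p_data.size))

    popt, pcov = curve_fit(scaling_ansatz, (p_data, d_data), pfail_data,
                           p0=[0.09, 1.4, 0.1, 0.8, 2.0])
    p_th, p_th_err = popt[0], np.sqrt(pcov[0, 0])
    print(f"threshold estimate: {p_th:.4f} +/- {p_th_err:.4f}")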
And we get a better fit if we do that. So we do our curve fitting with a quadratic term in here as well, and calculate the threshold for both our interleaved and our sequential schedules. And I have to say, somewhat disappointingly, we found that all this cleverness that we put into interleaving these schedules was for naught. We found that there really was no difference in the threshold by interleaving or not
interleaving. In fact, I mean, so again, this notation means 8.2 plus or minus 0.3. So these are consistent with each other. There's no real difference. In addition to doing numerical simulations, it's valuable to have rigorous analytic bounds on the threshold so that you know that you're not
having difficulties with numerics or round-off errors in your computer or whatnot. And so what we did is we adapted a technique, which we call the self-avoiding-walk technique, from the surface codes of some years ago to the color code setting. The general idea is as follows. We can bound the probability of failure by the probability that the support of a logical operator is contained in the error pattern that consists of the actual errors that happened and our best guess of what happened. If the combination of the actual errors and our guess contains a logical operator, we'd say we failed. For color codes, the logical operators are not just string-like.
They actually can be what's called string-net-like. I haven't gone into that, but they're a little bit more elaborate. But fortunately, for every string net that there is, there's an equivalent string that has the same likelihood. So we can just double this sum to get a bound. And the probability that a logical operator is contained in the error pattern is bounded by the probability that a self-avoiding walk is contained in that pattern.
And so we can just count the number of self-avoiding walks and look at the probability that we have bits flipped along that walk. So there's n possible places for the walk. And then given that you've started there, there's a certain number of walks of length l that are distance d or greater. So we're going to assume that every error pattern of greater than distance d will cause failure.
There's a number of ways to choose the flips on that path, and a probability for that. And the only thing that we need to work out, then, is how the number of self-avoiding walks scales with the length. And this is something, again, mathematicians have studied in great detail. The coarsest bound, you could say, is: well, on the first step I can walk in any one of delta directions, where delta is the degree of the lattice. And then on the next step, I can take delta minus 1, and then it's delta minus 1 again, and so forth. But generally, there's this notion of a connective constant for the lattice. And for the 4.8.8 lattice, it's known exceptionally well.
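As a small illustration of the counting step, here is a sketch that enumerates self-avoiding walks by depth-first search and compares the count to the crude delta·(delta − 1)^(l−1) bound. For simplicity it uses the square lattice (degree 4) rather than the trivalent 4.8.8 lattice of the talk; the ratio of successive counts approaches the lattice's connective constant.

    def count_saws(length, neighbors):
        """Count self-avoiding walks of a given length from the origin by
        depth-first search.  `neighbors(site)` yields the adjacent sites."""
        def extend(site, visited, remaining):
            if remaining == 0:
                return 1
            total = 0
            for nxt in neighbors(site):
                if nxt not in visited:
                    total += extend(nxt, visited | {nxt}, remaining - 1)
            return total
        return extend((0, 0), {(0, 0)}, length)

    def square_lattice_neighbors(site):
        x, y = site
        yield from ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))

    # Crude bound from the argument above: delta * (delta - 1)**(l - 1),
    # with delta the degree of the lattice (4 for the square lattice).
    delta = 4
    for l in range(1, 9):
        n_walks = count_saws(l, square_lattice_neighbors)
        crude = delta * (delta - 1) ** (l - 1)
        print(f"length {l}: {n_walks} walks (crude bound {crude})")
    # The ratio of successive counts approaches the lattice's connective
    # constant (about 2.64 for the square lattice), well below delta - 1 = 3.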
And if you like bounds as opposed to approximations, you can actually use the bound instead. And when we use this approach, we find that for the code capacity noise model, we get a bound of 8%, which is less than the 10.56% that we estimated. But it's a bound. And for the phenomenological noise model, it's a little bit more elaborate. So this is the structure you have to do the walk on here.
You have to walk in this three-dimensional lattice. That's the prismatic extension of the 4.8.8 lattice, allowing couplings between the syndrome bits and the data bits. If you do that, we get a bound of about half a percent. So that's less than the 3% that we had. So finally here, then we actually
push this through for the quantum computation threshold. And it turns out that all the gates in the transversal model have thresholds that are roughly the same as the memory threshold. The thresholds for destructive measurement and code preparation can be much higher. And you can do code deformation in these codes just like you can in the surface codes. And then in that case, all gates
have the memory threshold. So that's nice. So to summarize, we find that the color codes are comparable to the surface codes in the phenomenological and code capacity models, but about 10 times worse than the surface codes when we look at circuits. We have some rigorous bounds, but they're fairly weak. It would be nice to tighten those up
to really understand things better, especially in the circuit model. The overhead of these is comparable to the surface codes. But there are some interesting questions like these topological subsystem color codes. Do they have better thresholds or not? What about leakage errors in these? Can we handle them just as well? Tightening up these lower bounds.
Magic state distillation is something that hasn't been studied well, I'd say, either in the surface codes or the color codes. This whole process, I mean, not distillation, but the injection process. And you might be concerned that while the magic states are not yet encoded very well, they could propagate errors badly until they grow up to the size of the code. I think that's an interesting question. And finally, for those of you interested in topological
quantum computation, it'd be interesting to see if we can extend the color codes to a quantum double model, in the way that Kitaev did originally for the surface codes, to think about different kinds of anyonic theories that might realize topological quantum computation. So with that, it's time for me to click my heels and head back to reality here and answer any questions
that you might have. Thank you for your attention. Very good showmanship. Any questions? We have time for maybe one before lunch.
Is there a simple explanation for why the circuit threshold is about a factor of 10 lower? Is it possibly because you're measuring high-weight stabilizers with two-qubit gates? Well, that's my guess. I mean, numerics can't give you, you know, what is it Pablo Picasso said? Computers only give you answers.
So I don't know. My guess is that it has to do with the high-weight stabilizer generators. So a way to examine that is to see whether the 4.6.12 codes, which have even higher-weight generators, perform worse, and whether the 6.6.6 ones, which would somehow balance between the 4 and the 8, come out in between. I don't know. So I would say a comprehensive study of these other codes might give some more evidence for that line of thinking.
But that's certainly my thinking, that the high-weight stabilizer generators are what's causing that. In fact, I would say that's also why I believe that the phenomenological model does better than the surface code: because I'm getting information from 8 bits now all in a single step, whereas on the surface codes, I only get 4 bits in a single step. So I think it might explain both sides of that equation.
Let's thank all our speakers from the morning session one more time. And now we have lunch, and we have a group photograph outside.