Fault-tolerant quantum computing with color codes
Formal Metadata
Title 
Fault-tolerant quantum computing with color codes

Title of Series  
Author 

License 
CC Attribution - NonCommercial - NoDerivatives 3.0 Germany:
You are free to use, copy, distribute and transmit the work or content in unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers 

Publisher 

Release Date 
2011

Language 
English

Content Metadata
Subject Area  
Abstract 
We present and analyze protocols for fault-tolerant quantum computing using color codes. To process these codes, no qubit movement is necessary; nearest-neighbor gates in two spatial dimensions suffice. Our focus is on the color codes defined by the 4.8.8 semiregular lattice, as they provide the best error protection per physical qubit among color codes. We present circuit-level schemes for extracting the error syndrome of these codes fault-tolerantly. We further present an integer-program-based decoding algorithm for identifying the most likely error given the (possibly faulty) syndrome. We simulated our syndrome extraction and decoding algorithms against three physically motivated noise models using Monte Carlo methods, and used the simulations to estimate the corresponding accuracy thresholds for fault-tolerant quantum error correction. We also used a self-avoiding walk analysis to lower-bound the accuracy threshold for two of these noise models. We present two methods for fault-tolerantly computing with these codes. In the first, many of the operations are transversal and therefore spatially local if two-dimensional arrays of qubits are stacked atop each other. In the second, code deformation techniques are used so that all quantum processing is spatially local in just two dimensions. In both cases, the accuracy threshold for computation is comparable to that for error correction. Our analysis demonstrates that color codes perform slightly better than Kitaev's surface codes when circuit details are ignored.
When these details are considered, we estimate that color codes achieve a threshold of 0.082(3)%, which is higher than the threshold of 1.3 × 10⁻⁵ achieved by concatenated coding schemes restricted to nearest-neighbor gates in two dimensions [Spedalieri and Roychowdhury, Quant. Inf. Comp. 9, 666 (2009)] but lower than the threshold of 0.75% to 1.1% reported for the Kitaev codes subject to the same restrictions [Raussendorf and Harrington, Phys. Rev. Lett. 98, 190504 (2007); Wang et al., Phys. Rev. A 83, 020302(R) (2011)]. Finally, because the behavior of our decoder's performance for two of the noise models we consider maps onto an order-disorder phase transition in the three-body random-bond Ising model in 2D and the corresponding random-plaquette gauge model in 3D, our results also answer the Nishimori conjecture for these models in the negative: the statistical-mechanical classical spin systems associated to the 4.8.8 color codes are, counterintuitively, more ordered at positive temperature than at zero temperature.

00:00
[Session chair] At this conference, Andrew Landahl will now speak about fault-tolerant quantum computing with color codes.

OK, so for this talk I'd like to take us on a magical journey. Let's imagine a giant twister has come along, sucked us up, and taken us to the wonderful world of Oz, and as everybody knows, the wonderful land of Oz is defined by color. So we're now in the land of Oz, and what I'll be talking to you about is how color codes can make fault-tolerant quantum computing wonderful and delightful. This is work based on a paper that we have on the arXiv, co-written with my grad students Jonas Anderson and Patrick Rice. As a shameless plug, I'll mention that Jonas is graduating this year, so if you're interested in the topics I'm talking about and you're looking for a postdoc with training and experience in this area, do talk to him; Jonas also has a poster upstairs. OK, so color codes are defined by a
01:01
trivalent lattice embedded in a surface such that the embedding is face 3-colorable. You can then look for lattices that are semiregular, meaning vertex-transitive: every vertex locally looks the same as every other vertex. There's a standard way to express this, called the vertex notation; for example, 4.8.8 means that around every vertex you see one square and two octagons. Mathematicians have wonderfully gone and classified all these things, and there are exactly three trivalent, face-3-colorable semiregular lattices. Of these three choices, which is the most interesting? They're all kind of interesting, but I think the 4.8.8 lattice might be the most interesting, because it uses the fewest number of qubits to achieve a given code distance. It also has the nice feature that the S gate, the phase gate, is transversal for this code, which is not true for the other two color codes. The code is defined relative to the lattice as follows: imagine there's a qubit at each vertex of the lattice, and the stabilizer generators are defined by the product of sigma-x over all the qubits on a face, and likewise the product of sigma-z over all the qubits on a face. This funny criterion of being trivalent and face 3-colorable ensures that the stabilizer generators commute with one another. I also want to mention as an aside that these are not the topological subsystem color codes that Robert Raussendorf spoke about in his tutorial. Those also arise from color codes, and they have the additional feature that all of the stabilizer generators can be measured using only weight-2 operators, which is another nice feature, and it turns out they also give any color code additional transversal gates, as Robert discussed. The focus of this talk, however, is on the traditional 4.8.8 color codes.

Now, to be a bit more practical: although I could imagine arbitrary surfaces and whatnot, let's just imagine the plane, a bounded plane. If you look at planar color codes, it turns out that the boundaries defining them have to come in multiples of three colors, and the simplest such object is a triangle. So I'm going to focus on triangular 4.8.8 color codes. The smallest triangular color code turns out to be our familiar friend the Steane code, and these codes are a way of growing the Steane code in an organic, lattice-like way, as opposed to by concatenation. These codes are of course naturally suited to the 2D technologies we've heard about, where distant qubit transport is infeasible or impractical for one reason or another. We've heard multiple times at this conference that the 2D surface codes are amazing; in our Wizard-of-Oz land they're wonderful and delightful, and we'd like to see whether the color codes share some of the same delights. In particular, we're excited about the facts that surface codes are local in two dimensions, they have a very high accuracy threshold, and they avoid the huge overhead cost that we know is anchored in traditional concatenated coding. So how do color codes compare? As I said in my
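The commutation property mentioned above can be checked concretely on the smallest triangular color code, the [[7,1,3]] Steane code. A minimal sketch (my own illustration, not the authors' code) of the CSS commutation condition, using the fact that the X-type and Z-type face stabilizers live on the same faces:

```python
import numpy as np

# Face-vertex incidence matrix of the smallest triangular color code
# (the [[7,1,3]] Steane code): each row is one face, each column one qubit.
# Both the X-type and Z-type stabilizers use the same faces, so H_X = H_Z = H.
H = np.array([[1, 1, 1, 1, 0, 0, 0],
              [1, 1, 0, 0, 1, 1, 0],
              [1, 0, 1, 0, 1, 0, 1]])

# An X-type and a Z-type face operator commute iff their faces overlap on an
# even number of qubits; in binary linear algebra that is H_X @ H_Z.T = 0 (mod 2).
overlap = (H @ H.T) % 2
assert not overlap.any(), "stabilizer generators do not all commute"
print("all X-type and Z-type face stabilizers commute")
```

Here every pair of distinct faces shares exactly 0 or 2 qubits, which is the overlap pattern the trivalent, face-3-colorable condition guarantees on the full lattice.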
04:17
tutorial, whenever we want to answer a question like this, we have to be very clear about our computational and noise models. The computational model I'll consider here is one where the gate basis is basically the standard basis, but with magic states supplying the non-Clifford gate. I'll make the standard assumptions: we have massively parallel operation, we have refreshable ancillas, we can do classical computation as fast as we like, and of course the 2D layout with local quantum processing; that restriction is really no restriction at all for the color codes. Before spelling out the noise models, it's helpful to introduce a couple of noise channels. The first is the BP channel, which is the bit-flip channel followed by the phase-flip channel; each of these channels causes the Bloch sphere of the qubit to contract along an axis, as in the little pictures. I'll also use the DP channel, which is the two-qubit depolarizing channel: it applies each of the fifteen nontrivial two-qubit Pauli products with probability p/15. These channels let us define the noise models. Throughout, I'll assume that qubits don't leak; we've heard that the surface codes can handle very high leakage rates, and color codes might be able to as well; I think that's an interesting open question. I'll also assume that classical computation is reliable. There are three levels of noise models that I'm going to consider. The first, and most fine-grained, is the circuit noise model: we model the syndrome extraction process in detail using the full gate basis, modeling each faulty one-qubit gate as an ideal gate followed by the BP channel, each faulty two-qubit gate as an ideal gate followed by the DP channel, and each measurement as being preceded by the BP channel and having its result flipped with probability p. I should mention that that second aspect, flipping the result with probability p, is important and often overlooked. If we only modeled the bit getting corrupted and then measured it ideally, that wouldn't be a measurement error at all: the measurement would be consistent with what happened to the data. A measurement error is when the measurement result is inconsistent with what happened to the data, so we add that extra flip as well.

Next, we can have a more coarse-grained model, the so-called phenomenological model, where we don't worry about the details of the syndrome extraction circuit at all: we just lump the probability of error in the whole measurement process into one number p. This has the advantage of informing us about the threshold in advance of new technology that might give us new gates. The threshold of the circuit model assumes a very specific set of gates, but if some technologist comes along and says, "I've got a great new gate set in my medium where syndrome extraction is one gate that does it all for you," then the phenomenological noise model tells you an upper bound on what the threshold will be, no matter what your gate basis is. Otherwise this model treats things exactly as above, but errors appear only on the data; the only other place they appear is when we do the encoded computation. Finally, the third model, which is really a property of the code itself, is the code capacity noise model. Just as a channel has a capacity for sending information, persisting quantum information over a channel with an error-correcting code has an associated capacity; it is a single-shot notion, the same as the phenomenological model with perfect syndrome measurement. OK, so those are the three noise models. So now we
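To make the channel definitions above concrete, here is a minimal Monte Carlo sketch (my own illustration, not the authors' code) that samples the BP and DP channels in the binary symplectic picture, where a Pauli error on a qubit is a pair of bits (x, z):

```python
import random

def bp_channel(p, rng=random):
    """Bit-flip channel followed by phase-flip channel on one qubit.
    Returns the sampled Pauli error as (x, z) bits."""
    x = 1 if rng.random() < p else 0  # X applied with probability p
    z = 1 if rng.random() < p else 0  # then Z applied with probability p
    return (x, z)

def dp_channel(p, rng=random):
    """Two-qubit depolarizing channel: each of the 15 nontrivial
    two-qubit Pauli products occurs with probability p/15."""
    if rng.random() >= p:
        return ((0, 0), (0, 0))  # no error
    k = rng.randrange(1, 16)     # pick one of the 15 nontrivial Paulis uniformly
    q0, q1 = divmod(k, 4)        # each in {0..3}, encoding I, X, Z, Y
    to_bits = {0: (0, 0), 1: (1, 0), 2: (0, 1), 3: (1, 1)}
    return (to_bits[q0], to_bits[q1])

# With p = 1 the BP channel deterministically applies X then Z, i.e. a Y error.
assert bp_channel(1.0) == (1, 1)
assert bp_channel(0.0) == (0, 0)
```

Note that X and Z flips are sampled independently under the BP channel, which is what lets a CSS code decode bit flips and phase flips separately for this noise model.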
08:02
need to figure out how we are going to decode and what the threshold is going to be. I talked in my tutorial about optimal decoding and the suboptimal most-likely-error (MLE) decoding, and here's a table from our paper listing all the known thresholds for topological codes that we were aware of; one entry, for the 4.6.12 code, has nothing at all. The first three rows are the three color codes I mentioned; then the toric code on the square lattice, which can also be put on only three different lattices if you demand the same semiregularity conditions; then the topological subsystem color codes; and another code defined on a hypergraph, which doesn't have a great name, so I'm just classing it broadly as topological. If we look at the code capacity of these codes, we see that with optimal decoding they are all about 11 percent, and that's actually close to the best you could ever hope for; I would say these are basically the same, if you allow some dithering about the decimal points. We know that for the surface codes the threshold drops a little, to about 10.3 percent, if you use suboptimal MLE decoding, but not significantly. I should mention that the colored numbers in the table appeared after we compiled it: Pradeep Sarvepalli and Robert Raussendorf obtained a threshold estimate for color codes using a heuristic decoder, and as you heard in Austin's talk, such heuristic estimates have since been improved. So our first result, presented in this striptease way, is that we find 10.56 percent if you use most-likely-error decoding for the color code; again, not a major drop from optimal. In the phenomenological noise model, the threshold for the surface codes is about 3.3 percent with optimal decoding and drops to 2.93 percent with suboptimal MLE decoding. For the color codes the optimal value is higher, about 4.5 percent; so this is saying the color codes might be doing a little better, fault-tolerance-wise, than the surface codes, though maybe that's special to optimal decoding and won't survive MLE decoding. In fact it does survive: when we drop to MLE decoding we find 3.05 plus or minus 0.04 percent, still somewhat higher. So that's good. But now we get to the closest model to reality that we have, where we actually work out the details of all the circuits. In this case we know the surface codes have a threshold close to 1 percent using an MLE decoder; the optimal decoders are missing here, since optimal circuit-level decoding is very complicated, and we don't know the optimal threshold for the surface code or for any of the color codes. The heuristic decoder used by Raussendorf and Harrington achieves about 0.75 percent, and in order to reach such a high threshold they also added a form of ancilla state distillation, which is sort of anathema to the whole spirit of using these 2D codes, but it does boost the threshold to the reported 0.75 to 1 percent. We were curious to see what we would get without doing that, and the result is, well, 0.082(3) percent; so nearly 0.1 percent. Unfortunately, while things started out looking promising at the code capacity and phenomenological levels, at the circuit level the threshold is looking about a factor of ten worse than what we find for the surface codes. Let me describe this hierarchy of performance
11:42
in a bit more detail. At the circuit level we have to be very detailed about the actual circuits we use, and I mentioned in my tutorial that there are certain rules you want to obey to keep errors from propagating catastrophically. For the surface codes, the scheme is: imagine one syndrome qubit at the center of each face, and just do CNOTs into or out of that syndrome bit in sequence. So for this code, to measure a Z check, we do CNOTs all around the face; I've got a hidden "book order" here, 1 2 3 4 5 6 7 8, so the CNOTs go in sequence into the bit at the center to measure the Z check, and then to measure the X check we follow up with 1 through 8 with the CNOTs going in the other direction. Altogether that takes 16 CNOT time steps to measure both the X and the Z checks. We thought, you know, that takes a long time: if you try to do error correction that way, you're waiting around a long while, with the X errors building up, before you check for them. We know from studies of concatenated coding that using more compact syndrome extraction circuits, for example with the Bacon-Shor codes, gives improvements in the threshold, so we thought maybe we should think a little harder about how to compress this schedule. We came up with an interleaved schedule that takes just 8 total coherent time steps, which we call the X-Z interleaved schedule: the X checks are measured in book order, 1 2 3 4 5 6 7 8, and the Z checks are measured in Hebrew book order, reading from right to left, 1 2 3 4 5 6 7 8. If you do that, you can show that errors don't propagate badly: it satisfies all the right criteria, and if you examine errors happening at different points in the schedule, they propagate out, but only to a bounded distance. Since we're looking at triangular codes, everything is finite, with finite-size and boundary effects, so we have to be a little careful to define the schedule at the boundaries as well.
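One basic constraint any such schedule must satisfy is that, in each coherent time step, no qubit participates in more than one CNOT. A toy validity checker (the qubit labels here are hypothetical illustrations, not the paper's layout):

```python
def schedule_is_valid(schedule):
    """schedule: list of time steps, each a list of (control, target) CNOT pairs.
    A step is physically realizable only if every qubit is used at most once."""
    for step in schedule:
        used = set()
        for control, target in step:
            if control in used or target in used or control == target:
                return False
            used.update((control, target))
    return True

# A toy 2-step interleaving: ancillas 'a' and 'b' touch disjoint data qubits
# in each step, so two check measurements can overlap in time.
ok = [[('a', 0), ('b', 1)], [('a', 1), ('b', 0)]]
bad = [[('a', 0), ('a', 1)]]  # ancilla 'a' used twice in one step
assert schedule_is_valid(ok)
assert not schedule_is_valid(bad)
```

The harder conditions, that errors at any point in the schedule propagate only a bounded distance, require tracking Pauli propagation through the CNOTs and are what the interleaved book-order/reverse-book-order construction is designed to satisfy.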
13:39
OK, now that we've explained how we extract the syndrome, we need to process it and figure out what we think has gone wrong, given the syndrome. So we developed a decoder: we reformulated the decoding problem, the most-likely-error problem, as an integer program. It actually works for every CSS code, though that probably wasn't obvious. You want to minimize the total number of flips that you believe have happened, so let u_i be a 0/1 variable indicating a flip at each vertex (qubit). Because it's a CSS code, you can decode bit flips and phase flips independently of one another; and the fact that minimizing the number of flips yields the most likely error really is a consequence of the noise being a bit-flip channel followed by a phase-flip channel. If we instead used a depolarizing channel, there would be subtle correlations between those two noise processes, because a Y flip is more probable than an X followed by a Z; that's why we can use the words "most likely" in this case. So we want to minimize the sum of the u_i subject to the constraint that the flips are consistent with all the syndrome values we see on every face. That's the optimization problem; written in linear algebra, if you will, the parity constraints use the face-vertex incidence matrix H, which is just the parity check matrix of the code, so we want H u equal to the syndrome. The challenge in formulating it this way is that the constraints have to hold modulo 2, and integer programs over GF(2) are difficult, so the trick is to add auxiliary slack variables that let you do the integer program over the integers instead. This is the decoding algorithm we use to identify errors. In three dimensions, with faulty measurements, we don't just respond to the syndromes themselves: for example, if a data error happens and we repeat the syndrome measurement, that syndrome flip will persist, and we don't want to imagine that a fresh data flip happened at each time step; the right inference is that it happened at the first time step. So the more correct thing to do is to respond to syndrome differences; changes in the syndrome are what tell us that something has happened. There is a way to reformulate all of this nicely with slack variables using the syndrome changes, and that's what we do. So what is the threshold? Let's see it for this
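A minimal sketch of this integer-program decoder for the bit-flip side of the Steane code, using SciPy's `milp` (my own illustration, assuming SciPy ≥ 1.9; the paper's solver and formulation details may differ). The mod-2 constraint H u ≡ s is linearized with integer slack variables k, giving H u − 2k = s:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def mle_decode(H, s):
    """Most-likely-error decoding: minimize sum(u) subject to H u = s (mod 2).
    Slack variables k turn the mod-2 constraint into H u - 2 k = s over integers."""
    m, n = H.shape
    c = np.concatenate([np.ones(n), np.zeros(m)])  # minimize total flips only
    A = np.hstack([H, -2 * np.eye(m)])             # [H | -2I] [u; k] = s
    constraint = LinearConstraint(A, s, s)         # equality constraint
    bounds = Bounds(np.zeros(n + m),
                    np.concatenate([np.ones(n), np.full(m, n)]))
    res = milp(c, constraints=constraint, bounds=bounds,
               integrality=np.ones(n + m))
    return np.round(res.x[:n]).astype(int)

# Parity check matrix of the [[7,1,3]] Steane code (one CSS side).
H = np.array([[1, 1, 1, 1, 0, 0, 0],
              [1, 1, 0, 0, 1, 1, 0],
              [1, 0, 1, 0, 1, 0, 1]])
e = np.array([0, 0, 1, 0, 0, 0, 0])   # a single bit flip on qubit 2
s = H @ e % 2                          # its syndrome
u = mle_decode(H, s)
assert u.sum() == 1 and u[2] == 1      # decoder recovers the lone flip
```

Because all columns of H are distinct, the weight-1 correction is the unique optimum here; on larger lattices the same program returns some minimum-weight (most likely) error consistent with the syndrome.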
16:04
code. For the code capacity threshold first: the probability of failure of decoding is the sum over all error patterns of the probability of that pattern, counting the patterns on which decoding fails. For small distances we can actually iterate over every possible error pattern, look at its syndrome, decode, and see what happened, so for the two smallest distances we get exact curves of the failure probability versus p using our decoder; those are the nice smooth curves here. Once we go a little higher than that, we have to resort to Monte Carlo estimation: we throw down an error pattern, ask whether our decoder failed or succeeded, and repeat many times to build up enough statistics that the error bars are small; there are a couple of those data points overlaid here. As I explained in my talk, these codes are expected to have a threshold: there will be some finite-size effects, but eventually all the curves should share the same crossing point. To find the threshold, what we don't do, though you might want to, is just look at the crossing of these curves; that's not a stable procedure, especially at small sizes. A more robust approach is to take advantage of the beautiful mapping between the threshold for fault-tolerant quantum computing with these lattice codes and an associated order-disorder phase transition in a classical Ising model on a certain lattice. Because of that, we can use our understanding of the physics of the scaling laws and universality parameters in these Ising models to do curve fitting: near where all these lines cross, we expect the behavior to be linear in a rescaled variable, and we understand how the slopes are supposed to scale because of the universality class. This curve-fitting procedure is described in a paper by Wang et al., and doing it here gives the precision on our reported result; this is why we can state 10.56(1) percent. That curve fitting does assume, however, that the finite-size effects are under control; some of them may still enter, compared to what you would get just from the crossings, so I think there is a little bit of artifice in those small error bars. OK, on to the phenomenological
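The fitting idea can be sketched as follows (synthetic data and a simple grid search of my own devising, not the authors' fitting code): near threshold, the failure probability should depend on p and distance d only through the rescaled variable x = (p − p_th) d^(1/ν), so the best p_th is the one that collapses all the curves onto a single line:

```python
import numpy as np

rng = np.random.default_rng(7)
NU = 1.5       # critical exponent, assumed known from the universality class
P_TH = 0.10    # planted threshold used to generate synthetic "data"

def synthetic_pfail(p, d):
    # Failure probability obeying the scaling ansatz, plus small noise.
    x = (p - P_TH) * d ** (1.0 / NU)
    return 0.3 + 0.8 * x + rng.normal(0, 0.002)

ps = np.linspace(0.08, 0.12, 9)
ds = [5, 7, 9, 11]
data = [(p, d, synthetic_pfail(p, d)) for p in ps for d in ds]

def collapse_residual(pth):
    # Fit one straight line in the rescaled variable; return the misfit.
    x = np.array([(p - pth) * d ** (1.0 / NU) for p, d, _ in data])
    y = np.array([f for _, _, f in data])
    coeffs = np.polyfit(x, y, 1)
    return np.sum((np.polyval(coeffs, x) - y) ** 2)

grid = np.linspace(0.08, 0.12, 401)
pth_est = grid[np.argmin([collapse_residual(g) for g in grid])]
assert abs(pth_est - P_TH) < 0.005   # recovers the planted threshold
```

A wrong p_th shifts the curves for different d by different amounts in x, so no single line fits them all; the residual is minimized only at the true threshold, which is what makes this more stable than reading off curve crossings.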
18:17
threshold: we do this the same way, and in the interest of time I'll skip the algorithm, which may be of interest only to specialists, but we do have a Monte Carlo algorithm for that as well. We get our data, run the same analysis, and infer the threshold value. For the circuit threshold there is an additional challenge: the gates in our model can cause single faults to propagate to multiple errors. As in earlier surface code work, we generalize the scaling ansatz accordingly for the color codes; because single circuit faults can create multi-qubit error combinations, the behavior near the crossing point isn't necessarily linear, so we have to consider the next order, a quadratic term, as well, and we do get a better fit when our curve fitting includes that quadratic term. We calculated the threshold for both our interleaved and our sequential schedules, and, I have to say somewhat disappointingly, we found that all the cleverness we put into interleaving the schedules was for naught: there was really no difference in the threshold between interleaving and not interleaving; the two estimates are consistent with each other within the error bars. In addition to doing numerical simulations, it is
19:37
valuable to have rigorous analytic bounds on the threshold, so you know you're not being fooled by, say, numerical roundoff errors in the computer. What we did is adapt a technique, the self-avoiding-walk technique, that was applied to the surface codes some years ago, to the color codes. The general idea is as follows: we can bound the probability of failure by the probability that the support of a logical operator is contained in the error pattern consisting of the actual errors that happened together with our best guess of what happened; if that combination contains a logical operator, we have failed. For color codes, the logical operators are not just string-like; they can actually be what's called string-net-like. I haven't gone into that, but they are a little more elaborate; fortunately, for every string-net there is an equivalent string with the same likelihood, so we just double the sum to get a bound. The probability that a logical operator is contained in the error pattern is then bounded by the probability that some self-avoiding walk is contained in it, so we can just count the number of self-avoiding walks and look at the probability of a bit flip along each one. There are n possible places for the walk to start; given where you started, there is a certain number of walks of length L at distance d or greater; we pessimistically assume that every error pattern of length at least the distance d causes failure; and there are so many ways to choose the flips on that path, each with its probability. The only thing we need to work out is how the number of self-avoiding walks scales with the length, and this is something mathematicians have again studied in great detail. The coarsest bound you could state is that the first step of a walk can go in any of delta directions, where delta is the degree of the lattice, and each subsequent step in at most delta minus 1 directions; more generally, the growth rate is called the connective constant of the lattice, and for the 4.8.8 lattice it is known exceptionally well, so if you don't like bounds you can use estimates of it instead. Using this approach, for the code capacity noise model we find a lower bound of about 8 percent, which is less than the roughly 10.5 percent we estimated, but it is a rigorous bound. For the phenomenological noise model things are more elaborate: this is the structure you have to do the walk on; you walk in a three-dimensional lattice that is, sort of, the prismatic extension of the 4.8.8 lattice, allowing couplings between the syndrome bits and the data bits. If you do that, you get a bound of about half a percent, which again is less than the roughly 3 percent estimate. So finally, then, we
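The key combinatorial input is the growth rate of self-avoiding walks. A small brute-force sketch (my own illustration, on the square lattice for simplicity; the paper's count is on the 4.8.8 lattice) enumerates walks and compares them to the crude degree bound δ(δ−1)^(L−1):

```python
def count_saws(length):
    """Count self-avoiding walks of a given length from the origin
    of the square lattice by depth-first enumeration."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    def extend(pos, visited, remaining):
        if remaining == 0:
            return 1
        total = 0
        for dx, dy in steps:
            nxt = (pos[0] + dx, pos[1] + dy)
            if nxt not in visited:  # self-avoidance: never revisit a site
                total += extend(nxt, visited | {nxt}, remaining - 1)
        return total
    return extend((0, 0), {(0, 0)}, length)

exact = [count_saws(L) for L in range(1, 5)]
assert exact == [4, 12, 36, 100]        # known square-lattice SAW counts
# Crude bound: delta * (delta - 1)**(L - 1) with lattice degree delta = 4.
crude = [4 * 3 ** (L - 1) for L in range(1, 5)]
assert all(c >= e for c, e in zip(crude, exact))
```

The exact counts grow like μ^L with the connective constant μ ≈ 2.638 for the square lattice, below the crude bound's rate of 3; a tighter μ is exactly what sharpens the resulting threshold lower bound.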
22:20
We actually pushed this through for the quantum computation threshold, and it turns out that in the transversal model all the gates have thresholds roughly the same as the memory threshold; destructive measurement and preparation are much higher. You can also do code deformation in these color codes, just as you can in the surface codes, and in that case all gates have a threshold equal to the memory threshold. So, to summarize:
22:46
we find thresholds on par with the surface codes in the phenomenological and code capacity models, but ten times worse than the surface codes when we look at circuits. We have some rigorous lower bounds, but they're fairly weak; it would be nice to tighten them up to really understand things better, especially in the circuit model. The overhead of these codes is comparable to that of the surface codes. There are some interesting open questions: do the topological subsystem color codes have a better threshold or not? What about leakage in these codes; can we handle it just as well? Tightening up the lower bounds. Magic state distillation is something that has not been studied well, I'd say, in either the surface codes or the color codes; not so much the distillation itself as the injection process: you might be concerned that while the injected states are not yet encoded very well, they could propagate errors badly as they grow to the size of the code; that's an interesting question. Finally, for those of you interested in topological quantum computation, it would be interesting to see whether we can extend the color codes to quantum double models in the way the surface codes were originally generalized, to see if there are any interesting anyonic theories there that might realize topological quantum computation. So with that, it's time for me to click my heels and return back to our reality here. Thank you, and I'm happy to take any questions.

[Question] Following up on a question from before lunch: is there a simple explanation for why the circuit threshold is a factor of ten lower? Is it possibly because you're measuring high-weight stabilizers?

[Answer] Let's see. Numerics only gives you answers, not explanations, so I don't know for sure, but my guess is that it has to do with the high-weight stabilizer generators. One way to examine that would be to see whether, say, the 4.6.12 codes, which have even higher-weight generators, perform worse; I don't know, so I would say a comprehensive study of these other codes might give more evidence for that line of thinking, but that certainly is my thinking: the high-weight stabilizer generators are what's causing it. I also believe that's why the phenomenological model does better than for the surface codes: I'm getting information from 8 bits in a single step, whereas with the surface codes I only get 4 bits in a single step. So I think it might explain both sides of the equation.

[Session chair] Let's thank the speaker one more time. Now we have lunch, and we have a group photograph outside.