Towards practical classical processing for the surface code

Transcript
The first speaker in this session is Austin Fowler, who's going to tell us about the surface code, and in particular about the classical processing of the syndrome information. Thanks for the introduction, and I'd also like to thank the organizers very much for choosing me to speak here. As was just mentioned, I'd like to talk about the real deal: when you come to doing topological quantum error correction, what does your classical computer actually
need, without sweeping under the rug anything that can actually be dealt with properly? At the moment there are a couple of candidates for an efficient algorithm to do this: there's the renormalization-group algorithm, which we'll hear about on Friday, and there's minimum-weight perfect matching, which is the topic of this talk. I prefer the latter simply because at the moment it's the only one that's been demonstrated to work in the fault-tolerant setting. With that in mind, the problem of correcting the errors that get generated in the surface code looks like the following: you get a graph, where each one of these vertices represents the endpoint of an error chain, and you have to take that graph, which has boundaries, and find a pairing of vertices to other vertices or to the boundaries that uses the minimum amount of string. This represents, if you like, the minimum number of errors required to reproduce the observed detection events, and it can reasonably be expected that if you find the minimum number of errors that reproduces the observed information, that will also be a pretty good guess at the correction. All of those qualifications are there deliberately: technically this method has not been proven to work at the formal level, there is no known provable threshold for it, and all the work that's been done is numerical. That caveat aside, numerically it works spectacularly well.
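The matching problem with boundaries just described can be sketched in code. This is an illustrative toy, not the speaker's implementation: it uses the standard construction of giving every detection event its own virtual boundary node, with zero-weight edges between boundary nodes so that unused ones pair off for free, and solves the matching with the general-purpose `networkx` library. All positions, weights, and the `boundary_x` coordinate are invented for illustration.

```python
import itertools
import networkx as nx

# Detection events (id -> position); coordinates are invented for illustration.
events = {0: (0, 0), 1: (0, 3), 2: (5, 1)}
boundary_x = 6  # x coordinate of the right-hand boundary (made up)

G = nx.Graph()
# Event-event edges: weight = minimum number of errors connecting the pair.
for (a, pa), (b, pb) in itertools.combinations(events.items(), 2):
    w = abs(pa[0] - pb[0]) + abs(pa[1] - pb[1])
    G.add_edge(a, b, weight=-w)  # negated, because networkx maximizes weight
# Each event gets its own virtual boundary node ...
for v, (x, y) in events.items():
    G.add_edge(v, ("b", v), weight=-(boundary_x - x))
# ... and boundary nodes pair with each other for free, so a perfect
# matching always exists no matter how many events go to the boundary.
for b1, b2 in itertools.combinations([("b", v) for v in events], 2):
    G.add_edge(b1, b2, weight=0)

matching = nx.max_weight_matching(G, maxcardinality=True)
pairs = {frozenset(e) for e in matching}
# Events 0 and 1 are close together; event 2 is best sent to the boundary.
print(pairs)
```

Here events 0 and 1 are matched to each other, event 2 goes to the boundary, and the two leftover boundary copies absorb each other at zero cost.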
So first I'll tell you why I'm such a big fan of the surface code, then I'll briefly describe what it is and what its salient features are, and then we'll get right into the guts of the classical processing: why, how, and what open problems remain. There are still some very challenging problems: if somebody handed us a billion-qubit quantum computer today that could correct errors using the surface code, we would have no idea how to use it at full speed, so I'll talk about why that is and what we can do about it. The claim of this talk is twofold: firstly, that we can parallelize this problem and solve it in O(1) time, and secondly, that we can do it, in an absolute sense, very quickly; we have an implementation that solves the distance-1000 case, four million qubits, at every round of error correction, in about 3 seconds per round on a single PC. So: why the surface
code? What's so great about it? Well, firstly, to the best of my knowledge there is no known concrete architecture that implements the abstract model in which every qubit can interact with every other qubit with no error penalty, no time penalty, and full parallelism; I'd be very happy to hear if anyone disagrees with that and can give a concrete architecture in which that holds. By contrast, there are many concrete proposals for how to do 2-D nearest-neighbor interactions, so that's a very practical way of thinking about qubits, something we might actually be able to build one day. There's a list of these proposals which you can read through, but the main take-home message is that 2-D nearest-neighbor is achievable, whereas arbitrary interactions may not be. Following on from that, an awful lot of work has been done on concatenated codes; that was the early days of error correction, and it's very well developed theory, but when you go and map those codes onto 2-D nearest-neighbor architectures, they just don't work very well. You end up with very non-local stabilizers you need to measure, a lot of swaps, and a lot of gate overhead, and this eats into your threshold error rate. For the two mappings that I'm aware of that have been performed, the Steane code and the Bacon-Shor code mapped to 2-D nearest neighbor, the quoted threshold rates are approximately 2 × 10^-5, and that's still assuming that your memory errors are a tenth the strength of everything else, so it's a generous calculation, and it still obtains a spectacularly low threshold. Secondly, when you do this mapping you bump up the overhead quite enormously: a single level of the Steane code becomes a 48-qubit tile (there is concurrent work that does somewhat better, but that's concurrent work), and this means that as you keep concatenating, a distance-9 code needs nearly two and a half thousand qubits, and a distance-27 code on the order of a hundred thousand, and you get this unfavorable scaling of the number of qubits with distance. So concatenated codes become very high overhead. By contrast, the surface code was motivated by and designed for two dimensions, and works optimally in 2-D nearest-
neighbor architectures; long-range interactions wouldn't really even help you very much with the surface code, that's the setting it's designed for. In this situation you get really nice thresholds, very close to what can be done in experiments today, and indeed well above the error rates of many individual experimental gates, and it's actually quite low overhead. It has a reputation for being high overhead; in fact Raussendorf himself has mentioned that the surface code is high overhead, but I would beg to differ, and for concrete reasons. Let's have a look. If we want to compare apples with apples, 2-D nearest neighbor with 2-D nearest neighbor, and noting that this threshold error rate does not assume memory errors are a tenth of their full strength, the surface code requires less than the concatenated codes, or even better. The distance is very flexible: if I need just a little bit more error correction, I can make my lattice just one notch bigger and get a little more protection, say go from a distance-9 to a distance-10 code, whereas with concatenation I've got to add a whole new level and go up by a factor of essentially 50 in qubits just for a little bit more protection; here I can go up by about 10 percent and get a little bit more. And if we keep going and have a look at the complete scaling, we'll see that the surface code is, in an absolute sense, lower overhead than concatenated codes. If you want to compare with other topological approaches: as it stands today, the surface code has a threshold error rate that's a factor of 10 higher than any other known topological scheme. We also have the full set of gates worked out with low overhead; you can do all sorts of nice things, which I'll talk about, in terms of CNOTs and logical gates. So for me this is enough to claim that at the moment it's simply the most promising and practical code. So, why, in more
detail? Firstly, to actually even get started with this code you don't need hundreds of qubits: a mere 13 qubits laid out 2-D nearest neighbor would allow you to perform fault-tolerant distance-3 quantum error correction. If you have an ion trap, and this is one of the things I've talked and written about, then you don't need four check qubits, because you only need one that moves around, so you could have 10 qubits and a full fault-tolerant quantum error correction demonstration. There's no code that can do better than that; any code that could equal it has a much, much lower threshold. So this is actually the lowest-overhead, highest-threshold, most promising realistic first experiment demonstrating fault tolerance. Now, it's not all that rosy: this layout is not very scalable. We recently put a paper online looking at a scalable design based on this, but you have to put in a lot of extra qubits to glue these plaquettes together. It is simpler, and more honest, to look at the scalable case, where a logical qubit actually takes 72 qubits to implement: you end up cutting pairs of holes, and you have logical operators that stretch between these holes and around these holes, and this is a distance-3 scalable surface code. So what does it look like in the guts, how does it work? We have this array of qubits, the black and the white ones: the white ones represent places where we store data, the black ones are places where we check for errors, and in each of these units you perform these very simple circuits to detect what errors you have nearby. We want all the data qubits to satisfy a few stabilizer conditions: we want them to be in an eigenstate of these operators, ZZZZ around each face and XXXX around each vertex. As has been mentioned in previous talks, they all commute, so it is possible to check which eigenstate of each of them we have. It's a simple circuit; in particular, if you have quantum non-demolition measurement, you only need five gates to give you one bit of
information about the presence or absence of X errors, and those bits of information are very reliable. By tiling those circuits into the lattice, such that you can measure every X and Z stabilizer, in a sequence of five gate steps you get lots and lots of error information, very quickly and very reliably, and that's why you have such a high threshold. Now, single logical qubit gates. Firstly the easy ones, bit flips and phase flips, which in any error correction code you just implement in software: you decide that what I had as logical 0 I will now call logical 1, which means when you go and measure it you flip the result. The more complicated one: if you want to do a Hadamard, you take a scalable qubit, cut it out of the lattice, and do a transversal operation; this code is a CSS code, so that always works. You have to deal with a few technicalities, you have to offset the qubits with a couple of rounds of swaps and a bunch of stuff, but that is sufficient to do a Hadamard. Armed with an efficient Hadamard, you can perform a whole bunch of tricks. For universality you need only think in terms of distilling these two states, and then you use gate teleportation to produce whichever rotations you want; and because we have an efficient Hadamard, we can create a Y state, |0> + i|1>, and in effect an S gate, without needing state distillation for it. So if you get a cheap implementation of all of the single-qubit gates and the CNOT gate, that's just fantastic. So let's have a look at what that means. We have two types of logical qubit: you can punch a pair of holes by punching out Z stabilizers, or a pair of holes by punching out X stabilizers; we call these smooth qubits and rough qubits, just because of the way you tend to draw these black lines in the background, which represent the faces and vertices, which in turn represent the Z and X stabilizers, and the way the holes cut them. So here's a rough qubit and a smooth qubit. The nice feature of these, which I won't prove, but hopefully is intuitively believable,
is what happens if I go and move one of these holes around. Moving a defect is a simple concept: if there's a hole here and I want to extend it out to this location, I just turn off the error correction there. So I don't have to do anything physically exotic to move my defect, I just turn off the error correction, and I can do that along a great long line: if I want to move this defect to here in constant time, I just turn off error correction along that whole path. That means I can get from here all the way to here in constant time; this is not a sequential process. By doing that I drag and stretch the logical operator itself right around until I come back to where I started, and what that leaves is a line and a ring, which together are logical X tensor logical X on both qubits. That's one of the fundamental relations that defines what a CNOT gate is, and there are three others that are fairly easy to show are true. If you imagine a Z operator between here and here and do the same braid, you wind up with a ring of Z, and that gives the second relationship. Another one: if you have, for example, a Z ring around this guy and you move the other defect around it, it just goes through, it doesn't change, and that gives the third relation; and the last one is similarly trivial to show. So this process implements a CNOT, which is kind of neat. It's a braiding operation, and it doesn't matter which direction you go around: it's equally true that this way it's a CNOT and that way it's a CNOT. Furthermore you can do all kinds of really nice things, which brings me to the next slide. Before that, I thought you might be asking: hang on, you've got a smooth logical qubit and a rough logical qubit, and that seems a bit restrictive. But it's not: here's a smooth-rough braid represented in space and time, and I hope you can match up this picture with that picture; we have our two defects, we move this guy around, we move this guy around, and the braid in space and time is what it really looks like. And if we want to do a CNOT between two rough qubits, then you
braid like this: it's effectively a teleportation from rough to smooth, then you perform the standard CNOT, that's this piece right there, and then you teleport back. So you can do a rough-rough or smooth-smooth CNOT; the apparent restriction turns into heaps of more interesting things than that. You can do a CNOT gate between two logical qubits in constant time, so we have a nearest-neighbor protocol, on a nearest-neighbor code, that allows non-local logical operations, which are the ones we really want. This means you can implement your algorithms very efficiently with only modest space overhead. Even more than that: if we have some classical information we need to propagate, which prevents some gate from being applied until we're done with it, we can actually bend these braids further than that, bend them back in time, so you can implement a chain of gates in a circuit, which is very commonly thought of as requiring time, in constant time. There is a general theorem that says you can implement an arbitrary Clifford-group circuit in constant time, including in the circuit model, and this is perhaps just a graphical picture of why that's true in the topological model. You can imagine bending things back: here we have CNOT and then CNOT and then CNOT, taking time, but if this is the space-time braid, we can just bend the braids over, and what this is really equivalent to is: you prepare a Bell pair here, use one end of the pair, and do a Bell-basis measurement, and so on. This is Bell-pair creation and Bell-basis measurement, which means you can do it in this model as well; it's a nice way to see that the theorem is true. So: the code is good,
but as I said, we're not there yet. There is of course the experimental side of the problem, actually building these lattices of qubits with high fidelity, but there's also a significant classical processing challenge. The first one is this guy, which we mentioned on the opening title page: we have to correct errors fast enough. A single round of surface code error correction involves only a few sequential quantum gates; I've said that's a good thing, but it's also a very challenging thing, because it means you get an awful lot of data very quickly that you need to deal with. Depending on the technology, a round could take much less than a microsecond; some of the superconducting technologies in particular run incredibly fast, sub-nanosecond gates sometimes, and keeping pace with that fast data stream is not trivial. You can't even get away with pushing all the hard work into the classical computer on the assumption that classical computers are arbitrarily reliable and arbitrarily fast, because only one of those statements is true: reliable, yes; arbitrarily fast, no. So we need to be able to solve an effectively infinite-size graph problem at a constant rate; that's the job. Furthermore, interpreting logical measurements is far from trivial. Here's state distillation, and I'll point out that there's a much better way of doing this, which will appear in future talks, that requires just two logical qubits, which is quite exciting. Here is the braided version; it's been optimized and packed in space-time so that it's efficient, but now the question is: what does it do, what is the output? There are all kinds of byproduct operators associated with this braiding. A braided CNOT gate doesn't just do a CNOT, it also introduces correlated byproduct X and Z operators in all sorts of combinations, and you need to know how to interpret the measurements around those braids to work out what those byproduct operators are. Furthermore, I could execute this in this direction in time, or this direction in time, or this
direction in time, and suddenly what used to be an initialization, this U-shape, when run in a different direction is no longer an initialization, it can be something quite different, and it's no longer clear how you would transform the byproduct operators as you rotate this form in space and time. We won't talk any further about this particular classical processing challenge; instead we'll spend the rest of the talk presenting a solution to the first challenge.
So: correcting errors fast enough; effectively infinitely fast, if we can. Let's consider the life cycle of an error. We have this circuit, our error correction circuit, and this is a block of it again in space and time, showing the way one of the qubits, one of the syndrome qubits, suffers a bit-flip error. We trace it through all these gates as it explodes outwards in space and time; these X errors propagate until they are eventually detected by these measurements, these guys, so this X error propagates to these two space-time locations. Now, given a circuit error model, an error model for all of our gates, we can work out the probability that that particular error is created and the probability that you observe that particular pair of detection events, and we can then do that for all possible errors in all possible places, and what you get is something that looks like this. The connection from here to here is represented, if you can see it, very faintly, by this diagonal line down there, and likewise this one here and this one here. This particular picture we call the lattice, to distinguish it from the graph that we'll see later as part of the matching, and it represents the probability that any given pair of syndrome locations is connected by a single qubit error. It's a very useful object: it allows us to take into account essentially any error model and feed that information into the matching problem, so that it makes very good guesses about which detection events are connected. For example, take an arbitrary case: you have an error right here, and you see this great big fat cylinder, which is the probability of a connection to the time boundary, if you like; that's the probability that the first and second measurements differ. Now imagine there's another detection event here, connected only by this very faint diagonal line; it would be much more reasonable to match the error in this corner to the boundary than it would
be to match it to the other syndrome change. So that's the basic idea, and when you make all of these reasonable guesses, you find the error correction performs a lot better.
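Concretely, the "first and second measurements differ" idea is just an XOR of consecutive syndrome rounds. Here is a minimal sketch with invented data (this is not the speaker's code, just an illustration of how detection events are extracted):

```python
import numpy as np

# Rows are rounds of stabilizer measurement, columns are stabilizers.
# A single data-qubit error between rounds 1 and 2 flips the two
# stabilizers touching that qubit from round 2 onward.
syndrome = np.array([
    [0, 0, 0, 0],  # round 0
    [0, 0, 0, 0],  # round 1
    [0, 1, 1, 0],  # round 2  <- error occurred just before this round
    [0, 1, 1, 0],  # round 3
])

# A detection event fires wherever two consecutive measurements disagree.
detection_events = syndrome[1:] ^ syndrome[:-1]
for t, s in zip(*np.nonzero(detection_events)):
    print(f"detection event between rounds {t} and {t + 1}, stabilizer {s}")
```

A single persistent error produces exactly one pair of detection events, the endpoints that the matching algorithm then has to pair up.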
So how do you do it, in a bit more detail? We have this underlying lattice, which characterizes our error model for all of our gates. We then go and run our simulation, generate detection events in space and time, and then essentially solve the minimum-weight perfect matching problem on these two objects: we have a graph whose vertices are the detection events but which has no edges yet, and an underlying lattice that allows us to calculate the weight of an edge between any pair of those vertices. In the lattice we call the edges "lines", that's just terminology, and we take the weight of a line to be minus the log of its probability. Naively, this means that the higher the probability of an error, the lower the weight, and the more likely we are to choose that edge when we do minimum-weight perfect matching. Here is an example of minimum-weight perfect matching using that actual object. You then go and correct your classical measurements, and it should be stressed that the whole procedure is all about fixing up classical information: you never go back and act on the quantum computer, and actually doing this process of fixing those classical measurements is highly likely to preserve the logical state.
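The minus-log weights are what make "minimum weight" mean "most probable": summing the weights of independent error chains corresponds to multiplying their probabilities. A tiny self-check with invented probabilities:

```python
import math

def weight(p):
    # Weight of a line, as described in the talk: minus the log of its probability.
    return -math.log(p)

# Explanation A: one fairly likely chain; explanation B: two unlikely ones.
# The numbers are invented purely for illustration.
p_chains_A = [0.01]
p_chains_B = [0.001, 0.004]

w_A = sum(weight(p) for p in p_chains_A)
w_B = sum(weight(p) for p in p_chains_B)

# Lower total weight <=> higher joint probability (assuming independence).
assert w_A < w_B
assert math.prod(p_chains_A) > math.prod(p_chains_B)
print(f"A: weight {w_A:.2f}, probability {math.prod(p_chains_A):.6f}")
print(f"B: weight {w_B:.2f}, probability {math.prod(p_chains_B):.6f}")
```

So the matching that minimizes total weight is the maximum-likelihood set of error chains, given independent error mechanisms.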
So how does minimum-weight matching work? It's a very old and well-studied problem, solved by Jack Edmonds way back in 1965. But despite the fact that it's been well studied, it's been well studied in the setting of "here is an arbitrary graph with arbitrary weights and edges, let's solve that case", and in that setting it's a really tough problem: given n vertices, it takes roughly O(n^3) time. The library functions that are available out on the web don't support the fact that you keep generating more and more data: you have an infinite matching problem in space and an infinite matching problem in time, and the libraries don't support incremental input, they don't support parallel processing, and when you actually run them they're quite slow. We'd been running this for some years, and this was the best we could do with the available Kolmogorov library code: we could get to distance 9 with reasonable statistics, where reasonable statistics means about 10,000 logical failures; every one of those data points represents repetitions of error correction run until we observe a failure. I'd like to point out one somewhat embarrassing feature of this particular graph. We worked quite hard, and by "we" I mean we worked a lot of computers quite hard, just to get this distance-9 data with the library code, and looking at the data it was fairly reasonable to conclude that the threshold was 1.1 percent. It turns out that's wrong: there are all kinds of boundary effects at work. In particular, if you have a look
back here at the lattice, you'll notice that you're much more likely to match a detection event near the boundary to the boundary: these boundary stabilizers have weight 3, they're more reliable, they involve fewer gates, and that increases the power of your error correction. So when you have a small code with a lot of boundary, it performs better than a large code, which effectively
has none. So while it seemed reasonable at the time to conclude that the threshold error rate was 1.1 percent, we'll actually see that the threshold error rate is 0.9 percent in the large-scale simulations, and that was quite surprising to us, since we thought we had reasonably good error suppression at those points; we'll see how that changes a bit later in the story.
So, the algorithm we came up with. This is our fundamental simulated system: we take a 2-D patch of qubits, all these data qubits, all these syndrome qubits, the logical operators; we simulate the circuits with initialization errors, idle-qubit errors, CNOT errors, measurement errors, all the usual stuff, a depolarizing channel; we perform a full error analysis, create the lattice, and then perform minimum-weight perfect matching, but in a very specific manner. To describe that, let's not talk about these 3-D objects, which would be a nightmare; instead let's break the underlying lattice describing our problem down to a simple 1-D nearest-neighbor lattice. These gray lines represent equal-weight links, the dots represent our detection events, the endpoints of error chains, and then we have a few other objects, with time going up. You can imagine this is a 1-D approximation of our 2-D surface code, purely for explanation purposes; this picture should not be confused with our actual lattice, which has diagonal links and a whole bunch of other things. We have a cutoff line here, which represents the fact that this lattice gets built incrementally and is broken off at the top: you can't create this part of the lattice before you've had a few more rounds of error correction, in practice a couple of rounds, so you can't do any matching up here, you don't have the required information yet. And then there's this double line, which doesn't correspond to anything in the actual simulation; it just represents the latest vertex we've successfully matched, a sort of head of the matching. Below it, it should by now be possible to match some of these vertices: they're far enough back in the past that the classical computer should be able to work out how to fix them. So here's what we do: we pick a vertex, and
from there we go and explore the local region around it; it's a breadth-first search, and we stop searching when we encounter objects. Those objects may be the boundary of our exploration space, or they may be other unmatched vertices. If we encounter another unmatched vertex, then we're happy, because we can match to it, and we're done with that pair for the moment. It should be stressed that this matching is not forever, nor is this matching greedy. I've seen a lot of attempts to generate artificial heuristics, non-minimum-weight matchings, and they never take into account the fact that a given matched pair can change as the algorithm proceeds; that's exactly the problem. So we keep going: we pick another vertex, we explore the space around it until we encounter objects, and here we need to do something different, because this first vertex we hit is already matched. What we do is create what's called an alternating tree. A tree is a simple object, a graph that has a unique path from the root to each leaf; the only difference between a tree and an alternating tree is that an alternating tree comprises alternating unmatched and matched edges. With that definition in mind you can also define outer and inner vertices, which you can think of as the even and odd vertices counted from the root, so this is an outer vertex and this is an inner one, and we'll keep that convention as we go. We would like to expand the exploratory region around every outer vertex and shrink the exploratory region around every inner vertex, so you get these growing bubbles and shrinking bubbles, such that all the bubbles just touch at the edges; but in this particular case they're already touching, and we can't expand further. So what we actually do is contract this into a single object: this is now an alternating tree with just one node, which we call a blossom, again just terminology. This single node is then expanded, we explore further out from there, and we encounter interesting objects nearby: here we've encountered a boundary and an unmatched vertex, so we have a couple of choices about what to do next. This choice is easy:
match to the unmatched vertex, and in practice that is what we do. There is a more complicated choice, which is illustrated here: you could instead choose to match this vertex, the one that was unmatched, to the boundary, and that means you would change this unmatched edge to matched, this matched one to unmatched, and this one to matched. This is an example of how the matching can change dynamically as you add more data: new data can tell you that something you matched way back in the past was wrong and that this is a better way of doing it. We keep going: choose another vertex, form another tree, expand the outer exploratory regions at the cost of the inner ones, try to keep going, we can't, so we form another blossom. We now have a single-node alternating tree; we then form a new alternating tree with the blossom as its first outer node, a matched vertex as its inner node, and a single vertex, still matched, as its second outer node. We then expand the outer regions at the cost of the inner, and what we find is that we've run into the forbidden region, the broken-off region where our lattice has not yet had time to be built. When that happens we know we're doing the wrong thing, so we abort: we undo, run the algorithm backwards, go back to the beginning of that matching attempt, and stop; you stop, you wait until your quantum computer generates more data, and then you resume the matching. If we had one more round of data up here, that forbidden line would move up one, the previous attempt at matching would have succeeded, and we could continue: over here we'd then match this one to the boundary, then do this one, then match that one to here, and so on. So the matching gradually percolates upwards as more and more data arrives, and you can match more and more of the past. That's one way of thinking of this
algorithm. There are many rules, but each rule is simple and cheap, so while it's very complicated to write, it runs very quickly. On average, every vertex only needs local information, which is a very important property, and it's a property specific to the graphs generated by a quantum computer running topological error correction: the vertices correspond to the endpoints of error chains, and the probability of a given link is naively the physical error probability p. So as errors are suppressed, you have fewer vertices, and it takes less processing; that sounds silly, but it's an important feature: the more reliable the quantum computer is, the faster the algorithm runs. It's also important to note that it remains efficient even at, and above, the threshold error rate. I purposely constructed the example we just went through to be pretty much at the threshold error rate: for this 1-D nearest-neighbor example the threshold is around 10 percent, approximately 10 percent of the sites here had detection events, and you could see that there were no problems, everything was still local, and it worked just fine. Even at, say, 4 or 5 percent physical gate error the algorithm will still run quite well, so that's not a problem. The threshold is a very interesting quantity, and these are the things you can ask: what exactly is it that makes error correction break when you hit the threshold error rate? Because you only need a local amount of information on average; a vertex might, with low probability, need the entire graph just to successfully match, but the average is constant, so it takes O(n) time, where n is the number of vertices per round of error correction, and if you make your lattice d times the size, it goes as O(d^2), since the area determines how many vertices you have. Note also that the runtime is independent of how much history you have: even if this extended way down into the past, we didn't use anything down below here, and so we have a few checks that go and determine when we can discard old information from the
processing problem. The point is that the probability of needing that old data is much, much lower than the probability of having a logical error, and since that's the case, you can introduce a cutoff whose failure probability is smaller than the probability of a logical error and just throw the old data out: the classical computer needs only finite memory. Furthermore, if you imagine doubling the size of this example, bringing another one of these panels in over here, there would be nothing wrong with one processor doing this half and another processor doing the other half: the problem parallelizes in a very natural way. You do have to take into account communication between the two processors: if you have errors near the boundary between them, you start constructing exploratory regions around here, and we plan to make a given processor's active exploratory region a forbidden region for the other, so that if my expanding alternating tree runs into another active processor's alternating tree, we abort and undo, and leave that one static, for example. Anyway, you can solve the communication and conflict-resolution problems efficiently; the main point is that even when you do this, you still only require local information, so you can have an infinite-size lattice with constant computing resources per unit area, a fixed-size patch per processor, independent of the strength of the error correction code, so it doesn't grow more expensive as the code gets bigger. And the whole thing can still give you the global minimum-weight optimal matching, in O(1) time. Our implementation, however, is currently single-processor.
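To give a feel for the locality just described, here is a drastically simplified, greedy region-growing matcher. This is NOT the alternating-tree and blossom procedure from the talk (it cannot revise earlier matches, so it does not find the minimum-weight matching); it only illustrates how regions grown around detection events can commit matches using purely local information. All positions are invented.

```python
# Toy 1-D example: detection events at made-up x positions, plus a boundary.
events = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (9.0, 0.0)}
boundary_x = 10.0  # right-hand boundary (invented)

def dist(a, b):
    # Manhattan distance between two detection events.
    return abs(events[a][0] - events[b][0]) + abs(events[a][1] - events[b][1])

unmatched, matches, radius = set(events), [], 0.0
while unmatched:
    radius += 0.5  # grow every exploratory region in lock-step
    for v in sorted(unmatched):
        if v not in unmatched:
            continue  # matched earlier in this pass
        # Two touching regions -> commit the pair (greedy, never revised).
        partner = next((u for u in sorted(unmatched - {v})
                        if dist(v, u) <= 2 * radius), None)
        if partner is not None:
            matches.append((v, partner))
            unmatched -= {v, partner}
        elif boundary_x - events[v][0] <= radius:  # region reached the boundary
            matches.append((v, "boundary"))
            unmatched.remove(v)
print(matches)
```

Events 0 and 1 pair up as soon as their small regions touch, while the isolated event 2 keeps growing until it reaches the boundary; no event ever had to look at the whole lattice, which is the locality property the full algorithm exploits (while additionally allowing matches to be undone).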
So let's get on and have a look at it. This is
what you get when you actually run the numbers: physical error rate against logical error rate, and you can see you get all these curves, which, when you look very closely and blow it up, give you a threshold error rate of 0.9 percent, different from our earlier result, which, as I said, was wrong. What did not change, however, are these numbers: at an error rate of half a percent you get a factor of 2 suppression of the logical error rate as you increase the size of the code, and at an error rate of 0.2 percent, a factor of 10. These are very practical, high error rates and very practical suppressions. There is a limit to going further like this; if we want larger memories we need to get on with parallelizing the code, and we'll do that starting from next Monday.
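Those suppression factors give a quick back-of-envelope sense of scale. In the sketch below, the constant-factor-per-distance-step model and the starting logical error rate are simplifying assumptions for illustration, not figures from the talk; only the factors of 2 and 10 are taken from it.

```python
import math

def steps_needed(p_start, p_target, factor_per_step):
    """Code-distance increments until the logical error rate reaches the
    target, assuming each increment suppresses it by a constant factor
    (an illustrative model, not a claim from the talk)."""
    return math.ceil(math.log(p_start / p_target) / math.log(factor_per_step))

# Start from an assumed 3e-2 logical error rate and aim for 1e-15, using the
# talk's quoted factors: ~2 per step at p ~ 0.5%, ~10 per step at p ~ 0.2%.
for factor in (2, 10):
    print(f"suppression x{factor} per step: "
          f"{steps_needed(3e-2, 1e-15, factor)} distance increments")
```

The point of the comparison: lowering the physical error rate from roughly the half-percent regime to the 0.2 percent regime cuts the number of code-distance increments needed for a deep computation by more than a factor of 3 in this toy model.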
Just to summarize: we have presented a complexity-optimal algorithm, so you cannot do better in complexity. It takes O(n^2) time in serial, where n is the linear dimension of the lattice, and can be parallelized to O(1). It can handle an arbitrary error model on each gate that generates these detection events, and it is already sufficiently fast for many architectures. However, we want to reduce its memory usage, improve its speed, and raise the effective threshold; we certainly do not want our classical processor to limit the speed of our quantum processor, or our classical processing capability to dictate what error rates we can actually tolerate. Hopefully from now on all of that will be improved. Thank you for your attention.

We have time for a few questions. You said that when you perform the matching, you do not act on the last two rounds of syndrome measurements. Why the last two rounds and not five? And does that depend on the error model you are using?

It does, and it also depends a little on what you define a round to be. We define a round to be as long as it takes for everything to be measured; if you have a model that involves qubit loss, or gates that can fail, it can take an unequal amount of time for every qubit to be measured. So it is two layers of everything being measured. Why? Because two layers of error correction are guaranteed to catch every single error: a single error, if it is unlucky, propagates through one layer but not two, so once you have done two rounds you know what is going to happen; nothing will change.

I wonder whether the earlier 1.1% threshold result still stands. You said no, because it was just boundary effects, but if I look at your plot, those curves all meet at one point.

Starting from distance 3, the results seemed pretty reasonable to us at the time. What goes wrong is that you are getting a lot of help from the boundaries, which pumps up the power of your error correction,
and that goes away when you go to larger distances and error rates. At small distances and higher error rates you see a benefit as you go to larger distance, because you are getting more of these helpful boundary contributions; as the code gets bigger and bigger, eventually it stops getting help from the boundaries, and what you see is that the logical error rate goes up and up and up, sorry, the logical reliability goes up and up and up, and then past a certain point it starts to come back down, and eventually it plateaus.

OK, but I just want to point out that in the curves you showed going forward, they all intersect at 1.2% and do not seem to have any boundary effects there.

That is all at large lattice sizes: distance 25, 35, 45, 50 and so on in the big plot, so there are no longer boundary effects; I chopped off all the lower-distance information. Eventually, at high enough distances, you get rid of the boundary effects and things are OK. Thank you.

We can talk about the next one after the talk, but a quick question: I suspect that you will get lots of entropy if you try to move a defect instantaneously, because you have created lots and lots of boundary, and so you will need lots of space around it to protect it; you may be better off moving it slowly.

It is a fair question, but it is actually not a problem. When you want to move a defect, effectively, you just turn off some stabilizer measurements; it is a process that turns weight-four stabilizers into weight-three stabilizers in parts, so it has a very gentle effect on your computer. You then need the standard distance to protect the computation: you need a protective region around all defects, wherever the defect may be, and it is the same amount of protection while operating.
Just a quick clarification of language: do you really mean finite-size effects? Near the boundary of the lattice you have weight-three stabilizers, which are more reliable, so you get better error information there. If you put it on a torus, for example, you would not see the same effects; that is one reason the people who have studied toric boundary conditions have much cleaner plots.

So you think these issues would go away if you used periodic boundary conditions? Absolutely; the published literature, Robert's numbers for example, uses periodic boundary conditions and does not show any of these effects.

Let us take one more question. David, would you mind standing up? My question is essentially the same, but just to follow on and invite you to say: whatever was the reason you did not impose periodic boundary conditions?

What we did not do is periodic boundary conditions, because we are writing software aimed at the experiments: we want it to be something we can hand to experimentalists, which takes a fair bit of effort, and we want it to run on what an experiment will actually build. Let us thank Austin again.
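The point made in the Q&A, that two extra rounds of syndrome extraction suffice because a persistent error changes the syndrome record exactly once, can be illustrated with a toy calculation. The four-stabilizer layout and round structure here are hypothetical, just enough to show the effect.

```python
def detection_events(rounds):
    """Detection events are the XOR of consecutive syndrome rounds.

    A data error flips the same stabilizers in every subsequent round,
    so it produces detection events exactly once; after that the record
    is quiet and further rounds add no new information."""
    prev = [0] * len(rounds[0])
    events = []
    for rnd in rounds:
        events.append([a ^ b for a, b in zip(rnd, prev)])
        prev = rnd
    return events

# A data error after round 0 flips stabilizers 1 and 2 from then on.
rounds = [[0, 0, 0, 0],
          [0, 1, 1, 0],
          [0, 1, 1, 0],
          [0, 1, 1, 0]]
print(detection_events(rounds))
# [[0, 0, 0, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
```

The events appear only in round 1; rounds 2 and 3 are quiet, which is why waiting two rounds before committing to a matching loses no information.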

Metadata

Formal metadata

Title Towards practical classical processing for the surface code
Series title Second International Conference on Quantum Error Correction (QEC11)
Author Fowler, Austin
License CC Attribution - NonCommercial - NoDerivatives 3.0 Germany:
You may use, copy, distribute, and make the work or its content publicly available in unaltered form for any legal, non-commercial purpose, provided you credit the author/rights holder in the manner they have specified.
DOI 10.5446/35329
Publisher University of Southern California (USC)
Year of publication 2011
Language English

Content metadata

Subject areas Computer Science, Mathematics, Physics
Abstract The surface code is unarguably the leading QEC code, featuring a high threshold error rate ~1%, low overhead implementations of the entire Clifford group, and flexible, arbitrarily long-range logical gates - all despite only requiring a 2-D lattice of qubits with NN interactions. These highly desirable features come at the cost of high classical processing complexity. We show how to perform the processing associated with an n by n lattice of qubits, each being manipulated fault-tolerantly, in O(n^2) average time per QEC round. We describe how to parallelize the algorithm to O(1), using constant computing resources per unit area and local communication. Both complexities are optimal.
