Fault-tolerant quantum computing
Automated media analysis
The TIB AV-Portal uses these automatic video analyses:
Scene recognition – shot boundary detection segments the video based on image features. A visual table of contents generated from this gives a quick overview of the video's content and offers precise access.
Text recognition – intelligent character recognition captures, indexes, and makes written language (for example, text on slides) searchable.
Speech recognition – speech-to-text records the spoken language in the video as a searchable transcript.
Image recognition – visual concept detection indexes the moving image with subject-specific and interdisciplinary visual concepts (for example, landscape, facade detail, technical drawing, computer animation, or lecture).
Keyword assignment – named entity recognition describes the individual video segments with semantically linked subject terms. Synonyms or narrower terms of entered search terms can thereby be included in the search automatically, which broadens the result set.
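The portal's shot boundary detection works on image features, though its exact pipeline is not described here. The idea can be sketched with a classic histogram-difference heuristic; the function names, the bin count, and the cut threshold below are illustrative choices, not the portal's actual code:

```python
def histogram(frame, bins=16):
    """Coarse intensity histogram of one grayscale frame (pixel values 0-255),
    normalized so the bins sum to 1."""
    h = [0] * bins
    for px in frame:
        h[min(px * bins // 256, bins - 1)] += 1
    total = len(frame)
    return [c / total for c in h]

def shot_boundaries(frames, threshold=0.5):
    """Return indices where the histogram distance between consecutive
    frames exceeds `threshold` -- a simple shot-boundary heuristic."""
    cuts = []
    prev = histogram(frames[0])
    for i in range(1, len(frames)):
        cur = histogram(frames[i])
        # total-variation distance between the two histograms, in [0, 1]
        dist = sum(abs(a - b) for a, b in zip(prev, cur)) / 2
        if dist > threshold:
            cuts.append(i)
        prev = cur
    return cuts
```

Real detectors add smoothing and gradual-transition handling, but the segment index the portal builds is conceptually a list of such cut positions.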
Recognized entities
Speech transcript
00:09
[inaudible] OK, great. I'd like to thank
00:20
the organizers for inviting me to give this tutorial on fault-tolerant quantum computing. As I mentioned, there's a lot of material to talk about in just one hour, but fortunately I have a bit of a release valve that a tutorial alone would not have: Robert Raussendorf will be speaking after me about how fault tolerance applies to surface codes, and [name unclear] will be giving a keynote talk at the end of this workshop on Friday, also on fault-tolerant quantum computing. So any of the topics you don't hear about here, you may hear about in those other talks following mine. Because of that, I've structured this tutorial to focus more on the "what" of fault-tolerant quantum computing, you know, what comprises a fault-tolerant quantum computing protocol and how does it work, and not so much on the analysis of fault-tolerant quantum computing protocols, which is really a subject in and of itself. OK, so what is fault tolerance? Fault tolerance is the ability to function correctly in the event of a failure, and as I mentioned, we typically aren't able to handle every possible failure, only a certain set of failures; it's important to realize that. It also matters what process we're protecting. If you actually bother to read the end-user license agreements on your software, some of them say this software is not fault-tolerant, not intended for aircraft navigation or nuclear power plants, that kind of thing. For those, what's the process? It's not getting you from point A to point B; it's not killing you. So it's important to know which failures and which process: an airplane that's supposed to be fault-tolerant isn't necessarily getting you to your destination, but protecting the cargo inside.
A more mundane example is just to protect data, and that information could be classical or quantum. If that information is static, we say we're protecting memory; it's just staying there. But if we're trying to get the information from point A to point B, we're protecting communication. The failure we're going to think about is very narrow: the corruption of the data itself, nothing else, coming from, let's say, local environmental noise. So it's not adversarial; it's just some sort of local noise source. And there's a solution to this, which you just heard about: error correction. It works classically, it works quantum mechanically, and it's a solution that involves adding layers of redundancy and extra processing. We're assuming again that the processes in this error-correction setting are themselves ideal; the only faults we're worried about are errors on the data. And so this process, against that narrow error set, is in fact fault-tolerant: we can suppress the failure probability of the data to whatever level epsilon we desire by using a code that's sufficiently large, on the order of log of 1 over epsilon, to protect the information. That will work as long as the error rate per data element satisfies some criterion, an error threshold. So if we have a local noise model in which each bit flips with probability p, then as long as that bit-flip rate is below some critical value, which I'll call the error threshold, it is possible, by using larger and larger codes from a certain family, to suppress errors arbitrarily well. So error correction is itself a fault-tolerant process against a very narrow error set. But if you think about this for a little while, you realize this isn't completely satisfactory, because a more realistic model should account for errors in the processing that's itself used to protect against those errors; that's a really natural thing to consider. Moreover, we can get a little more greedy and say, well, now that we've protected the information very well, we'd actually like to do something with it; we don't want it to just sit there. We want
computers, not just hard drives, and so we'd like to process the information in encoded form. It's not a good idea to, you know, decode the data, process it while it's exposed to noise, and re-encode it; that doesn't seem like a particularly robust way to do things. So generally we use the expression fault-tolerant quantum computing to refer to protocols that work arbitrarily well, efficiently, in the presence of faults in both the data and the processes. The central questions overall in the field of fault-tolerant quantum computing are exactly which computational models, control models, and noise models will admit fault-tolerant quantum computation. We actually know of only a few narrow classes of combinations of these models that allow for fault-tolerant quantum computing, but despite that there's quite a large body of literature in the field, and it's interesting to understand whether Nature admits other combinations. The other central question in the field, I would say, is what the resource costs are for achieving fault-tolerant quantum computation. It's one thing to say it's possible; a question to ask is just how difficult it is. It would be particularly exciting if Nature had discovered a way to do it by itself. OK, so before I launch into
05:39
fault-tolerant quantum computing, let me spend a moment talking about fault-tolerant classical computing, something that I think even some of you might be less familiar with. So computers, as we all know, are very reliable; they can calculate for weeks or maybe even years. I do theory calculations that last months, and nothing ever fails; that's right, there's somebody here in the audience who agrees. But they're reliable not because the hardware is particularly fault-tolerant; the hardware is just very good. The error rates are so low in these things that on the time scale of a computation we're interested in doing, in our human existence, it just doesn't fail. However, we know that in principle it is possible to do classical computation even with faulty hardware, and I think we're beginning to enter a new age in classical information processing where this becomes an acceptable way of doing computing. You know, we have demands to use less power in cell phones, our smartphones and things like this, or to save money: if you're a business, you might find it's cheaper to build lousy hardware and hire a couple of engineers to make it work right than to build better components in the first place. So it's possible that fault tolerance will see, you know, sort of a revival; I think we're starting to get to that point as these parts get small, and our classical techniques will begin to come to the fore again, unfortunately not for great scientific reasons but probably for economic reasons. In the setting of classical computing we'll be narrow again here and talk about circuits, and more specifically formulas: a formula is a circuit in which each gate has a single output, so for example AND, OR, and NOT gates, things like this. In the control model we're going to admit parallel operations, where two or more gates happen at the same time, say. We're also going to allow ourselves to refresh bits. This is a very key assumption: if noise is being introduced into our system, it's
generating entropy in our system, and we need some means of expelling it from the system if we're going to try to protect it, so we're assuming that we have some mechanism for doing that classically. The noise model is a very simple one for this computational model: a noisy gate looks like an ideal gate followed by a bit flip with probability p. So that's the setting we have, and the approach for fault-tolerant classical computing is to simulate an ideal formula with a faulty formula, and to simulate it to high precision. That's one of the key ideas I want to get across in this tutorial: fault-tolerant quantum computing is really all about simulation. It's about simulating an ideal process with a faulty one; that's sort of the overall guiding principle. It's not about computing for its own sake; it's about simulating the computation you'd like to have happen with the hardware you actually have. And the second part of this approach I'd like to get across is that the way we do this simulation is to structure it so that it suppresses error spreading. That's the key idea. How do we simulate something well? We simulate it well by making sure that when errors do occur, maybe they grow a little bit, but they don't grow catastrophically and cause the entire simulation to crash; that's something we'd like to avoid. So there is a celebrated threshold theorem for fault-tolerant classical computing, by von Neumann starting in the fifties, with extensions that continued through the sixties, which says it is in fact possible to simulate an arbitrary formula with G gates to precision epsilon using only logarithmic overhead, on the order of log of G over epsilon, with a faulty formula. And again there is a caveat here: depending on the noise model, this is possible only whenever the error rate is below a critical parameter, in this case a value I'll call the accuracy threshold, to distinguish it from the error threshold used in
straight quantum error correction. So if you're familiar with this at all, you might think this area was kind of a dead field: it's been many years since people studied it, and the hardware is all really good, so why would we care? Theoretically it's interesting, and there have even been recent developments exploring the prospects of fault-tolerant classical computing. It's only in 2008 that we learned that the threshold for fault-tolerant classical computing, if you restrict all the gates to have two inputs and one output, is 8.9 per cent, and that's tight; that's the best you can do in this model. So when you hear people discuss thresholds for fault-tolerant quantum computing of, you know, 10 to the minus 3 or whatever, bear in mind that fault-tolerant classical computing at 8.9 per cent is the best you can do with faulty classical components. I should add, though, that just because this is 8.9 per cent doesn't mean in principle that fault-tolerant quantum computing couldn't have a higher threshold, because we typically assume that the classical computing that augments quantum computation is itself ideal. You might imagine some clever scheme which would somehow dump all the errors onto the classical computer, and since we're artificially assuming the classical computer is perfect, that would boost the threshold. But it seems unlikely, so while it's not a strict upper bound on the threshold for fault-tolerant quantum computing, likely you're not going to find a quantum computer with a threshold higher than this. Then, if you allow yourself gates that have a number of inputs greater than two, like 3, 5, 7, etc., you actually can push the classical threshold toward 50 per cent; you can asymptotically approach that. So we can get very high thresholds for fault-tolerant classical computing if we increase the basis of gates we're allowed. OK, so that's
11:15
fault-tolerant classical computation; what about quantum computation? We need to define the setting here. The first thing we need is a computational model, and right now I think there are four computational models that are known for quantum computing, and they all have variants. The first three, the Turing machine, the quantum walk, and the adiabatic models, are not known to be fault-tolerant. Interestingly, the Turing machine is the more academic one, but it's the only model that's really self-contained; the other three models require a classical Turing machine to construct uniform families of objects in them, so the quantum Turing machine is the only self-contained quantum computational model known. The walk model is not known to be robust, and it is likely not to be so, due to Anderson localization, although in the introduction that we saw before we heard that there might be ways around that. The adiabatic model has a lot of intrinsic robustness to noise from the way its energy gap protects against certain control errors, but it's not known to be fault-tolerant. The only model we know to be fault-tolerant is the circuit model. This is the model where we take an arbitrary computation, decompose it into elementary pieces, make each of the pieces good, and put them back together again to make the computation whole. Those pieces might be implemented in ways that are topological, or through holonomies, or through local measurements, or what not, but that's what we're working with; from here on out we're restricted to the circuit model. In the control model there are a number of assumptions that are made in fault-tolerant quantum computing analyses, and I have color-coded them here as ones that are either necessary, helpful, or convenient. In the control model, parallel operation is probably necessary for a fault-tolerant quantum computing protocol: if you're fixing errors over here and you let things over there fall apart, by the time you get over there they've fallen apart; you have to
really keep on top of the whole system at once in order to have any hope of keeping it from crashing. You don't need perfect parallelism, the maximal parallelism possible; maybe only every tenth step you get to look over here, and that will impact the performance of the protocol, but you need some degree of parallelism. You do need to dump entropy; again, that's provable; you have to have a way to refresh qubits. It's helpful, in this case, to assume you have fast classical computation, instantaneous if you like. That's not necessary, and in fact it's probably not realistic; we're probably going to push quantum computers as fast as our classical computers can go. But, you know, it's a theorist's delight to work with these helpful assumptions, and you can relax them; there are thresholds for the case where that one is relaxed. We assume a specific finite gate basis; typically we pick one. It might actually be necessary to have a finite gate basis of some kind; there isn't a theorem that says so, but it seems plausible. What I mean by this is that we assume a particular gate basis, so when we do an analysis we pick a gate basis and say this is the one we're going to use. We also assume that each of those gates takes equal time; this again is typically not realistic; measurements in hardware, you know, often have a different time scale than the coherent gates, say. Two realistic assumptions that are worth considering are 2D layouts, since many quantum technologies naturally lay out in two spatial dimensions, and locality in particular: possibly all the quantum processing that's realistic to do can only occur between pieces of information that are relatively close to one another. It's unlikely you could do something between two pieces of quantum information that are distantly separated as easily as you could if they were close together. For the noise model, it's necessary to
have a non-increasing error rate: if the rate of errors in your system grows as the size of the system grows, then you're fighting an ever-growing demon and there's no hope of ever catching up with it. So you need to make sure it's not increasing as you grow, or at least not increasing at the same rate at which you could possibly add redundancy to correct it. It's helpful to assume that the classical computation that augments the protocol is not only fast but reliable, and also that qubits don't leak, in other words that there aren't qubits that just disappear. It's helpful if the noise is uncorrelated, so that each qubit sees an independent noise source; again, we can relax this assumption. And I find it convenient to assume that the noise does not depend on the state of the system. You know, if your qubit is encoded in a 1 versus a 0, in real hardware you might think that it decays, say, from the higher-energy state for one but not the other; you might think there's a sort of state-dependent noise. We're going to assume instead that the errors are a function of the gates and the operations, not the states themselves. I will also assume that the gates are uniformly faulty: if I do a CNOT gate now and then I do a CNOT gate an hour from now, I'm going to assume they have the same failure model, which again isn't necessarily the case, but it's a natural assumption. So there are all kinds of assumptions, and I guess the main message from this slide is that whenever you read a description of a fault-tolerant quantum computing protocol in the literature, keep a watchful eye on what the set of assumptions and models involved are, because they can change the results. OK, so let's say we want to do fault-tolerant quantum computing, like an application. Suppose we want to simulate a particular circuit; in this case I've got an inverse quantum Fourier transform, not particularly legible, but you have some circuit you'd like to simulate. Every fault-tolerant
quantum computing protocol that we know of has four components to it. Maybe there are other ways to come up with one, but right now we just know of four, and they themselves can be broken into pieces, as this arrow here on the screen shows. We have a fault-tolerant quantum error-correction protocol and a protocol for doing encoded computation. The error-correction protocol has three pieces: an infinite family of codes, a protocol for extracting the syndrome (this is implicitly assuming we're doing stabilizer coding), and the decoding algorithm for interpreting the syndrome. So we have some classical information, the syndrome, and we infer what the errors are given that; that's a classical protocol, and it's called decoding for historical reasons. It doesn't mean unencoding, like doing the inverse of the encoding; it just means inferring what the errors are given the syndrome. And then we need a finite gate basis and a universal way to implement each of those gates. So we have these four pieces; how do we actually effect a fault-tolerant quantum circuit? Well, the first thing we do is take our ideal circuit, this thing here, and compile it into a circuit over our gate basis; it will get a little bit larger when we do that. Then we choose a code that's large enough to suppress errors to below 1 over the size of the new circuit under fault-tolerant quantum error correction. So, you know, the algorithm dictates the size of the code: if you have a really big algorithm, you go into the code library and dial up a big enough code to handle it. The fault-tolerant protocols are designed to not propagate errors badly, so you just need to suppress errors to 1 over the number of gates; you don't have to worry about the spreading of the errors. Once we've chosen the code, we replace each of the gates in our compiled circuit with the encoded versions of them for this code, and after each gate we insert syndrome
extraction, though some of these might be omitted depending on the structure of the actual algorithm. Because we have a nonadaptive computational model, we know the decoding ahead of time, but the decoding could itself be faulty, so we don't actually apply a correction all the time; we only apply it when we need to, which is before each non-Clifford encoded gate (you heard about these Clifford gates in the previous talk). Then at the very end we do a logical measurement of all the information, and we do classical decoding of the final outcomes to read off the result. Because the fault-tolerant quantum computing protocol keeps errors from propagating badly, when we get to the very end we can use that very final time step to apply classical error correction, since the classical computer is assumed ideal, to suppress the failure rate of that final measurement. OK, so let's go
20:04
through the steps in more detail. Quantum compiling is the first step. Here's our inverse Fourier transform circuit, and once it's compiled you can see that there are rotations here by pi over 2, by pi over 4, down to pi over 32 and so on. If you had a really big Fourier transform, you'd have really teeny tiny rotations here, and since you have a finite gate basis you're not going to have all of those, so we compile them. And the degree to which we compile, well, the compiling itself is a simulation: we can only approximate each of these gates to some precision. You might ask how well you have to approximate each gate in the circuit, and that itself is a function of the overall circuit; this is one of these places where global and local information interact. You need to know the size of the entire circuit to know how well you need to approximate each gate when you compile it: if there are G gates in the circuit, you need to simulate each one to a precision of order epsilon over G. The Solovay-Kitaev algorithm guarantees we can do this, and do it very efficiently. A particular implementation of it, by Dawson and Nielsen, is a variant that is very nice: given a gate, you can find an epsilon-approximation to it as a sequence of basis gates whose length is polylogarithmic in 1 over epsilon, and you can actually find the approximating sequence nearly as fast as you can write it down, because the description is compressed. So when you do this, if the original circuit had G gates, the new one has polylogarithmically more gates, and this is the circuit we then simulate fault-tolerantly. That's the first step; now I have to choose a gate basis.
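The overhead accounting above can be sketched as a toy calculation. This is not from the talk: the constant `c` is purely illustrative, and the exponent depends on the Solovay-Kitaev implementation (roughly 3.97 in Dawson and Nielsen's description); only the shape of the scaling matters here.

```python
import math

def compiled_gate_count(G, eps, c=1.0, gamma=3.97):
    """Rough size of the compiled circuit: each of the G gates is
    approximated to precision eps/G, and a Solovay-Kitaev-style compiler
    gives a basis-gate sequence of length ~ c * log(G/eps)**gamma per gate.
    The log base only shifts the constant c, so natural log is fine."""
    per_gate = c * math.log(G / eps) ** gamma
    return int(G * per_gate)
```

The point is that tightening the target precision or enlarging the circuit only grows the compiled size polylogarithmically, which is why compiling is considered cheap.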
21:54
So there are lots of finite gate bases from which to choose. If you're the kind of person who likes to keep it real, there's a real gate basis you can use, Hadamard plus Toffoli; Shi showed that this is a universal gate basis. Of course we augment it with the ability to prepare and measure states as well, to get a complete basis. If you want just the Hadamard as the coherent part, instead of the Toffoli you could add the controlled-S, or phase, gate to the Hadamard to make it universal. You'll be hearing in the next talk by Robert about the surface-code cluster-state gate basis: you have a controlled-phase (controlled-Z) gate, Hadamard, and some preparations and measurements, including an unusual measurement, and you can augment these to get universal quantum computing. Of all of these, though, the one that I think is most discussed in the literature is called the standard gate basis: it's the Hadamard gate, the T gate, the so-called pi-over-8 gate, which rotates about the z axis by pi over 4 (it's named for the angle that appears in front in its matrix form), and the controlled-NOT. It's a very parsimonious basis; we can expand it to include all the gates and their inverses, and that's useful for quantum algorithms, which typically require that, so there's this more complete version, I'd say. My favored gate basis in fault-tolerant quantum computing constructions is this latter one here, where the only coherent gate we have is the controlled-NOT gate; everything else is preparations and measurements. This first collection of operations is what I'll call the CSS set of gates; these gates have particularly nice implementations for CSS codes, as I'll be describing later. To them we add preparation of the so-called plus-i state, |0> plus i|1>. I've put a legend up here for those of you not so familiar with this notation; the gates are listed here as well, and if you have difficulty understanding, please ask for clarification. By adding this plus-i state
you get the entire Clifford group of encoded gates, and adding a non-stabilizer state for the T gate gets you universal quantum computing. This is great, but to actually use it there are a couple of things to realize. We don't have the S gates themselves, but that's OK; we can propagate them forward, and the Gottesman-Knill theorem tells us how. We don't actually need to do those gates; we can just keep that information in our heads and propagate it forward. To do the S gates, we can use these so-called magic states: the plus-i state serves as the magic state for the S gate, the pi-over-4 state, the T state, serves as the magic state for the T gate, and the plus state you can think of as a magic state for the H gate. So any time in the original circuit where we had an S or a T or an H, we have to add this extra layer of circuitry; it blows up our circuit, making it that much larger still. A little warning here: you might think that this is a Clifford circuit, because S is a Clifford gate, but the classically controlled S is not a Clifford gate; a single bit flip on the classical control causes an S error. OK.
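The magic-state trick can be checked directly. Below is a minimal sketch of one standard gate-teleportation circuit for T, under the usual conventions (this is my illustration, not necessarily the exact circuit on the speaker's slide): consuming the state T|+> implements T up to a global phase, and the classically controlled S appears exactly as the outcome-1 fix-up, which is why the circuit is not a Clifford circuit.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
S = np.diag([1, 1j])
T = np.diag([1, np.exp(1j * np.pi / 4)])

def t_via_magic_state(psi, outcome):
    """Apply a T gate to `psi` by consuming the magic state T|+>.
    Circuit: CNOT with the magic state as control and the data as target,
    then measure the data qubit; on outcome 1 the fix-up is X followed by
    the classically controlled S."""
    magic = T @ (np.array([1, 1], dtype=complex) / np.sqrt(2))  # T|+>
    state = np.kron(magic, psi)            # qubit 0 = magic, qubit 1 = data
    cnot = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)
    state = cnot @ state
    out = state.reshape(2, 2)[:, outcome]  # post-select data qubit = outcome
    out = out / np.linalg.norm(out)
    if outcome == 1:
        out = S @ (X @ out)                # classically controlled correction
    return out
```

For either measurement outcome, the surviving qubit is T|psi> up to a global phase; replacing the fix-up S with nothing (a single classical bit flip on the control) leaves a residual S error, which is the warning made above.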
25:20
So we've got our compiled circuit; now we need the code, and in fact we need an infinite code family, so we can go large enough to get the right code. Well, one way, given a finite code, is to pick a code and concatenate it, you know, concatenate it again and again, and you get some elaborate structure like this three-dimensional Sierpinski gasket, a wheels-within-wheels kind of thing. There are many, many such codes that have been studied under concatenation. We know that you can use any stabilizer code as a base and concatenate, but the CSS codes, which I talked about earlier, are particularly nice. In the lingo, we speak of different levels of concatenation, where the zeroth level is the physical level of qubits, and each higher level is another layer of coding on top of that. A nice thing about concatenation is that error-detecting codes come in handy in this scenario: although error-detecting codes can't correct arbitrary errors, they can detect them, and they are able to correct located errors. So in a concatenated scheme, if an error happens and is detected on a lower-level block, the block with the detected error is now located from the point of view of the higher level of the code, and an error-detecting code at that level can correct it. So there's a large role for error-detecting codes, in addition to error-correcting codes, under concatenation. I won't belabor this full list of codes, but there are quite a few here; you know, pick your favorite and concatenate away. Now, instead of taking the codes and
26:53
concatenating them, you can tessellate them instead. This is another way to get an infinite family of codes; there are many ways to get one, but these two are the most widely studied. You can tessellate a code by thinking of it as laid out in space and then thinking about how copies can be glued together to get larger and larger codes. The goal in doing this is to keep the checks local ("checks" is the language I'll use for stabilizer generators, since it's easy to say): you want to measure local checks, but have the logical operators be global over the entire area in which you've tessellated. Strictly speaking, these are homological codes. Homology is the theory of boundaries, and the idea here is that the logical operators in these codes are boundaryless, while the checks are themselves boundaries of little faces or something like that, so there's this close connection to homology and topology. They're also called topological codes, but I think maybe that scares people off, so I often call them tessellated codes, which maybe doesn't seem so frightening. You can do this tessellation in one dimension, two dimensions, three dimensions, whatever you'd like; naturally we're focused on two dimensions, although there has been some nice work in 3D and higher. But not all such codes are good. For example, the Bacon-Shor code: you can imagine putting it on a very large lattice, but these codes don't perform well by themselves as an approach for fault-tolerant quantum computing. They're good as a base code for concatenation, but if you just thought of them as an infinite family of codes on larger and larger lattices, the threshold goes away; I'll come back to this. Here is the list of codes that I'm aware of, and I'm sure there are more, for tessellating a code in space to come up with an infinite family. Incidentally, these three codes, the color codes, start with a single seed and they grow as they're tessellated in three different ways. So there is another way to grow
a code if you're sick of schemes that are just concatenating or tessellating. This construction, from Hector Bombin and collaborators, relies on the fact that the 3-dimensional color code is just the same as the [[15,1,3]] Reed-Muller code. OK, so how do all of these
29:14
How do we tell which codes perform well? If we look at the failure probability of the encoded system as a function of the error probability per elementary operation, and we concatenate the code, we find that as we increase the code size these curves become steeper and steeper, and they all intersect roughly where they cross the line p_fail = p — but only roughly, drifting a little bit. We call such a crossing a pseudo-threshold, so we talk about a level-1 pseudo-threshold, a level-2 pseudo-threshold, and so on. I think it's fair to say that we don't really understand well how that drifting occurs, so people who do numerical simulations report a value as a pseudo-threshold, but it's hard to know exactly what it means asymptotically. Surface codes and color codes have the feature that the curves intersect at their mutual inflection point, not at the point where they cross p_fail = p, and that means they continually get better and better as you scale up: the threshold for any particular finite-size surface or color code increases toward the value at that mutual inflection point. The Bacon-Shor codes I mentioned earlier also get steeper and steeper, but instead of their crossings moving forward like the surface and color codes', they move retrograde — down and down until the threshold goes to zero. So not every code family is good; you have to think about them, study them, and see which ones are and which ones are not. To find where the threshold is, you can try to locate these intersection points, but it turns out that for the surface and color codes there is a mapping to a statistical-mechanical system, so you know what the scaling is going to be like near the intersection point: you can fit the crossings to a scaling form that comes from the universality class of the statistical-mechanical model. One warning here is that these curves look exactly the same for plain error correction as they do for fault-tolerant error correction, so when you read a value for a threshold, be very careful that you're reading it for the right kind of noise model.
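The level-curve picture can be sketched numerically. This is my own toy model, not from the talk: the standard leading-order recursion p_{l+1} = A·p_l², with an illustrative constant A; its fixed point p = 1/A is where the level curves cross.

```python
# Toy sketch (not from the talk): logical failure rate of a concatenated
# code under the standard leading-order recursion p_{l+1} = A * p_l**2.
# A is an illustrative constant I'm assuming; for a real code it comes
# from counting the malignant fault pairs in an extended rectangle.

A = 100.0          # assumed combinatorial prefactor
p_th = 1.0 / A     # fixed point: where the level curves cross

def p_fail(p, levels):
    """Failure probability after `levels` rounds of concatenation."""
    for _ in range(levels):
        p = min(A * p * p, 1.0)
    return p

# Below the crossing the curves plunge doubly exponentially with level;
# above it they blow up instead.
for p in (0.5 * p_th, p_th, 2.0 * p_th):
    print(p, [p_fail(p, l) for l in range(4)])
```

The doubly exponential suppression below the fixed point, and the blow-up above it, is exactly the steepening of the curves described above.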
31:32
This is a good segue into noise models: what noise do people actually think about? The natural one to start with is the depolarizing noise model. We assume that each gate or preparation is ideal, followed by a depolarizing channel with a parameter p describing it, and that each measurement is preceded by that depolarizing channel and also has its result flipped with probability p. We need that last step because if the depolarizing channel flipped the bit and then we measured it, the measurement would correctly reflect the actual value of the flipped bit; to make the measurement itself faulty, the measurement has to be able to fail independently of the data feeding into it. That's a noise model one uses for numerical estimates quite a bit. An alternative is to assume that instead of the depolarizing channel we have the bit-flip channel followed by the phase-flip channel — or preceding it; you're free to reorder these operations. That means a Y error is actually less likely, because it would require an X here and a Z here to happen together, so the correlations are different. There's also a phenomenological error model, where instead of giving the detailed circuitry used to extract a syndrome, we just say that each of the syndrome bits we measure fails with probability p. That's not a particularly realistic model, but it's what allows us to do the mapping onto the statistical-mechanical model for the topological codes. Something else to be aware of is that a lot of decoders are CSS-style decoders: because you have X checks and Z checks, one decoder looks for bit flips, the other looks for phase flips, and you do the classical decoding in each of these separately. You can do that, but if you have a depolarizing channel you're missing the fact that X and Z errors are correlated — it's more likely for X and Z to happen together than to happen separately — so that independent decoding is not optimal for the depolarizing channel; you need a more subtle decoder there. For the bit-flip channel followed by the phase-flip channel, it is optimal. So in the literature, again, you have to read these things carefully: you might have to multiply a threshold value by two-thirds to compare one result to another, because that's how you map between those two channels if you decode this way. Caveat emptor. Now, while those models are nice to use for numerical simulations, they're not necessarily easy to analyze analytically, and so in the analytic setting we sometimes use a more generic model called local stochastic noise. We just say that errors are specified by locations in the circuit — a location being a gate, which could be the identity gate, a two-qubit gate, or a preparation or a measurement — and for each set of locations, the sum of the probabilities of all fault paths with faults at those locations is no larger than p to the power of the number of locations. That's very generic, and it's not assuming independent noise. Why would you possibly want to look at a noise model like this? What's especially nice about it is that it's invariant under concatenation, so if you're trying to study how concatenated codes work, it's very convenient that the model survives that process. There's also non-Markovian noise, which is a bit more realistic: you have a system-bath Hamiltonian, you can make it local or non-local, you can put system-bath terms wherever the gates are, and there have been plenty of papers looking at these generalizations. So again, just be careful to ask which noise model is being considered. I'm going to focus, I think, mostly on these two depolarizing-type noise channels.
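The channel comparison can be made concrete with a small Monte Carlo sketch (my own illustration, not from the talk): a depolarizing channel of strength p versus independent bit-flip and phase-flip channels with matched X and Z marginals.

```python
import random

# Monte Carlo sketch (my own illustration, not from the talk) comparing
# the depolarizing channel with independent bit-flip + phase-flip
# channels whose X and Z marginals match it.

def depolarize(p, rng):
    """X, Y, or Z, each with probability p/3; otherwise identity."""
    r = rng.random()
    if r < p / 3:
        return "X"
    if r < 2 * p / 3:
        return "Y"
    if r < p:
        return "Z"
    return "I"

def independent_xz(px, pz, rng):
    """Bit-flip channel followed by phase-flip channel."""
    key = ("X" if rng.random() < px else "") + ("Z" if rng.random() < pz else "")
    return {"": "I", "X": "X", "Z": "Z", "XZ": "Y"}[key]

rng = random.Random(0)
p, n = 0.1, 200_000
dep = sum(depolarize(p, rng) == "Y" for _ in range(n)) / n
ind = sum(independent_xz(2 * p / 3, 2 * p / 3, rng) == "Y" for _ in range(n)) / n
print(f"P(Y): depolarizing ~ {dep:.4f}, independent X/Z ~ {ind:.4f}")
# Y is ~p/3 in one model but ~(2p/3)**2 in the other: that is the
# X-Z correlation a CSS-style decoder throws away.
```

The mismatch in Y rates between the two models is the origin of the two-thirds conversion factor mentioned above.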
35:16
at the point where we extract syndromes. We've got our code, and to detect the noise we need to measure these Pauli operators, the stabilizer generators. There's a generic circuit that will do that for a general Pauli operator, but it's not particularly fault tolerant: if, for example, a bit flip occurs in the middle of the circuit intended to extract the Pauli operator XXXX, then that X failure propagates through the CNOTs and creates a correlated error on a pair of qubits. That's not what we want — we don't want errors to grow; we want to keep them bounded. There are four possible solutions to this that I'll talk about: Shor extraction, Steane extraction, Knill extraction, and topological extraction. I'll take these in turn. In Shor extraction, we extract the syndrome bit by bit. For each bit of the syndrome we want to extract, we prepare a so-called cat state — the state in the repetition code that is the logical plus state — and this works for arbitrary stabilizer codes: whatever stabilizer code you have, you extract each bit of the syndrome this way. It turns out that to increase the reliability of the syndrome value you extract, you need to repeat the extraction a number of times equal to t + 1, where the distance of the code is d = 2t + 1. So this works, but it requires some repetition. A somewhat more efficient way of doing things is Steane extraction. This unfortunately only works for CSS codes, unlike the Shor method, which works for all codes. What you do is create an ancilla that's not a cat state but a state encoded in the very code you're trying to extract the syndrome from in the first place — so you double the number of qubits — and you do transversal CNOTs between the data and the ancilla, with the CNOTs pointing up or down depending on whether you're extracting a Z check or an X check. Then you destructively measure the ancilla and do classical decoding on the outcome you obtain. If you push this through, you'll see it has a neat effect: a single error anywhere here — as long as there were no correlated errors in the ancilla — doesn't create correlated errors anywhere, and you don't disturb the data any more than necessary; you just learn the syndrome information you're interested in. So that's a nice technique. You can be even more efficient, at the expense of using even more qubits, by using an encoded Bell state — a Bell pair in the code, not just an encoded plus or zero state. This is the Knill extraction technique. In a single transversal operation between the ancilla and the data, you learn both the X and the Z checks in parallel — in a single step. You also teleport the data to the other half of the encoded Bell pair, and you can imagine even having these Bell pairs preloaded with the gate you want to do, so while you're doing error correction you can be doing an encoded gate at the same time. That's pretty cool: in a single step you can be doing error correction and a gate. Like the previous protocols, at most one data error propagates per fault — you won't get correlated errors. However, if there are correlated errors in any of these ancilla states, they could propagate badly, so we need to ensure we don't have correlated errors in the ancilla, and that requires a verification process that we'll talk more about shortly. I should also say that with a concatenated code, you do this error correction at each level independently, so you can parallelize this whole extraction process a great deal with concatenated codes. OK: topological extraction.
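Classically, syndrome extraction and the Shor-style repeat-and-vote trick look like this — a sketch under a phenomenological measurement-error model, using the Steane code's Z-check matrix:

```python
import numpy as np

# Classical sketch of syndrome extraction for the Steane [[7,1,3]] code:
# the Z checks (rows of the Hamming parity-check matrix H) detect X
# errors. Measurement noise is modeled phenomenologically: each extracted
# syndrome bit flips with probability p_meas.

H = np.array([[1, 1, 1, 1, 0, 0, 0],
              [1, 1, 0, 0, 1, 1, 0],
              [1, 0, 1, 0, 1, 0, 1]])

def syndrome(x_error):
    return (H @ x_error) % 2

def noisy_syndrome(x_error, p_meas, rng):
    flips = rng.random(H.shape[0]) < p_meas
    return (syndrome(x_error) + flips) % 2

rng = np.random.default_rng(1)
err = np.array([0, 0, 0, 0, 1, 0, 0])     # a single X error on qubit 4

# Shor-style reliability boost: repeat the extraction and majority-vote.
reps = [noisy_syndrome(err, 0.02, rng) for _ in range(5)]
voted = (np.sum(reps, axis=0) > len(reps) // 2).astype(int)
print("true syndrome :", syndrome(err))
print("voted syndrome:", voted)
```

The majority vote over repeated extractions is the classical shadow of the t + 1 repetitions described above.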
39:20
This obsession of concatenated codes with keeping errors from propagating — the topological approach is a little more laid back about it. It says: you can let errors propagate a little, just as long as they only propagate a finite distance and stop. It's the global structure of the code that prevents errors from propagating badly, not some local structure. The nice feature of this is that because the structure of the code itself prevents error propagation, you don't need elaborate ancillas, and that matters a huge deal: it turns out this ancilla business is extremely complex and consumes a large part of a fault-tolerant protocol. The two criteria you need are that the code checks propagate to themselves, and that the syndrome checks propagate to themselves multiplied by a stabilizer element. Let me give an example. This is the surface code — you'll be hearing a lot more about it later. This is the version where the qubits are on the vertices rather than on the edges; let's face it, qubits like to live on vertices. We've got a two-colored lattice here, and each blue face is a Z check and each yellow face is an X check: we measure ZZZZ or XXXX around the faces. So we put an ancilla — a syndrome bit — at the center of each face and do CNOTs between it and the qubits on the corners of that face to do the extraction. We don't do it fault-tolerantly with verified ancillas; we just do it transversally like this, and so errors will propagate. But the thing is, if you look at the stabilizer element here — in this case ZZZZ — and ask how it propagates through the schedule, which is a clock schedule: at time step 1 we do a CNOT from this corner to the central qubit, at step 2 from this corner to the center, at step 3 from here to the center, at step 4 from here to the center. If we go around each face clockwise, then that schedule, run over the entire lattice, propagates this stabilizer element so that the Z's hit each of the central bits of the X checks twice and do nothing at all. So the code check is propagated to itself — nothing is detected — and that's good. But this clockwise schedule is a bad schedule — this is one of the things I have a paper on — it's not good for doing fault-tolerant error correction, because look at the syndrome checks: if I look at what happens to this bit here, the central ancilla prepared in 0, it's an eigenstate of Z, and a Z fault on it propagates outward from here and is detected as two independent errors by the X checks. That's not good. So what you want to do instead is use a zigzag ordering: we do the CNOTs 1, 2, 3, 4 in a zigzag pattern in each of these faces, and if you do that, you can push it through and see that faults propagate only to errors which are harmless or are stabilizer elements acting on the syndrome bits — they don't propagate badly. There's some subtlety to scheduling with these lattice and tessellated codes, but if you do it right, you can keep errors from propagating badly in the code and avoid this whole ancilla business.
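The error-propagation bookkeeping behind these scheduling arguments is just conjugating Pauli frames through CNOTs. Here's a minimal sketch (my own toy, not the talk's lattice): an X fault striking the ancilla midway through a four-CNOT X-check extraction propagates to the remaining data qubits, producing the correlated "hook" error a good schedule has to render harmless.

```python
# Sketch (my own toy, not the talk's lattice): tracking Pauli errors
# through CNOTs. The frame maps qubit -> [x, z] error bits.

def cnot(frame, c, t):
    """Conjugate the frame through CNOT(c, t): X on the control copies
    to the target; Z on the target copies to the control."""
    frame[t][0] ^= frame[c][0]
    frame[c][1] ^= frame[t][1]

def pauli(frame, q):
    return {(0, 0): "I", (1, 0): "X", (0, 1): "Z", (1, 1): "Y"}[tuple(frame[q])]

# X-check extraction: ancilla "a" (prepared in |+>) controls CNOTs onto
# data qubits 0..3. Suppose an X fault hits the ancilla after two of the
# four CNOTs have already fired:
data = [0, 1, 2, 3]
frame = {q: [0, 0] for q in data + ["a"]}
frame["a"][0] = 1
for t in data[2:]:                 # only the remaining CNOTs propagate it
    cnot(frame, "a", t)
print({q: pauli(frame, q) for q in data})
# The fault lands on data qubits 2 and 3: a correlated "hook" error.
```

Whether such a hook lies along a logical operator or along a harmless direction is exactly what the choice of schedule controls.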
42:40
In the topological approach you do have to repeat the syndrome extraction quite a bit. We want to make sure the syndrome is as reliable as the data itself, so we repeat the syndrome extraction a number of times equal to the distance of the code — it's kind of like having a repetition code in time — and that boosts the reliability of our syndrome to the same degree to which the code protects the information, which scales with the linear size of the lattice we're using. That's in contrast to the concatenated-code approach, which uses only a single syndrome extraction but a lot of postselection in the ancillas; the topological solution is to repeat many times instead. So let's talk about ancilla verification. In the concatenated-coding approach, one way to do it is Shor-style verification: we verify the ancilla for each syndrome bit separately. The state we prepare is a cat state, and to check that it's reliable we treat it itself as a code word — and if it happens to be a cat state, that takes only a single added CNOT. For example, here's the circuit — this figure is borrowed from a thesis — you prepare the cat state, and one more CNOT plus a measurement will verify it: if you measure a 1, the cat-state preparation failed; if you measure a 0, it didn't; and you just keep repeating until you succeed. Here's a slightly larger cat state. If you had an ancilla that was not a cat state, you would do Shor-style extraction on it using cat states that had themselves been verified, and so you can build up an elaborate procedure for verifying the ancilla. This part here, by the way, is the ancilla; this is just the preparation circuit, coming from the generators of the Steane code, that makes the state. Another way of verifying the ancilla is to not do it bit by bit but to do the whole thing at once. If you have a distance-3 code, you can prepare a pair of encoded plus states, check them for Z errors against each other, and, if they both pass, check them for X errors against each other. This works for a distance-3 code, but it becomes more elaborate at higher distance — for example, here is a result for the Golay code, which has distance 7: there's a recursive procedure of verifying and verifying again that becomes quite elaborate. The nice thing about this approach is that you don't have to do it for each bit of the syndrome; you do it just once for the entire state that extracts the entire syndrome. There are many other verification protocols that have been developed since then, and I don't have time to go over all of them. Some notable ones: there's a clever variant using Latin squares that forces errors not to propagate badly and compresses the schedule for verification; you can use ancilla decoding, where you can remove the whole verification step entirely if you think about the whole syndrome-extraction process at once; and there's an overlap method that has been rigorously developed, which exploits overlaps among the stabilizer generators.
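Here's a toy model of verification by postselection (my own illustration, not the talk's circuit): preparation leaves a correlated error with probability q, a verification bit flags it, but the flag itself is misread with probability pm, so accepted ancillas are bad with probability of order q·pm rather than q.

```python
import random

# Toy model (my own, not the talk's circuit) of ancilla verification by
# postselection: prep leaves a correlated error with probability q; a
# verification measurement flags it, but the flag is misread with
# probability pm. Accepted ancillas are then bad with probability ~ q*pm.

def verified_prepare(q, pm, rng):
    while True:
        bad = rng.random() < q            # correlated preparation error
        flag = bad ^ (rng.random() < pm)  # noisy verification outcome
        if flag == 0:                     # postselect on "looks good"
            return bad

rng = random.Random(7)
q, pm, n = 0.10, 0.10, 100_000
bad_rate = sum(verified_prepare(q, pm, rng) for _ in range(n)) / n
print(f"unverified bad rate: {q:.3f}, accepted bad rate: {bad_rate:.4f}")
```

The price of the quadratic suppression is the retry loop — the throughput cost that makes the ancilla pipeline so expensive.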
46:12
There are all kinds of techniques to improve this. This is just a tutorial, and that's research-level literature, so if you want to learn more, I encourage you to read those papers. Generally, though, to give you a sense of how dire this whole ancilla-verification business is for fault-tolerant computing: there's a recent paper I was on that estimates that 60 percent of an actual architecture designed for fault-tolerant computing is dedicated to just this ancilla pipeline. So if you try to compute, you spend most of your time just preparing ancillas; it seems like the machine is not doing much of anything the majority of the time. OK — so, decoding. Now we've extracted the syndrome, and we're supposed to figure out what to do with the information now that we have it. If we assume that classical computing is instantaneous, we don't really worry about this, but we should think a little about how difficult it is. The general tension to think about is: how hard is it to infer what the errors are, given the syndrome, versus how well the inference performs once we use that information. An optimal decoder will find the recovery that's most likely to succeed given the syndrome, and for quantum codes that's different from finding the error most likely to have occurred — this difference shows up for degenerate codes. If you go to see a doctor, you want to be prescribed the recovery most likely to succeed, not just to be told the most likely thing that you've got; you're more interested in the former than the latter. That's why optimality is defined that way, and it's important to keep the difference between those two in mind. That said, the most-likely-error decoder works pretty well in practice, so a lot of folks use it. But it's still difficult — it's provably NP-hard in general — and it's not optimal. For CSS codes you can express it as an integer program, so you can throw it at your favorite numerical solver and figure it out. For codes that you can think of as topological codes with some geometry, you can map decoding to a statistical-mechanical model for which there is an order-disorder phase transition that maps to the threshold. In the special case of surface codes, that procedure becomes what's called minimum-weight perfect matching, where you take all the violated syndrome bits and try to pair them up — just pairwise, without having to think about higher-order correlations. That algorithm generally runs in time around d^7.5 in the distance d of the code, so a computer scientist will tell you it's only polynomial, but a programmer will tell you, my god, that's a lot. Fortunately, though, Austin Fowler and collaborators have tailored the algorithm specifically for the surface code and shown that you can actually parallelize it down to constant complexity, which is pretty exciting — you can't do better than that. So the assumption we made before, of instantaneous classical computing, is still plausible if you use a decoder like this; it's not optimal, but it's pretty good, and it's very fast. For the concatenated codes, instead of trying to decode the concatenated code all at once, you can do it level by level: you decode each level of the code separately and hope that doing well at each level does well overall — that's called hard decoding. You can also send information back and forth between the levels, in a kind of belief-propagation or renormalization-group approach, where you make guesses for the recovery at the finest scale, pass messages back and forth between the levels of concatenation, iterate, and find a convergent solution. That's also fast — not constant, but logarithmic in the distance, as opposed to polynomial in the distance. So I'd say in practice something like these constant-depth or log-depth decoders is what one would use; the polynomial-time or NP-hard decoders might be good for assessing the best performance you could get out of these codes — not worrying about the decoding complexity, just asking what to hope for from these codes — so they're useful for that, but not for anything in practice.
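In one dimension, the matching decoder collapses to something very simple, which makes for a compact sketch (my own illustration): for the classical repetition code there are exactly two error patterns consistent with any syndrome, and minimum-weight decoding just picks the lighter one.

```python
# Toy decoder (my own sketch): in 1D, surface-code-style matching reduces
# to choosing between the only two error patterns consistent with the
# syndrome of the classical repetition code.

def decode(bits):
    """Correct a noisy repetition-code word by minimum-weight decoding."""
    n = len(bits)
    # Rebuild an error pattern consistent with the syndrome (the set of
    # adjacent disagreements), anchored at the left boundary:
    e = [0] * n
    for i in range(1, n):
        e[i] = e[i - 1] ^ bits[i - 1] ^ bits[i]
    # The only other consistent pattern is its complement; take the lighter.
    if sum(e) > n - sum(e):
        e = [1 - x for x in e]
    return [b ^ x for b, x in zip(bits, e)]

print(decode([0, 1, 0, 0, 0]))
print(decode([1, 1, 0, 1, 1]))
```

In two dimensions the consistent patterns multiply and the pairing of defects becomes a genuine matching problem, which is where the d^7.5 cost comes from.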
50:45
OK, so now we get to encoded gates. We've done fault-tolerant error correction — we have all the pieces for that — so now, how do I process encoded information? There are two general strategies: one is to use transversal operations, the other is to use code deformation. In transversal operations, the idea is that each of your code blocks is drawn as, say, a plane, and any time you want to do an encoded operation — say, a CNOT — you do it transversally between nearest-neighbor planes: the same operation between every pair of matched qubits in neighboring planes. If you don't have a pancake-stack geometry like this, then you're going to have to add a bunch of extra movement to make these nearest-neighbor pairwise operations possible, and that adds difficulties and complexity. So this is a good approach for a 3D quantum computer, but for a 2D quantum computer it presents some challenges. A 2D quantum computer, however, can do code deformation: you can take the lattice, deform it, make it go through a bunch of gyrations and come back to itself, and if you do it the right way, the encoded information gets transformed — you've done an encoded gate that way. There is a set of codes — the so-called Turaev-Viro codes — that can effect universal quantum computation just by code deformations. Unfortunately these are not stabilizer codes, and it's not known how to efficiently extract the syndrome from them, but they're a neat example of how code deformation can give you very powerful results.
52:24
Transversal gates — I already hinted at these in the extraction protocols. Let me distinguish between locally transversal gates, where the gates act independently on each physical qubit in the code block, and strictly transversal gates. A locally transversal operation might do, say, a CNOT between the first matched pair of qubits and a controlled-Y on the next pair — each can be different, as long as they're independent of one another — whereas a strictly transversal gate is exactly the same between every pair of matched qubits. We know that for any stabilizer code we can do X, Z, and CNOT transversally, and even a destructive measurement transversally. By destructive I mean that when you measure the qubits, the qubits are gone — it's not like you've projected the code word into an eigenstate of logical Z; the qubits are just out of the code space afterward. For CSS codes we can actually do a lot of things strictly transversally: X and Z are strictly transversal for all CSS codes, and we get the Hadamard strictly transversal for strong CSS codes — codes for which the H1 and H2 in Todd's talk are the exact same matrix, like the Steane code, for example. We get the phase gate transversal for so-called doubly even CSS codes, where the weight of every element of the X stabilizer group is a multiple of 4: applying the single-qubit phase gate (or its inverse) to every qubit then gives you the logical S gate. The fancy next step is that if those weights are all multiples of 8 — the triply even case — you can get the T gate transversally. So there are ways to get encoded gates to be strictly transversal, but you can't get them all transversal: this is the so-called Eastin-Knill theorem, proved earlier for stabilizer codes, though the theorem works for all codes. However, there's a kind of neat workaround with three-dimensional color codes — you can ask me about this later — where the only non-transversal operation you need is quantum error correction itself, and since you have to do that anyway, and it's just local quantum processing and measurement, it's not such a big deal. So if you had a three-dimensional quantum computer, you could get away with transversal operations everywhere except for the quantum error correction. And once you have encoded magic states, the S and T gates become easy: if you have a supply of the π/2 and π/4 phase states — S|+⟩ and T|+⟩ — the circuits I showed earlier involve only operations that I said were transversal, so there's a transversal way to get these gates for any code. If you're stuck over here in this landscape, without those nice transversality properties, you can still get everything transversal modulo preparing these π/2 and π/4 phase states. So we need some mechanism to prepare these magic states with high fidelity — and in fact you can even get the non-destructive measurements by using them as well; you don't need them for that, but they're pretty handy. OK: code deformation.
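The doubly-even condition is easy to check directly. A sketch for the Steane code: enumerate the X-stabilizer group generated by the Hamming-matrix rows and verify that every element's weight is divisible by 4.

```python
from itertools import product

# Sketch: checking the doubly-even property for the Steane code. Its X
# stabilizer group is generated by the rows of the Hamming check matrix;
# the talk's condition for a transversal phase gate is that every group
# element has weight divisible by 4.

gens = [0b1111000, 0b1100110, 0b1010101]   # weight-4 X generators, 7 qubits

elements = set()
for picks in product([0, 1], repeat=len(gens)):
    v = 0
    for g, take in zip(gens, picks):
        if take:
            v ^= g
    elements.add(v)

weights = sorted(bin(v).count("1") for v in elements)
print(weights)   # every nonzero element has weight 4 -> doubly even
```

The three generators span a group whose nonzero elements all have weight exactly 4, which is why the Steane code admits a transversal phase gate.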
56:12
There's a whole lot one could say about code deformation — Robert will be talking about that — but the basic idea is that in the surface code you remove one of the X or Z checks from the code, and that creates a logical qubit. Then you can move these holes around one another to effect a CNOT: there's a spacetime picture of the holes braiding around one another, and by the end of the braiding, the encoded logic has happened. It's very similar, I'd say, to cluster-state computation, where you're driving things around with just measurements. You'll hear more about this from Robert.
56:55
So, magic states: how do I get an encoded magic state? Where does this thing come from in the first place? There are two techniques we can use: we can inject by teleportation or by code deformation. In teleportation, we assume we've got some way to make an encoded Bell pair — for example, through quantum error correction and the verification protocols I talked about earlier. We make the Bell pair, unencode half of it, and teleport the magic state in, and we get the encoded magic state that way. The problem is that the unencoded qubit is very exposed in that region, so there's a concern about noise there. For the surface code there's another way: growth. We take our magic state and just grow the code around it, like a spider weaving a web around and around, so it becomes better and better protected. In the early stages, though, it's not very well protected — depending on how slowly the spider goes around, it's more exposed to noise early in the growth. Either way, these give you states in the code that are magic states, but they may not have perfect fidelity, because the injection process itself is not perfect. So we want to distill them in encoded form.
58:15
Suppose we have many copies of the magic state, in encoded form, that are not perfect, and we want to distill a higher-quality one from these lower-quality ones, all in encoded form. We can assume the circuitry used to distill them is essentially perfect, because it's encoded and arbitrarily reliable. To distill the π/4-type states, it turns out we can run the encoding circuit of the 15-qubit Reed-Muller code in reverse — the figure should say "Reed-Muller circuit reversed" — so 15 copies go in and one copy comes out. For the π/2-type states, you can use 7 copies and get 1 copy out, using the Steane encoder run in reverse. (I put "unencoding" in color to distinguish it from decoding in the error-correction sense.) There are thresholds for this distillation, and they're much higher than the thresholds for fault tolerance in all the other operations: for the π/4 state you can show it's about 14.6 percent, and you can reach up to 50 percent for the π/2 states. But there's a lot of repetition involved here, just as there is for ancilla distillation, and reducing this overhead is an area of active research — there are research programs dedicated to exactly this kind of overhead reduction in fault-tolerant quantum computing. Let me just say a couple of words about the accuracy threshold. As I mentioned, a tutorial unto itself could be given on how one estimates a threshold, either using Monte Carlo numerical techniques or using analysis techniques with exotic names like extended rectangles, malignant pair counting, and self-avoiding walks — those really are tutorials unto themselves. The values you see in the literature range from about 3 percent down to 10^-6, depending on the noise model you look at, and where it will end up — I don't know what's most realistic; it's an active area. Upper bounds are particularly hard to come by: the best upper bound I know of is 29.3 percent — no accuracy threshold can be higher than that — which is not particularly informative, I'd say, but it's difficult to derive an upper bound, and the lower bounds require specialized techniques. This is getting into research-level questions of what all these thresholds mean — maybe you'll hear more about it in some of the later talks — and really, what people increasingly care about is the lowest resource overhead needed, at the noise rates we actually have, to achieve fault-tolerant quantum computing.
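The repetition cost of distillation can be sketched with the standard leading-order error map for 15-to-1 distillation, p → 35p³ (a leading-order estimate, not an exact number):

```python
# Sketch of distillation overhead using the standard leading-order error
# map for 15-to-1 magic-state distillation, p -> 35 * p**3 (a
# leading-order estimate, not an exact number).

def distill(p_in, target):
    """Rounds of 15-to-1 needed to push the error below `target`,
    with the raw magic states consumed per distilled output."""
    p, rounds = p_in, 0
    while p > target:
        p = 35 * p ** 3
        rounds += 1
    return rounds, 15 ** rounds, p

rounds, raw, p_out = distill(1e-2, 1e-15)
print(f"{rounds} rounds, {raw} raw states per output, p_out ~ {p_out:.1e}")
# A few rounds already cost thousands of raw states per output, which is
# why reducing this overhead is such an active research area.
```

The cubic error suppression per round is why a handful of rounds suffices — and the 15× multiplication per round is why the factory dominates the machine.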
1:00:59
To summarize: I've told you about the four components of a fault-tolerant quantum computing protocol, giving a high-level view of what all these pieces are, and I've explained how you use those four components to simulate an ideal circuit arbitrarily well. Putting these techniques together does allow you to do quantum computing even in the presence of faults in both the data and the processing: you can simulate any ideal circuit arbitrarily well, and the overhead you incur is only polylogarithmic in the size of the original circuit. With that, thank you for your attention. [Audience question] So the question was about the 68 percent number I quoted for the overhead of the ancilla pipeline: what algorithm was that for? I actually forget the specifics — it was for a particular architecture. It's true that in general it depends on what the algorithm is, but that paper was focused on architectures, and I don't remember exactly. [Audience question] Do I believe this overhead for ancilla distillation is generic? I believe it will take a substantial fraction of the resources for any algorithm. Whether it's always 68 percent — that I don't believe, but I do believe it will be substantial. OK, thank you; we have a bit of a break now, and at 10:50 we'll continue.
00:00
Abstimmung <Frequenz>
Bit
Quantencomputer
Prozess <Physik>
Punkt
Natürliche Zahl
Familie <Mathematik>
Computerunterstütztes Verfahren
Login
Übergang
Fehlertoleranz
Arithmetischer Ausdruck
Datenverarbeitung
Lineares Funktional
Schwellwertverfahren
Reihe
Stellenring
Quantencomputer
Ideal <Mathematik>
Quellcode
Bitrate
Ereignishorizont
Datenfeld
Menge
Einheit <Mathematik>
Datenverarbeitungssystem
Festspeicher
Information
Ordnung <Mathematik>
Fehlermeldung
Lesen <Datenverarbeitung>
Telekommunikation
Selbst organisierendes System
Relationentheorie
Klasse <Mathematik>
Schaltnetz
Geräusch
EMail
Physikalische Theorie
Data Mining
Informationsmodellierung
Bildschirmmaske
Software
Flächentheorie
Datentyp
Datenstruktur
Fehlererkennungscode
Protokoll <Datenverarbeitungssystem>
QuickSort
Flächeninhalt
Codierung
05:36
Quantum computer
Weighted sum
Moment problem
Extreme point
Bound state
Fault tolerance
Descriptive statistics
Algorithm
Straight line
Moisture conduction
Automotive mechatronics technician
Thresholding
Hardware
Goodness of fit
Inverse
Bit rate
Calculation
Set
Unit <mathematics>
Cube
Order <mathematics>
Smartphone
Error message
Instantiation
Subtraction
Stability theory <logic>
Theory of relations
Class <mathematics>
Noise
System crash
Demoscene <programming>
Virtual machine
Input mask
Information modeling
Game theory
Discrete simulation
Program library
Direction of time
Tower <mathematics>
Measure extension
Data structure
Mathematical analysis
Sound processing
Protocol <data processing system>
Stochastic dependence
sinc function
Infinity
Digital electronics
Game controller
Simulation
Edge coloring
Daemon <computer science>
Limit calculation
Bit
Point
Process <physics>
Classical physics
Propagation function
Version control
Family <mathematics>
Cartesian coordinates
Computer-assisted method
Computer
One
Transition
Unit <mathematics>
Quantum circuit
Existence theorem
Theorem
Uniform structure
Default
Figurate number
Parallel interface
Influencing factor
Function <mathematics>
Dimension 2
Nonlinear operator
Linear functional
Parameter system
Homothety
Constructor <computer science>
Data storage
Classical physics
Quantum computer
Local ring
Numbering
Source code
Pattern recognition
Input/output
Slide rule
Arithmetic mean
Data field
Logic gate
Information
Decoding
Overhead <communications engineering>
Key management
Message passing
Error propagation
State of matter
Hausdorff dimension
Number range
Code
Physical theory
Expression <logic>
Data record
Quantum communication
Connected graph
Software developer
Computer simulation
Power <physics>
Touchscreen
Observational study
Error detection code
Finitism
Physical system
Quicksort
Object <category theory>
Minimal degree
Area
Holonomy group
Turing machine
Basis vector
Mereology
20:03
Atomicity <computer science>
Group germ
Computer-assisted method
Euler angles
Rotation
Fault tolerance
Descriptive statistics
Number system
Algorithm
Theorem
Support point <mathematics>
MIDI <music electronics>
Inversion <mathematics>
Grand unification
Phase transition
Influencing factor
Linear functional
Nonlinear operator
Constructor <computer science>
Approximation
Category <mathematics>
Data storage
Angle
Quantum computer
Local ring
Moving average
Bit rate
Frequency
Logic gate
Set
Unit <mathematics>
Hadamard matrix
Information
Order <mathematics>
State of matter
Standard deviation
Proxy server
Sequence <mathematics>
Automated planning
Number range
Interactive television
Implementation
Speech synthesis
Polygon
Code
Logarithm
Theory of surfaces
Discrete simulation
Spiral
Entire function
Base space
Power <physics>
Read-write head
Finitism
Software piracy
Quicksort
Minimal degree
Fluid
Mereology
Digital electronics
Basis vector
Game controller
Coding
Simulation
Hill differential equation
Inverter <circuit>
Limit calculation
25:19
Sierpinski gasket
Stability theory <logic>
Subtraction
Hausdorff dimension
Formal language
Physicalism
Formal grammar
Family <mathematics>
Closed set
Computer-assisted method
Physical theory
Code
Spacetime
Transition
Infinity
Network topology
Work <physics>
Code
Data structure
Influencing factor
DoS attack
Simply connected space
Nonlinear operator
Addition
Thresholding
Error detection code
Homology
Goodness of fit
Local ring
Mailing list
p-block
Infinity
Boundary value
Collaboration <computer science>
Data field
Area
Lattice theory
Tessellation
Digital electronics
Coding
Error message
29:10
Resultant
Bit
Weighted sum
Process <physics>
Point
Natural number
Family <mathematics>
Cardinal number
Hamiltonian operator
Transition
One
Fault tolerance
Network topology
Code
Indistinguishability
Curve fitting
Influencing factor
Correlation function
Phase transition
Parameter system
Linear functional
Homothety
Nonlinear operator
Thresholding
Standard model <particle physics>
Goodness of fit
Local ring
Inflection point
Input/output
Context-aware system
Logic gate
Set
Right angle
Decoding
URL
Key management
Reading <data processing>
Fitness function
Error message
Family <mathematics>
Website
Invariance
Hausdorff dimension
Noise
Mutual information
Term
Code
Multiplication
Information modeling
Theory of surfaces
Discrete simulation
Exterior algebra of a module
Pointer <computer science>
Graphics tablet
Estimate
Error detection code
Physical system
Basic arithmetic operation
Quicksort
Renormalization group
Mapping <computer graphics>
Digital electronics
Hypermedia
Coding
Word <computer science>
Model theory
35:14
Bit
Process <physics>
Propagation function
Version control
Element <mathematics>
TOE
Bound state
Transition
Fault tolerance
Network topology
Unit <mathematics>
Meter
Nonlinear operator
Fraction arithmetic
Delta function
Category <mathematics>
Force
Classical physics
Ideal <mathematics>
Transverse oscillation
Scheduling
Generator <computer science>
Logic gate
Lattice theory
Unit <mathematics>
Cube
Information
Computer-assisted translation
Order <mathematics>
Error propagation
State of matter
Error message
Stability theory <logic>
Noise
Number range
Polygon
Code
Path <topology>
Vertex set
Theory of surfaces
Direction of time
Distance
Data structure
Entire function
Qubit
Error detection code
Circular area
Protocol <data processing system>
Single precision
Digital electronics
Mereology
Coding
Game controller
42:38
Resultant
Bit
Process <physics>
Program verification
Cellular automaton
Number range
TOE
Statistical hypothesis
Code
Demoscene <programming>
Distance
Figurate number
Entire function
Protocol <data processing system>
Force
Program verification
Single precision
Algorithmic programming language
Scheduling
Minimal degree
Lattice theory
Mereology
Digital electronics
Word <computer science>
Latin square
Decoding
Information
Particle system
Computer-assisted translation
State of matter
Error message
46:09
Resultant
Programming device
Bit
Process <physics>
Minimization
Propagation function
Program verification
Iteration
Computer
Computer-assisted method
Complex <algebra>
Login
Transition
Algorithm
Four
Unit <mathematics>
Correlation function
Phase transition
Dimension 2
Nonlinear operator
Addition
Thresholding
Data network
Quantum computer
System call
Plot <graphical representation>
Flow direction
p-block
Calculation
Algorithmic programming language
Constant
Collaboration <computer science>
Generator <computer science>
Logic gate
Forcing
Unit <mathematics>
Principles of proper data processing
Linear programming
Strategy game
Information
Decoding
Order <mathematics>
Message passing
Error message
Plane
Subtraction
Class <mathematics>
Spatial arrangement
Code
Input mask
Theory of surfaces
Distance
Base space
Computer science
Qubit
Observational study
Error detection code
Matching <graph theory>
Program verification
Pairwise comparison
Quicksort
Mapping <computer graphics>
Game controller
Coding
Word <computer science>
Recovery <computer science>
Computer architecture
52:22
Eigenvalue problem
Matrix calculus
Stability theory <logic>
Bit
Process <physics>
Atomicity <computer science>
t-test
Case modding
Computer
Twelve
Mathematical logic
Sigma-algebra
Code
Spacetime
One
Input mask
Multiplication
Code
Theorem
Residue class
Masking <computer science>
Preimage <mathematics>
Influencing factor
Phase transition
Power <physics>
Qubit
Automotive mechatronics technician
Nonlinear operator
Error detection code
Protocol <data processing system>
Local ring
Quantum computer
Single precision
p-block
Source code
Linker <computer science>
Quicksort
Transverse oscillation
Service <computer science>
Logic gate
Hypermedia
Game controller
Coding
Word <computer science>
Hill differential equation
Computer architecture
State of matter
56:53
Process <physics>
Shared memory
Rectangle
Noise
Cardinal number
Term
Law <physics>
Code
Bound state
Transition
Information modeling
Input mask
Theory of surfaces
Reverse engineering
Game theory
Data type
Measure extension
Optimization
Mathematical analysis
Power <physics>
Estimate
Nonlinear operator
Error detection code
Thresholding
Protocol <data processing system>
Spider <program>
Software piracy
Quantum computer
Archiving program
Area
Unit <mathematics>
Injectivity
Digital electronics
Word <computer science>
Edge coloring
Decoding
Overhead <communications engineering>
State of matter
Reading <data processing>
1:00:58
Enthalpy of transformation
Fraction arithmetic
Bit
Process <physics>
View concept
Cellular automaton
Number range
Ideal <mathematics>
System of equations
Focal point
Algorithm
Discrete simulation
Digital electronics
Control structure
Connected graph
Particle system
Computer architecture
Personal identification number
Overhead <communications engineering>
Metadata
Formal Metadata
Title  Fault-tolerant quantum computing 
Series title  Second International Conference on Quantum Error Correction (QEC11) 
Author 
Landahl, Andrew

License 
CC Attribution - NonCommercial - NoDerivatives 3.0 Germany: You may use, copy, distribute, and make the work or its content publicly accessible in unaltered form, for any legal and non-commercial purpose, provided you credit the author/rights holder in the manner they specify. 
DOI  10.5446/35306 
Publisher  University of Southern California (USC) 
Publication year  2011 
Language  English 
Content Metadata
Subject area  Computer Science, Mathematics, Physics 
Abstract  This is a tutorial on fault-tolerant quantum computing, with an emphasis on concatenated-coding and topological-coding approaches in the quantum circuit model. The tutorial will cover key protocols used in fault-tolerant quantum computing schemes, including syndrome extraction, syndrome decoding, and encoded computation with magic states. It will also cover well-established analysis techniques, including extended rectangles and Monte Carlo simulation. A basic familiarity with stabilizer-based quantum error correction is assumed. 