Quantum computers still work with 25% of their qubits missing

Speech Transcript
What I'll talk about today is correcting for the case where your implementation of the qubits in your quantum computer is susceptible to loss errors, which I expect to happen at much higher rates than what we call computational errors, that is, Pauli errors within the computational subspace. This is joint work with Tom Stace, who's in the audience somewhere, from UQ. We have a bunch of publications already, and there are also some related talks going on later, which I'll mention at the end of my talk. OK, so just to outline what we're talking about: these loss errors, or equivalently leakage errors, are a really serious problem for practical implementations of quantum information
processing; I'll talk a bit about particular implementations on the next slide. The fact that this is such a serious problem really motivates optimized schemes: we want error-correcting schemes that are really optimized to correct loss errors, and also fault-tolerance schemes. In particular, I'll talk about the surface codes, which we've heard a lot about this week. These are extremely robust to loss errors, with a very high threshold, and it turns out that the threshold is related to a percolation threshold. Furthermore, the fault-tolerance schemes that derive from these codes, in particular the scheme Robert talked about on Monday and which we also heard about yesterday, these topological schemes, are also extremely robust. We already knew from previous simulation results that they were very robust to computational errors, and what I hope to show today is that they're also extremely robust to losses. In particular, just to tell you ahead of time, what we find is that we can tolerate up to 25 percent loss of qubits in this model. OK, so in
many implementations, loss really is the dominant source of noise. In particular, think about quantum computing with photons: photons tend to preserve their polarization for a very long time, but in any implementation that uses photons as the qubit-carrying entity there are all these different mechanisms by which you can lose your photons. You can think about mode mismatch, imperfect single-photon sources, and inefficient detectors; all of those things effectively amount to losing your qubit. In any kind of atomic implementation, so trapped atoms in optical lattices or ion traps perhaps, you have this issue of imperfect loading, for instance in optical lattices, and just storing single ions is a really difficult thing to do, so we shouldn't be too surprised if those single trapped atoms occasionally go missing. And then finally, this is a slightly different model, but it can be addressed with similar techniques to what I'm going to talk about today: if you have some solid-state scheme, such as superconducting qubits or quantum dots or something like that, you should expect, when you make a large array of qubits, that there will be some fabrication errors, so some subset of your devices is probably not going to work. The kinds of techniques I'll talk about today can also be applied to that, with slight modifications. OK, so just to review the
toric code, which we've heard a lot about already: in the toric code the qubits live on the edges of this L × L lattice, and one
can impose periodic boundary conditions. It's a stabilizer code, which means that the valid codeword states live in the +1 eigenspace of all of these stabilizer operators. There are two different types of generators: we have these star operators, which are just a tensor product of four Pauli X operators around each vertex of the lattice, and we have these plaquette operators, which are four Pauli Z operators around each face of the lattice. It should be pretty obvious that two operators of the same type commute with each other; it's slightly less obvious that when a star and a plaquette overlap they also commute. But if you think about it, you'll see that whenever they overlap, the two stabilizers clash at exactly two edges, so when you calculate the commutator you get two minus signs from the X and Z operators on those two qubits, and so they always commute. That tells us that these operators form a valid set of generators for a stabilizer code: they're all mutually commuting. We also know that one of the stars can be expressed as a product of all the others, and likewise for the plaquettes, so one operator of each type on this lattice is not independent. That tells us that the smallest set that generates the whole stabilizer has 2L² − 2 independent generators, and a little bit of arithmetic then tells us that, at least for these boundary conditions, we have two encoded qubits.
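A minimal sketch of this bookkeeping (an editorial illustration, not from the talk): representing each star and plaquette by the set of edges it acts on, pairwise commutation reduces to counting overlaps, and the generator count follows. The edge-labelling convention here is an assumption.

```python
from itertools import product
from collections import Counter

L = 4  # lattice size; any L >= 2 behaves the same way

# Edges of the L x L torus: (x, y, 'h') is the horizontal edge leaving
# vertex (x, y) to the right; (x, y, 'v') is the vertical edge leaving it.
def star(x, y):
    """The four edges touching vertex (x, y): support of an X-type stabilizer."""
    return {(x, y, 'h'), (x, y, 'v'),
            ((x - 1) % L, y, 'h'), (x, (y - 1) % L, 'v')}

def plaquette(x, y):
    """The four edges bounding the face with corner (x, y): Z-type support."""
    return {(x, y, 'h'), (x, y, 'v'),
            (x, (y + 1) % L, 'h'), ((x + 1) % L, y, 'v')}

stars = [star(x, y) for x, y in product(range(L), repeat=2)]
plaqs = [plaquette(x, y) for x, y in product(range(L), repeat=2)]

# An X-type and a Z-type operator commute iff their supports overlap
# on an even number of qubits (here: 0 or 2 edges).
assert all(len(s & p) % 2 == 0 for s in stars for p in plaqs)

# One dependency per type: every edge lies in exactly two stars, so the
# product of all stars is the identity (and likewise for plaquettes).
assert all(c == 2 for c in Counter(e for s in stars for e in s).values())

# 2L^2 physical qubits minus 2(L^2 - 1) independent generators = 2 encoded qubits.
print(2 * L**2 - 2 * (L**2 - 1))  # -> 2
```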
OK, so now we have the stabilizer, and we should ask what the encoded operators look like. It's instructive to first consider the fate of operators acting on one of the codeword states. If we have an open chain of Z operators like this, then it's going to anticommute with the two star operators at its endpoints, so this can't be a valid codeword operator. What this does tell us, though, is that if we want to find operators that do commute with the stabilizer, we have to form closed loops. It turns out there are two different sorts of these: there are what are called homologically trivial loops, loops that can be tiled by the plaquettes, and then there are the nontrivial loops that wind all the way around the lattice. It's these latter ones that commute with the stabilizer but are not generated by it, are not part of the stabilizer, so these are the logical operators. What's important is the homology class of these operators, which is just jargon for the sense in which they wind around the torus the system is living on: any logical operator that winds around the torus in the same sense has the same effect on encoded states. That's a really useful fact I'm going to make use of shortly when explaining how to correct loss errors. The really important thing is that there's a lot of redundancy in how we can define a logical operator: if we have a Z operator that goes from the bottom to the top of this lattice, and another one that does the same at a different place, then in the case of the toric code there's a whole family of these operators that all encode, and measure, the encoded information in the same way, so we have a lot of freedom in how we read out states of this code.
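Support-wise, multiplying Z-type Pauli operators is a symmetric difference of edge sets, so the deformation freedom just described is one line of set arithmetic. An illustrative sketch, reusing the hypothetical plaquette helper and L from the previous snippet:

```python
# A logical Z: a vertical chain of Z's winding once around the torus
# (the column x = 0, in the edge convention of the previous sketch).
logical_z = {(0, y, 'v') for y in range(L)}

def deform(op_support, plaq_support):
    """Multiply a Z-type operator by a plaquette: Z^2 = I on shared
    edges, so the new support is the symmetric difference."""
    return op_support ^ plaq_support

# Push the chain sideways by one square at height y = 0.
deformed = deform(logical_z, plaquette(0, 0))
# Same homology class: the chain still winds once around the torus, but
# now detours around the face at (0, 0) -- exactly the freedom used
# below to route logical operators around lost qubits.
```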
OK, so this is just a review of the work that was done in Preskill's group almost ten years ago now, to determine the error-correction threshold of this code. The correction procedure is the conventional one for a stabilizer code: we just go through and measure all the generators of the code, and these generators reveal the endpoints of the error chains. For an error chain E we need to find a correction chain E′ such that the sum of the two is homologically trivial, so that E + E′ is an element of the stabilizer and the net effect of the two is to return the code to a valid state. This is done with the minimum-weight matching algorithm, which we've heard quite a lot about already this week. What Wang, Harrington and Preskill found, in a paper in 2002, is that the threshold for this code is 10.3 percent. That's a numerical result, but it corresponds to a phase transition in a classical statistical-mechanics problem, and this is the value that you get.
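A hedged sketch of that decoding step (an illustration, not the implementation behind the quoted numbers): pair up syndrome defects with networkx's maximum-weight matching applied to negated distances.

```python
import networkx as nx

def torus_dist(a, b, L):
    """Manhattan distance between two defect coordinates on an L x L torus."""
    dx = min(abs(a[0] - b[0]), L - abs(a[0] - b[0]))
    dy = min(abs(a[1] - b[1]), L - abs(a[1] - b[1]))
    return dx + dy

def match_defects(defects, L):
    """Pair up violated stabilizers so the total length of the implied
    correction chains is minimal."""
    g = nx.Graph()
    for i, a in enumerate(defects):
        for j, b in enumerate(defects):
            if i < j:
                # max_weight_matching maximizes, so negate the distance.
                g.add_edge(i, j, weight=-torus_dist(a, b, L))
    return nx.max_weight_matching(g, maxcardinality=True)

# Example: four violated plaquettes on an 8 x 8 torus.
print(match_defects([(0, 0), (0, 3), (5, 5), (5, 7)], L=8))
# -> pairs nearby defects, e.g. {(0, 1), (2, 3)}
```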
Now I want to consider the effect of loss errors. The defining characteristic of loss, the one that's important because we're going to make use of it, is that we know where the losses are: if you have a loss or leakage error, then in principle there is a measurement you can do that will tell you whether the qubit is there or not, without disturbing the logical state of the qubit. Equivalently, in our model you could think of depolarizing noise where we have an extra piece of information, namely whether the depolarizing noise occurred or not. Either way, this heralding helps us enormously in decoding this code. OK, so as I mentioned before, we can take one of these encoded logical operators, and there's a whole family of different operators that encode the same information. In particular, I can take the original encoded Z and multiply it by a plaquette, which has the effect of deforming it by one square, and that gives me a new operator; and I can take products with as many of the plaquettes as I like and get a deformed path. So what I've tried to show here is that we've lost a bunch of qubits, which is a bunch of deleted edges on the original lattice, and what I can do is find a path that goes all the way across the lattice, avoiding the lost qubits, like this. So I can decode this code in the presence of these loss errors provided I can find such a path, and this is a very well studied problem in probability theory: it's just percolation, and the probability of being able to find such a path, at least in the limit of large lattices, is well understood. It corresponds to the bond percolation threshold of the square lattice in two dimensions, and the relevant number, a well known result, is 0.5. So what this tells us is that the threshold for loss errors alone in the toric code is 50 percent, which is much higher than for bit-flip or phase-flip errors. OK, so that's what we would do if there were no other errors, and of course that's not a realistic assumption: we want to know if this code still works when we have loss errors and bit-flip or phase-flip errors at the same time. When qubits are lost, it turns out that we can no longer measure the individual stars or plaquettes unambiguously: if this qubit here is lost, these two plaquettes can no longer be measured in an unambiguous way. The solution is to take products: rather than measuring the two original generators, we just measure their product, which is guaranteed also to be a valid stabilizer operator for this code. So now we effectively have a different lattice with these larger 'superplaquettes', as we call them, and these can be measured unambiguously, and then we just have a modified version of the minimum-weight perfect matching problem. The way we implement this is that we construct the graph which represents the original stabilizer elements, and then we merge nodes on this graph, which gives us a reduced graph. So every time we take a product of stabilizers, what we do is this: if we have lost the qubit that originally corresponded to the edge between nodes A and B, we remove that edge, merge the corresponding nodes into a single node, and this new node inherits all of the remaining edges of the original A and B nodes.
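The node-merging step has a direct graph-library analogue; an illustrative sketch of the reduction (the function name and graph representation are assumptions, not the talk's code):

```python
import networkx as nx

def merge_superplaquettes(syndrome_graph, lost_qubits):
    """Contract the syndrome graph: each lost qubit corresponds to an
    edge (a, b) between two stabilizer nodes. Merging a and b yields
    the node for their product, a 'superplaquette', which inherits the
    surviving edges of both."""
    g = syndrome_graph.copy()
    rep = {n: n for n in g.nodes}  # current representative of each node
    for a, b in lost_qubits:
        a, b = rep[a], rep[b]
        if a == b:
            continue  # both already absorbed into the same superplaquette
        g = nx.contracted_nodes(g, a, b, self_loops=False)
        rep = {n: (a if r == b else r) for n, r in rep.items()}
    return g

# Example: three plaquettes in a row; losing the qubit shared by P0 and
# P1 merges them into one superplaquette still adjacent to P2.
g = nx.Graph([("P0", "P1"), ("P1", "P2")])
print(merge_superplaquettes(g, [("P0", "P1")]).edges)  # [('P0', 'P2')]
```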
We then hand this reduced graph to the minimum-weight perfect matching algorithm, and we do a whole bunch of Monte Carlo simulations of this process. We can ask what happens when there are simultaneously loss errors and computational, bit-flip or phase-flip, errors, and we get this picture. What's happening here is that this axis is the probability of a computational error, so think of bit-flip or phase-flip errors, and this axis is the probability of loss errors, and each of these red points is a threshold determined through numerical simulations; each of these is a different threshold for a different value of the loss rate. There are some finite-size effects down here which I won't go into just yet, but if we just take these points up here, then this blue line is a quadratic fit to those points, and what we find is that the blue curve hits this axis exactly where the percolation argument predicts, at 50 percent. So this is good evidence that the percolation argument really does give the threshold for loss, and we have this very large region of parameter space where it turns out we can use the toric code to correct for both loss and computational errors. OK, so that's all
very well and good, and it tells us how the surface code behaves when we have loss errors, but it rests on a fairly unreasonable assumption: that we can always measure these parity-check operators, these stabilizers, the stars and plaquettes, with perfect fidelity. We know that's not good enough: if we're going to build a quantum computer we have to assume that everything is noisy, so as well as storage errors we have errors in all the gates we use to encode into the error-correcting code, and errors in the readout; we have to assume everything has some noise. OK, so Robert has already
described on Monday how to do that with the surface code: this is the topological fault-tolerant quantum computation scheme. There's a sequence of papers, going back I suppose to around 2004, which introduced this. It's inspired by topological quantum computing, but in fact everything is described in terms of measurement-based, or one-way, quantum computing. This scheme has a number of nice properties. One is that it's translation invariant and only involves nearest-neighbour gates. Something I won't mention today, but which Robert mentioned on Monday, is that this all works in two dimensions: although everything I describe will be in terms of three-dimensional cluster states, it's fairly straightforward to squash everything down to two dimensions and just think of the third dimension as a simulated time axis. And the really nice thing about this is that the threshold has been shown numerically to be at least about 0.7 percent, which is a very high threshold, and there are various optimizations, some of which have already been talked about, which can push this towards 1 percent. OK, so just to review this scheme
again: in the measurement-based quantum computing scheme we start with one of these cluster states on a 3D lattice. This is the unit cell of the cluster state: we have qubits at the center of every edge of the cell and at the center of every face. We've got two different types of qubits, the red ones and the blue ones, and these black lines represent controlled-phase gates between them. So this is a standard cluster state: we prepare each qubit initially in the +1 eigenstate of the Pauli X operator, and then we apply these controlled-phase gates everywhere across the lattice. These qubits are then divided into three different
types, and they all have a different role to play in this scheme. We have the defect qubits, which are these guys in the shaded regions; these are all measured in the Z basis. Then we have the vacuum qubits, which are everything else, everything coloured in white here; these are all measured in the X basis. And finally we have these special singular qubits, which are sprinkled about amongst the lattice; these are measured either in the Y basis or in the X + Y basis. So what's the point of all that? Well, the first two kinds of measurement topologically implement the Clifford group, or at least a large subset of the Clifford group, which isn't quite universal, and the remaining measurements are required to make everything universal, which we do by magic-state purification. OK,
so I'm not going to explain the whole scheme in much detail, since it's already been covered in a couple of different talks this week, but in slightly more detail let me do nothing but this scheme's identity gate. So this is just a region of that big cluster state that has two defect regions, these two cylinders here; these are the regions that we're going to measure in the Z basis. Everything else is going to be measured in the X basis, so everything else is vacuum, which means we just do single-qubit measurements on those guys in the X basis. In this scheme the logical qubits are encoded in the surface code: you can think of each slice of this lattice, each slice in this direction, as having logical qubits encoded in the surface code. What that means is that we take the surface code with stars and plaquettes everywhere, and then on the sites where the defect regions pass through we don't enforce the stabilizer operators. What that actually gives us is a pair of encoded qubits, with logical operators that either encircle a hole or thread between the two holes. If you look at this input plane here, what you can see is the encoded X operator, which is this loop around one of the defects, and the encoded Z operator, which is this chain running between the two. What we want to show is that this sequence of measurements maps, teleports in effect, this input plane onto the output plane. To understand how this works in a bit more detail, the most conducive way is to think of the stabilizer operators that define the cluster state. We have a bunch of eigenvalue equations that define the state, and each of these K_i operators is just a single cluster-state stabilizer located on a face of this cubic lattice: it has an X in the middle and Z's around the outside. Products of these face operators have a really intuitive form: they give us what are called correlation surfaces. These correlation surfaces look like this: in the interior, in the middle of each face, we just have X's, and around the perimeter of the whole surface we have Z's; the Z's in the middle cancel out because Z squared just gives back the identity. Using these correlation surfaces you can really understand how this gate works: in particular, you just think about what the X measurements give on those correlation-surface operators, and then you can quite easily show that after you've done all of those X measurements on everything except the input plane and the output plane, the remaining qubits, those in the input and output planes, are projected into a maximally entangled state. It's then straightforward to show that just by measuring the input slice in the X basis we map these input operators onto these output operators. So the input state has been teleported.
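The cancellation that builds correlation surfaces is easy to see by tracking only Pauli supports (phases ignored): the product of two face operators sharing an edge qubit keeps both X's and cancels the shared Z. An illustrative sketch with hypothetical qubit labels:

```python
# A Pauli operator up to phase: (X-support, Z-support) as sets of qubit labels.
def mul(p, q):
    """Product of two Pauli operators, ignoring phases: supports XOR."""
    return (p[0] ^ q[0], p[1] ^ q[1])

def face_operator(center, perimeter):
    """Cluster-state face stabilizer: X on the face qubit, Z on the
    surrounding edge qubits."""
    return ({center}, set(perimeter))

# Two neighbouring faces sharing the edge qubit 'e': the product keeps
# both X's in the interior and cancels the shared Z (Z^2 = I), which is
# how a correlation surface grows face by face.
f1 = face_operator('f1', ['a', 'b', 'c', 'e'])
f2 = face_operator('f2', ['e', 'd', 'g', 'h'])
print(mul(f1, f2))  # ({'f1', 'f2'}, {'a', 'b', 'c', 'd', 'g', 'h'})
```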
Now, what happens if we include errors in this picture? Consider these correlation surfaces again. If we take products of the K operators around an individual cube of this lattice, they also take on a nice form: a closed surface like this has no boundary, so the Z's all cancel out, and we're just left with this six-body X operator, a kind of parity-check operator on the cube. This plays a very similar role to the plaquettes in the surface code in the two-dimensional case. So what this tells us is that cubes with parity −1 reveal the locations of the endpoints of error chains. We have this picture where we go through and infer the value of all of these cube operators just by doing single-qubit operations: we don't need to do a six-body measurement here, we can infer all of this just by doing single-qubit measurements, and that will give us a syndrome that looks like this, a bunch of cubes that have the wrong sign. Then again we just send this off to the minimum-weight matching algorithm: if this is an error chain, which would give us two violated cubes like this, we just need to find corrections like this, so that together they form trivial loops. Trivial loops in this sense means that the loops must not thread between these two defects or wind around them; that's the condition for successful correction.
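Since the cube check is the product of its six face outcomes, extracting this syndrome from the single-qubit X measurements is a parity computation; a small illustrative sketch (the face labels are hypothetical):

```python
def cube_syndrome(face_outcomes):
    """face_outcomes: dict mapping each face qubit of a unit cell to its
    X-measurement outcome, +1 or -1. The cube stabilizer is the product
    of the six face operators, so its eigenvalue is just the product of
    the six outcomes; -1 flags the endpoint of an error chain."""
    parity = 1
    for outcome in face_outcomes.values():
        parity *= outcome
    return parity

# A single Z error on one face qubit flips that outcome, so this cube
# (and the neighbouring cube sharing that face) reports -1.
faces = {'x+': 1, 'x-': 1, 'y+': 1, 'y-': -1, 'z+': 1, 'z-': 1}
print(cube_syndrome(faces))  # -> -1
```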
So how do we correct loss errors in this scheme? The first idea is analogous to how we dealt with losses in the toric code. You'll remember that earlier in the talk I said all you do is deform the logical operators by multiplying by plaquettes. In this scheme, what we do instead is deform the correlation surfaces. We need to deform the correlation surfaces because if one of the qubits in the correlation surface is lost, we can't measure its parity. So we deform the whole thing, and we do that just by multiplying by these closed
cubes. So we can deform the surfaces so that they avoid the lost qubits: if we assume that there are a couple of qubits lost here, then, provided it's correctable, we can always find cubes such that when we multiply the original surface by those cubes, we get a new surface that is topologically equivalent to the original surface but now avoids the lost qubits. As long as we don't lose too many qubits, we're able to reroute all these correlation surfaces, and the gate still works: we can still infer the parity that we need to do this teleportation. So it
turns out that rerouting these correlation surfaces is dual to the problem of bond percolation on this 3D lattice. So we expect a threshold that coincides with the percolation threshold for bond percolation in three dimensions. This is what we can deduce before we do any numerics: we believe that this region here, where we have no loss errors, is going to be correctable, and we also have this percolation argument that tells us that everything up to a 0.248 probability of losing a qubit should also be correctable. We don't know just yet what happens in the middle, but that's what I'm going to show next.
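For reference, the number quoted here, the bond percolation threshold of the simple cubic lattice (about 0.2488), can itself be estimated with a quick Monte Carlo. An illustrative, self-contained sketch (not from the talk): loosely following the dual picture, it checks when the lost bonds themselves form a spanning cluster.

```python
import random

def losses_span(L, p_loss, rng):
    """Mark each bond of an L^3 cubic lattice as lost with probability
    p_loss; return True if the lost bonds contain a cluster connecting
    the z = 0 and z = L - 1 planes (union-find connectivity)."""
    idx = lambda x, y, z: (x * L + y) * L + z
    parent = list(range(L ** 3))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a
    for x in range(L):
        for y in range(L):
            for z in range(L):
                for dx, dy, dz in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
                    u, v, w = x + dx, y + dy, z + dz
                    if u < L and v < L and w < L and rng.random() < p_loss:
                        parent[find(idx(x, y, z))] = find(idx(u, v, w))
    bottom = {find(idx(x, y, 0)) for x in range(L) for y in range(L)}
    return any(find(idx(x, y, L - 1)) in bottom
               for x in range(L) for y in range(L))

rng = random.Random(1)
for p in (0.20, 0.25, 0.30):
    frac = sum(losses_span(16, p, rng) for _ in range(20)) / 20
    print(p, frac)  # spanning probability jumps near p_c ~ 0.2488
```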
So now we want to know what happens when we have both kinds of error, and the point is that we just use the same tricks that we used for the surface code. We still use parity checks to detect the computational errors; it's the same idea, everything has just gone up by one dimension. So we now join these parity-check cubes together to avoid the faces lost due to the loss errors. So instead of
using this cube on its own, let's say we lose this face here, which means that we can no longer infer the value of this cube by itself; we just measure this larger stabilizer operator, the product of the adjacent cubes, instead. So now we have a minimum-weight matching problem on this modified graph, just as in the two-dimensional case. OK,
and we can simulate that in the same way as we did in the two-dimensional case. We perform Monte Carlo simulations, on the order of 100,000 simulations altogether, for a variety of different finite-size lattices, and we use these to infer the value of the threshold for various different parameter values. The error model we use assumes everything is a bit noisy: we assume computational errors occur in the preparation step, in the storage step (it turns out in this model you can get away with storing the qubits for just a single unit of time), in the controlled-phase gates, and in the measurements, and we assume all of these happen with the same rate, which we denote by p. We furthermore assume that the loss errors occur with some rate given by p_loss. Having done all of these simulations, we can infer the correctable region of parameter space, which is this rather large region down here: we can go up to about 0.6 percent probability of computational errors in each of these different processes, and again we can tolerate losses all the way up to 25 percent. We do something very similar here as we did in the previous case. Again there are finite-size effects, related to the percolation problem: you get these very large percolated regions that, for simulations on small lattices, can actually take up the whole lattice, so when we're very close to the percolation threshold we get some funny effects where essentially the scaling breaks down. So we leave out these points down here, which we're a bit dubious about for that reason, and we just fit a quadratic to the values that we obtain here, and yet again we find that this quadratic curve, to within the confidence intervals, passes through this axis just about where we expected it to from the bond percolation argument. So that kind of convinces us that the numerics are sound and that this percolation argument is the right picture.
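The extrapolation step at the end is a plain least-squares fit. An illustrative sketch with explicitly made-up placeholder numbers (the real threshold points come from the Monte Carlo runs), checking where the fitted quadratic crosses the p = 0 axis:

```python
import numpy as np

# Placeholder data for illustration only -- NOT the talk's results.
# Format: (p_loss, computational-error threshold p).
points = [(0.00, 0.0062), (0.05, 0.0050), (0.10, 0.0037), (0.15, 0.0024)]
p_loss = np.array([q for q, _ in points])
p_th = np.array([t for _, t in points])

# Quadratic fit to the reliable points (points too close to the
# percolation threshold are excluded, as described in the talk).
coeffs = np.polyfit(p_loss, p_th, deg=2)

# Where does the fitted curve cross p = 0? The duality argument predicts
# the 3D bond percolation threshold, ~0.2488.
roots = np.roots(coeffs)
real_roots = roots[np.isreal(roots)].real
print(real_roots[(real_roots > 0) & (real_roots < 1)])
```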
So that's the end of my talk. Just to conclude: we've developed methods for overcoming loss errors, both in the surface code and in this fault-tolerant quantum computing scheme, and what we found is that with a small modification, really just a modification of how we do the classical post-processing in Raussendorf's scheme for fault-tolerant measurement-based quantum computing, the scheme is extremely robust to both computational errors and loss errors: there's this very large region of parameter space where we can correct for both types of error. I should just finish up by highlighting some other work that's going on; I'm not involved in all of it, but I think most of the people who are involved are here. David Herrera-Martí, a PhD student at Imperial College, together with Austin Fowler and others, has investigated a photonic implementation of this scheme, which you can read about here, or you can talk to him about it. Simon Benjamin and his student Ying Li in Singapore have looked at the case where the gates in your computer can fail, in a heralded way, with some probability; it turns out that the tricks we use here can also be applied to that situation, and Simon is going to talk about that tomorrow. And then there are a couple of posters related to this work, on the cases where you have non-deterministic gates or where you have fabrication defects in your computer. So that's the end of my talk; thanks for your attention.

[Chair] We have time for a couple of questions. But before that, an announcement: there will be a group conference photograph taken outside this conference center, immediately before lunch, so please don't disappear for food just yet.

[Audience] Thanks for a great talk. Just a quick question: in your model the losses occur just before the measurement, is that correct?

[Barrett] Yeah, OK, that's really a point where I should mention a caveat. In this particular model we have loss either at the preparation step or just before the measurement step. So what we've neglected in these thresholds is, let's say, losses at intermediate points in time, after you've started to entangle the cluster state. We haven't worked out precisely what the threshold for that process is, but it's quite clear that if you have losses at intermediate times, the errors will be localized, so you should again expect a very high threshold for that process as well. It won't be as high as this 25 percent, but it will be much higher than you would expect for non-located errors.

[Chair] Further questions?

[Audience] I have one that's maybe very naive. When you do these gates you have multiple qubits interacting, and you said that you account for errors in the preparation steps. Does your model account for the fact that those errors are then correlated physically across multiple qubits, and will spread if you have a sequential application of the gates?

[Barrett] One thing is that they don't spread very far, because the circuits for creating these cluster states have constant depth. But you're right, you will get correlated errors, and those are accounted for in our noise model; we just don't make any special effort to correct for them. In fact, I think some simulations by Jim Harrington and collaborators show that if you account for that, you can push this threshold up a bit: we get just over 0.6 percent, and you can push it towards 1 percent if you make the matching algorithm a bit more sophisticated.

Metadata

Formal Metadata

Title Quantum computers still work with 25% of their qubits missing
Series Title Second International Conference on Quantum Error Correction (QEC11)
Author Barrett, Sean
License CC Attribution - NonCommercial - NoDerivatives 3.0 Germany:
You may use, copy, distribute, and make the work or its content publicly available in unchanged form, for any legal and non-commercial purpose, provided you credit the author/rights holder in the manner specified by them.
DOI 10.5446/35319
Publisher University of Southern California (USC)
Publication Year 2011
Language English

Content Metadata

Subject Areas Computer Science, Mathematics, Physics
Abstract I will describe recent results from an ongoing project which examines the robustness of Kitaev's surface codes, and related FTQC schemes (due to Raussendorf and coworkers), to loss errors. The key insight is that, in a topologically ordered system, the quantum information is encoded in delocalized degrees of freedom that can be "deformed" to avoid missing physical qubits. This allows one to relate error correction and fault tolerance thresholds to percolation thresholds. Furthermore, stabilizer operators can be deformed in a similar way, which means that surface codes retain their robustness to arbitrary types of error, even when significant numbers of qubits are lost. We present numerical evidence, utilizing these insights, to show that: (1) the surface code can tolerate up to 50 percent loss errors, and (2) Raussendorf's FTQC scheme can tolerate up to 25 percent loss errors. The numerics indicate both schemes retain good performance when loss and computational errors are simultaneously present. Finally we will describe extensions to other error models, in particular the case where logic gates can fail but in a heralded manner.
