Quantum computers still work with 25% of their qubits missing
Formal Metadata

Title: Quantum computers still work with 25% of their qubits missing
Number of Parts: 48
License: CC Attribution - NonCommercial - NoDerivatives 3.0 Germany: You are free to use, copy, distribute and transmit the work or content in unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/35319 (DOI)
Transcript: English (auto-generated)
00:00
So what I'm going to talk about today is correcting for the case where your implementation of your qubits or your quantum computer is susceptible to loss errors, which we might expect to happen at a much higher rate than what we call computational errors.
00:20
So those are the kind of Pauli errors that keep you within the computational subspace. So this is joint work with Tom Stace, who's in the audience somewhere, from UQ, and we have a bunch of publications already, and also there are some related talks going on later, which I'll mention at the end of my talk. Okay, so just to outline what I'll be talking about.
00:51
So these loss errors, or equivalently leakage errors, these are really a serious problem for practical implementations of quantum information processing, and I'll talk a bit more about particular implementations on the next slide. So really, the fact that this is such a serious
01:05
problem really motivates optimised schemes, so we can think about error-correcting schemes that are really optimised to correct for loss errors, and also fault-tolerant schemes. And in particular, what I'll talk about is the surface codes, which we've heard a lot about this week.
01:21
These are extremely robust to loss errors with a very high threshold, and it turns out that the threshold is related to the percolation threshold. And furthermore, fault-tolerant schemes that are derived from these codes, in particular the scheme that Robert talked about on Monday, and we also heard Austin talk about yesterday, the topological schemes.
01:40
These are also extremely robust. We already knew from previous simulation results that they were very robust to computational errors, but what I hope to show today is that they're also extremely robust to loss errors. And in particular, just to tell you ahead of time, what we find is that we can
02:02
tolerate up to 25% loss errors in this model. Okay, so in many implementations, loss errors are really the dominant source of noise. So in particular, if you think about quantum computing with photons, photons tend to
02:20
preserve their polarisation for very long times, but you have all these mechanisms in any implementation that uses photons as the qubit-carrying entity. There are all these different mechanisms by which you can lose your photons. So in particular, you can think about mode mismatch, imperfect single photon sources, and inefficient detectors. All of those things effectively amount to losing your qubits.
02:46
And similarly, in any kind of atomic implementation, so trapped atoms and optical lattices or ion traps, you have this issue of imperfect loading, for instance, in optical lattices, and just storing atoms or single ions is a really difficult thing to do,
03:01
and we shouldn't be too surprised if those single trapped atoms occasionally go missing. And then finally, this is a slightly different error model, but it can be addressed with similar techniques to what I'm going to talk about today, and that's if you have some solid state scheme, such as superconducting qubits or quantum dots or something like that,
03:22
you should expect when you make a large array of qubits that there'll be some fabrication errors, and so some subset of your qubits, some subset of your devices are probably not going to work. And so the kinds of techniques that I'll talk about today can also be applied to that, although it's a slightly different error model.
03:41
Okay, so just to review the Toric code, which we've heard a lot about already. In the Toric code, the qubits live on the edges of this L by L lattice, and one can impose periodic boundary conditions. It's a stabilizer code, which means that the valid code word states form the plus one eigenspace
04:04
of all of these stabilizer operators, and there are two different types of generator for the stabilizer code. We have these star operators, which are just tensor products of four Pauli X operators around a single node of this lattice.
04:21
We also have these Plaquette operators, which are made of Pauli Z operators around a face of this lattice. Okay, and it should be pretty obvious that these two guys are going to commute with each other, slightly less obvious that when these guys kind of overlap that they also commute,
04:42
but if you think about it, you'll see that always whenever you have a clash like this, the stabilizers kind of clash at two sites, and so when you calculate the commutator, you'll get two minus signs from these X and Z operators, so these overlapping stabilizers always share two qubits, and so they always commute.
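The two-minus-signs argument can be checked directly. Here is a small sketch (my own code, not from the talk) that enumerates the star and plaquette supports on a toric lattice and confirms that every overlapping pair shares exactly two qubits:

```python
# Qubits live on the edges of an L x L torus; 'h'/'v' tag horizontal/vertical edges.
L = 4

def star(x, y):
    # Support of the four Pauli X's on the edges meeting at vertex (x, y).
    return {('h', x, y), ('h', (x - 1) % L, y),
            ('v', x, y), ('v', x, (y - 1) % L)}

def plaquette(x, y):
    # Support of the four Pauli Z's on the edges of the face at (x, y).
    return {('h', x, y), ('h', x, (y + 1) % L),
            ('v', x, y), ('v', (x + 1) % L, y)}

overlaps = {len(star(x1, y1) & plaquette(x2, y2))
            for x1 in range(L) for y1 in range(L)
            for x2 in range(L) for y2 in range(L)}
print(overlaps)  # {0, 2}: X- and Z-type generators always share an even number of qubits
```

Since X and Z anticommute qubit by qubit, an even overlap means each star commutes with each plaquette, which is exactly the argument above.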
05:03
And so that tells us that these operators form a valid generating set for a stabilizer code. They're all mutually commuting observables. We also know that one of the stars can be expressed as a product of all of the others, and likewise for the plaquettes, so not all of these operators on the lattice are independent,
05:23
so that tells us that the generator, the smallest set that can generate the whole stabilizer, takes this form, and in fact there are two L squared minus two independent generators, and a little bit of arithmetic tells us that, at least for these boundary conditions,
05:40
we have two encoded qubits. Okay, so now we have the stabilizer, we should ask what the encoded operators look like, so it's instructive to first consider the effect of operators acting on one of the code word states.
06:03
Okay, so if we have this chain of Z operators here, then it's going to anti-commute with the two star operators at the end, so this guy can't be a valid code word operator, but what this does tell us is that if we want to find operators that do commute with a stabilizer, we have to form closed loops, and it turns out that there's two different sorts.
06:23
There's these things that are called homologically trivial loops, so these are things that form loops that can be tiled by the plaquettes, and then there are these non-trivial guys that wind all the way around the lattice, and it's these guys that are, they commute with a stabilizer, but they're not generated by the stabilizer, they're not part of the stabilizer,
06:45
so these guys are the logical operators, and in fact what's important is the homology class of these operators, which is just jargon for the sense in which they wind around the torus, the system is living on a torus, and any logical operator that winds around the torus in the same sense
07:06
has the same effect on encoded states, and that's a really useful fact that I'm going to make use of shortly for explaining how to correct for loss errors, and the really important thing is that there's a lot of redundancy in how we can define this Z operator.
07:21
Any Z operator that goes from the bottom to the top of this lattice and starts and ends at the same place in the case of the toric code is a valid operator, and so there's a whole family of these guys that all encode the information, or all measure the encoded information in the same way, so we have a lot of freedom there in how we read out the state of this code.
07:43
Okay, so this is just a review of the work that was done in Preskill's group almost ten years ago now to determine the error correcting threshold of this code. This is a stabilizer code, so we have just a conventional correction procedure
08:01
where we just go through and measure all the generators of this code. These generators reveal the end points of the error chains, and for an error chain E, we need to find a correction chain E prime such that the sum of these two is trivial, so that it forms a trivial loop, so this guy is an element of the stabilizer, and so the net effect of these two guys
08:22
is to return the code to a valid state, and this is done with the minimum weight matching algorithm, which we've heard quite a lot about already this week, and what Wang, Harrington, and Preskill found in a paper in 2002 is that the threshold for this code is 10.3%,
08:42
so that's a numerical result, but it corresponds to a phase transition in a classical statistical mechanics problem, and yeah, this is the value that we get. Okay, so now I want to consider the effect of loss errors. So by loss errors, the important,
09:01
the defining characteristic of loss errors that I'm going to make use of is that we know where they are, so if you have a loss or a leakage error, in principle, there's a measurement that you can do that will tell you whether the qubit is there or not, which doesn't actually disturb the logical state of the qubit, so an equivalent error model you could think of depolarizing noise, where you have an extra piece of information
09:22
which is that you know where the depolarizing noise has occurred, and if we have either of those error models, then that actually helps us enormously in decoding this code. So as I mentioned before, we can take this, one of these encoded logical operators here,
09:40
and there's a whole family of different operators that encode the same information. In particular, what I can do is I can take the original encoded Z, modify it by a plaquette which has the effect of deforming it by one square, and that will give me a new operator. Keep doing this, I can take products with as many of the plaquettes as I like and get a deformed path.
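This deformation is easy to state concretely: a chain is a set of edges, multiplying by a plaquette is the symmetric difference with that plaquette's four edges, and the deformed chain still commutes with every star. A small sketch, with my own edge-labelling conventions:

```python
# Edges of an L x L torus; a logical Z is a chain of edges winding across the lattice.
L = 4

def star(x, y):
    return {('h', x, y), ('h', (x - 1) % L, y),
            ('v', x, y), ('v', x, (y - 1) % L)}

def plaquette(x, y):
    return {('h', x, y), ('h', x, (y + 1) % L),
            ('v', x, y), ('v', (x + 1) % L, y)}

def commutes_with_all_stars(chain):
    # Commuting with every star <=> sharing an even number of qubits with each.
    return all(len(chain & star(x, y)) % 2 == 0
               for x in range(L) for y in range(L))

logical_z = {('v', 0, y) for y in range(L)}   # straight column winding around the torus
assert commutes_with_all_stars(logical_z)

deformed = logical_z ^ plaquette(0, 2)        # multiply by one plaquette
assert commutes_with_all_stars(deformed)      # still a valid logical operator
print(sorted(deformed))                       # no longer uses edge ('v', 0, 2)
```

The deformed chain avoids the edge ('v', 0, 2), so this is exactly the move that routes the logical operator around a lost qubit there.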
10:01
So what I've tried to show here is we've lost a bunch of qubits, which is a bunch of deleted edges on the original lattice, and what I can do is I can find a path that goes all the way across the lattice like this, okay? So I can decode this code in the presence of these loss errors,
10:25
provided I can find such a path, and this is a very well-studied problem in probability theory, it's just percolation, and the probability of being able to find this, at least in the limit of large lattices, is well understood. So this is going to correspond to the bond percolation threshold
10:44
for the square lattice in two dimensions, and this is a well-known result. The relevant number is a square lattice bond percolation threshold, which is 0.5. So what this tells us is that the threshold for loss errors for the toric code is 50%, which is much higher than for the bit flip or the phase flip errors, okay?
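To make the percolation picture concrete, here is a toy Monte Carlo (my own sketch, with open boundaries and modest sizes, not the actual decoder): delete each bond of a square lattice with probability p_loss and ask whether a left-to-right path of surviving bonds remains, which is the condition for routing a logical operator across the lattice.

```python
import random

def spans(L, p_loss, rng):
    # Union-find over the L*L grid vertices, plus LEFT/RIGHT virtual terminals.
    parent = list(range(L * L + 2))
    LEFT, RIGHT = L * L, L * L + 1
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    def union(a, b):
        parent[find(a)] = find(b)
    for y in range(L):
        union(y * L, LEFT)            # first column touches the left boundary
        union(y * L + L - 1, RIGHT)   # last column touches the right boundary
        for x in range(L):
            v = y * L + x
            if x + 1 < L and rng.random() > p_loss:   # horizontal bond survives
                union(v, v + 1)
            if y + 1 < L and rng.random() > p_loss:   # vertical bond survives
                union(v, v + L)
    return find(LEFT) == find(RIGHT)

rng = random.Random(0)
for p_loss in (0.25, 0.75):
    rate = sum(spans(30, p_loss, rng) for _ in range(50)) / 50
    print(p_loss, rate)   # far below 50% loss: spans almost always; far above: almost never
```

On a finite lattice the transition is smeared out, but well below and well above the 50% bond percolation threshold the spanning probability is essentially 1 and 0 respectively.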
11:05
So that's what we would do if there were no other errors, but of course that's not a particularly realistic assumption, we want to know if this code still works when we have loss errors and bit flip or phase flip errors at the same time, okay?
11:21
So when qubits are lost, it turns out that we can no longer measure these individual stars and plaquettes unambiguously, so if we have, if this qubit here is lost, then it means that these two plaquettes can no longer be measured in an unambiguous way. So the solution to this is that we just take products of these guys,
11:41
so rather than measuring these two original generators, we just measure their product, and that is guaranteed also to be a valid stabilizer operator for this code, okay? So now we have effectively a different lattice with these larger plaquettes, super plaquettes is what we call them,
12:01
and these guys can now be measured unambiguously. And then we just have a modified version of the minimum weight perfect matching problem, so the way we implement this is that we just take, we just construct this graph which represents the original stabilizer elements,
12:20
and then we want to merge nodes on this graph, and this gives us a reduced graph, so every time we take a product of stabilizers, what we do is we, so if we have lost the qubit that was originally corresponding to the edge between nodes A and B, we remove that edge, and we merge the corresponding nodes into a single node,
12:42
and this new node inherits all of the other edges that the original A and B nodes had, and then we hand this reduced graph to the minimum weight perfect matching algorithm, and then we just do a whole bunch of Monte Carlo simulations of this process, and we can ask what happens when there are simultaneously loss errors
13:04
and computational basis errors, and we get this picture, okay? So what's happening here is this axis here is the probability of a computational error, so think of a bit flip error or a phase flip error, and this axis here is the probability of loss errors,
13:23
and each of these red points here is actually determined through numerical simulation, so each of these is essentially a different threshold for different values of the loss rate, okay? And what we find is if we fit, there's some kind of finite size effects down here
13:41
that I don't want to get into just yet, but if we just take, say, these points up here, and then this blue line is just a quadratic fit to those points, and what we find is that blue curve hits this axis exactly where the percolation argument predicts at 50%, so this is kind of good evidence that the percolation argument is actually giving us the threshold for loss errors.
14:06
So we have this very large region here of this parameter space where it turns out that we can use the toric code to correct for both loss and computational errors.
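The superplaquette construction described a moment ago — merge the two stabilizer nodes that share a lost qubit, with the merged node inheriting the surviving edges of both — can be sketched as a pass over the syndrome graph. The data layout and names here are my own, not from the talk:

```python
def merge_lost(adjacency, lost):
    # adjacency: dict node -> set of (neighbour, qubit); lost: set of lost qubit ids.
    rep = {v: v for v in adjacency}          # union-find representative per node
    def find(v):
        while rep[v] != v:
            rep[v] = rep[rep[v]]
            v = rep[v]
        return v
    for v, nbrs in adjacency.items():
        for u, q in nbrs:
            if q in lost:
                rep[find(v)] = find(u)       # the two plaquettes become one superplaquette
    reduced = {}
    for v, nbrs in adjacency.items():
        for u, q in nbrs:
            rv, ru = find(v), find(u)
            if q not in lost and rv != ru:   # interior edges of a superplaquette drop out
                reduced.setdefault(rv, set()).add((ru, q))
    return reduced

# Plaquettes P1 and P2 share the lost qubit q12, so they merge into one node
# that inherits the surviving edges q13 and q23 to P3.
adj = {'P1': {('P2', 'q12'), ('P3', 'q13')},
       'P2': {('P1', 'q12'), ('P3', 'q23')},
       'P3': {('P1', 'q13'), ('P2', 'q23')}}
print(merge_lost(adj, {'q12'}))
```

The reduced graph is then handed to the minimum-weight matching algorithm exactly as in the lossless case.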
14:31
Okay, so that's all very well and good. That tells us how the surface code behaves.
14:43
It tells us about the performance of the surface code when we have loss errors, but there's some fairly unreasonable assumptions there that we could always measure these parity check operators, these stabilizers rather,
15:00
the stars and plaquettes with perfect fidelity. So we know that that's not good enough. If we're going to build a quantum computer, we have to assume that everything is noisy. We have to assume that as well as storage errors, we have to assume that all the gates that we use to encode the error correcting code and the readout and so on, we have to assume that everything has some noise.
15:24
Okay, so Robert has already described on Monday how to do that with the surface code, and it's this topological fault-tolerant quantum computation scheme. And so there's a sequence of papers, I suppose going back to around 2004,
15:44
which introduced these ideas, and it's inspired by topological quantum computing, but in fact everything is described in terms of measurement-based or the one-way quantum computer. And this scheme has got a number of nice properties.
16:01
So one is that it's translationally invariant, and it only involves nearest neighbor gates. And something I won't mention today, but Robert mentioned on Monday, that this all works in two dimensions. So everything I'll describe today will be in terms of three-dimensional cluster states. In fact, it's fairly straightforward to kind of squash everything down into two dimensions
16:21
and just think of this third dimension as a kind of simulated time axis. And then the really nice thing about this is that the threshold numerically has been shown to be at least 0.75%. So this is a very high threshold, and there are probably various optimizations.
16:40
Well, Austin has already talked about some of them, which can push this up towards 1%. Okay, so just to review this scheme again, the measurement-based quantum computing scheme, so we actually start with one of these cluster states on a 3D lattice. And this is a unit cell of that cluster state.
17:02
So we have qubits at the center of every edge of this cell and the center of every face. We've got two different types of qubits, the red ones and the blue ones. And then these black lines here represent controlled-Z gates that act between these qubits.
17:21
So this is just a standard cluster state. So how we prepare this is we prepare each qubit initially in an X eigenstate, in the plus one eigenstate of the Pauli X operator, and then we apply these controlled-Z gates everywhere across this lattice.
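This preparation is easy to check numerically on a toy graph instead of the full 3D lattice. The sketch below (my own, on a 4-qubit ring) builds the |+> product state, applies a controlled-Z along every edge, and verifies that the result is a +1 eigenstate of each cluster-state stabilizer K_a = X_a times Z on every neighbour of a:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def kron_all(ops):
    out = np.array([[1.]])
    for op in ops:
        out = np.kron(out, op)
    return out

n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]      # a 4-qubit ring graph
plus = np.array([[1.], [1.]]) / np.sqrt(2)
psi = kron_all([plus] * n)                    # |+>^n
for a, b in edges:
    # Diagonal CZ: phase -1 on basis states where qubits a and b are both 1.
    bit = lambda k, q: (k >> (n - 1 - q)) & 1
    cz = np.diag([(-1.) ** (bit(k, a) & bit(k, b)) for k in range(2 ** n)])
    psi = cz @ psi

checks = []
for i in range(n):
    ops = [I2] * n
    ops[i] = X                                # X on qubit i ...
    for a, b in edges:
        if i == a:
            ops[b] = Z @ ops[b]               # ... and Z on each neighbour
        elif i == b:
            ops[a] = Z @ ops[a]
    checks.append(np.allclose(kron_all(ops) @ psi, psi))
print(all(checks))  # True: the prepared state satisfies every cluster stabilizer
```

The same eigenvalue equations, one per face qubit, are what define the 3D cluster state used in the rest of the talk.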
17:41
And then these qubits are divided into three different types, and they all have a different role to play in this scheme. So we have these defect qubits, which are these guys in these shaded regions. These are all measured in the Z basis. And then we also have these V qubits, the vacuum qubits, which is everything else, everything that's kind of colored in white here.
18:01
These guys are all measured in the X basis. And then finally we have these small S qubits, these red guys, which are kind of sprinkled about in amongst this lattice. And these are measured either in the Y basis or the X plus Y basis. And so what's the point in all of that? Well, these two guys here topologically implement the Clifford group,
18:26
or at least a large subset of the Clifford group, which isn't quite universal. And these remaining measurements are required to make everything universal. And we do that by a magic state purification.
18:43
So I'm not going to explain the whole scheme in much detail. It's already been covered in a couple of different talks this week. But just in slightly more detail, I'll explain how you do nothing in this scheme, so how you do the identity gate.
19:02
So this is just like a region of that big cluster state that has two defect regions. So these two cylinders here, these are the regions that we're going to measure in the Z basis. And then everything else is going to be measured in the X basis. So everything else is vacuum. So that means we just do single qubit measurements on those guys in the X basis.
19:25
And in this scheme, the logical qubits are encoded in the surface code. So you can think of each kind of space-like plane of this lattice, each kind of slice in this direction has the logical qubits encoded in the surface code.
19:42
So what that means is that we take a surface code with stars and plaquettes everywhere, and then just on a couple of sites, or actually throughout these whole defect regions, we don't enforce the stabilizer operators. And what that actually gives us is a pair of encoded qubits with logical operators
20:04
that either kind of orbit the hole or thread between two holes. So if you look at this input plane here, what you can see here is an encoded X operator, which is this guy, which just does a lap of that defect. And then we have the encoded Z operator, which is this guy,
20:22
which just threads between the two. And what we want to show is that this sequence of measurements actually maps this, or teleports, in effect, this input plane onto the output plane. And to understand how this works in a bit more detail, I think the most intuitive way to see this is to think of these stabilizer operators
20:46
that define the cluster state. So we have a bunch of eigenvalue equations that define this state. And each of these KI operators is just a single cluster state operator located on the face of this cubic lattice.
21:03
So it has an X in the middle, and it has Zs all around the outside. And products of this guy have a really intuitive form. So products of these face operators just give us what are called correlation surfaces. So these correlation surfaces look like this.
21:20
In the interior, in the middle of each face, we just have Xs. And then around the perimeter of the whole surface, we have Zs. And the Zs in the middle here, these have all canceled out because Z squared just gives us back the identity. So it's these correlation surfaces that you can really use to understand how this gate works.
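The cancellation just described — interior Z's pair up and vanish, leaving X's on the faces and Z's only on the joint perimeter — is easy to see with a little Pauli bookkeeping. The qubit labels below are my own; phases can be ignored because the X's and Z's here live on disjoint sets of qubits:

```python
def pauli_product(*ops):
    # ops: dicts qubit -> 'X' or 'Z'. Equal Paulis on the same qubit cancel (P*P = I).
    out = {}
    for op in ops:
        for q, p in op.items():
            if out.get(q) == p:
                del out[q]       # Z*Z (or X*X) gives back the identity
            else:
                out[q] = p
    return out

# Two face operators K_f: X on the face qubit, Z on the four surrounding edge
# qubits, sharing one edge qubit 'e_shared'.
K_f1 = {'f1': 'X', 'e1': 'Z', 'e2': 'Z', 'e3': 'Z', 'e_shared': 'Z'}
K_f2 = {'f2': 'X', 'e4': 'Z', 'e5': 'Z', 'e6': 'Z', 'e_shared': 'Z'}
surface = pauli_product(K_f1, K_f2)
print(surface)  # X on f1 and f2; Z on the six perimeter edges; 'e_shared' has cancelled
```

Tiling more faces this way grows the correlation surface: X's everywhere in the interior, Z's only around the boundary, and for a closed surface no Z's at all.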
21:42
And in particular, if you just think about what the measurements, the X measurements on those stabilizers, on those correlation surface operators give, then you can quite easily show that after you've done all of those X measurements on everything except the input plane and the output plane, that you project the remaining qubits, the input plane and the output plane qubits,
22:04
into this maximally entangled state. And then it's straightforward to show that just by measuring the input slice in the X basis, we map these input operators to these output encoded operators.
22:24
So the input state has then been teleported after doing all that to the output plane. So now what happens if we include errors into this picture? Okay, so again, just consider these correlation surfaces. So if we think about these products of these KI operators around individual cubes of this lattice,
22:45
they also take on this nice form. Now, a closed surface like this has got no boundary. So now Zs, all of the Zs have canceled out. So now we just have this six-sided operator, this kind of parity check operator,
23:01
this parity check cube. And this plays a very similar role as the Plaquettes did in the surface code, in the two-dimensional version. So what this tells us is that cubes with product minus one reveal the locations of endpoints of error chains. So we have this picture where we go through and infer the value of all of these cube operators
23:20
only by doing single-qubit measurements. So we don't need to do a six-body measurement here. We can infer all of this just by doing single-qubit measurements. And that will give us a syndrome that looks like this. So we'll have a bunch of cubes that have the wrong sign. And then again, we just send this off to the minimum weight matching problem.
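The matching step can be illustrated with a brute-force stand-in (my own sketch; real decoders use Edmonds' blossom algorithm, and this recursion only scales to a handful of defects): pair up the cubes with the wrong sign so that the total length of the correction chains is minimal.

```python
def min_weight_pairing(defects, dist):
    # Assumes an even number of distinct defect coordinates.
    if not defects:
        return 0, []
    a = defects[0]
    best = (float('inf'), [])
    for b in defects[1:]:
        rest = [d for d in defects if d not in (a, b)]
        cost, pairs = min_weight_pairing(rest, dist)
        cost += dist(a, b)
        if cost < best[0]:
            best = (cost, [(a, b)] + pairs)
    return best

manhattan = lambda p, q: sum(abs(x - y) for x, y in zip(p, q))
defects = [(0, 0, 0), (1, 0, 0), (5, 5, 5), (5, 5, 6)]   # four -1 syndrome cubes
print(min_weight_pairing(defects, manhattan))
# pairs the two nearby defects with each other, total chain length 2
```

Each returned pair defines an error chain to correct between those two syndrome cubes.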
23:45
So this is an error chain which would give us two minus cubes like this. And we just need to find corrections like this. So these guys form trivial loops. And trivial loops in this sense mean that the loops must not thread between these two guys.
24:03
Well, yeah, they mustn't thread between these two defects, and they mustn't wind around. So that's the condition for successful error correction. So how do we correct for loss errors in this scheme? So the first idea is analogous to how we dealt with loss errors in the Toric code.
24:30
And you remember earlier in the talk, what I said was all you do is you just deform the logical operators by multiplying them by plaquettes. So in this scheme, what we do instead is we actually want to deform the correlation surfaces, right?
24:45
So we need to deform the correlation surfaces because if one of the qubits on the correlation surface is lost, then we can't measure the parity of the whole surface. So we deform the whole thing, and we do that just by multiplying by these closed cubes, OK?
25:03
So we can do that, and we can deform these surfaces so that they avoid the lost qubits. So if we assume that there's a couple of qubits lost here, then we can just always find, well, not always, but if it's a correctable error, then we can find cubes such that when we multiply this original surface by the cubes, we have a new surface that is topologically equivalent to the original surface, but now it avoids the lost qubits, OK?
25:28
And as long as we don't lose too many qubits, we're able to reroute all of these correlation surfaces, and the gate still works. We can still infer the parity that we need to do this teleportation, OK?
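To make the deformation idea concrete, here is a small sketch (my own toy representation, not the talk's code) that treats a correlation surface as a set of faces; multiplying by a cube stabilizer is then a symmetric difference, because any face shared by the surface and the cube appears twice and cancels:

```python
# Hypothetical sketch: deform a correlation surface around a lost qubit
# by multiplying in a closed parity-check cube. Faces that appear twice
# cancel (the corresponding Pauli operators square to identity).

def cube_faces(x, y, z):
    """The six face labels bounding the unit cube at (x, y, z)."""
    return {('x', x, y, z), ('x', x + 1, y, z),
            ('y', x, y, z), ('y', x, y + 1, z),
            ('z', x, y, z), ('z', x, y, z + 1)}

def deform(surface, cube):
    """Multiply the surface operator by a cube stabilizer:
    the symmetric difference of the two face sets."""
    return surface ^ cube_faces(*cube)
```

If the surface contains a lost face, multiplying by the cube under that face removes it and routes the surface over the cube's other five faces instead, giving a topologically equivalent surface that avoids the lost qubit.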
25:40
So it turns out that rerouting these correlation surfaces is actually equivalent to the problem of bond percolation on this 3D lattice, OK? So its failure threshold coincides with the percolation threshold for bond percolation in three dimensions. So this is what we kind of believe before we've done any numerics, OK? So we believe that there's going to be some region here when we have no loss errors that's going to be correctable,
26:05
and we also have this percolation argument that tells us that anything up to 0.248 probability of lost qubits should also be correctable, OK? And then we don't know just yet what happens in the middle, but that's what I'm going to tell you next, OK?
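The bond percolation picture can be sketched with a small Monte Carlo experiment (illustrative only; lattice size and trial counts here are arbitrary). Each bond of a simple cubic lattice is independently open with probability p, and we ask whether an open cluster spans the lattice, using a union-find structure:

```python
# Rough sketch of 3D bond percolation on an L x L x L simple cubic
# lattice. The known bond percolation threshold is p_c ~ 0.2488,
# which is where the 0.248 loss figure in the talk comes from.
import random

def find(parent, a):
    while parent[a] != a:
        parent[a] = parent[parent[a]]  # path halving
        a = parent[a]
    return a

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[ra] = rb

def spans(L, p, rng):
    """Does an open-bond cluster connect the z=0 plane to the z=L-1 plane?"""
    idx = lambda x, y, z: (x * L + y) * L + z
    parent = list(range(L ** 3))
    for x in range(L):
        for y in range(L):
            for z in range(L):
                for dx, dy, dz in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
                    nx, ny, nz = x + dx, y + dy, z + dz
                    if nx < L and ny < L and nz < L and rng.random() < p:
                        union(parent, idx(x, y, z), idx(nx, ny, nz))
    bottom = {find(parent, idx(x, y, 0)) for x in range(L) for y in range(L)}
    top = {find(parent, idx(x, y, L - 1)) for x in range(L) for y in range(L)}
    return bool(bottom & top)
```

Averaging `spans` over many trials at, say, p = 0.2 versus p = 0.3 shows the spanning probability jumping from near zero to near one as the lattice grows, with the crossover sharpening around p_c.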
26:24
So now we want to know what happens when we have both sorts of error, and the point is we can just use the same tricks that we used for the surface code. We can still use these parity checks to detect these computational errors, and it's the same idea, it's just everything has gone up in one dimension. So we now join these parity check cubes together to avoid the lost faces due to the loss errors.
26:46
So let's say we lose this qubit here, which means that we can no longer infer the value of this cube on its own. So instead, we join it with its neighbouring cube and just measure this larger stabilizer operator. So now we have a minimum weight matching on this modified graph, OK?
27:05
And we can simulate that in the same way as we did in the two-dimensional case. So we just perform Monte Carlo simulations, on the order of 100,000 to 200,000 simulations altogether, for a variety of different finite-sized lattices,
27:23
and we can use this to infer the value of the threshold for various different parameter values, OK? And the error model that we use is that we assume that everything is a bit noisy. So we assume that computational errors occur in the preparation step and in the storage step; it turns out in this model you only need to store each qubit for a single unit of time.
27:49
We also assume that there are errors in the controlled-Z gates and errors in the measurements, and we assume that all of this happens with the same rate, which we denote by p_comp.
28:02
And we furthermore assume that the loss errors all occur with some rate given by p_loss, OK? And then having done all of these simulations, we can infer that the correctable region of parameter space is actually this rather large region down here.
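One Monte Carlo sample under an error model of this kind might be drawn as follows. This is a hedged sketch: the count of independent error locations per qubit (`n_gates`) and the function name are my own assumptions, not details given in the talk:

```python
# Hypothetical sketch of drawing one error sample: each qubit is lost
# with probability p_loss; a surviving qubit picks up an independent
# fault with probability p_comp at each of n_gates locations
# (preparation, storage, CZ gates, measurement).
import random

def sample_qubit_errors(n_qubits, p_comp, p_loss, n_gates=4, rng=None):
    """Return (lost, flipped) index sets for one Monte Carlo sample."""
    rng = rng or random.Random()
    lost, flipped = set(), set()
    for q in range(n_qubits):
        if rng.random() < p_loss:
            lost.add(q)
            continue
        # an odd number of independent faults flips the measured outcome
        faults = sum(rng.random() < p_comp for _ in range(n_gates))
        if faults % 2 == 1:
            flipped.add(q)
    return lost, flipped
```

The `lost` set feeds the surface-rerouting (percolation) side of the decoder, while the `flipped` set determines which parity-check cubes show the wrong sign.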
28:21
So we can go up to about 0.6% probability of computational errors due to all of these different error processes. And again, we can tolerate losses all the way up to 25%. We do something very similar here as we did in the previous case.
28:42
Again, there are these finite size effects. In the percolation problem you get these very large percolated regions, and for these simulations on these small lattices they can take up the whole lattice. So what we find is that when we're very close to the percolation threshold, we get some funny effects where essentially the scaling breaks down, OK?
29:05
So we leave out these points down here, which we're a bit dubious about from our fit, and we just fit the quadratic to the values that we obtain up here. And yet again, we find that this quadratic curve to within our confidence interval actually passes
29:22
through this axis round about where we expect it to from these bond percolation arguments. So that kind of convinces us that these numerics are sound and also that this percolation argument is the right picture. OK, so that's almost the end of my talk.
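The fitting step can be sketched numerically. The data points below are synthetic, generated from an assumed quadratic with its axis crossing at the bond percolation threshold; they are not the talk's measured values, and serve only to show the fit-then-extrapolate procedure:

```python
# Illustrative sketch: least-squares fit of a quadratic
# y = c0 + c1*x + c2*x^2 to (p_loss, p_comp-threshold) points,
# then find where the curve crosses the p_loss axis.

def fit_quadratic(pts):
    """Fit y = c0 + c1*x + c2*x^2 via the 3x3 normal equations."""
    sx = [sum(x ** k for x, _ in pts) for k in range(5)]
    b = [sum(y * x ** k for x, y in pts) for k in range(3)]
    A = [[sx[i + j] for j in range(3)] for i in range(3)]
    # Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):  # back substitution
        coeffs[r] = (b[r] - sum(A[r][k] * coeffs[k]
                                for k in range(r + 1, 3))) / A[r][r]
    return coeffs  # (c0, c1, c2)

def positive_root(c0, c1, c2):
    """The positive axis crossing of c0 + c1*x + c2*x^2 = 0."""
    disc = (c1 ** 2 - 4 * c2 * c0) ** 0.5
    return max((-c1 + disc) / (2 * c2), (-c1 - disc) / (2 * c2))
```

Feeding in threshold estimates at several loss rates and checking that the extrapolated root lands near p_c ~ 0.249 is exactly the consistency check described in the talk.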
29:45
So just to conclude, we've developed methods for overcoming loss errors both in the surface code and in this fault tolerant quantum computing scheme.
30:00
And what we found is that a small modification, really just a modification of how we do the classical post-processing in Raussendorf's scheme for fault-tolerant measurement-based quantum computing, suffices. We find that this is extremely robust to both computational errors and loss errors.
30:20
We find that there's this very large region of parameter space where we can correct for both types of errors. OK, so I should just finish up by just highlighting some other work that's going on. I'm not involved in all of this, but I think most of the people who are involved in this are here. So David Herrera-Marti, who is a Ph.D. student also at Imperial College, together
30:48
with Austin Fowler and various other people, they've actually studied a photonic implementation of this scheme. And you can read about that here or you can talk to David about it.
31:01
Simon Benjamin and his student Lee Ying in Singapore have also looked at the case where the gates in your computer can fail with some probability, but it's kind of heralded errors. It turns out that the tricks that we use here can also be applied to that situation. So Simon's going to talk about that tomorrow.
31:20
And then there are a couple of posters on related ideas where you have non-deterministic gates or where you have kind of fixed defects in your computer, and both of those posters are upstairs. OK, so that's the end of my talk. Thanks for your attention.
31:44
We have time for a couple of questions, but before that there will be a group or conference photograph that's taken outside this conference center and will happen immediately before lunch, so please don't disappear. I presume the food would keep you anyway.
32:08
Yeah, thanks for a great talk. Just a question. So at the moment you have loss just before measurement only, correct? Yeah, OK, that's a really good point, which I should have mentioned. So this particular model, we have loss either at the preparation step or just before the measurement step.
32:25
So actually what we neglect here for these thresholds is let's say there were losses at intermediate times, right? So after you've started doing your C phase gates, we actually neglect that effect. Other questions? Well, another caveat to that.
32:42
So we haven't looked at what the threshold for that process was, but it's quite clear that if you have losses at intermediate times, the errors will be localized, and so you should actually get a very high threshold for that process as well. It won't be as high as 25%, but it will be much higher than you would expect for the unlocated errors.
33:05
Further questions? So I have one that's maybe very naive. When you do the C phase gate, you have multiple qubits that you have interacting, and you said that you accounted for errors in that preparation step. How does your model account for the fact that those errors are then correlated physically
33:21
across multiple qubits and will spread if you have sequential application of these C-phase gates? So one thing is that they don't spread very far because the circuits for creating these cluster states are constant-depth. So they won't spread very far, but you're right, you will get correlated errors. So those are accounted for in our noise model, but we don't make any special effort to actually correct for those.
33:45
And in fact, I think some simulations that Jim Harrington and also Austin Fowler have done show that if you account for that, you can push this threshold up ever so slightly. So you can push it up from, I mean, we get about 0.6% here, or just over 0.6%, but you
34:01
can push that up towards 1% if you make your matching algorithm a bit more sophisticated to take account of those effects. Any further questions? Let's thank our speaker one more time.