Fault-tolerant quantum computation with asymmetric Bacon-Shor codes
Formal Metadata
Title: Fault-tolerant quantum computation with asymmetric Bacon-Shor codes
Number of Parts: 48
License: CC Attribution - NonCommercial - NoDerivatives 3.0 Germany: You are free to use, copy, distribute and transmit the work or content in unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/35290 (DOI)
Transcript: English (auto-generated)
00:00
[Dephasing] dominates over other sources of noise, like bit flips or whatever else. And we expect this to be true for many useful systems, for example, superconducting flux qubits. And so we want to be able to design error-correcting codes and then fault-tolerant gadgets that can take advantage of this dephasing bias in the noise.
00:21
So we'll use Bacon-Shor codes, which we've heard a lot about in this conference. So I'll just briefly review their properties. So they're a family of quantum error-correcting subsystem codes. They encode a single qubit into an m by n block of physical qubits. And the asymmetry in the title refers to the fact that we'll consider codes which are, say, wider than they
00:40
are tall. And what that corresponds to is that we have independently tunable levels of z and x error correction. For example, this 3 by 5 block, which I've depicted, can correct up to two z errors and a single x error in addition. So we have stabilizers of this form
01:00
where the stabilizers are a product of x's along two adjacent columns or z's along two adjacent rows. And logical operators are a product of x's along a single column or z's along a single row. We also have this gauge structure
01:21
of gauge qubits, which we don't use to encode information. And we can think of the gauge qubits as being generated by a translation of these two patterns, either z's in a vertical alignment or x's in a horizontal alignment. And what this essentially corresponds to is the fact that only the parity of, say,
01:42
the z information in a single column matters, because any other information we can remove by applying the gauge operators. And another nice property which we've heard about is that using the gauge operators, we can actually build up a measurement of, say, a stabilizer.
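The stabilizer, gauge, and logical structure just described can be enumerated directly. This is a sketch in one self-consistent convention (m rows by n columns, X stabilizers on adjacent column pairs, Z stabilizers on adjacent row pairs, logical X along one column, logical Z along one row); the speaker's figure may orient the axes differently.

```python
def bacon_shor_ops(m, n):
    """Operators for an m-row by n-column Bacon-Shor block, each represented
    as a dict {(row, col): 'X' or 'Z'}."""
    x_stabs = [{(i, j): 'X' for i in range(m) for j in (c, c + 1)}
               for c in range(n - 1)]            # X's on two adjacent columns
    z_stabs = [{(i, j): 'Z' for j in range(n) for i in (r, r + 1)}
               for r in range(m - 1)]            # Z's on two adjacent rows
    xx_gauge = [{(i, j): 'X', (i, j + 1): 'X'}   # horizontal XX gauge pairs
                for i in range(m) for j in range(n - 1)]
    zz_gauge = [{(i, j): 'Z', (i + 1, j): 'Z'}   # vertical ZZ gauge pairs
                for i in range(m - 1) for j in range(n)]
    logical_x = {(i, 0): 'X' for i in range(m)}  # X's along a single column
    logical_z = {(0, j): 'Z' for j in range(n)}  # Z's along a single row
    return x_stabs, z_stabs, xx_gauge, zz_gauge, logical_x, logical_z

def commute(a, b):
    """Two Pauli products commute iff they differ on an even number of shared sites."""
    return sum(1 for q in a if q in b and a[q] != b[q]) % 2 == 0
```

For the 3 by 5 block, a quick check confirms that every stabilizer commutes with everything, the gauge operators commute with these logical representatives, and the two logicals anticommute.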
02:03
And so we can just use two-body interactions to build these up. So to do error correction, imagine we have some pattern of z errors that has occurred. Then we can measure using the xx gauge operators.
02:24
We can measure the parity information in each adjacent pair of qubits. And we can build this up into a stabilizer. And then we can continue doing this for each pair of columns to get the syndrome information. And then using the syndrome information,
02:41
we can apply a correction. Say, in this case, we would apply it based on the syndrome to put a z in the first two columns. It doesn't matter where. And then after we apply this correction, it may look like we still have a lot of errors. But actually, all these remaining operators are just gauge degrees of freedom. So there's no information encoded in them.
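The correction logic just walked through can be captured in a toy decoder. Per the talk, only the per-column parity of the z errors matters (the rest is pure gauge), and majority-vote decoding fails exactly when more than half of the columns have odd parity; this is a minimal illustration, not the paper's decoder.

```python
def z_error_is_correctable(z_errors, n):
    """z_errors: set of (row, col) sites hit by a Z error on a block with n
    columns. Only each column's Z parity matters (the rest is pure gauge);
    majority-vote decoding fails iff more than half the columns have odd parity."""
    col_parity = [sum(1 for (i, j) in z_errors if j == c) % 2 for c in range(n)]
    return sum(col_parity) <= n // 2

# On a 3-by-5 block: two Z errors in distinct columns are correctable,
assert z_error_is_correctable({(0, 1), (2, 4)}, 5)
# two Z errors in the same column cancel into pure gauge,
assert z_error_is_correctable({(0, 1), (2, 1)}, 5)
# but odd parity in three of the five columns is a logical failure.
assert not z_error_is_correctable({(0, 0), (1, 2), (2, 4)}, 5)
```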
03:01
So this is equivalent to no error. In general, the error correction will fail if more than half of the columns of the code block have an odd number of z errors, or more than half of the rows have an odd number of x errors. And because the block has a different length and width,
03:22
we have different protection. So we've seen how these codes offer different power against z errors versus x errors. And now I'm going to show you how we can also design fault-tolerant gadgets that likewise give more protection to z errors
03:44
than x errors. And some of the key ideas in this construction are that we use a fundamental gate set which is compatible with this idea of biased noise. We use a teleported CNOT gate as our main encoded gate.
04:01
And we apply magic state distillation to achieve a fully universal set of gates. So what I mean by a bias-compatible gate set is one that preserves the property that z errors are more common than x errors. So if we have a gate like the Hadamard gate, which transforms a z error into an x error,
04:20
then we'll automatically lose that bias: even if z errors started out as more common, every time we applied it we would start to get more and more x errors. And we also want to try to avoid cascading errors in gates, as in the CNOT, where a single z error can propagate to two z errors on the output, or similarly, a single x error can propagate to two x errors.
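These propagation rules are just Pauli conjugation relations, which can be tabulated in a few lines (phases dropped, which doesn't affect which qubits an error touches):

```python
def mul(p, q):
    """Multiply single-qubit Paulis from {I, X, Z, Y}, dropping phases."""
    if p == 'I':
        return q
    if q == 'I':
        return p
    return 'I' if p == q else 'Y'

def conj_h(p):
    """Propagate a Pauli through a Hadamard: X and Z swap (up to phase)."""
    return {'I': 'I', 'X': 'Z', 'Z': 'X', 'Y': 'Y'}[p]

def conj_cnot(pc, pt):
    """Propagate (control, target) Paulis through a CNOT: an X on the control
    copies onto the target, a Z on the target copies back onto the control."""
    out_c, out_t = pc, pt
    if pc in ('X', 'Y'):
        out_t = mul(out_t, 'X')
    if pt in ('Z', 'Y'):
        out_c = mul(out_c, 'Z')
    return out_c, out_t

def conj_cz(p1, p2):
    """Propagate through a CZ: an X on either side picks up a Z on the other,
    while Z errors commute straight through."""
    out1, out2 = p1, p2
    if p1 in ('X', 'Y'):
        out2 = mul(out2, 'Z')
    if p2 in ('X', 'Y'):
        out1 = mul(out1, 'Z')
    return out1, out2

# Hadamard destroys the bias; CNOT cascades same-type errors; CZ does neither:
assert conj_h('Z') == 'X'                 # a phase flip becomes a bit flip
assert conj_cnot('I', 'Z') == ('Z', 'Z')  # one Z error becomes two Z errors
assert conj_cnot('X', 'I') == ('X', 'X')  # one X error becomes two X errors
assert conj_cz('Z', 'I') == ('Z', 'I')    # Z commutes through a CZ
assert conj_cz('X', 'I') == ('X', 'Z')    # X picks up only a single extra Z
```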
04:44
So for our fundamental gate set, we'll choose just three operations, preparation of qubits in the logical plus state, the controlled phase gate, and measurement in the x basis. And the controlled z gate has some nice properties.
05:01
So a z error will just commute through it. And an x error will produce on the output an x error and also a z error on the other output. So errors can only spread as far as the qubit they come from or a qubit that's
05:21
directly connected to that qubit through a controlled z gate. And we'll also depict these pictorially as follows. So a plus will indicate this plus preparation. This will indicate a controlled z gate. And this will be a measurement. So we'll start with this fundamental gate set, which
05:41
I've described. And we'll use our Bacon-Shor codes to implement this same gate set at the encoded level. And at that point, we'll have some weaker level of noise, and we'll have lost the bias in the noise. And so to reach arbitrarily low noise, we can concatenate an additional code on top of this.
06:01
And we'll use magic state injection and distillation to provide a universal set of gates. We might also be interested in the case where, after the Bacon-Shor code, even though the error isn't arbitrarily low, it's low enough for our purposes. And in that case, we can just inject and distill directly
06:20
at that level to reach universality. So to do the teleported controlled not gate, we'll use this circuit here. And to see why this produces a controlled not, I'll just consider the case, for example, where the first input is 1 and the second input is 0.
06:43
So we should expect the second input to be flipped by the controlled not. And in this circuit, the control qubit comes in on this block and comes out here. And the target qubit will come in on this block and exit on the fourth block.
07:01
So these are all logical operations. So when we perform the first ZZ measurement, that'll project onto the portion of this state with even parity on the first two qubits.
07:21
And then when we measure the second one, we project onto even parity of these last three. So we end up with just this state. And if we measure the intermediate two, we're just left with exactly what we wanted. We've flipped the second bit. This circuit will also inherently perform the error correction.
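The exact block-level circuit is on the speaker's slide, so as an illustrative stand-in, here is the closely related textbook measurement-based CNOT on bare qubits, built from ZZ and XX parity measurements against an ancilla prepared in the plus state. The qubit ordering and the Pauli-frame correction rule (X on the target for odd p1+p3, Z on the control for odd p2) are my own bookkeeping, checked by forcing every measurement branch.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def kron(*ops):
    """Kronecker product of a list of single-qubit operators."""
    out = np.array([[1.]])
    for op in ops:
        out = np.kron(out, op)
    return out

def project(state, obs, outcome):
    """Project onto the (-1)**outcome eigenspace of a +/-1 observable."""
    out = (np.eye(len(state)) + (-1) ** outcome * obs) / 2 @ state
    norm = np.linalg.norm(out)
    assert norm > 1e-9, "forced outcome has zero probability"
    return out / norm

def teleported_cnot(psi_ct, p1, p2, p3):
    """CNOT on a two-qubit state psi_ct via parity measurements with forced
    outcomes p1 (Z_c Z_a), p2 (X_a X_t), p3 (Z_a).
    Qubit order throughout is control, ancilla, target."""
    psi_ct = np.asarray(psi_ct, dtype=complex)
    plus = np.array([1., 1.]) / np.sqrt(2)
    # insert the |+> ancilla between control and target
    state = np.einsum('ct,a->cat', psi_ct.reshape(2, 2), plus).reshape(8)
    state = project(state, kron(Z, Z, I2), p1)   # Z_c Z_a parity measurement
    state = project(state, kron(I2, X, X), p2)   # X_a X_t parity measurement
    state = project(state, kron(I2, Z, I2), p3)  # destructive ancilla readout
    if (p1 + p3) % 2:                            # Pauli-frame corrections
        state = kron(I2, I2, X) @ state
    if p2:
        state = kron(Z, I2, I2) @ state
    # the ancilla is now exactly |p3>, so slice it out
    return state.reshape(2, 2, 2)[:, p3, :].reshape(4)
```

The test compares every outcome branch against the ideal CNOT, up to global phase, on all computational basis inputs and one superposition input.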
07:42
So we basically teleport the information onto these fresh ancillas every time we go through a controlled not gate. So we have three components we need to do. We need to do plus preparation. And the way we can do that is first by preparing each individual qubit of the code
08:01
block in a plus state. This will commute with all the X-type stabilizers and do the correct thing. But it doesn't commute with the Z-type operators. So to fix that, we'll measure those. And we can do that by introducing some ancillary qubits, preparing them also in the plus state,
08:20
and then coupling them with these controlled phase gates. And then we measure the ancillas. If all these results are zero, then we've prepared exactly the state plus. If the results are different, then we've actually prepared some other state. But it only differs by local Pauli operations on individual qubits.
08:43
And we can just keep track of that. And we should repeat this measurement multiple times for fault tolerance. So to do an X measurement is actually very simple. We can just measure each of these qubits in the X basis.
09:04
We form groups according to the column and compute the parity of the result of each column. And we simply take a majority vote of the outcomes, the parity outcomes, of each column. So this is a very simple X measurement. And we want to do Z measurements as well. And we want to do this in a non-destructive way. So we want to just take out the parity information
09:23
and not disturb the rest of the state. We can imagine doing this with a single ancillary qubit for each row, which we prepare in the plus state, interact with controlled Z gates, and measure. And then we take the majority vote of the row outcomes.
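Both destructive logical measurements reduce to parities plus a majority vote. A minimal sketch, assuming (as described above) that column parities estimate the logical X and per-row cat-state parities estimate the logical Z:

```python
def logical_x_outcome(x_outcomes):
    """x_outcomes: m-by-n matrix (list of rows) of single-qubit X-basis
    results in {0, 1}. Each column's parity is one estimate of the logical X
    value; a majority vote over the n columns suppresses isolated errors."""
    m, n = len(x_outcomes), len(x_outcomes[0])
    col_parities = [sum(x_outcomes[i][j] for i in range(m)) % 2
                    for j in range(n)]
    return int(sum(col_parities) > n // 2)

def logical_z_outcome(row_parities):
    """row_parities: one Z-parity bit per row, e.g. read out of the per-row
    cat states; a majority vote over the m rows gives the logical Z value."""
    return int(sum(row_parities) > len(row_parities) // 2)

# A single flipped outcome on a 3-by-5 block does not change the vote:
noisy = [[0] * 5 for _ in range(3)]
noisy[1][3] = 1
assert logical_x_outcome(noisy) == 0
assert logical_z_outcome([1, 1, 0]) == 1
```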
09:43
This might have a problem because it's not fault tolerant, since a single X error on one of these ancillary qubits could propagate to errors on all of the data qubits. So a possible solution is to use a structure like this, with these additional ancilla qubits.
10:01
And what this essentially does is it prepares a cat state along this block and this block. And then we use the cat state to read the parity information out and measure it. But then there's a trade-off involved, because now we have a very large cat state. And the larger it is, the more chances
10:21
for it to have errors. So we could imagine some intermediate trade-off where we have some smaller cat states. And now a single fault can still cause multiple errors, but they can't propagate as far. And so perhaps there's some tolerable trade-off, especially since the X errors are assumed to be less common.
10:47
In any case, to show how this measurement works, we prepare all the ancillas in plus states and measure these intermediate ones. And we prepare a cat state on these qubits, one for each row.
11:03
And then we couple the data qubits to the cat state. And we measure each of the qubits in the cat state. And that will tell us the parity information. Again, we have the problem where if these measurements are
11:22
not all 0, then we haven't prepared exactly the cat state we wanted. But we know how it differs from the cat state that we do want. And to do these longer ZZ and ZZZ parity measurements across two or three blocks, we can just imagine extending the cat state to the adjacent blocks
11:42
if they're arranged adjacent to each other in a ribbon. So to analyze the errors, we'll study this under a local stochastic biased noise model.
12:01
So there'll be two separate error rates, epsilon for dephasing errors and a second rate, epsilon prime, for arbitrary errors, which is assumed to be weaker. We'll define the bias as the ratio of these two error strengths. As I mentioned, a key difficulty is ensuring that the cat states are prepared correctly because if there's some errors during those measurements
12:22
and we assume that the cat state is something that it's actually not, that could actually lead to an error almost immediately. And so we need to be very careful in the analysis for that. And once we've taken care of all these things, we can arrive at an analytic upper bound on the effective noise strength. It's just a polynomial in epsilon and epsilon prime.
12:43
And its degree is determined by the code parameters: the block size and the number of times we repeat each of these different kinds of measurements. And so, for a given error strength and bias, we can search for the best parameters that minimize the effective error strength.
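As a crude illustration of this kind of parameter search (not the paper's actual bound): model the z-failure of a length-n block as a majority vote over n column parities going wrong, add a linear exposure term for the rarer epsilon-prime faults, and minimize over n. The functional form here is my own toy stand-in.

```python
from math import comb

def majority_fail(n, eps):
    """Probability that more than half of n independent parity bits are wrong."""
    return sum(comb(n, k) * eps**k * (1 - eps)**(n - k)
               for k in range(n // 2 + 1, n + 1))

def toy_effective_error(n, eps, eps_prime):
    """Toy stand-in for the effective error strength: majority-vote failure
    against dephasing at rate eps, plus linear exposure to the rarer
    eps_prime faults (NOT the paper's analytic bound)."""
    return majority_fail(n, eps) + n * eps_prime

def best_length(eps, eps_prime, lengths=range(1, 40, 2)):
    """Search odd block lengths for the one minimizing the toy error strength."""
    return min(lengths, key=lambda n: toy_effective_error(n, eps, eps_prime))
```

Even this toy model reproduces the qualitative message of the plot: with no bias the best choice is not to encode at all, while a strong bias favors longer blocks.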
13:03
And that's done in this plot. So here we have five different values of the bias going from 1, 10, 100, 1,000, and 10,000. And you can see for bias equals 1, these codes actually don't, in this range,
13:21
they don't do any better than not encoding at all. But once you go to higher and higher biases, they do better and better. So at bias of 10 to the 4, we have a pseudo threshold around 2 times 10 to the minus 3. And at an error rate of 10 to the minus 4, we can get an effective error strength,
13:41
which is below 10 to the minus 13. And so again, each of these points represents a different value of all these code parameters, the length and width and the repetition rates. And so we've optimized that at each point. Can also look at the resource requirements for this code.
14:05
So in this plot, again, we have this time just four different values of the bias. And we can see that as you go to higher and higher bias, you can do better and better.
14:20
And this black curve shows some data from a survey of codes for depolarizing noise, a numerical study of how well those codes perform in terms of this x-axis, which is the number of controlled not gates for the black curve, or controlled z gates for the other curves,
14:40
the number of those gates in a given rectangle versus the logical failure rate, assuming a physical error rate of 10 to the minus 4. And if you had some bias, say 10 to the 4, but you ignored it, you would probably get very close
15:00
to this black curve by changing through different kinds of codes. But if you take advantage of this bias, you can actually get to codes which, for the same overhead in terms of number of gates, give you an increased amount of protection.
15:23
So to summarize, we've designed fault-tolerant gadgets for these asymmetric Bacon-Shor codes. We have a provable upper bound on the error rate, which achieves a significant reduction in the error strength for a modest number of gates. And because of the structure of these Bacon-Shor codes, we can actually possibly also lay out these qubits and gates
15:43
in a geometrically local fashion as well. Thank you. Questions for Peter?
16:01
If my brain were working, I could probably work that out from your resource slide. But for, say, 10 to the minus 4, where you had really low error rates with, I forget what the bias was, how many levels of concatenation, was it just the two layers or do you need more?
16:20
So for this plot, it's just the Bacon-Shor code, and there's no second level of concatenation. Presumably you could do something similar with a planar code.
16:41
If you just want to encode a single qubit in a planar code, you could make its dimensions asymmetric. How does this scheme compare with that? Have you thought about that at all? Yeah, so I haven't done any analysis on that, but it's definitely true that you can think of other schemes where you have independent control of these through a different width
17:03
and length.
17:20
Yeah, so all the gates are two-qubit operations. So you mean this picture? So essentially what you're doing is you're preparing,
17:45
you're doing a transversal controlled phase gate with a twist between this blue thing, which is the data qubit, and then the other qubits are a second code,
18:00
which is kind of the opposite. So if this is a three-by-five or two-by-five, then the other is five-by-two and is prepared in the plus state. But these intermediate qubits are just used to prepare the cat state. And the reason for preparing the cat states is to prevent this problem from the previous slide
18:25
where a single X error on one of these ancilla qubits could lead to multiple errors on the output. Let us thank Peter and the rest of the speakers today.