
Quantum Error Correction and Quantum Error-Correcting Codes


Formal Metadata

Title: Quantum Error Correction and Quantum Error-Correcting Codes
Number of Parts: 48
License: CC Attribution - NonCommercial - NoDerivatives 3.0 Germany
You are free to use, copy, distribute and transmit the work or content in unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract:
This tutorial briefly introduces the important principles of quantum error correction and quantum error-correcting codes. We look at classical error correction, and the important limitations in working with quantum information. The basic structure of a quantum code is laid out, and how errors are detected and corrected. We introduce stabilizer codes and their connection to classical linear codes, and show how quantum codes can be constructed from classical codes. The problem of encoding and decoding is briefly discussed. Finally we touch on degeneracy and passive error correction in quantum codes.
Transcript: English (auto-generated)
Okay, so the first thing is you have to be able to find the microphone. So I'd like to join Daniel in welcoming you all to USC and to this conference. When I volunteered to give the first talk, I wasn't allowing for two things. One is that being the first talk, I would be operating on very little sleep and not
enough caffeine. And the other is that, of course, it is incumbent on me to set the tone with a dynamic, exciting presentation and, of course, there's nothing that helps people stay awake like listening to a lecture on quantum error correction.
But no doubt, your reaction when hearing there was going to be a one-hour tutorial on quantum error correcting codes was the same as mine, which is, well, what will we do with the remaining 45 minutes. But when I started to write down all of the stuff that I really ought to do, I realized
that maybe my problem was actually the opposite and we may be here well into the night. I'll try to keep to time, however, with the help of our friendly moderator. Okay, the one thing that's really working in my favor here is that 95 percent of you, if not 99 percent, already know everything that I'm about to say.
So if you do, you should feel free to take a nap for the next hour and then wake up when Andrew Landau gets up to give the tutorial on fault tolerance. So this is what I'm going to talk about. First, what is quantum error correction? What's a quantum error?
I'll look briefly at how classical error correcting codes work because that's really the inspiration that led to the ideas of quantum error correction and quantum error correcting codes. I'll look at some of the, what I would say are misconceptions about how the peculiar properties of quantum mechanics prevent error correction from being possible and then look
at the examples of the first quantum codes to be discovered. From those, I will abstract the structure, the general structure, not completely, not the most general structure, but a very general structure for a quantum error correcting code that will then lead me to the class on which the overwhelming amount of work in this
field has been done, which is the class of stabilizer codes and show their connection to classical linear codes. There's a little arrow in the middle of my talk. Then I'll talk about how one designs the circuits to measure the error syndromes that
are used in correction and also to encode and decode quantum information. For many people, they don't worry about exactly how encoding and decoding are done, but in certain areas like quantum communication, this is very important. Even in fault tolerance where you never really decode, the use of encoding circuits is often
very helpful in designing ancillas and things like that. I'll say a very little amount about the phenomenon of degeneracy, which is peculiar to quantum error correction and had no real practical analog in classical error correcting codes.
Finally, I'll just say a very small amount about the significant generalization of what I'm talking about in these points, which is the operator or subsystem quantum error correcting codes. What is quantum error correction? The question here really is what is a quantum error?
A quantum error is obviously something that arises in the operation of, say, a quantum computer or information processing protocol, which is undesired and could disturb the outcome. Of course, the error operations themselves are subject to the laws of quantum mechanics,
so only certain things are allowed. The two most natural possibilities that would leap immediately to mind are either unitary errors in which we're trying to do some evolution, but our Hamiltonian is not exactly what we think it is or is too strong, too weak, operates for different lengths of
time or so forth, and then decoherence due to interaction with the environment. And in both of these cases, we can abstract away the process that we're describing as the state that we want, psi here, being multiplied by some operator. So this operator could be a unitary operator, which represents the difference between the
evolution we want and the evolution we get, or it could be one term out of a Kraus expansion for a CPTP map. Now, this is not completely general, but it does include the large majority of the effects that we worry about in quantum computing.
So we're trying to correct errors of that type, and our model is going to be classical error correcting codes. So in a classical error correcting code, we'll imagine that our system consists of bits, and the canonical error that we could get would be a bit flip, either a zero becoming
a one or a one becoming a zero. So for a very simple error model, the binary symmetric channel, you could have every bit have a probability p of being flipped, and so after about one over p steps, any bit would be essentially randomized. Now, to get around that problem, what's used and has been known for a very long time is
redundant coding. So the simplest kind of classical error correcting code is just to make multiple copies of each of the bits. So a very simple example, we could just keep two copies, so represent zero and one by zero, zero and one, one, and then if an error happens and flips one of these
bits, we'll get one of these pairs, either zero, one or one, zero, which should never arise if the system is error free. So if we ever see one of these two pairs, then we know an error has happened. So this is error detection. Now error detection is all you need for certain purposes.
For instance, in communicating over the internet, often all they do is add a parity check and check whether the parity check has been flipped or not. If an error is detected, then they just send the information again. That's not going to be good enough for most of what we're doing, but classically you can
just add one more copy and you get the simplest classical error correcting code, the three-bit repetition or majority rule code. So if any one of these three bits gets flipped, we can look at the three bits and take a majority, so if there's only one zero, then we assume it was one, one, one.
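The repetition scheme just described can be sketched in a few lines (a minimal toy of my own, not code from the talk; the function names are mine):

```python
# Three-bit repetition code with majority-rule decoding, assuming
# independent bit flips and at most one flip per code word.

def encode(bit):
    """Repeat the bit three times: 0 -> (0, 0, 0), 1 -> (1, 1, 1)."""
    return (bit, bit, bit)

def decode(word):
    """Majority vote: recovers the bit if at most one flip occurred."""
    return int(sum(word) >= 2)

print(decode((1, 0, 1)))  # a single flip on the middle bit is corrected
```

As the speaker notes, two flips would be decoded wrongly; the code's correctable set is "at most one flip".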
If there's only one, one, we assume that it's zero, zero, zero. So from this single example, I will now extrapolate and derive all of classical coding theory in brief. What are the properties that these codes have? First, information is stored redundantly across multiple bits.
Now here, this is really based on the fact that I'm assuming that errors are local errors, that individual bits are flipped. Of course, I can be more general than that and assume some broader class of possible errors that might include correlated errors. For most of this talk, I will assume that noise is local, but one can make very similar
arguments starting from any naturally occurring set of errors. So if errors are local, then we want to spread the information across multiple bits, so no single bit can erase our information. We can detect errors by having only certain code words allowed.
Only certain strings of bits are allowed. If we see anything other than those, we know an error has occurred, so that's error detection. And if we want to actually correct, we would like the code words to remain distinct even after an error has occurred. So with only two copies, of course, if we see a 01 or a 10, we don't know if we started
from 00 or 11, but with three copies, we do. Of course, if two errors occurred, then we would correct wrongly. And that brings us to the final point, which is the limitation of error correction
through error correcting codes. We can't correct every error. Why? Because the difference between an error and an operation is entirely in our mind. An error is an operation that we didn't want to do, as opposed to an operation that we did want to do. If you could have any possible error, then your code word could be transformed into anything
including another code word. So we can never do more than correct some set of correctable errors. It's always limited. And what we like to do is to design our code to correct some particular set of errors that we consider likely or important.
So we will want our theory of quantum error correction to work in the same way. But quantum information has some restrictions that classical information does not have that may complicate just reproducing this kind of structure. So these were the questions or objections that were raised very early as the theory
of quantum error correction was just coming into existence. Many people said quantum error correction should be impossible because of the properties of quantum information. So what are these properties? Famously, there's no cloning. A general quantum state cannot be copied.
And everyone, well, not everyone, but many people said, well, you need to copy information in order to do redundant coding. You can't copy quantum information. Therefore, quantum error correction is impossible QED. If we could get past that, we have an additional issue, which is we would like to preserve
not just individual code words, but arbitrary superpositions of them. In quantum states, to make use of the power of a quantum computer, we would really like to be able to operate on arbitrary superpositions of our code words. When you measure to detect whether an error has occurred, you might destroy that superposition.
Then bit flips are a discrete set, but the number of possible operators E that could multiply our state is continuously infinite. So it seems like there are vastly more possible errors in quantum mechanics than there are in classical digital information.
And in fact, it is this problem that makes analog computing an ineffective technique compared to digital computing. In analog computing, you use continuous quantities to represent the quantities of your computation. But then you have this problem that small errors are undetectable.
What's the difference between doing exactly the rotation that you want and doing the rotation by some slightly different angle? Many of the people who have claimed that quantum computing can't possibly work make this analogy to analog computing.
And finally, bit flips. Bits can be flipped in any order, but in quantum mechanics, in addition to the continuum of errors, there are also non-commuting errors, so complementary observables that could be disturbed by different kinds of errors. So this seems like a pretty serious list of obstacles, and one might not be blamed for
throwing up one's hands and saying, oh, well, quantum computing, it's a nifty idea, but it requires you to be able to do things perfectly, and we live in an imperfect world. Anyone who's been through the recent windstorm knows how imperfect the world can be.
And I was a skeptic myself in the very early days. So this is like my moment of confession here. But fortunately, I was wrong along with all these other people. So I don't normally feel relieved at being wrong, but this is one case where I do.
So let's look at these objections, at least the two initial ones. First, do we actually need to clone quantum states to do coding? And the answer is that we do not. If you think about it, a classical bit, 0 versus 1, and two code words, 0, 0, 0, versus
1, 1, 1, contain exactly the same information. So we haven't made a copy of an arbitrary state when we do an encoding of this type. So to put that in a different way that one doesn't normally see it, if you think
of a state of a bit as not being a value 0 or 1, but as being a probability distribution over the possible value 0 and 1, classical probability distributions also cannot be cloned. If you have only one copy of a system in a probabilistic state, you can't create
another one in the same probabilistic state. That doesn't stop people from doing classical error correction. And while pure quantum states are not the same as probability distributions, it turns out that this objection does not stop quantum error correction either.
What about superpositions? So classically, when we wanted to check if an error had occurred, we looked at our code word and we saw if it was one of our allowed code words, 0, 0, 0, or 1, 1, 1, or some code word containing an error. But quantum mechanically, that would be a bad thing to do, because if we had a superposition
of our two code words, we would destroy it. However, it turns out that actually measuring the individual bits is not necessary, even classically, to do error correction. Instead, we can measure a series of parities that tell us whether or not an error has
occurred without actually telling us the value of the data we're trying to protect. So to take the simple example here, if we look at the parity between the first and second bit and between the second and third bit, for 0, 0, 0, both of those parities are 0. Those two bits are the same. And the same is true for the code word 1, 1, 1.
Those two parities are 0 in that case. So if an error occurs, say, on the middle bit, taking us to 0, 1, 0 or 1, 0, 1, those two parities are both flipped. And we can tell which error occurred just by knowing the values of these two parities without needing to know what the actual code word is.
Now classically, when we measure something like this, we generally do it by measuring the individual subsystems, the individual bits. But quantum mechanically, we don't have to. And we'll see shortly a circuit that allows us to measure these parities without measuring the underlying data. So if you do a well-designed code with a well-designed method of measuring these parities, you can
correct errors without having to measure the encoded information. So we'll see how this works in the very simple code. This is really just the same classical repetition code. So we're protecting only against bit flips.
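The two parity checks just described can be written out explicitly (a sketch of mine, assuming the three-bit repetition code and at most one flipped bit):

```python
# Syndrome extraction for the three-bit repetition code: two parities
# that name the flipped position without revealing the encoded value.

def syndrome(word):
    """Return (b1 XOR b2, b2 XOR b3); (0, 0) means no error detected."""
    b1, b2, b3 = word
    return (b1 ^ b2, b2 ^ b3)

# (1, 0) -> bit 1 flipped, (1, 1) -> bit 2, (0, 1) -> bit 3.
assert syndrome((0, 0, 0)) == (0, 0) == syndrome((1, 1, 1))
assert syndrome((0, 1, 0)) == (1, 1) == syndrome((1, 0, 1))
```

Note that 000 and 111 give the same syndrome, as do 010 and 101: the parities carry only the error information, which is exactly the property that will survive the move to quantum codes.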
Now, what's the quantum equivalent of a bit flip? It's a NOT operation, multiplying by the Pauli X operator. So this operator takes the basis state 0 to 1 and the basis state 1 to 0. And we do our encoding by embedding this system, this single qubit state, in a state
of three qubits. So here we have our state that we want to protect, and we have two extra qubits that start in the state 0, what we call ancillas or ancillary bits. We then do a unitary transformation, which takes us to this entangled state of the three bits. And we haven't copied psi, this isn't three copies of this state.
This is in fact an entangled state that contains the same information. So no cloning doesn't apply. The quantum circuit that does this encoding: we just do two controlled-NOTs from our information qubit onto our two ancillas, and the output state is the encoded state
that we want. Now if a single bit flip occurs, say on bit 1, then we get a superposition of two other states, 100 and 011. And similarly if we have bit flips on the other two bits, we have two other possible
states that will result. And an important thing to notice is that our original state in each of these three states containing an error are all orthogonal to each other. So our state here lives in some two dimensional subspace of the eight dimensional Hilbert space of three qubits, and these three states live in three other two dimensional
subspaces, each of which is orthogonal to our original space and to each other. To figure out if an error occurs though, we have to do something cleverer than measuring these three bits, because as soon as we do that, we destroy the state that we're trying
to preserve. So what do we do instead? Well instead, we want to measure these parities, and here's a circuit that does that. So again, we're using controlled-NOTs. We bring in two ancilla bits. We do a controlled-NOT from the first two bits onto the first ancilla bit, and a controlled-NOT from the second and third bits onto the second ancilla bit, and then
we measure them in the computational basis. If we get zero, the parity of these two bits is zero. If we get zero here, the parity of these two bits is zero. Otherwise we could get one here or here. So there's four possible outcomes to this measurement. They tell us the value of these two parity bits, which tells us if either no error has
occurred or a bit has been flipped, and which bit, bit one, two, or three. So here are two key components for a quantum code. First we have to be able to measure the error syndromes, that is these parities.
This is the information that tells us what error has occurred. We want to measure that and no more than that. And the second thing is to correct the errors once we've made these measurements, we can just do a unitary transformation on whichever of the three bits was flipped. The X gate is a unitary transformation, so
we can do that conditioned on this measurement outcome. Now already we've gone beyond what the same classical code can do. This classical code protects classical information, but we've seen that we can use the same code to protect arbitrary superpositions. And in fact we can do more than that.
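The whole cycle the speaker describes — encode a superposition, suffer a flip, read the syndrome, apply the conditional correction — can be sketched numerically. This is my own toy simulation, not code from the talk; in particular, I read the syndrome here as expectation values of the parity operators Z1·Z2 and Z2·Z3 (exact, since the post-error state is an eigenstate of both) rather than via the ancilla circuit:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def op(single, pos):
    """Embed a single-qubit operator at position pos (0, 1, 2) of three qubits."""
    out = np.array([[1.0]])
    for k in range(3):
        out = np.kron(out, single if k == pos else I)
    return out

# Encode a|000> + b|111>, an arbitrary superposition of the code words.
a, b = 0.6, 0.8
state = np.zeros(8)
state[0b000], state[0b111] = a, b

state = op(X, 1) @ state  # a bit-flip error on the middle qubit

# Syndrome: eigenvalues of Z1*Z2 and Z2*Z3; -1 means that pair disagrees.
s1 = float(state @ op(Z, 0) @ op(Z, 1) @ state)
s2 = float(state @ op(Z, 1) @ op(Z, 2) @ state)

# (-1, -1) points at the middle qubit, so apply X there to correct.
if round(s1) == -1 and round(s2) == -1:
    state = op(X, 1) @ state

recovered = np.zeros(8)
recovered[0b000], recovered[0b111] = a, b
print(round(s1), round(s2), np.allclose(state, recovered))
```

The key point from the talk is visible in the code: the syndrome values never depend on a and b, so reading them leaves the encoded superposition untouched.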
Suppose instead of a simple bit flip, we apply one of these continuum of operators, that is, an X rotation on a single qubit. So an X rotation is equivalent to turning on a Hamiltonian proportional to X for some length of time. And we can expand this operator out as a superposition of the operators I and X.
If we apply that operator to say the first qubit of our code word, we get a superposition of two states. One is our original correct state, and the other is the state with a single bit flip on the first qubit.
Because this is a superposition, and those states are orthogonal, when we do this measurement, we will see either this state or this state with probabilities cosine squared and sine squared. And if we see this, we can apply the correction.
If we see that, we do nothing. And we're left in both cases with a system having no errors. So that in fact, not only can this correct bit flips, it can correct arbitrary X rotations. And more than that, it can actually correct any linear combination
of the error operators in this correctable set. So any linear combination of the identity and a single X on bits one, two, or three represents an error that this code can correct. So bit flips are all very well, but of course, quantum mechanical systems have many more degrees of freedom
than just the values of the bits. For instance, we could have phase flips where we flip the relative phase between the zero and one components of a qubit. This is the same as applying a z operator. And we can protect against this kind of error in exactly the same way we
protect it against bit flips, only now instead of encoding our information in the computational basis, we encode it in the X's, where plus is zero plus one and minus is zero minus one. We can encode that using essentially this same circuit, just applying a Hadamard gate on each of the three qubits at the end
that switches us between the Z and the X basis. And we measure the syndromes, again using a circuit exactly like this, only we apply a Hadamard before and after each of these controls to switch from the X basis to the Z basis just long enough
to do the controlled-NOT and then we switch back. So we can protect against phase errors in exactly the same way we can protect against bit flip errors. So that's all very well, but neither this code nor the one we saw before can correct against both bit flip and phase flip errors.
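The basis-change trick behind the phase-flip code is a two-line check (a numerical aside of mine, not from the talk): conjugating by Hadamards turns Z into X, which is why the same repetition-code machinery, run in the X basis, corrects phase flips.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# H Z H = X and H X H = Z: phase flips in the X basis look like bit flips.
print(np.allclose(H @ Z @ H, X), np.allclose(H @ X @ H, Z))
```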
And the first person to show how to do that was Peter Shor, followed very soon after by Andrew Steane with his seven-qubit code. Shor fixed this problem by a technique called concatenation. So he constructed a code in nine qubits.
This code has three subsystems, and the structure of it is the same as the structure of a phase flip code. So we have a zero plus one for the logical zero state and a zero minus one for the logical one. But each of those qubits now is itself protected by a bit flip code,
so is encoded in three qubits for a total of nine. And this code can protect against an arbitrary phase error on any of the nine qubits. And it can also protect against an arbitrary X error. In fact, it can protect against up to three, one on each of these triplets.
But because it can protect against both a phase flip and a bit flip error, it can also protect against them happening to a single qubit, which is the same as a Y. So a Pauli Y is just basically Z times X. Since it can protect against X, Y's and Z's, and obviously the identity,
which is no error at all, we can protect against an arbitrary superposition of these, and they form a complete basis for all one-qubit operators. So this code can correct any single-qubit error. It was the first code found that could do that.
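As a small illustration of that last claim, here is a plain-Python sketch (not from the talk) checking that Y is the product of a bit flip and a phase flip up to a phase, and that I, X, Y, Z really do form a basis: the example error matrix M below is made up for the demonstration.

```python
# Sketch: Y = i*X*Z up to global phase, and {I, X, Y, Z} span all 2x2
# operators, so correcting each Pauli also corrects any linear
# combination of them. Plain Python, no external libraries.

def matmul(a, b):
    """2x2 complex matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]
PAULIS = [("I", I), ("X", X), ("Y", Y), ("Z", Z)]

# Y = i * X * Z: a bit flip and a phase flip together, up to phase.
assert [[1j * e for e in row] for row in matmul(X, Z)] == Y

def pauli_coeffs(M):
    """Expansion coefficients c_P = Tr(P M) / 2; the Paulis are
    Hermitian and orthogonal under the trace inner product."""
    return {name: (matmul(P, M)[0][0] + matmul(P, M)[1][1]) / 2
            for name, P in PAULIS}

# Decompose an arbitrary (here: made-up) error operator and rebuild it.
M = [[0.25, 0.5 - 0.25j], [0.5 + 0.25j, -0.75]]
c = pauli_coeffs(M)
recon = [[sum(c[name] * P[i][j] for name, P in PAULIS) for j in range(2)]
         for i in range(2)]
assert recon == M
```

The entries of M are chosen to be exactly representable in binary floating point so the equality checks are exact.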
So from these examples, let us now deduce a general structure for quantum error-correcting codes. It turns out that what I write here is not completely general. We can generalize it in a number of respects, some of which I will mention as we go along. But first of all, a quantum error correcting code
is a subspace of a larger Hilbert space, the code space. So in the first example, it was the subspace spanned by zero, zero, zero and one, one, one, okay. It has to be a subspace because we need some room, if you will, some extra redundancy.
With each quantum error correcting code, we can associate a set of correctable errors. Now, in fact, actually for a given code, there are often multiple sets that could be a set of correctable errors for that code.
So when I talked about the bit flip code, it corrects against bit flips, but it's also a code that would correct against an arbitrary Y error. The phase flip code protects against Z errors, but it could also protect against an arbitrary Y error. The thing is they can't correct against both
an X error and a Y error. You have to choose, right. So when we design a code, we design it in association with a particular set of correctable errors. Now, each error in that set will then map the code space, this subspace, to another subspace, which is orthogonal to that original subspace.
And we want those subspaces to be orthogonal to each other for each of the errors in our correctable set. Now, this is something that's a little too narrow here. They don't have to be exactly orthogonal. And as we'll see later with the phenomenon of degeneracy,
it isn't actually necessary that every error have a distinct error space. But this is the basic idea. When a correctable error occurs, the state becomes mapped to another subspace orthogonal to the original one in a way that preserves the superpositions.
So a superposition in our original subspace gets mapped to a superposition with the same amplitudes in our new subspace. We can undo this by applying a unitary transformation. Of course, in order to do that, we need to figure out which error happened. And we do that by measuring an observable.
We measure an observable, the so-called error syndromes, which tell us which error occurred, and that tells us which unitary correction we need to apply. Now, we'll generalize this a little bit in the course of this talk, and there's a lot more that I won't have time to get to. But this is the basic structure that we start from.
Now, all the codes that I've written down so far are examples of what are called stabilizer codes. This was something recognized after the fact by Daniel Gottesman, who basically created this theory. So let's see what we mean by a stabilizer code.
So first, when we measured the parities in the bit flip code, we were actually measuring two observables: the tensor product of Z on the first two qubits for the first parity, and of Z on the second two qubits for the second parity. Now, these are tensor products of Pauli operators,
so like Pauli operators, they have eigenvalues of plus one and minus one. And our encoded state, which is a superposition of these two code words, is a plus one eigenstate of both of these observables. And in fact, that's how we will now define our code space. The code space is the simultaneous plus one eigenspace
of these two observables. So we call these observables the stabilizer generators. They're the generators because we can use these two operators to generate a group. And any operator in that group will have states
of this form in its plus one eigenspace. Then the error operators that this code corrects against were X applied to either the first, second, or third qubit, and also the identity. So these X operators anti-commute
with one or the other or both of these two observables. And that's how we detect the error. When an x error occurs, we move into an orthogonal subspace where one or both of these operators has eigenvalue minus one. So by measuring these operators,
if we get plus one for both of them, we conclude no error has occurred. And if we get minus one for one or both of them, we can deduce an error has occurred and which error it was. So similarly, the phase flip code has exactly the same structure but with x's instead of z's.
So now we're measuring the parity in the X basis instead of the Z basis. And finally, the Shor code has eight stabilizer generators. So to detect the bit flip and phase flip errors in that code, we measure these observables.
So from here on out, I'm gonna omit the tensor product symbol, which takes up room and doesn't add anything to our understanding. So this is the general notation used in this field. In all three of these codes, the code space is the simultaneous plus one eigenspace of a set of commuting stabilizer generators.
That's important. All of these operators commute with each other, which means that we can measure them all simultaneously or in any order without causing disturbance to the values of the other operators. If they were anti-commuting, then in general, measuring one would disturb the value of the other.
We'll later see a generalization of this scheme that actually allows this, so-called operator quantum error-correcting codes. But in the basic scheme, these have to be commuting generators, which means that the group they generate is an abelian group. So here's a kind of mathematical picture
we're going to use. For simplicity, I'm going to make use of the discretization of errors and assume my set of correctable errors are themselves Pauli operators. Now the Pauli operators form a basis, so any error operator can be written in terms of Pauli operators.
We might lose something by making this change; for instance, it might be less efficient. You might need to include a larger number of Pauli operators in your correctable error set in order to describe all of your physical errors. But it simplifies the mathematical notation. So for now, I'm going to assume that.
And once we assume that all of our operators, both the stabilizer generators and the error operators, are Pauli operators, we can make use of a very useful mapping between the Pauli operators and strings of bits. Now the important thing is for these errors,
we don't actually care about their overall phase. When an error is applied, its overall phase is a global phase and therefore is physically irrelevant. So all we really care about is the commuting structure of this set of operators. And if that's all we care about, then we can write every operator
either in terms of whether it's a Z operator, an X operator, or a Y operator, which includes both a Z and an X, or neither, which would just be the identity. So for a Pauli operator on three qubits, we could use a string of six bits.
For each location, we indicate whether the operator on that location includes a factor of Z or a factor of X or both or neither. So this string here, zero, one, one, one, one, zero, corresponds to the operator X, Y, Z.
The first one has zero Zs and one X. The second one has one Z and one X, which is a Y. And the third one has one Z and no Xs, which is a Z. So there's a mapping between Pauli operators on N qubits and strings of two N bits.
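This mapping is easy to write down concretely. The sketch below (mine, not from the talk) converts between a Pauli string and its 2N-bit string, written as a Z half and an X half, and reproduces the X, Y, Z example just given.

```python
# Minimal sketch of the Pauli-to-bit-string map: each qubit position
# contributes one Z bit and one X bit, and Y sets both bits.

def pauli_to_bits(pauli):
    """Map e.g. 'XYZ' to (z_bits, x_bits) = ((0,1,1), (1,1,0))."""
    z = tuple(1 if p in "ZY" else 0 for p in pauli)
    x = tuple(1 if p in "XY" else 0 for p in pauli)
    return z, x

def bits_to_pauli(z, x):
    """Inverse map: bit pairs (z, x) back to a Pauli string."""
    return "".join("IXZY"[2 * zi + xi] for zi, xi in zip(z, x))

# The example from the talk: 011 | 110 corresponds to X, Y, Z.
assert pauli_to_bits("XYZ") == ((0, 1, 1), (1, 1, 0))
assert bits_to_pauli((0, 1, 1), (1, 1, 0)) == "XYZ"
```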
These strings of two N bits have a structure where the bits come in pairs. And this is what's called a symplectic structure. So a symplectic space is a space where the degrees of freedom come in pairs.
And we can define a special kind of inner product. It's not a true inner product, but it's what we'll call a symplectic product between these strings of bits. So if we have a string of two N bits with a Z part and an X part and another one, so those are Z and X are the strings
and the second one is Z prime, X prime, the product we get is by taking the inner product of Z with X prime and X with Z prime and then adding them where this is all binary arithmetic. So what we end up with is a single bit when we take this symplectic product.
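Here is a short sketch (plain Python, not from the talk) of that symplectic product. It comes out 0 exactly when the two corresponding Pauli operators commute and 1 when they anticommute, because it counts, mod 2, how often a Z in one operator overlaps an X in the other.

```python
# Symplectic product of two (z-half, x-half) bit strings:
# u * v = z . x' + x . z' (mod 2).

def symplectic(u, v):
    """Single-bit symplectic product of two (z, x) bit-string pairs."""
    (z, x), (zp, xp) = u, v
    return (sum(a * b for a, b in zip(z, xp)) +
            sum(a * b for a, b in zip(x, zp))) % 2

ZZI = ((1, 1, 0), (0, 0, 0))   # Z on qubits 1 and 2
XII = ((0, 0, 0), (1, 0, 0))   # X on qubit 1
IIX = ((0, 0, 0), (0, 0, 1))   # X on qubit 3
YYI = ((1, 1, 0), (1, 1, 0))   # Y on qubits 1 and 2

assert symplectic(ZZI, XII) == 1   # overlap on one qubit: anticommute
assert symplectic(ZZI, IIX) == 0   # no overlap: commute
assert symplectic(ZZI, YYI) == 0   # overlap on two qubits: commute
```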
If we have two Pauli operators whose corresponding strings are U and V, it turns out that this symplectic product between their bit strings determines whether these operators are commuting or anti-commuting. What it's really doing is capturing whether we have Zs overlapping with Xs
an even or an odd number of times. So if the operators corresponding to U and V commute, then the symplectic product of their two bit strings will be zero. If they anti-commute, the symplectic product will be one. So that means that we can describe
a quantum error correcting code that encodes K qubits into N physical qubits using a matrix with N minus K rows and two N columns. So the two N columns represent the bit strings that represent each of the stabilizer generators
and the rows, there's one row per stabilizer generator. So to give an example here, here's a matrix. This is the Z half of the matrix and this is the X half of the matrix. Each row corresponds to one stabilizer generator. If we look at the first row, we see we have a one in the Z place and a zero here. So that would be a Z operator on the first qubit.
A one and a zero gives us zero and so forth. So that gives us the first stabilizer generator. Each of the subsequent rows gives us these three remaining operators. And we can check if these operators will commute with each other by taking the symplectic product between the rows.
So if the symplectic product between the rows is zero, then the set of operators that result are all commuting. The handy thing about this, it seems a little fiddly and abstract. Why not just work with the operators? But we can manipulate matrices like this using plain old ordinary linear algebra
over binary variables. And that can be very handy. If we think of this matrix as being itself a parity check matrix for a classical code, we realize that there's a map between classical linear codes of this structure
and quantum stabilizer codes. So a stabilizer code, the code space is the simultaneous plus one eigenspace of a set of stabilizer operators, which are also Pauli operators. The correctable error set has to be given to you.
And for it to be correctable, one of these two conditions has to hold. So the first condition here is the one that I said before in group theoretic notation. If we consider two different errors in the set, they map us to two orthogonal subspaces
in such a way that if we first apply an error one and then the correction for a second error, we should not get back to the code space, right? So that implies that these two errors
have distinct syndromes. We can tell them apart and know which error correction to apply. However, there is an exception to this. If we have two errors that behave exactly the same, that is they map the state to the same error space, such that the same correction
would correct both of those errors, then that's okay. Even though we can't tell those errors apart, we can still correct them. So this is the phenomenon of degeneracy, which is a purely quantum phenomenon. Classical codes do not exhibit this. You can cook up classical versions of this, but they're rather contrived, okay.
So here's an example. We describe an error by a symplectic string. We see we have a one on the fourth bit on the Z side and the X side. So this string represents a Y error on the fourth qubit.
If we take our symplectic matrix from the previous slide that represents the stabilizer generators and take its symplectic product with this error, we get a vector containing a series of zeros and ones. This is the signature, the syndrome, associated with this error.
In other words, this is saying that the first stabilizer generator commutes with this error but the second, third, and fourth stabilizer generators anti-commute with this error. So when we measure those stabilizer generators, we'll get a plus one for the first and a minus one for each of the other three.
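The slide's four-generator matrix isn't reproduced in this transcript, so as a stand-in here is the same syndrome computation sketched (in plain Python) for the three-qubit bit-flip code from earlier in the talk: each syndrome bit is the symplectic product of one stabilizer generator with the error string, and each single bit flip gets a distinct fingerprint.

```python
# Syndrome extraction as symplectic products of the stabilizer rows
# with the error's bit string, for the bit-flip code (Z Z I, I Z Z).

def symplectic(u, v):
    (z, x), (zp, xp) = u, v
    return (sum(a * b for a, b in zip(z, xp)) +
            sum(a * b for a, b in zip(x, zp))) % 2

generators = [((1, 1, 0), (0, 0, 0)),   # Z Z I
              ((0, 1, 1), (0, 0, 0))]   # I Z Z

def syndrome(error):
    """One bit per generator: 1 means that generator anticommutes with
    the error, i.e. measuring it yields eigenvalue -1."""
    return tuple(symplectic(g, error) for g in generators)

X1 = ((0, 0, 0), (1, 0, 0))   # bit flip on qubit 1
X2 = ((0, 0, 0), (0, 1, 0))   # bit flip on qubit 2
X3 = ((0, 0, 0), (0, 0, 1))   # bit flip on qubit 3

assert syndrome(X1) == (1, 0)
assert syndrome(X2) == (1, 1)
assert syndrome(X3) == (0, 1)
```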
And that's the fingerprint that tells us which error has occurred. Once we know that, we'll say, aha, this was a Y error on qubit number four and we can apply a Y gate to undo it. So we can figure out the properties of the correctable error set and the code
working in terms of these symplectic matrices. This symplectic structure is like a classical code. I mean, we can think of this as being a parity check matrix
where we're defining the parity check using this funny symplectic product instead of the usual inner product. But it's not the kind that one normally encounters. However, we can build symplectic codes like that out of more standard kinds of classical linear codes. And one of the first examples of this was what's now called the CSS construction
after Calderbank, Shor, and Steane. So the idea here is you take a binary code, or two binary codes, and you construct a symplectic matrix that looks like this. Now this H1 is entirely on the Z side.
There's only zeros on the X side. H2 is entirely on the X side with only zeros on the Z side. So this will give us a set of stabilizer generators containing only the identity and Z. And this will give us a set of stabilizer generators containing only the identity and X. In order for this to be a commuting set,
we require that H1 times H2 transpose, this is now just standard binary matrix multiplication, this has to be the zero matrix. And if that's true, then there's a set of Pauli operators that correspond to the rows of this matrix, which give us a valid quantum error correcting code.
If this H1 is the same as H2, then this condition is the same as saying that the rows of H are themselves code words. That is, the parity checks of this code are themselves contained in the code.
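That condition is easy to verify for a concrete code. The sketch below (mine, not from the talk) checks H · H^T = 0 (mod 2) for the standard [7,4,3] Hamming parity-check matrix, which is exactly the dual-containing property that makes Steane's construction, discussed a bit later, go through.

```python
# The CSS commuting condition H1 . H2^T = 0 (mod 2), checked for
# H1 = H2 = the [7,4,3] Hamming parity-check matrix.

H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def binary_product(A, B):
    """A . B^T over GF(2)."""
    return [[sum(a * b for a, b in zip(ra, rb)) % 2 for rb in B]
            for ra in A]

# Every parity check of the Hamming code is itself a code word:
assert binary_product(H, H) == [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```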
So a code that has that property is called dual containing. If we use two different codes, then what we're saying is that the parity checks of H1 are contained in the code described by H2 and vice versa. So if these two codes, classical linear codes, have the parameters n, k1, d1,
so that k1 is the number of bits encoded and d1 is the minimal distance of the code, and the second code is n, k2, d2 (obviously the n's have to be the same), then the quantum code that we construct in this way uses n qubits, it can encode k1 plus k2 minus n qubits,
and then the distance will be at least the minimum of these two distances. The way of thinking of that is, we can correct up to d1 over two X errors, and up to d2 over two Z errors, so whichever of these is the minimum is the restriction on how many general errors we can correct, okay. In principle, the distance could actually be higher than this, but it's at least this minimum.
So, that brings us to the Steane code. So the Steane code was designed and built from two copies of the classical Hamming code. So the Hamming code encodes four bits into seven bits and has a distance of three, which means it can correct a single bit flip error.
What Andrew Steane did was he took two copies of this, this is the structure that we now attribute to it, one for correcting Z errors and one for correcting X errors, and he applied them both.
This set of operators commutes with this set of operators, so we can measure them all, and there's a simultaneous plus one eigenspace of these, which is of dimension two. How could we figure that out? Well, there are seven bits, and each of these two codes encoded four classical bits.
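The counting argument here is just the CSS parameter formula from a moment ago; a one-line sketch (mine) makes it explicit:

```python
# CSS parameter count: classical codes [n, k1, d1] and [n, k2, d2]
# give a quantum code on n qubits encoding k1 + k2 - n qubits, with
# distance at least min(d1, d2) (a lower bound, not always tight).

def css_parameters(n, k1, d1, k2, d2):
    return n, k1 + k2 - n, min(d1, d2)

# Two copies of the [7,4,3] Hamming code: the [[7,1,3]] Steane code.
assert css_parameters(7, 4, 3, 4, 3) == (7, 1, 3)
```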
So using this formula, we would say that there's four plus four minus seven, or one qubit that's encoded. So we see the rate is much lower. We can't protect nearly as many qubits with the same size code word as we could classically, and that reflects the fact
that there are many more kinds of errors that quantum information is subject to. It requires more redundancy in a sense to protect against them all. So here are the basis vectors of the code space. They're kind of large and busy. So in fact, writing these down takes up more room than this set of stabilizer generators,
and as we go to larger and larger code blocks, this becomes more and more true. The size of the code words grows exponentially, whereas the size, the number of stabilizer generators and their length will grow only linearly. So one of the tremendous advantages of the stabilizer formalism
is that it's a much more compact representation for a code, and this is similar to classical linear codes where it's often much easier to represent the code by writing down its parity check matrix than by writing down the list of all possible code words, which may be exponentially long.
Okay, just one more point. We can think of a CSS code like this as being actually the intersection of two codes. We have one code here that protects against Z errors, and that code can encode four qubits.
We have a code here that protects against X errors. That can also encode four qubits. But if we want to protect against both of them, this defines a subspace, and this defines a subspace. The intersection of those two subspaces is only two dimensional, so it can encode only one qubit.
Okay, now there's a natural isometry between these symplectic pairs of bits, Z2 x Z2, and the quaternary field, GF(4).
So if we design a quaternary code, a classical code over GF(4), that is dual containing, so all of the operators, again, will be commuting, then we can map the check matrix for that code
into a symplectic matrix of this form, where each of the rows of this is mapped into two bits. There are four elements of the finite field of size four, and we map them to the bit pairs, zero, zero, zero, one, one, zero, and one, one, which would be the identity x, y, and z.
Oh, sorry, it's actually identity, X, Z, and Y; naturally, I have to write them in that order because it's more confusing. So this construction was proposed in a paper by Calderbank, Rains, Shor, and Sloane, and you can prove relations between the classical code and the quantum code that you construct. If the classical code attains the Singleton bound, which is a simple limit on how large the distance can be given the rate of the code,
then the quantum code attains the quantum version of the singleton bound, which is, as you see, a little bit worse because you have to protect against more kinds of errors. And similarly, if the classical code attains the Shannon limit, which, of course, they don't generally,
except in the limit as the block size becomes very, very large, but for a code that does attain it on the quaternary symmetric channel, the resulting quantum code would attain the hashing limit, which is not the capacity, but it is a bound on the capacity.
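For concreteness, here is a sketch (mine, not from the talk) of just the symbol correspondence underlying that construction: the four field elements map to Z/X bit pairs and hence to I, X, Z, Y, following the corrected order given above. This is only the alphabet map, not the full Calderbank-Rains-Shor-Sloane construction, and the assignment of omega versus omega-squared is a convention.

```python
# GF(4) symbols -> (z, x) bit pairs -> Pauli letters. The labels "w"
# and "w2" stand for the two primitive elements of GF(4); which one
# gets (1, 0) versus (1, 1) is a convention chosen here.

GF4_TO_BITS = {"0": (0, 0), "1": (0, 1), "w": (1, 0), "w2": (1, 1)}
BITS_TO_PAULI = {(0, 0): "I", (0, 1): "X", (1, 0): "Z", (1, 1): "Y"}

def quat_word_to_pauli(word):
    """Map a word over GF(4), e.g. ['1', 'w2', '0'], to a Pauli string."""
    return "".join(BITS_TO_PAULI[GF4_TO_BITS[q]] for q in word)

assert quat_word_to_pauli(["1", "w2", "0", "w"]) == "XYIZ"
```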
So for these more general stabilizer codes, how do we measure the error syndromes? So we saw an example with the bit flip code, and in fact, we can generalize from that. When we're measuring a product of Pauli operators, we can do that, we can get a parity,
where the parity is calculated using a different basis on each of the bits. So for example, to contribute a zero or a one in the standard Z basis, we can just use a controlled-NOT from the bit we want to include
to an ancilla bit in the state zero. If we want to measure a parity using the X basis, we can apply a Hadamard before and after to switch from the X to the Z basis, apply this controlled-NOT, and then switch back. And for Y, it turns out you can do the same thing
using these two single qubit gates. This is a phase gate, more or less a square root of the Z operator, the phase gate and its Hermitian conjugate. And so this would contribute the parity
of this bit in the Y basis. So if we put that all together, if we wanted to measure a stabilizer generator, say YXZ on three qubits, we could put together a circuit that looks like this. So notice I write this YXZ, but I actually write these bits in the opposite order
just to make this circuit a little more compact. The third bit, we apply a simple controlled-NOT. The second bit, we first switch from the X to the Z basis and then switch back. And the first bit, we switch between the Y and the Z basis.
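The basis changes behind these circuit gadgets are single-qubit conjugation identities, which the sketch below (mine, not from the talk) checks numerically: the Hadamard swaps the Z and X bases, and combining it with the phase gate reaches the Y basis.

```python
# Verify H Z H = X, S X S^dagger = Y, S^2 = Z, and the combined
# (S H) Z (S H)^dagger = Y, using plain 2x2 complex matrices.
import math

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(a, b, tol=1e-12):
    return all(abs(a[i][j] - b[i][j]) < tol
               for i in range(2) for j in range(2))

s2 = 1 / math.sqrt(2)
H = [[s2, s2], [s2, -s2]]
S = [[1, 0], [0, 1j]]        # phase gate, a square root of Z
Sdg = [[1, 0], [0, -1j]]     # its Hermitian conjugate
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

assert close(matmul(S, S), Z)                      # S^2 = Z
assert close(matmul(matmul(H, Z), H), X)           # H Z H = X
assert close(matmul(matmul(S, X), Sdg), Y)         # S X S^dag = Y
SH, HSdg = matmul(S, H), matmul(H, Sdg)
assert close(matmul(matmul(SH, Z), HSdg), Y)       # (S H) Z (S H)^dag = Y
```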
And then what we measure here will be the parity of this operator that is either a plus one or a minus one. And building up exactly these same components,
we can measure the eigenvalue of any Pauli operator of this type. Now, the way I've drawn this here, they all go to the same bit, which is fine if we assume that all of these operations are themselves error free.
In fault tolerant quantum computation, we can't assume that. And then you wouldn't wanna use a circuit of this form because a single error here could propagate up to all three of these qubits. So in fault tolerant quantum computation, instead of all being applied to the same bit, we would create an entangled state of three qubits here
and measure them all so that an error can propagate to at most one other qubit. And I won't say anything more, I think, about fault tolerant computation in this talk, but Andrew will say a great deal about it. Okay. Similarly, we can define an encoding and decoding circuit
for any stabilizer code in a way analogous to what we did for the bit flip code. Now, here's a way of thinking about it. If you're encoding k qubits into n qubits, you start with the state of your k qubits plus n minus k ancilla qubits
in a standard state, usually zero. We are then just applying a unitary transformation to go to the encoded state. Now, we wanna find the circuit that does this unitary, ue. So the thing to notice is that this initial state is also, in a sense, a stabilizer code.
This state has stabilizers, which are just z operators on each of the ancilla qubits. So z applied to qubit k plus one, k plus two, and so forth. So all of these simultaneously having eigenvalue plus one
is the same as saying these ancillas are all in the state zero. Since this is a stabilizer code and this is a stabilizer code, we can think of this unitary encoding operation as mapping each of these z operators to one of the stabilizer generators of our code.
And since we're mapping Pauli operators to Pauli operators, we can find a circuit for ue, which is what's called a Clifford circuit. It includes only controlled knots, Hadamards, and phase gates. Since these three gates map Pauli operators
to each other in a very well-known way, we can take a step-by-step operation to transform our initial stabilizer generators, these z operators, to our final stabilizer operators. So if you look at that for, say, the phase flip code here
we started with a z operator on qubits two and three, so we just have an information qubit and two ancillas. We apply our CNOTs. Because of the way CNOTs work, that's the same as doing column operations between the columns of this matrix. So this second column gets added to the first,
and the third also does. So we end up with a symplectic matrix of this form, and then when we apply Hadamards, all these z operators become x operators.
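That walkthrough can be sketched directly as column operations on the symplectic matrix. The code below (mine, not from the talk) uses the standard tableau update rules for CNOT and Hadamard, ignoring sign bookkeeping, and reproduces the phase-flip-code derivation just described: starting from Z on the two ancillas, two CNOTs and three Hadamards yield the generators X X I and X I X.

```python
# Clifford gates as column operations on a symplectic matrix, with
# rows stored as (z_columns, x_columns). Signs/phases are not tracked.

def cnot(rows, c, t):
    """CNOT control c -> target t: Z_t gains Z_c, X_c gains X_t."""
    for z, x in rows:
        z[c] = (z[c] + z[t]) % 2
        x[t] = (x[t] + x[c]) % 2

def hadamard(rows, q):
    """Hadamard on qubit q swaps its Z and X columns."""
    for z, x in rows:
        z[q], x[q] = x[q], z[q]

# Initial stabilizers of |psi>|0>|0>: Z on each ancilla (qubits 2, 3).
rows = [([0, 1, 0], [0, 0, 0]),    # I Z I
        ([0, 0, 1], [0, 0, 0])]    # I I Z

cnot(rows, 0, 1)   # CNOT qubit 1 -> qubit 2
cnot(rows, 0, 2)   # CNOT qubit 1 -> qubit 3
for q in range(3):
    hadamard(rows, q)

# We arrive at the phase-flip code generators X X I and X I X.
assert rows == [([0, 0, 0], [1, 1, 0]),
                ([0, 0, 0], [1, 0, 1])]
```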
So you can build your encoding circuit by figuring out the series of column operations you need to do to transform your starting matrix into the matrix for your final code. These encoding unitaries are not uniquely defined.
First, there's lots of different sets of generators for the same stabilizer group. Second, we don't really care in this what that unitary does to the ancillas if they don't start in the state zero. But by making an appropriate choice of the encoding unitary, you can choose the decoding unitary to simply be the inverse operation.
That is just the Hermitian conjugate of the encoding circuit. And you can choose that in such a way that if you imagine encoding, applying some error in decoding, those ancilla qubits at the end of this will actually hold values that tell you which error syndrome occurred,
which would be enough for you to figure out what you have to do to correct the error on the remaining information here. So this is a good way of understanding how error-correcting codes allow you to correct errors. These ancillas store extra information about which error has occurred.
So having more ancillas potentially allows you to correct more errors because you can keep more information about the errors that have occurred. So I'm almost out of time, so I'm gonna have to go rather quickly, I'm afraid. When we apply our encoding circuit, we can look at the Pauli operators
that act on our information qubits. These come in anti-commuting pairs, X and Z. And because they're Pauli operators, they'll be transformed by the encoding circuit into some new Pauli operators. So you can think of this like a Heisenberg picture. By tracking how these operators transform,
we can keep track of what is storing our quantum information. So in the bit flip code, the X Pauli operator becomes redundantly spread across three qubits, but the Z does not, which is why this can protect against bit flips but not against phase flips.
With the phase flip code, the opposite happens. We have multiple Zs and only a single X. And then with the Steane code, we can protect against both X and Z errors. But logical operators are not actually uniquely defined because they're equivalent to other operators
that include these operators multiplied by any element of the stabilizer. So logical operators are not a group, but they're instead an equivalence class of operators.
Just a couple of words about degeneracy. I mentioned before that it's possible for two distinct errors to have the same syndrome and the same correction in a quantum code. So this is what degeneracy is. For example, in the Shor code,
the qubits are divided into triplets and a Z error on any qubit in the triplet has the same syndrome and the same correction. The idea of degeneracy is actually not a property of the code but rather of the code together with the correctable error set.
So for any code, you can find some correctable error set where that code will be degenerate. So generally when we say a code is degenerate or not degenerate, we're speaking with respect to some standard set of errors like localized errors. But degeneracy itself is as much a function of our choice of correctable errors as it is of the code.
And in fact, you can have an extreme version of this. If you have degeneracy, if you have an error that's actually equal to an element of the stabilizer, we don't need correction at all. Being multiplied by an element of the stabilizer will not change the encoded state. That means in the extreme case
where all of our correctable errors are elements of the stabilizer, we don't have to do anything at all. And that's what's called a decoherence-free subspace. It generally only arises in practice when there are special symmetries to our system. And I have almost no time to say anything about operator codes,
but I'll just say this much. If in addition to ancillas that can store information about the errors that occur, we can also have some extra systems that we don't keep track of. So we can't use them to keep track of errors, but on the other hand, we don't really care whether or not errors occur to them.
So a code with this structure we call an operator quantum error correcting code. And because of the presence of this subsystem, an operator code is not a subspace anymore, right? So for the standard codes, the code words all formed a subspace. That would only be true here if we could guarantee
that the different components had the same state, rho G. However, this gives us some freedom to play around with what kinds of operations we do for our code. So, well, I'll skip this. Here's the most famous example, the so-called Bacon-Shor code. We take the nine qubits of the Shor code and we arrange them as a little three by three grid like this, and then here are our stabilizer operators. Now you may look at these and without too much difficulty you'll realize that these do not all commute with each other. So for instance this stabilizer generator does not commute with this one.
That's why I put the term stabilizer in quotes. But it turns out that the non-commuting parts of these stabilizer generators are based on non-commuting operators that act on this gauge subsystem.
So even though measuring one of these operators might disturb the state of a different one, then the effect of that disturbance applies only to the gauge system and can be neglected.
Now we've actually lost some power by not using the gauge system to give us information about which error has occurred. But you see we've greatly simplified the kinds of operations we have to do because all of these stabilizers now are just two qubit operators.
They're much easier to measure, and that can lead to a much higher error threshold for fault tolerant quantum computation. So I lied, I did say one more thing about fault tolerant quantum computation. So I'm sorry I didn't have more time to go into this. Just to sum up, quantum error correction is indeed possible despite the doubts of the
early skeptics and the principles are actually very similar to classical error correction based on the use of error correcting codes. In order to make these work for quantum systems, we have to make use of several important tricks, discretization of errors, and the ability to measure error syndromes without
measuring the states of the individual qubits. Almost all of the quantum error correcting codes that have been studied are stabilizer codes, almost. And there's a deep connection between these codes and classical linear codes. Though not all classical linear codes.
For stabilizer codes, because all your operators are Pauli operators, you can use just Clifford circuits containing only controlled-NOTs, Hadamards, and phase gates in order to do your encoding, decoding, and stabilizer measurements. Quantum codes also exhibit this phenomenon called degeneracy, which can be exploited
for more effective and simpler circuits. And finally, operator or subsystem codes go beyond the structure that I talked about for most of this introduction.
And they give up some of the error correcting power of a standard code in exchange for both the ability to do a certain amount of passive error correction and the ability to simplify our operations. So thank you very much. Welcome to QEC 11.
So I think we have time for a question or two. It was a very nice introduction. Thank you. Thank you. I did have one curious question, though, about the classical degeneracy codes.
Oh, pardon me. I wasn't sure. So what's your question? Well, could you give us a simple example of one of those? I never considered those things. A simple example of what? Classical degeneracy code.
Oh, so you could define something that's like degeneracy for a classical code if instead of your code words being unique bit strings, you define your code word to be, say, an equivalence class of bit strings. So for example, if you have a string of n bits and your errors only flip two bits
at a time, then you can consider, say, all even-parity strings to be one equivalence class and all odd-parity strings to be another. And then two bit flips will always leave you within that class. So there's passive error correction there. Like I said, it's a bit contrived.
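The contrived classical analogue just described can be sketched in a few lines (mine, not from the talk): code words are the equivalence classes "even-parity strings" and "odd-parity strings", and any error that flips exactly two bits preserves the class, so no active correction step is ever needed.

```python
# Passive "correction": two simultaneous bit flips never change the
# parity class of a string.
from itertools import combinations

def parity_class(bits):
    """0 for even-weight strings, 1 for odd-weight strings."""
    return sum(bits) % 2

def flip_two(bits, i, j):
    out = list(bits)
    out[i] ^= 1
    out[j] ^= 1
    return out

word = [1, 0, 1, 1, 0]
for i, j in combinations(range(len(word)), 2):
    assert parity_class(flip_two(word, i, j)) == parity_class(word)
```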
But you can generalize from that example. Thank you again. Thank you. So our next speaker is Andrew Landahl. He'll tell us a little about that.