Quantum Firmware: Engineering error-resistance at the physical level for robust quantum computation


Formal Metadata

Title
Quantum Firmware: Engineering error-resistance at the physical level for robust quantum computation
Series Title
Number of Parts
48
Author
License
CC Attribution - NonCommercial - NoDerivatives 3.0 Germany:
You may use, copy, distribute, and make the work or its content publicly available in unchanged form for any legal, non-commercial purpose, provided that you credit the author/rights holder in the manner specified by them.
Identifiers
Publisher
Publication Year
Language

Content Metadata

Subject Area
Genre
Abstract
Realizing functional, useful quantum computers requires that the research community address both fundamental and practical challenges pertaining to how hardware errors are suppressed to tolerable levels. In this talk I will focus on efforts towards the development of dynamical error suppression as "quantum firmware:" protocols that are designed to suppress hardware errors at the physical level. We introduce an efficient, experiment-friendly filter-design framework for understanding the performance of various pulse sequences, making connections with familiar concepts from electrical engineering and digital signal processing. This perspective allows a concise formulation of known sequence characteristics, but also reveals previously unappreciated practical impacts of system-level constraints. In addition to studying dynamical decoupling, we extend this approach to nontrivial logic gates, providing a simple new technique to calculate and suppress hardware gate error rates. We validate the filter-design approach through experiments using trapped atomic ions as a model quantum system. Our results reveal the performance benefits of optimized dynamical decoupling sequences and demonstrate a technique for sequence optimization through multidimensional search and autonomous feedback.
Transcript: English (automatically generated)
Thank you. I'm going to stand down here because I think the likelihood of me falling off the stage is unusually high if I pace around too much. I'll talk today about a series of projects that involve experiment and theory. The experiment was mostly done while I was a postdoc at NIST in the ion storage group, and I'm presently setting up a new experimental group at the University of
Sydney as part of the Center for Engineered Quantum Systems and I'll just acknowledge the variety of funding sources we have. I'll give generally a talk that follows this outline, a very, very brief bit of motivation. I think this group doesn't need very much. And then talk about this concept of quantum firmware for error suppression at the physical
level. I think that kind of gives away the show, but I'll speak a little bit more about exactly what I'm thinking and then move on to a few different, more technical topics. First, the idea of noise filtering, which I've described as an experiment-friendly analytical approach. Moving on to some demonstrations of quantum firmware in the lab, dynamical error suppression, and then noise filtering in non-trivial gates, which is a new extension beyond some
of the work that's existed previously. So, I mean, this is familiar to everybody, the idea that we're concerned primarily about the effects of environmental noise, unwanted degrees of freedom that are coupled to our system. These are effectively terms in the Hamiltonian that we can't write down or that we don't
know. And from an experimentalist perspective, we generally talk about some coherence times, but from the perspective of quantum error correction, which is maybe more interesting to this audience, we can think about how these times, in a very rough way, give rise to some lower bounds on the kinds of error rates, the error probabilities p, that we can achieve. And we see that they're generally bounded from below by some operation time relative
to the coherence time. And this is a very rough approximation, and we'll see that it doesn't hold up under most circumstances, but it is kind of the motivation for suppressing the effects of decoherence in our system. How we deal with these errors comes from a couple of different techniques that we've
heard quite a lot about in the introductory talks yesterday and today. Of course, there's the closed loop version, quantum error correction where we involve measurement and feedback but I'll be interested in speaking about open loop control, dynamical error suppression strategies and open loop means we don't use measurement and feedback
at all. The canonical example, to my mind, is the sprinkler system. It comes on every day at a set time. It doesn't measure whether the grass is wet. It doesn't measure whether it's rained. It just comes on, and it actually works pretty well. Most lawns survive with this kind of system. And the aim of the work I'll talk about today is to incorporate these open loop control protocols into a more general setting where we ultimately wish to improve the performance
of quantum error correction by driving down physical error rates. That's our main motivation. So I'll run through this in 10 seconds or less. We know that this is all based on the spin echo, which was originally described in the context of magnetization vectors.
Hahn in 1950 showed this in nuclear magnetic resonance. But then dynamical error suppression and dynamical decoupling for the protection of quantum memory have emerged by taking these spin echoes and chaining them together into sequences in order to suppress errors at longer times. And really the beauty in this kind of application is that there's so much flexibility in how
we do the quantum control, what control we use, how many pulses, if we use pulses at all, pulse timing, the kinds of pulses as I said, and all of the art, maybe it's a black art even, is in the sequencing, how we chain these quantum control operations together to achieve some desired outcome.
And for the duration of this talk, I'll be talking generally about pulse control, mainly because it's what's been studied primarily in the literature for the last 10 years or so. But there are obviously many other approaches you can take, for instance, continuous control, which Gershon Kurizki's group has looked at quite a bit. Something that I wish to convey to you is this notion of developing dynamical error
suppression strategies as quantum firmware. I admit it's cutesy, but I think it does in fact capture some real concepts. First is the idea that this is a very efficient and simple approach to suppressing errors at the physical level. It's very easy in the scheme of things compared to many of the quantum error correction,
topological quantum error correction and whatnot to implement. And from a system level perspective, we can very easily think about absorbing this into a kind of machine language where it's abstracted away and the programmer of your quantum computer or even your algorithm designer has no idea that this is going on in the background.
But it's also useful to note that this is potentially important, as Raymond was speaking about earlier, for kind of any quantum technology. It doesn't have to be quantum computation that uses this. I would generally argue that most things that wish to exploit quantum coherence in some way benefit from this kind of protection against error. And so this quantum firmware can be useful in a variety of settings.
But back to quantum computation, if you start worrying about, well, how difficult it is to do all these things in the background of a quantum computer, it's useful to know that there is already a precedent for this and it exists in pretty much every laptop or I'm sure every laptop in this room and that's the idea of DRAM.
DRAM works, it stores information by charging a capacitor. Each cell is one transistor and one capacitor. And over time, the charge on that capacitor leaks off. It physically migrates off the capacitive trench into the substrate. So roughly once every millisecond, you perform what's called a RAS/CAS sequence, row address
strobe and column address strobe. You apply a voltage pulse, open loop control, and you refresh all the charges in your system. This kind of firmware exists, and the only impact it has at the level of programmers is that it induces some latency.
You have to wait until these refreshes are done, but otherwise you don't know about it. I mean, you don't worry about refreshes as you access memory, and the concept I'm trying to convey to you is similar here. So my group's interests are really in taking these high-level concepts and making them useful. The first is making these dynamical error suppression strategies more accessible.
And for those of you at the ARM meeting, I apologize. This is a recycled joke, but it got a good response there and it is, in fact, true. I really don't like when people talk about group theory to me because I don't understand it. The idea then is using this accessibility in how we interpret these dynamical error
suppression strategies and using them to calculate real error rates. Instead of assuming some P that's abstract, let's calculate what P is based on real environmental noise. Let's consider realistic constraints, as Lorenzo was discussing yesterday, imposed by hardware, and let's take these constraints into consideration when we try and design quantum control approaches to suppress errors.
So first I want to speak about this technical topic of noise filtering. So if you want to understand how some environmental noise is going to impact your qubit, you can start off this way. We'll take some Hamiltonian that has an unperturbed qubit splitting and then a classical random variable beta of t, where beta just captures the noise.
And before the theorists in the audience pounce on me and argue how this is insufficiently general, that is a true statement. But it turns out from an experimentalist perspective, this simple Hamiltonian of just sigma-z dephasing noise actually captures almost everything we care about. There are very few circumstances in which it doesn't. And in fact, even kind of the most quantum mechanically, well, the system most expected
to be fully quantum mechanical in its interaction with the bath, that is the central spin problem, is actually better modeled by this kind of Hamiltonian where you assume a fluctuating Overhauser field in singlet-triplet qubits than by the detailed model of a spin interacting with a sea of spins. So this is where we start.
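Written out (symbols inferred from the narration; setting hbar to 1, and normalisations vary by convention), this starting point is:

```latex
% Unperturbed qubit splitting \Omega plus a classical dephasing noise \beta(t)
H(t) = \tfrac{1}{2}\bigl[\Omega + \beta(t)\bigr]\,\sigma_z
```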
And then you can say over time you end up with some accumulation of phase that's the error in your system in the rotating frame. If you apply some dynamical error suppression which involves a series of pulses as we heard about, you can calculate what the net error is at the end by taking this nasty convolution of this time fluctuating beta of t and this control sequence, and you can try and
calculate that phi. You can do it, but it's certainly not intuitive as an approach to understand what kind of error you get. And it's pretty nasty mathematically. What's really nice is that we can exploit the fact that a convolution of the control in the time domain with the noise gives us a product in the Fourier domain.
And this was shown by many people, and I'm sure NMR did this 60 years ago. But Uhrig and Cywinski wrote a few papers a few years ago that were really lovely in calling out these relationships quite explicitly, effectively taking any arbitrary sequence and writing down in the Fourier domain what's now called the filter function.
This is a spectral function that defines the action of your control sequence. So if we have some noise that's characterized in the lab statistically by a power spectrum, a power spectral density, this one is arbitrary, pulled off the web. This is what we have to worry about, it's what we want to suppress.
So if we make our sequence by modifying the locations of the pulses in our control such that it filters out the parts of the noise power spectrum that are large, you suppress errors. This is the simple way it works, and you can write this down in terms of coherence function W which is an exponential of this chi of T where chi is this integral, a product of S,
the noise, and F which describes the action of your control sequence. It's that straightforward. And coherence is preserved so long as this filter function is small where the noise is large. Now you can take these filter functions and calculate them using this analytical formula
numerically for an arbitrary pulse sequence, and then you can analyze the filter function and interpret its effect using graphs like this. This is the filter function on a log plot as a function of frequency in some dimensionless units. And there are a few characteristics that are important to call out.
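In one common normalisation (conventions differ between papers, so take this as a representative form rather than the exact one on the slides), the coherence relation just described reads:

```latex
W(\tau) = e^{-\chi(\tau)},
\qquad
\chi(\tau) = \frac{2}{\pi}\int_0^{\infty}
   \frac{S(\omega)}{\omega^{2}}\, F(\omega\tau)\,\mathrm{d}\omega ,
```

so coherence is preserved as long as $F(\omega\tau)$ is small wherever $S(\omega)$ is large.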
The first is around zero frequency, that is in the low frequency limit, there is some slope to how this filter function increases. And it can be shown rigorously that this slope entirely captures the order of error suppression that we talk about in perturbative expansions. So the more steep this is, the better the order of error suppression.
We can then talk about things from filter design theory in electrical engineering or in digital signal processing, where we talk about the 3 dB point of our filter. Where does it start to turn on? What's the stop band? What region of frequency space does it reject and what region does it pass? All these things come from a very simple mathematical formalism, and it's important to note that
these filter functions are always high pass because if things fluctuate very rapidly compared to the interpulse time as we heard, then the noise gets through unimpeded and you don't do well with these sequences. Now this is nice to me because I can now understand the action of an arbitrary control
sequence by examining this thing that I can calculate, in a way that's very similar to the way that I choose electronics. When I go to Mini-Circuits, I don't calculate overlap integrals, I look at the filter response of a high pass filter. And I say, well, okay, this has a 3 dB point at roughly the frequency I care about, because I have some noise at 10 megahertz and it's sufficiently low.
This gives 80 dB of suppression, so this is the right filter for me. I can do the same thing now in a quantum control setting. And what's really nice about this is, in addition to a simple analytical approach to compare sequences, we can also get some information that doesn't come out quite so explicitly.
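As a concrete illustration, here is a minimal numerical sketch of such filter functions, using one standard closed form for instantaneous pi pulses (the exact normalisation in the talk's figures may differ); the CPMG and UDD pulse timings below are the standard ones:

```python
import numpy as np

def filter_function(pulse_times, tau, omega):
    """Dephasing filter function F(omega*tau) for instantaneous pi pulses
    at pulse_times within [0, tau], using the standard form
    F = |1 + (-1)^(n+1) e^{i w tau} + 2 sum_j (-1)^j e^{i w t_j}|^2."""
    n = len(pulse_times)
    y = 1.0 + (-1.0) ** (n + 1) * np.exp(1j * omega * tau)
    for j, tj in enumerate(pulse_times, start=1):
        y += 2.0 * (-1.0) ** j * np.exp(1j * omega * tj)
    return np.abs(y) ** 2

tau, n = 1.0, 4
# CPMG: equally spaced pulses at t_j = (j - 1/2) tau / n
cpmg = [(j - 0.5) * tau / n for j in range(1, n + 1)]
# UDD: pulses at the irrational fractions t_j = tau sin^2(pi j / (2n + 2))
udd = [tau * np.sin(np.pi * j / (2 * n + 2)) ** 2 for j in range(1, n + 1)]

omega = np.logspace(-1, 2, 400) / tau
F_cpmg = filter_function(cpmg, tau, omega)
F_udd = filter_function(udd, tau, omega)
# Both filters are high-pass (F -> 0 as omega -> 0); UDD falls off more
# steeply at low frequency, i.e. a higher order of error suppression.
```

Plotting these on a log-log scale shows exactly the features described in the talk: the low-frequency slope (order of error suppression), the 3 dB point, and the stop band.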
For instance, the Uhrig dynamical decoupling sequence is really very interesting and I'll talk about this more in a few minutes. But in addition to the price that Daniel called out yesterday in terms of the number of qubits, there are other prices. For instance, in order to make this sequence behave as expected, because the pulse locations are irrational, you need infinite precision in order to do the sequencing.
And if you start to impose things like clock periods, that you can only define the location of a pulse with a certain precision, you see that the filter function, which is very steep here, meaning that noise in this regime is suppressed very strongly, starts to creep up. And as the precision is reduced, the effectiveness of this sequence gets squashed.
So this is revealed just by the numerics of the filter function, and I can understand it just by looking at this, much the same way I select filters in an electrical setting. So in order to deal with that, there was a study that we started with Lorenza and Dave
Hayes and Kaveh Khodjasteh looking at sequences that we call digital modulation sequences, whereby we no longer rely on these irrational locations for pulses, but instead impose a constraint that all pulses occur with inter-pulse periods that are some multiple of a minimum period or a clock period. And what we caught onto was this idea of the Walsh functions.
These are a family of square waves, square wave analog of the sines and cosines in some respects. And this family of functions that were studied a lot in the 1970s for communications turned out to be really interesting because each function, which is a square wave in some form,
can be affiliated or associated directly with the control propagator for a dynamical decoupling sequence. The transitions correspond to pulse locations, in a kind of diagram that's familiar if you are in this body of literature. And it has obvious benefits. It's digitally compatible. These sequences are extremely easy to generate. Each one of them, even though they look funny, can be generated just by multiplying together
periodic square waves, which is great because it's compatible with very simple digital control electronics. In hardware, you don't need a full microprocessor to do the sequencing. It's a very nice unified mathematical framework with all sorts of benefits. For instance, these red curves that I've called out here are concatenated dynamical
decoupling. They are the CDD traces of different orders that pop out immediately from the Walsh family of sequences, and there are many others that are of interest. If you want to hear more about this, you should see Kaveh Khodjasteh's talk on Thursday, so I'd encourage you to come to that or ask me questions afterwards. What about doing this in the lab? What about really doing experiments?
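Before turning to the lab, the Walsh construction described above can be sketched in a few lines. This uses the Paley (dyadic) index convention, which is an assumption on my part; the mapping of sign changes to pulse locations follows the description in the talk:

```python
import numpy as np

def walsh(k, x):
    """Walsh function w_k(x) on [0, 1) in Paley (dyadic) ordering:
    the product of the Rademacher square waves r_j(x) for every
    bit j that is set in the index k."""
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)
    j = 0
    while k:
        if k & 1:
            # r_j(x): a square wave with 2**j full periods on [0, 1)
            out = out * np.where(np.floor(2 ** (j + 1) * x) % 2 == 0, 1.0, -1.0)
        k >>= 1
        j += 1
    return out

# Sample each of 8 clock intervals at its midpoint.
t = (np.arange(8) + 0.5) / 8.0

# Sign changes of w_k mark pi-pulse locations on the digital grid:
# w_1 flips once at tau/2 (spin echo);
# w_3 = r_0 * r_1 flips at tau/4 and 3*tau/4 (two-pulse CPMG).
print(walsh(1, t))
print(walsh(3, t))
```

Each Walsh function really is just a product of periodic square waves, which is what makes the family compatible with simple digital control electronics.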
Our experimental platform is a crystal of trapped ions. Each blue dot here is a single beryllium ion, and they fluoresce at about 313 nanometers, which we can realize through some high-precision laser systems. When we laser cool them using a simple Doppler cooling, they crystallize into these nice arrays.
The crystal structure has been studied extensively in the 90s and early 2000s, and anybody who's interested in the system can ask me more. For the sake of time, I won't go into it very much. These are in a Penning trap, which is a slightly different kind of trap than you may be used to in ion traps. It's certainly different than what Rainer will talk to you about tomorrow, but the generalities
are similar. We get some three-dimensional charged particle confinement using electric and, in this case, magnetic fields, and then we use the level structure of the trapped ions as a means to realize a quantum mechanical manifold. This is the level structure of beryllium at 4.5 Tesla, which is used for our trapping. I'll call your attention to a transition here. It's a pure electron spin flip transition at 124 gigahertz, which is a nasty frequency,
but it's something we can control. And there's a strongly allowed cycling transition between the upper stable manifold and the 2P3/2 excited state in beryllium, which is used not just for Doppler cooling, but also as a form of projective, state-selective
readout. The up state is bright when you shine this laser on it, and the down state is not, so we can measure our qubits in a projective way. If we do quantum control experiments, and I'll skip over a lot of the details of how we do that technically, we can, for instance, do a Ramsey experiment and see that over some time of order milliseconds, at the time we were doing these experiments, we get some
decay in fringe contrast if we're measuring the population of being in the up state or the down state. And this, of course, is due to some random term in the Hamiltonian, some noise, magnetic field fluctuations, that causes a net decay of coherence. Now here, T2 is about two and a half milliseconds, and it's pretty straightforward and not at
all unexpected that by applying some chains of pulses, these are CPMG, multi-pulse spin echo, we can take that, the data are just compressed here and the bottom scale is expanded, and we see that the coherence time can be improved, about ten times for ten pulses in the system. And that's fine, but it's not very exciting.
But what's more exciting is how well the filter function approach works, right? So this is now error probability as a function of the length of an experiment for different numbers of pi pulses. The different dots are two different sequences, UDD in open markers and CPMG in black markers.
What you see, of course, is that as you increase the number of pulses, the coherence stays good longer, and low error is good. More importantly, what we can do is generate these solid, and in this case dashed, curves that give us a theoretical fit
to our data, just using the measured noise power, so this is what we actually measure by putting an antenna into our magnet and measuring the fluctuation, and the analytically defined filter function for our sequence, right, appropriate for CPMG or UDD, and we spit out these curves, and you can see that with only a single free parameter,
which is just the strength of the noise, because there's an inductance in this antenna that we don't know. We get extremely good agreement between data and theory, and in particular, the presence of this funny spur at 153 hertz, which it turns out was due to a chiller, three labs down the hall, is entirely responsible for these funny bumps and wiggles that you see
at intermediate times. All we've done is taken the overlap integral of the filter function and this measured noise, and we get this kind of agreement, which was very exciting for us. This simple technique works extremely well. Now, we can do some other things. As you may have seen, we don't get very much of a difference between CPMG and UDD. At the time, we were really interested in demonstrating for the first time that this
UDD approach of modifying the filter function for a particular kind of noise, as we heard with a high-frequency cutoff that's sharp, can work, and so in our microwave system and replacing a stable oscillator with a frequency-modulated oscillator, we can generate noise in our system. This is noise in the control with a power spectrum that mimics something of interest,
right? So this is 1 over f, well, it's 1 over omega, with a sharp high-frequency cutoff, and this is something that looks like omega, ohmic noise. The upshot is we can model the dynamics of other quantum systems and probe in detail the functionality of this Uhrig approach.
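Engineered noise with a target spectrum can also be mimicked in simulation. This is a generic inverse-FFT method with randomized spectral phases, not the talk's actual microwave FM technique, and the spectra below are toy stand-ins:

```python
import numpy as np

def noise_from_psd(psd, n, dt, rng):
    """Generate a real time series of length n (sample spacing dt) whose
    power spectrum follows psd(f), by drawing a random phase for each
    spectral amplitude and inverse-FFTing. Normalisation is left loose;
    for probing filter functions, only the spectral shape matters here."""
    f = np.fft.rfftfreq(n, dt)
    amp = np.sqrt(psd(f))
    amp[0] = 0.0  # drop the DC component
    phases = rng.uniform(0.0, 2.0 * np.pi, size=len(f))
    return np.fft.irfft(amp * np.exp(1j * phases), n)

rng = np.random.default_rng(0)

# Toy spectra like those in the talk: 1/omega with a sharp
# high-frequency cutoff, and ohmic noise (S proportional to omega).
one_over_f = lambda f: np.where((f > 0) & (f < 100.0),
                                1.0 / np.maximum(f, 1e-9), 0.0)
ohmic = lambda f: np.where(f < 100.0, f, 0.0)

beta_f = noise_from_psd(one_over_f, 4096, 1e-3, rng)
beta_ohmic = noise_from_psd(ohmic, 4096, 1e-3, rng)
```

Feeding such a synthetic beta(t) into a dephasing simulation is one way to probe how CPMG-type and UDD-type sequences respond to different spectral shapes.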
And so, just very, very quickly, as the noise gets stronger, so this is injected noise in an ohmic setting relative to the background, which is this dashed line. That's the stuff I showed you a few minutes ago. As the noise strength that's artificial goes up, the relative performance of CPMG and UDD gets flipped. So this was the first demonstration that this UDD approach, when we have strong high-frequency
noise, will in fact give a benefit, right? So this worked. It was really kind of nice to us. But we can go much further than that. We don't need to stick with things that are defined analytically in some arbitrary and idealized way. We can actually do feedback, measurement feedback and autonomous control, in order to
generate new sequences that are numerically optimized. So what we did here is, well, this is CPMG and UDD, and then we pick a particular value of time. This is the length of our experiment; this is a semi-log plot. And at that point, we start a multidimensional Nelder-Mead search algorithm that moves the
pulses around relative to one another to find the minimum error at that point. And then we can trace out, and we find that we get even better error suppression by doing this numerical optimization. What we're doing is tailoring the filter function of our sequence to the actual measured noise in our system. And here, this is injected high-frequency noise.
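A toy version of that closed-loop optimization (my own sketch, using SciPy's Nelder-Mead on the overlap integral of the filter function with a synthetic ohmic spectrum, rather than on measured data; all parameters are invented):

```python
import numpy as np
from scipy.optimize import minimize

def filter_function(omega, tau, deltas):
    """|y|^2 for instantaneous pi pulses at fractional times deltas in (0, 1)."""
    n = len(deltas)
    y = 1 + (-1)**(n + 1) * np.exp(1j * omega * tau)
    for j, d in enumerate(deltas, start=1):
        y += 2 * (-1)**j * np.exp(1j * omega * d * tau)
    return np.abs(y)**2

def chi(deltas, tau, omegas, S):
    """Overlap integral chi = (2/pi) * int S(w) F(w) / w^2 dw (trapezoid rule)."""
    y = S * filter_function(omegas, tau, deltas) / omegas**2
    return (2 / np.pi) * np.sum((y[1:] + y[:-1]) / 2 * np.diff(omegas))

tau = 1.0
omegas = np.linspace(0.1, 60.0, 2000)
S = np.where(omegas < 40.0, omegas, 0.0)      # synthetic ohmic noise, sharp cutoff

n = 6
cpmg = np.array([(j - 0.5) / n for j in range(1, n + 1)])

def cost(d):
    # reject pulse orderings that leave the interval or cross each other
    if np.any(d <= 0) or np.any(d >= 1) or np.any(np.diff(d) <= 0):
        return 1e6
    return chi(d, tau, omegas, S)

res = minimize(cost, cpmg, method='Nelder-Mead', options={'maxiter': 2000})
print('CPMG chi:', cost(cpmg), ' optimized chi:', res.fun)
```

In the experiment the cost function was a measured error rather than a computed integral, but the search over pulse times is the same idea.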
So we can do better than UDD. And in this case, these values are kind of meaningless because we're injecting very strong noise to swamp the background. But the upshot is we can do better by these numerical techniques. Now, I wanted to give an interlude because there's been a fair bit of discussion about this UDD sequence.
We heard about it yesterday. If you're not familiar, it's an optimized sequence that gives very nice scaling in the order of error suppression with the number of pulses. I was obviously very interested in studying it, but I've come to the conclusion over the past couple of years that it's likely to meet its end at some point soon. I don't think it's going to prove to be very useful.
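For concreteness, here is a small sketch (mine, not from the talk) of what that scaling looks like in filter-function terms: the n-pulse UDD sequence places the j-th pi pulse at delta_j = sin^2(j*pi/(2n+2)), versus CPMG's delta_j = (j - 1/2)/n, assuming ideal instantaneous pulses:

```python
import numpy as np

def filter_function(omega, tau, deltas):
    """|y|^2 for instantaneous pi pulses at fractional times deltas in (0, 1)."""
    n = len(deltas)
    y = 1 + (-1)**(n + 1) * np.exp(1j * omega * tau)
    for j, d in enumerate(deltas, start=1):
        y += 2 * (-1)**j * np.exp(1j * omega * d * tau)
    return np.abs(y)**2

n, tau = 6, 1.0
cpmg = [(j - 0.5) / n for j in range(1, n + 1)]
udd = [np.sin(j * np.pi / (2 * n + 2))**2 for j in range(1, n + 1)]

# At low frequency the UDD filter rolls off far more steeply than CPMG's,
# which is exactly the improved order of error suppression.
for w in (0.1, 0.3, 1.0):
    print(w, filter_function(w, tau, cpmg), filter_function(w, tau, udd))
```

The steeper low-frequency roll-off is the "very nice scaling" referred to above; the flip side, discussed next, is how sensitive those irrational pulse times are to timing precision.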
The fact that you require this infinite precision in the pulse sequencing in order to get the benefits, its incompatibility with digital clocking, some other effects really suggest that it may not be the solution that's best tailored to large-scale systems. I'm sure it can find some niche applications, but I don't think it's the be-all, end-all
that some people anticipated originally. That doesn't mean it's not useful. What's really important about the Uhrig sequence is that it made us think about these problems of quantum control in a very different way. Uhrig's work gave us the filter function, this filter-design approach, or the filter-function analytical method.
He made us think about pulse timing as a degree of freedom that hadn't really been considered in quite as much detail previously. It's had a huge amount of benefit, but I personally am not convinced that it's going to have a very long future. It may. I may be wrong. We did a number of studies looking at different kinds of optimization published in a variety
of outlets a few years ago, but there's a key point that I wanted to get across, which is why this worked. In our system, it worked because our quantum control had pretty good fidelity. This was up on one of the slides earlier today that using this crystal and randomized
benchmarking, we got a single-qubit gate fidelity of 99.92%. The real big change from our original results was that we moved from laser-mediated gates to microwave-mediated gates. In this system, again, 124 gigahertz is a pretty nasty place to work. This was pretty good for us, but it's this very high fidelity that gave us the ability
to do these studies, measuring quantitatively the effects of environmental noise rather than just the effects of bad pulses. What I've told you about so far addresses this gate set by only focusing on one operation,
that is memory or the identity operator. What we're really interested in is expanding this general approach, filter design and quantum firmware and quantum control for error suppression towards other things that are of interest to anybody who's looking to apply a universal gate set. I wanted to mention something briefly. I think the decouple-then-compute strategy is incomplete in that it ignores what happens
during the operations, during the compute part. I think we need both approaches. We need to worry about the decoupling, and then we need to make these things robust against errors as well. The key question is how do we accurately calculate P for a particular gate in the presence
of environmental noise that's often time-dependent, and how do we improve gate error? This is the subject of some work I'll talk to you about for the next few minutes. It's work done by an extremely talented undergraduate student, Todd Green at Sydney, and I think it will relate to some of the work we'll hear about later today and maybe
tomorrow as well. If you just think about performing some quantum operation that's non-trivial, for instance a pi over two pulse where we go from the North Pole to the equatorial plane in this simple depiction, if we have some non-zero detuning error, this is written in the lab frame, what
you find, and this is early quantum mechanics, first-semester quantum mechanics, is that the net effect of that operation gives you a rotation about a shifted, tilted axis, and that rotation is incomplete. You perform some operation that's not what you wanted; you don't end up on the green dotted line, you end up somewhere over here. But what's important to note there is that
a pure dephasing environment, this is just a sigma z error, during a control operation will give you this general depolarization error. You end up off the equatorial plane as well as accumulating a phase. What becomes really nasty is that if you think about this delta being a function of t, it becomes difficult to treat this analytically.
You're not rotating about some fixed tilted axis, it's now an axis that moves in time. And it's again important to note that this is not so abstract as to be useless, this captures a wide variety of environmental sources, but also intrinsic sources that we have to start worrying about when we think about error rates at the fault tolerance level.
The most important of those is instability in the master oscillator. Phase noise in a master oscillator is manifest as this kind of z noise, delta of t, right? The frequency of your oscillator is changing in time in a statistical way. So what we use in order to address this problem is effective Hamiltonian theory. It goes back to some work from Lorenzo and collaborators many years ago, where we take this Hamiltonian,
and I apologize, I've again changed notation, now it's eta of t for the noise, and we can write down an effective Hamiltonian and a propagator that looks like the effect of a time-independent average Hamiltonian instead of a nasty time-dependent Hamiltonian. These are some technical details of how we do it, such that if we wish to implement some gate O, some operation O, you write down the total propagator for it as O times some error term. So this exponential captures the error. And if you have questions about this, you can grab me afterwards.
The question then is, can we apply this to some non-trivial gate and calculate the effect on the fidelity of noise during this gate? Yes we can; we use the trace fidelity. We get something that looks like an exponential of these terms here, where the subscript l runs over the Cartesian coordinates, so these are the terms that are proportional to the Pauli
operators x, y, and z, such that our average fidelity in the presence of this environmental noise is given by a form very reminiscent of the calculation I did before, where the coherence is e to the minus chi of t, and chi is some overlap between noise and a filter function. Here is something that, after a first-order Magnus expansion, looks very much like
a filter function, where each one of these corresponds to x, y, or z. So now, for a non-trivial gate, not just the identity operator, we can write down a filter function that captures the effect of the quantum control and the effect of the noise.
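In symbols, and only schematically (my paraphrase of the slide, in the notation used above, with S(omega) the dephasing-noise spectrum and F_l the per-quadrature filter functions from the first-order Magnus expansion):

```latex
\begin{align}
  \mathcal{F}_{\mathrm{av}}(\tau) &\approx \tfrac{1}{2}\left[\,1 + e^{-\chi(\tau)}\right],\\
  \chi(\tau) &= \frac{2}{\pi}\int_{0}^{\infty}\frac{\mathrm{d}\omega}{\omega^{2}}\,
      S(\omega)\sum_{l=x,y,z} F_{l}(\omega),
\end{align}
```

which reduces to the memory (identity-gate) calculation from earlier when only the z filter function is non-trivial.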
So for some class of sequences that are of interest, this is a subset of what we can do. We can apply this to arbitrary modulation, but it's useful to think about sequences where we can apply some arbitrarily chosen rotation rate and some arbitrarily chosen rotation axis, but we just apply the constraint that each segment of our piecewise defined
control gives us a pi pulse. You apply a pi pulse, you apply a pi pulse about a different axis, dynamical decoupling is captured here because you do pi pulses about x and then two pi pulses in a row about z, so identity operator. So these piecewise defined control functions capture a broad class of things that are of
interest. So what's nice is we can write down, using this, closed form solutions of the filter function. You can write down filter functions analytically for, again, any sequence, but it's neater if you make this requirement. And they have some more terms that I can, again, explain to anybody who's interested, but what's very important is that not only do we have a filter function for the effect
of dephasing; we now have a filter function for the effects of depolarization, the buildup of x and/or y errors in our system when we're applying control in the presence of dephasing noise. And it's nice, mathematically, that if you look at the pre-factors here, they look
very reminiscent of what you expect from a master-equation treatment of a driven harmonic oscillator in the presence of some damping. It's kind of as expected: we're doing a driven rotation in the presence of some dissipation, and of course, these are important only in the statistical ensemble.
So what can we do with this? Well, we can treat a couple different non-trivial gate constructions that are of interest. Of course, there's the simple trivial gate, there's a pi pulse, and then there's the dynamically corrected gate. This is what Lorenzo told us about yesterday, where you have a series of pi pulses and then
pi pulses that go in the other direction, so it's just changing the phase of your oscillator. And then at the end, there's a pulse that just takes twice as long and is half as large in amplitude, but it's all just pi pulses. So we can use this filter function approach that I told you about a moment ago and try and analyze the performance of these things. Because while the general performance has been studied in a perturbative approximation from
previous work by Lorenzo and Kaveh, I swear to God that when I look at Cayley graphs, my mind starts spinning, and I think this filtering approach is, as an experimentalist, a little more straightforward. So here are the filter functions for the simple pi pulse and the DCG gate.
Black is the pi pulse, the primitive, and red is the DCG. So what do you see? The order of error suppression is improved because the slope of the filter function near zero frequency is enhanced in the DCG. If you look at the high-frequency regime, there is the effect of an extended time: this thing is six times longer than the simple gate operation.
That means that if you have lots of noise up here, on this dimensionless scale, you're going to get hurt. It makes sense: if you make your gate longer, anything that fluctuates on that extended time scale is going to influence it in a way it wouldn't influence the primitive gate, right? So that's captured here. But then the effect of dynamic protection comes in here, where the order of the error
suppression is enhanced using this simple dynamic approach. And I won't talk about this, but that refers to the different quadratures. You can now calculate the total probability of error, the probability that you're not where you expected to be, but also the probability that you've ended up having an X error versus a Z error.
And we can validate this using some brute force numerics. Error probability is a function of time in some dimensionless units for a particular kind of noise environment. And the solid and the dashed lines are the calculations using this filter design approach, where we write down the filter functions and just take the overlap integral with
the noise. And then the data points are the outcomes of these detailed simulations, where you trace the Bloch vector over the Bloch sphere, and then you average over many iterations. And we get very good agreement, within about 10% or 20%. And remember, this is a logarithmic scale here, so a 20% error is very small.
And frankly, I don't care, and I don't think anybody here cares, if the error is 2 by 10 to the minus 4 or 2.1 by 10 to the minus 4. They care about the 10 to the minus 4. So this filter design approach, even though it's first-order Magnus expansion, really captures a lot of what we care about from a practical perspective.
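A toy version of that brute-force validation (my own sketch: a primitive pi pulse about x under quasi-static Gaussian detuning noise, with the trace-fidelity error averaged over realizations; all parameters are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def evolve(omega_r, delta, t, steps=100):
    """Piecewise-constant integration of H(t) = (omega_r/2) sx + (delta(t)/2) sz."""
    U = np.eye(2, dtype=complex)
    dt = t / steps
    for k in range(steps):
        H = 0.5 * omega_r * sx + 0.5 * delta((k + 0.5) * dt) * sz
        w, V = np.linalg.eigh(H)                 # exact exponential of each 2x2 step
        U = V @ np.diag(np.exp(-1j * w * dt)) @ V.conj().T @ U
    return U

omega_r = 2 * np.pi                              # Rabi rate
t_pi = np.pi / omega_r                           # duration of a primitive pi pulse
target = evolve(omega_r, lambda t: 0.0, t_pi)    # ideal, noise-free pi pulse

errors = []
for _ in range(200):
    d0 = rng.normal(0.0, 0.05 * omega_r)         # quasi-static detuning sample
    U = evolve(omega_r, lambda t: d0, t_pi)
    fidelity = abs(np.trace(target.conj().T @ U) / 2) ** 2
    errors.append(1 - fidelity)

print('mean gate error:', np.mean(errors))
```

The simulations described in the talk are the same idea with time-varying noise traces and the full Bloch-vector average; the filter-function prediction is then just the overlap integral, with no time-domain integration at all.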
So what comes next? Well, this was the microwave system that we used at 124 gigahertz. It's really nasty. It's custom-made. All these oscillators are involved, and all we can do with it is on-off pulsing. It's pretty restricted. We can replace that now by moving to a lower magnetic field and a lower frequency with a
box from Agilent that's expensive. But it does programmed vector IQ modulation, so we can do amplitude, phase, frequency control. And this gives us a new wide range of capabilities to perform dynamically-corrected gates and new kinds of continuous control that we've not previously studied.
And using these filter design approaches, we think we can do optimization as well. So here's the summary of my talk. I hope I've convinced you of the utility of this concept of quantum firmware and how we can develop new analytical approaches for noise filtering that capture the average effects very well without doing detailed time domain calculations.
We can treat things that vary in time using this average Hamiltonian theory. Filtering during gate operations is now something we can do, and I think we'll hear about it in one of the talks in a few minutes. And we've done a variety of experimental demonstrations at a small scale, but started to now consider what happens if we want to move to something bigger.
What happens if we think about constraints, as, again, Lorenzo talked about yesterday, that have not traditionally been of concern? At the lab, I can do my sequencing perfectly well using an FPGA or programmable logic device in a PC, but if I want to build something bigger, I need to worry about a different set of constraints. And, again, quantum computing isn't the only application. I wanted to acknowledge the collaborators, in particular Todd Green, who did the second
half of the work I talked about today, plug the quantum firmware collaboration, which has been very fruitful for me, and I've been very pleased to collaborate with Lorenzo and Kaveh and Amir, and then do a little bit of smarmy advertising about what a nice place it is to live in Sydney, and invite anybody who's interested to come and talk
to me. So, thanks for your attention. Okay, very nice. Questions? It's either good or bad. Ah, they're trying to make me exercise.
So, you're a strong proponent of this filter function viewpoint. It works very well for this limited class of models of classical dephasing noise. If your noise is more general than that, if you have spin flip noise, et cetera,
then you don't just have one filter function in principle, you have a filter function for every element of a chi matrix, or something like that. I just wonder if you have any comments, since you're going down this road kind of hard, on whether that's going to be fruitful when the noise is more general. Sure. So, I guess there's two parts to that.
First, well, there's three parts, I guess. We showed, first of all, you can write down filter functions that capture the average effects of errors during, if I ever get there, during control operations in the X and Y, so the amplitude quadrature. So, this general approach can work for characterizing statistical errors.
Doing this more general error model that explicitly accounts for T1, that's something tougher, but you have to keep in mind that most T1 processes are not reversible. If it's spontaneous emission, it doesn't matter, right? So, you just have some error probability, and this sets a different bound. If it's a coherent rotation error, that's something a little bit different, and you
can start to consider, I think, writing down filter functions for that. But again, I'm just going to emphasize that this classical dephasing model captures almost everything we care about. And if you have some other uncorrectable error coming from T1, you can just add that on top.
Any other questions? I should have just stayed back there. I have a question. It looks like you have a huge dynamical range because your coherence time is milliseconds and you're working at 100 gigahertz.
Did I understand it correctly? So in principle, yes. It's 124 gigahertz carrier in that particular experiment. In the new ones, it's 28. What matters is how long it takes to do the control, so it's the Rabi time. In those experiments, it was tens of microseconds, and we can get that down now to tens or
hundreds of nanoseconds, but the range is more limited than it first appears. So actually, all this stuff which happens at the edges of the pulse and all the errors associated with the pulse shaping, it's not a concern for you, right? It has absolutely not been a concern. Okay. Thank you.
Okay. Well, why don't we have the next speaker set up. Any other questions? And let's thank Mike again.