
Quantum Computing at Google and in the Cloud


Formal Metadata

Title
Quantum Computing at Google and in the Cloud
Subtitle
An update on Google's quantum computing program and its open source tools.
Series Title
Number of Parts
561
Author
License
CC Attribution 2.0 Belgium:
You may use, modify, and reproduce, distribute, and make the work or its content publicly available in unchanged or modified form for any legal purpose, provided you credit the author/rights holder in the manner they specify.
Identifiers
Publisher
Year of Publication
Language

Content Metadata

Subject Area
Genre
Abstract
Google is a world leader in technology and a major contributor to a number of open source efforts. Google's quantum hardware and algorithms teams have achieved a number of quantum computing "firsts". This talk will present the Google quantum computing architecture, an update on current hardware implementations, and a description of the design and philosophy of Cirq, Google's open source programming language for near term "NISQ" (noisy intermediate-scale quantum) computers.
Transcript: English (automatically generated)
Show of hands. How many people here are really more of a physics background? Okay. How many are really more of a computer science background? Okay, slight majority. My own background is in classical computer architecture and high-performance computing.
And so the talk I'm going to give here is from that perspective. I'm not a quantum mechanic, but I do understand how complex systems tend to go together and work. And I do understand computational models. One of the things, and I tend to like to approach this really as a hardware up kind of an approach.
So I'm not going to get into any interesting machine learning algorithms. But I am going to get from some basic concepts through a hardware model, through some fairly specific details on how we actually build a quantum machine. And then up to, this of course being an open source conference, some descriptions and an example of our Cirq quantum programming toolkit,
which was developed largely at Google. But we've got collaboration from academia, particularly in Europe. Now a lot of people find, well those of you who are physicists who have mastered quantum mechanics,
bless you, I spend most of my time in airplanes reading textbooks and trying to catch up with what I should have learned when I was much younger. But I've come to the conclusion that in fact it's okay to be intimidated by quantum systems. Because Newtonian mechanics is something that has an evolutionary advantage to understand.
When you throw a rock, it's going to follow a parabola. It may have taken us tens of thousands of years to understand the mathematics of a parabola. But even children can understand fairly quickly that if they're trying to hit something at a certain distance, they throw the rock with a certain force and a certain angle and that resolves. It's important. It matters. It can be a life or death thing. And so evolution is going to favor brains that are good at understanding and intuiting Newtonian mechanics.
But there has been no evolutionary reason to be able to have intuitions about quantum mechanics. And so it sort of hurts our brains when we look at it. If you go back and consider the origins of this, I mean the classical experiment,
and again I'm just talking for some slides that are here at the intro that you don't really need to see. You can imagine what I'm talking about here. You consider the experiment of the beam splitter experiment. Classic thing. You fire photons into a beam splitter. The split beams you put to two mirrors. You run those two mirrors into another beam splitter,
and then you put some kind of detector on the two paths coming out of that second splitter. Now, if my mental model of what light does, if my mental model of what photons are doing is Newtonian, I'm going to say, well, yes, obviously half the photons are going to go in each direction, then they're going to merge, then they're going to split again, and my detector should see the equivalent signal.
In fact, this is not the case, and this is rather perplexing. Now, quantum mechanics has a model that explains this rather neatly through linear algebra. But in order for that particular phenomenon to be explained by the equations, we have to be prepared to accept the notion that the photons are actually on both paths at the same time.
And again, this hurts my caveman brain, but it is observable, it is verifiable, there's a good solid mathematical basis for it. Now, given that a photon can be on two paths at the same time, there are implications for what we can do in terms of information theory. And so what we do is we construct qubits, a quantum bit,
that can have two values at the same time, two values that are superposed. It is not either one or zero, it's not even statistically maybe one or zero, it is both one and zero, concurrently to some degree of probability. I have not had the good fortune to have seen all of the other presentations today,
but I imagine you've seen some Bloch spheres out there, this three-dimensional sphere that's usually used to visualize a qubit. And so if I have a single qubit, it's a unit vector that's out there to some point on the sphere, and that has some degree of oneness and it has some degree of zeroness,
but the projection of the complex space ends up showing that I can be performing rotations around three axes on this value. In some cases that's going to affect that one-ness or zero-ness; in some cases it's not, it's going to be indirect: I'm affecting the phase, which may become a factor in some later manipulation of the qubit. So, okay, still nothing? Fine.
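To make that concrete, here is a minimal single-qubit sketch using the Cirq toolkit that comes up later in this talk (my own illustration, assuming current Cirq gate names such as `cirq.rx` and `cirq.rz`): a rotation about the X axis changes the balance of zero and one, while a rotation about the Z axis changes only the relative phase.

```python
import numpy as np
import cirq

q = cirq.LineQubit(0)  # a single abstract qubit, starting in |0>

circuit = cirq.Circuit([
    cirq.rx(np.pi / 2)(q),   # rotation about X: now equal parts |0> and |1>
    cirq.rz(np.pi / 4)(q),   # rotation about Z: only the relative phase changes
])

state = cirq.Simulator().simulate(circuit).final_state_vector
print(np.round(state, 3))               # complex amplitudes of |0> and |1>
print(np.round(np.abs(state) ** 2, 3))  # measurement probabilities stay ~[0.5, 0.5]
```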
Look, I'm just going to use my deck just to organize my thoughts, if you all don't mind. It would be nice, it would be nice. Yes, by the way, I mean, not that I want everybody heads down on their computers, but the slides are on there.
So, for once the people who are remote have the advantage. Oh, dear. No, but the people who are remote may have downloaded the deck already. Okay, so for those of you in the studio audience, there's a copy of this deck out there on the conference site.
You can pull it in and look at it locally. Anyway, so having these quantum bits is a really cool concept in terms of information theory, but how can we get at them? How can we manipulate them? The simplest quantum systems are fundamental particles, and again, you know, Xanadu is working with photons in this regard,
and they're doing some very cool stuff. This is the first time I've seen them present, and it was really interesting. But the smaller you get, an electron, a photon, yes, individual particles could be quantum bits, but they're hard to capture and manipulate. The first really successful experiments going back 10, 12 years now were using ions, a charged atom.
If I have a charge on the atom, I can push it around using electromagnetic fields, and it is nevertheless a quantum particle, and there's a lot of good work still going on in that space. But this is one of the few places in computing where being bigger is actually an advantage. We've spent decades trying to make things smaller and smaller and smaller, fit millions of transistors, billions of transistors onto chips.
When I'm trying to manipulate a qubit, I actually kind of like it to be big at this stage of technology because I want to be able to get control and measurement circuits around that thing in some way and pack them together. So the technology that we've been working with at Google, very similarly to what IBM and Rigetti have been doing,
are superconducting qubits, which is to say I've got a superconducting magnetic field that I'm generating in a particular point in space. These are not tiny. If I look at a naked quantum chip, I can actually see where the qubit would be. Now, I can't see the qubit because the qubit's only going to exist at like hard vacuum and much colder than deep space temperatures,
but nevertheless, I can see where it will be. It's really quite macroscopic. And the way we use these things is visualize, if you will, that the qubit space is sort of a plus sign with the field essentially in the middle. At each end of that plus sign,
I have an opportunity to align another plus sign. And at that interface, I have some degree of coupling possible, the coupling that I can use to create entanglement and to manipulate qubits in multi-qubit operations. And so what ends up happening is the closer the frequencies are of oscillation of the qubit,
the more likely they are to entangle, to interact, to couple. And so Xmons, which is the simplest thing to build, this is what we built our largest chips to use, these Xmon qubits function by direct coupling. These qubits are right next to one another. You bring the frequencies together, you can entangle them. If you move the frequencies apart,
this creates essentially an isolation and a resistance to operation. And on that basis, we can selectively manipulate the qubits. Now, one thing that is interesting for the computer science folks here, those of you who have written assembly language, try and imagine an assembly language where the only way you can get data into a register
is through an immediate operation. You could make a machine like that, it would be awkward and the compiler would be painful, but you could do it. And that's good because that's what quantum computers need today. These gate-level models, there is no load, there is no store. However, we can send instructions in that create values. And indeed, the whole operation of a machine is somewhat turned on its head.
The qubits don't move. The qubits are static. The data flows, literally flows through a computer. You've got inputs to gates, you've got outputs to gates, you've got signals going from transistor to transistor. When a quantum computer, at least of the technologies that we're using today, operates, the qubits are staying right where they are.
And we are in some sense sending instructions to those qubits so that they will do what we want them to do. In the case of the superconducting devices we have at Google, that ends up being in the form of microwave pulses, very carefully timed microwave pulses that are input to the device.
So there is a photo later in my deck if we ever get anything live visually, but you will see. Our quantum computer looks like pretty much everybody else's. A big suspended cylinder, which is essentially a dilution refrigerator, with a lot of cables coming out of it going to racks of equipment. Well, an important element of those racks of equipment is an atomic clock,
and we need atomic clocks to keep things synchronized tightly enough to actually function. The gates that we use, and this may have been touched on in the IBM talk, again, I missed it. There are single input gates, and there are multiple input gates,
much as there is in classical logic. There is something. Does this mean if we just... Well, I am actually on this slide, so what if we just... Well, I'll let you do what you've got to do. I'll continue for just a moment. So at any rate, there are unary gates, and there are binary gates,
and in fact there are multiple input gates. One of the things about these gate model computations is the concept of a gate is relatively virtual at this point. We're still trying to figure out what we need to do. Just as you can build an entire quantum computer, pardon me, an entire classical computer out of nothing but NAND gates, it would be foolish, but a NAND gate is sufficient... No, no, still not there.
A NAND gate is sufficient to build any logic circuit, though it would be foolish to do so. A Toffoli gate is, in principle, sufficient to build any quantum gate model system. Okay, we are getting close.
Okay, you missed the cute caveman picture, but it's cool. Now what's interesting is I don't see here what's on the screen. So forgive me if I look back nervously over my shoulder from time to time.
So here is a diagram, this is from a paper, there's a reference to it that was published by our team actually about five years ago now, and in fact the next slide will show you what it's actually doing that is interesting, but I like this because this is showing the execution at a couple of levels.
You've got your qubits there, there's three qubits, that was always required to do this particular operation. There is a phase where we're preparing an initial state, then we are sending various operations out there, so you can sort of see these symbolic microwave pulses there along the thread,
and what we end up getting is the result, but the result is encoded in the phase of the qubit that we're looking at. Well, the way these machines work is by having a resonator circuit that's close to the magnetic field, and that resonator circuit is going to read a higher value the higher the amplitude of the field. So in the end, to read anything out of this machine, we have to convert it into amplitude.
So basically we're running a phase-to-amplitude conversion in this final step, and then we do our measurement. Now, what was that doing? It was simulating the hydrogen molecule. The output was the bond energy that varies with the distance between the two nuclei,
and so we were very pleased. Again, this is almost five years ago now, I guess, that they did this, and the blue dots on this line are the actual measurements from the quantum machine, the Google quantum machine. The red dots there are IBM's data.
They were able to confirm our work, and we're very pleased to see that it took them a couple years longer, and they still didn't get as good a resolution. But all credit to IBM. They put their machines on the web early. So again, a lot of respect there, but we do take some pride in the quality of our work.
So perhaps other people talked about this. I'll address it here in another way of visualizing it, because sometimes these concepts, if you're not used to them, it's useful to see things in several ways. It's useful to read a couple of textbooks on the same subject to really grasp the matter. So here what we're looking at is a computation as sort of a 3D volume.
At the back there, those little circles with the arrows, those are our actual qubits, and those arrows are all pointing up because we've all set them to an initial zero state. And then we start applying gates to those qubits in place. So you can see some little cubes. Those are unary gates, single input gates. And then you can see some rectangular boxes which represent two qubit gates,
operations that are being applied to those qubits. Now those things can be done within limitations, which I'll get to later, in parallel. And then at some point when we finish the algorithm, there's a phase where we measure all the qubits that we're interested in. Now the problem is that every time we do an operation, there is a probability that we're going to decohere.
And so nothing is free, and particularly not in quantum. So depending on that fidelity factor and depending on just sort of the absolute stability of my system, I have a variable number of steps that I can do before I have to measure, before it becomes highly probable that my measurement is meaningless because something has gone wrong.
So two qubit gates are more dangerous than single qubit gates. They tend to intrinsically have a lower fidelity. And so part of the art of quantum algorithm development is to minimize the number of entanglement operations, minimize the number of conditional operations that need to be done, and put things into unary gates.
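One very concrete habit this encourages is simply counting the entangling operations in a circuit before you run it. Here is a minimal sketch of that, using the Cirq toolkit described later in this talk (my own example, not from the slides):

```python
import cirq

q = cirq.LineQubit.range(3)
circuit = cirq.Circuit([
    cirq.H(q[0]),
    cirq.CNOT(q[0], q[1]),   # two-qubit (entangling) gate
    cirq.T(q[1]),
    cirq.CZ(q[1], q[2]),     # two-qubit (entangling) gate
    cirq.H(q[2]),
])

# Two-qubit gates dominate the error budget, so count them separately from depth.
two_qubit_ops = sum(1 for op in circuit.all_operations() if len(op.qubits) == 2)
print(f"moments (depth): {len(circuit)}, two-qubit operations: {two_qubit_ops}")
```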
Another factor of these sort of early machines, like the model I just showed you of the hydrogen molecule, there were some unary gates in there that were really quite complex and really unique to that particular problem. If I'm building a general purpose computer, general purpose tool, portable programming, things like that, I'm probably going to want to have some fairly general operations.
But because fidelity is relatively low, because my reliability is relatively low, it really behooves me to minimize the number of steps. So having a large number of, you know, it's one of these RISC versus CISC things. CISC wins in NISQ, noisy intermediate-scale quantum systems.
So we're building these qubits in arrays. And where we are on our particular development roadmap is we're making them larger and larger and better and better. And then there's a certain amount of feedback that goes in because every time we make them larger, we start seeing secondary effects that we had not observed at the smaller scale. Those need to be dealt with. We get better at the control logic.
We get better at how the silicon actually has to be laid out. And so what we're shooting for in the very near term is what's referred to as quantum supremacy. I'll explain a little bit about that in a moment. But not much beyond this quantum supremacy threshold, we expect there to be some useful applications that people will actually be able to get out of these NISQ machines.
It's a limited subset of what quantum computing will be able to ultimately do, but there are some of them out there. And we're hoping that we'll keep a whole sector of industry and a couple generations of grad students going. And then once we get to a certain scale, then we will actually have error-corrected machines.
An error-corrected quantum device uses logical qubits, so you're doing exactly what you were doing before. Your thought processes of algorithm design are quite the same, but the underlying hardware and, for want of a better word, firmware, the classical firmware that's controlling the system, is really quite different because to have a single logical qubit
that is preserving my value and phase information for seconds and seconds, hours and hours, days and days, I need periodically to regenerate things. And that's tricky because quantum mechanics doesn't actually allow us to make a copy of a qubit. We can measure it. We can teleport it in a certain sense. We can cause it to be recreated by converting to classical and back again.
But this basic act of just give me a copy of this and I'll check at some point whether I'm happy, you can't do that. So the algorithms are far more subtle and they're far more expensive in terms of qubits. The better your qubit, the less overhead you have. And this is why various people are taking very aggressive
and exotic approaches to constructing qubits in hopes of having a higher fidelity so they can reduce their overhead. We'll continue looking at that. We'll continue working at that. But we're close enough to having something that we think is workable where we're moving ahead. But at today's technology, it's going to be about a thousand physical qubits to get a logical qubit. So to get a really good quantum computer,
we're going to need large numbers of physical qubits. We're pushing hard and we'll get there. So this quantum supremacy experiment I alluded to is something that's somewhat controversial. I was never wild about it because I'm an old school computer architecture guy and when you build a computer, you build it to do something specific and useful,
whether that's weather forecasting or video gaming. It's doing something you know what you want to do. Quantum supremacy, as described by the guys on the team, is a question of – well, first of all, what is it? It's doing something with a quantum computer that you couldn't do with a classical machine no matter how big it was.
Now, as a classical architect, I have a little bit of a problem with that because give me enough centuries and enough energy, I could build a really big classical machine. So never say never and all that. But nevertheless, to show this advantage, to show the fact that there are things that you could not do reasonably with a classical machine.
But then you get to the problem of, okay, I've done a computation on my quantum machine that can't be done on a classical machine. How do I know it was right? And that's where the subtlety comes in. That's where I really respect the guys that worked on this. The notion is that – this is an example. I don't even know if this is a candidate, but at any rate, you generate random circuits.
They don't do anything particularly useful, but they're generated according to particular rules. And those rules cause the measured output to have a certain well-defined statistical distribution. Now, while I cannot simulate the actual circuit on a classical supercomputer, or Google cluster is what we use,
much the same thing, really, we can compute what the statistical distribution is going to be, and these statistical distributions have a really nice instability property, which is to say, if everything is working, I get this very nice, exponential-looking curve. And if an error happens, you get the curve labeled multiple errors, which is a flat line.
I've seen more recent simulation results that show even a single error on a two-qubit gate will cause the thing to essentially flatline. So you can tell pretty quickly whether it did the right thing. And again, these things being unreliable, the way one tends to run these things, you do a lot of runs, you do a lot of measurements, and you accumulate the data and you look at it.
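For flavor only, here is a toy version of that idea in Cirq, which is introduced later in this talk (my own sketch, not the real benchmark; production supremacy circuits are far larger and generated under strict rules, and `cirq.testing.random_circuit` is a test helper I'm assuming is available in recent releases): simulate a small pseudo-random circuit and look at how the output probabilities are spread.

```python
import numpy as np
import cirq

qubits = cirq.LineQubit.range(5)
# A small stand-in for a random circuit; the real experiment uses many more
# qubits and carefully constructed gate sequences.
circuit = cirq.testing.random_circuit(qubits, n_moments=20, op_density=0.8, random_state=42)

state = cirq.Simulator().simulate(circuit).final_state_vector
probs = np.sort(np.abs(state) ** 2)[::-1]
# A scrambling circuit gives the characteristic spread of output probabilities;
# an error-dominated run would look much flatter.
print(np.round(probs[:8], 4))
```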
So we can throw away the bad runs. But the point being, once we start getting good runs, we know we will have achieved this quantum supremacy. And then beyond quantum supremacy, the exciting stuff is error correction. So this is at the lowest level what's going on. The algorithms are really quite complex,
but at the lowest levels, what's going on, if you can see on that diagram, and for those of you who have no visual, I'm really sorry, the little white dots are the actual qubits that are being used for the computation. This is what we refer to as the data qubits. The black dots are essentially what we refer to as measurement qubits, and those are just there to see if something has gone wrong.
And because we can have potentially both data value errors, z-axis errors, and phase errors, x-axis errors, then we actually need to have both x and z. So those are the yellows and greens there. That actually only looks like two to one overhead, but that's just the beginning,
because every logical qubit is still going to need more than one of those data qubits. Now, this requires a lot of qubits, and it requires them in a fairly regular grid to use this kind of a model. And so we got to this hardware-wise in a stepwise manner that is instructive. So if you look up here, there's a diagram.
You can see this is a photomicrograph, and you can even see the University of California Santa Barbara logo at the top of the picture there. We were using the UCSB fab line for the longest time because the team is in Santa Barbara. They were from the Santa Barbara research team when they came to Google, and they knew how to use that line. And you do not need, as I say, these are coarse-geometry things.
You can see the qubits. We don't need the latest Intel or TSMC technology to build this stuff. So you can build it on a student line. And so those plus signs are the qubits. The little squiggly lines above them, those are the resonators. That's what's actually doing the measuring. And then what's coming in from below are the control signals. And so this is a linear array, nine qubits,
and here we managed to get the two-qubit gate error rate down to 0.6%, or a fidelity of 99.4%, which is damned close to what we believe is needed to be able to hit supremacy and start doing some meaningful error correction experiments. And indeed, they published some work using this device that showed that at least one dimension
of the error correction algorithm worked. So we're moving forward here. But I can only make chips so long and thin. Mechanics get us into material science. So I want to do a couple of things. One of the things I want to do is to fold that linear array onto a vertical axis.
So our scheme is to use 2.5D technology, as it's sometimes called in the field, where we actually have two pieces of silicon that we process. And what we do is we put the qubits on one, we put the resonators and the control logic on the other, and we mate them together. And those are close enough to do the job.
And so there's some bump bonds between them. And again, this is already operating at superconducting temperatures and in a near vacuum. So a lot of the problems that you would have doing this normally, I mean, people do have, this is not an unusual procedure in the industry, it works really quite well. And now that solved my 1D problem,
but now I need to go to a 2D array. And so as you might imagine, what I do is I tile the qubits on one substrate, on one silicon substrate, I tile my readout and control logic onto another substrate, I align the geometries, I can sandwich them, and that allows me to build chips up to the scale of what I can reasonably do on a die,
which is reasonably large these days. So here are some pictures, and thank God I have the deck now, of the first of these 2.5D quantum computing chips. It's called Foxtail. The guys assure me that putting the Google logo across the middle had absolutely no impact
on the functionality of the device. Made me nervous. I was in the semiconductor industry for a long time, ages ago, and yes, of course we put our logos on, but they're off in the corner somewhere? Anyway, but you can see this thing labeled, and yes, it had its quirks coming up, but nothing having to do with the logo.
But moving onward to what we ultimately want to get to, or not ultimately want to get to, but the next phase, to get to supremacy, to be able to demonstrate error correction, we organized things a little bit differently. So you saw this linear array here, what we did. What we're actually doing is pivoting. We're doing a rotate. We're putting them on a diagonal. Now that might seem counterintuitive
and geometrically inefficient, but what that allows us to do is to basically say that I have groupings of qubits that are either data qubits or measurement qubits. So I can use a common readout line down a set of qubits and know that I'm not mixing my data and my measurement qubits.
And so this gives us an architecture where we tile these diagonal strips. The first chip that we made with this was codenamed Bristlecone. It's a 72-qubit device, 12 unit cells of six qubits each. Here's how it looks in the package, and again, you can sort of see it's kind of high in the middle
because this is one of these flip chips that are posed on the substrate, two and a half D kinds of things. I sometimes wonder who's ever going to see the logo because, as I say, it's in a hard vacuum at the bottom of a cryostat, but hey, pride of place. And so here's some photos of what it looks like,
and yes, it looks rather more like an automotive garage than a computer center, but this is one corner of the lab. This is the famous yellow fridge. We have fridges in the various primary Google colors, as you might imagine. But this is yellow, and it is up there. When you see these photos, you always see these things suspended.
I don't know if anybody ever tells you why, but it's about minimizing vibration. You really got to eliminate as much of any kind of energy that's getting in there, and mechanical coupling, acoustical energy is just as nasty as electromagnetic energy when it comes to perturbing these things. So it's suspended in isolation, and huge sets of wires coming out.
You can see those racks of equipment. They were still in the process of wiring the whole thing up when that picture was taken. The actual racks of equipment, that rack of equipment is replicated a couple more times down the road to get to all 72 qubits. So near-term devices at the scale that we're talking about getting to, these NISQ devices, people have talked about, for example,
oh yeah, we're going to obsolete RSA. We're going to bankrupt all the Bitcoin people, and sorry, any Bitcoin people in the audience. But Bitcoin is toast once things start working at scale. The good news for Bitcoin is that you need a pretty darn big quantum machine to break RSA for any sensible key length. So you've got a few years yet.
But there are things that we can do with order of 100 qubits. And so the most likely things, as has been said, quantum simulation of quantum processes. This is pretty basic. Again, we've demonstrated for hydrogen, the most trivial case. IBM has shown work both hydrogen, they've done lithium hydride. I don't believe we published any lithium hydride results ourselves yet.
Every qubit you add adds additional complexity to the whole, every quantum element you add adds additional complexity to the thing. But this is a promising area. And then another is numerical optimization. Now, numerical optimization covers a broad field. There is just the whole optimization notion along the lines of the quantum annealing that the D-Wave does.
And there's also machine learning. Machine learning can be thought of the same thing. This tends to be a fairly common looking set of algorithms. And these are things where we hope to be able to use relatively small numbers of qubits and relatively unstable qubits to do that. The model that we have at Google is more one of, we would use classical processing for the first several layers of a neural network. But as things fan down, it'll start reaching the scale
where we can actually process them in quantum and use the quantum technology. But this is a conference about open source software. So I want to talk about some open source software. And specifically, I want to talk about Cirq, which I don't even know that the guys who wrote it knew this pun was possible. But I like it.
So what is Cirq? Hit the button on my own machine. So it's a Python package, unsurprisingly. Seymour Cray is quoted to have said, back in the 1970s or 80s, someone asked him,
what will people be doing, writing high performance computing programs in the 21st century? And his reply, which is often quoted in the community, is, I don't know what it's going to look like, but they'll call it FORTRAN. With all due respect to Seymour Cray,
and I have enormous respect for Seymour Cray, he was wrong. I don't know what they're going to call it, but it's going to look like Python. So at any rate, but the model is, so you have a Python framework which was conceived to allow us to have a quantum engine, if you will, as a cloud service that one would connect to
and then provide one's program. And that program might be run on simulations, on classical simulations. We happen to have rather a lot of computers at Google. And we have some parallel simulation algorithms. And we can simulate pretty large systems on those machines. But as the quantum hardware comes online
and the quantum hardware gets larger and larger, then we start having more capability. And certainly, you'll run out of steam in that sort of 30 to 40 qubit range. Because remember, every time I'm adding a qubit, I'm doubling the possible state space. So just going from simulating 45 qubits to 46 qubits, all of the things being equal, I need twice as much memory and roughly twice as much processing power.
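A quick back-of-the-envelope sketch of that doubling (my own arithmetic, assuming a full state vector of complex128 amplitudes at 16 bytes each):

```python
# Memory needed to hold a full n-qubit state vector with complex128 amplitudes.
for n in (30, 40, 45, 46):
    bytes_needed = (2 ** n) * 16
    print(f"{n} qubits: {bytes_needed / 2**30:,.0f} GiB")
# 30 qubits: 16 GiB
# 40 qubits: 16,384 GiB
# 45 qubits: 524,288 GiB
# 46 qubits: 1,048,576 GiB (twice the 45-qubit figure)
```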
So hence the appeal of this. So there are reasons why I described a lot of how the machine works. Because the design of Cirq is built around the requirements that come out of these sorts of machines. For example, some of you actually could probably eyeball this
and know what I'm talking about. Here we have a set of controlled Z gates across a set of qubits. I have nine qubits here, A through I. And you look at this, is this a good quantum circuit? Well, let me put it to you this way. Would this be a good assembly language program? And those of you who have an eye for data dependency would probably say, that's a dreadful program.
Because you have a linear chain of data dependencies going right down the line. Well, guess what? In the quantum universe, it's a similar problem. I've created a gate depth that is not good. Because as I mentioned, the stability of my qubits is time limited. I need to do things as quickly as possible and as much in parallel as possible. So how about this? Is this a decent circuit?
Because here I've broken things up. And so this would be a big improvement if I were running this, say, on an Intel or some other classical microprocessor. I'm scheduling the registers. I'm scheduling the pipeline a little bit. And in the quantum sense, yeah, I visualize that this is what's going to happen, that I can do four in parallel
and then the other four in parallel. But I didn't tell you what the topology was. And the topology matters. Qubits that are in contact with another can be entangled easily. There are swaps that you can do. You can move these things around by various operations, but it's inefficient. But fundamentally, these operations can only be done
on adjacent qubits, on one axis or another. If it were a linear array of nine qubits, like the picture I showed you, that would actually be a good program. But for a three-by-three grid, it's not. Because F and G and C and D are rather too far apart to actually do those operations.
So no, it was not actually a good circuit. Okay, third try. I know that I have a three-by-three grid. Therefore, I'm going to be intelligent, and I'm going to visualize, okay, none of these are in conflict. I can do those four and those four in that order, and so this ought to be a two-step process. But again, life is not so simple.
There are things that you can't actually do, and we have to protect certain qubits. Now, why is this? The way we do CZs on an Xmon device, at least, is we bring the frequencies close together. As you heard me mention, we get this coupling when the frequencies of oscillation of two qubits
get close to one another, and they're isolated from another if you shift the frequency away. So the way we actually perform the operation is to bring to the same frequency the ones that we're interested in working with, but we're also going to take the other surrounding qubits to frequencies that are further away.
Because we want to introduce any... I don't want to use the word ancillary, even though that would be a correct English word. It's an overloaded meaning here. But all the bits not involved, we want to keep them at frequencies that are relatively far away. And what that means is we can't just do arbitrary operations on the grid in parallel. We have to take some care of this. And so on this particular six-by-six grid,
to actually do a CZ along every edge, it would take eight steps to make sure everything was properly isolated. So these are the kinds of things that we are playing with. These are the kinds of things that we're experimenting with. And so we need to be able to get at that level of detail in the programming of these early devices.
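To put the nine-qubit CZ example in code, here is a hedged sketch (mine, not the slide code) on a 3x3 grid of `cirq.GridQubit`s. A naive chain is eight moments deep, while packing non-overlapping, grid-adjacent pairs into explicit `cirq.Moment`s cuts the depth sharply. Note that this sketch does not model the frequency-detuning constraints just described, which is why the real schedule can still need more steps.

```python
import cirq

# A 3x3 grid of qubits, labeled row-major a..i as in the example.
grid = [cirq.GridQubit(r, c) for r in range(3) for c in range(3)]
a, b, c, d, e, f, g, h, i = grid

# A naive chain of CZs: every gate shares a qubit with the previous one, so
# nothing can run in parallel and the circuit is eight moments deep (and several
# of these pairs aren't even adjacent on the grid).
chain = cirq.Circuit(cirq.CZ(grid[k], grid[k + 1]) for k in range(8))
print("chain depth:", len(chain))

# Better: only pair grid-adjacent qubits (c-d and f-g are dropped, since those
# qubits are not neighbors on a 3x3 grid) and pack the non-overlapping pairs
# into explicit moments, so the remaining entangling work takes two moments.
scheduled = cirq.Circuit([
    cirq.Moment([cirq.CZ(a, b), cirq.CZ(d, e), cirq.CZ(g, h)]),
    cirq.Moment([cirq.CZ(b, c), cirq.CZ(e, f), cirq.CZ(h, i)]),
])
print("scheduled depth:", len(scheduled))
```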
We need to be able to do algorithms, but we also need to be able to actually do experiments on the systems themselves. So we need a relatively low-level language. And so there's this sort of range here of levels of detail and complexity and abstraction that these proposed quantum languages have. Cirq is very deliberately pretty low-level.
I think the biggest contrast would be with Q-Sharp. I don't know if Microsoft or anybody working with Q-Sharp was in here today, but Q-Sharp fits into Visual Studio, very cool. But it is a high-level language, and it pretty much presumes that you have logical qubits, pretty reliable qubits to work with. So we're down on the low end of this range.
Here's just a trivial case. I'll take just a slightly more interesting case in a moment. So you can generate qubits. You can generate circuits built from those qubits. You can put operations into that model.
In the trivial cases, there's a circuit. There's the from_ops method, to which I basically provide a long set, a variable-length number of described operators, and it will populate the array with them. You can do it incrementally if it's a more complex thing. And of course, this being Python, I have to be able to print it. And when I print a circuit,
we get these sort of cute ASCII versions of the circuit model diagram. So the structure of the tool is very modular. We think this is pretty important. So there is a circuit area and a schedule area. User code will typically go in, and it will generate circuits. And those circuits can be saved and expressed in various formats.
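In code, that trivial pattern looks roughly like this (a minimal sketch; early Cirq releases spelled the constructor `cirq.Circuit.from_ops(...)`, while current ones accept the operations directly in `cirq.Circuit(...)`):

```python
import cirq

q0, q1 = cirq.LineQubit.range(2)

# Hand Cirq a flat list of operations; it packs them into moments for you.
circuit = cirq.Circuit([
    cirq.H(q0),
    cirq.CNOT(q0, q1),
    cirq.measure(q0, q1, key='m'),
])

print(circuit)
# Prints an ASCII text diagram along the lines of:
# 0: ───H───@───M('m')───
#           │   │
# 1: ───────X───M────────
```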
So protobufs is an internal format we use at Google for RPCs. It's basically Google's generic RPC payload. We can save it as various people's QASMs. We can print it out as text diagrams. And so various formats can be done there. But then, of course, operationally, those gates have to be played on the machine.
And as I showed you, directly playing the qubits onto the machine may not be what you need to do. There are rules. There are constraints. It's like the old RISC processors, where the compiler had to schedule instructions around conflicts. You had branch delay slots. And if you couldn't put something useful in the branch delay slot, you had to stick a no op in and pad things out.
You had load delays on the original Berkeley RISC that were visible, and the compiler had to deal with them. Well, it's a similar thing. But we've separated out the circuit and schedule. And that's a concept that you'll find throughout Cirq. So typically, just writing and running a program, you have the straight path through: you generate a circuit, generate a schedule, send it out to the machine or the simulator. But you can also use it as an optimizer.
You can read QASM. You can use optimization modules at the circuit level. And then you can write it back out again. And I'll show an example of that in a moment. We can also do transcoding. And I suppose you could argue that that's what that initial example did, where I just sort of generated something and said print.
But the circuit and schedule dichotomy permeates the thing. And the notion is sort of circuit is a discretized sort of a thing. You basically have operations. Operations are essentially a binding of gate instructions to a qubit. And at the level of a circuit, you're really not concerned about timings and durations. It's just things and their ordering.
And the schedule is continuous. So that's made up of scheduled operations. So operations ties the whole thing together. But they're operating in a couple different domains. Now, one example, and this is a nice, simple example that fits on a page but does something comprehensible and almost useful, is the one-bit calculator.
Now, it's not hard for me, just with transistors on a piece of circuit board, I could create a one-bit adder. That's fine. But this one-bit calculator is actually calculating all possible additions at once. And so it's actually executing in parallel. It's doing four operations concurrently. So my top two qubits are going to be the inputs coming in,
and then the bottom is an ancilla. So what I'm doing is I'm using Hadamard gates. A Hadamard gate, if I'm coming in with a zero state initially, is going to put it into a superposition that is exactly in between, equally zero and one, an important tool.
And then I'm going to run that into a Toffoli gate, which is, again, a controlled-controlled-NOT. And then I'm going to run it into a controlled-NOT gate. And what I'm going to measure is the ancilla that was involved in the Toffoli gate, and the qubit that was involved in the CNOT gate. Not the controlling element, but the data element.
And so that, in principle, should do all this simultaneously. So I don't know how well you can see this. This is in the deck if you want to download it. This is actually not too unreadable. But here's what it looks like. As you've seen in several of the other examples, we're importing our package, picking up some additional things that I'm going to need later
when I actually plot the thing out. But one of the things you'll note is that I'm generating my qubits, and I'm generating them as grid qubits with certain coordinates. Now, the earliest versions of Cirq did things a little differently. The earliest versions of Cirq were actually rather similar to one of the other talks that happened a little earlier, where when you create the qubit, you specify what kind of qubit it is.
And so the earliest versions of Cirq, the earliest versions of this program, it was cirq.XmonQubit. I'm just generating XmonQubits. And then I would take those XmonQubits and put them in an array. But the more we work with this stuff, the more we understand that topology is more important than technology. Technology matters, but I may want to have the same topology,
but have different implementation technologies underneath. So this is very interesting in terms of quantum languages, because we're still figuring out what the appropriate points are to bind things. And so, again, earliest efforts, early binding of qubit type, and then now we're taking it up a level and abstracting it a little bit more. So I generate my three qubits of this type. I generate a circuit from the operations.
So I have, essentially, you'll note that I imported the CNOT and TOFFOLI. Those aren't there automagically, but they're part of the standard library, so I can import them. And then the Hadamard is so fundamental, it's just there as a built-in. And so I attach the Hadamards on Q1 and Q2. I do a TOFFOLI of 1, 2, and 3, a CNOT of 1 and 2,
and then I perform a measurement on these things. And then I print the circuit. Now, this actually is screen capture off my workstation, so I know darn well that this is not cheating. So then, having had the program instantiated, now I'm going to simulate it. So I instantiate a simulator, and this is where the Xmon binding came in.
I said, okay, now I actually want to run this simulating Xmons. And then I'm going to run that simulator on the circuit for some number of times. These are statistical critters, as has been observed before. I can have a lot of states superposed, but for every qubit, I can only read one of them at a time, and the rest of it just sort of collapses. So what I need to do is I need to do enough statistical samples
to be highly confident that I've seen what the distribution is and see what's going on. And so it worked, which is to say, if you think about adding two bits together, then two ones is going to give me this, two zeroes is going to give me that, and zero-one or one-zero, in other words, two possible combinations, will give me this. So this trivially works, but it's a very easy package to use.
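For reference, here is a self-contained reconstruction of that one-bit adder (a sketch assembled from the description above rather than a verbatim copy of the slide; it uses the generic `cirq.Simulator` in place of the earlier Xmon-specific simulator):

```python
import cirq

# Two input qubits and one ancilla for the carry, laid out as grid qubits.
a, b, carry = [cirq.GridQubit(0, col) for col in range(3)]

circuit = cirq.Circuit([
    cirq.H(a), cirq.H(b),            # superpose both inputs: all four additions at once
    cirq.TOFFOLI(a, b, carry),       # carry = a AND b, written into the ancilla
    cirq.CNOT(a, b),                 # sum = a XOR b, written onto the second input
    cirq.measure(b, carry, key='sum_carry'),
])
print(circuit)

result = cirq.Simulator().run(circuit, repetitions=200)
# Expect roughly 25% (sum=0, carry=0), 50% (sum=1, carry=0), 25% (sum=0, carry=1).
print(result.histogram(key='sum_carry'))
```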
Here is an example of optimization. I did not personally run this. This was somebody's prototype optimization method, but it worked pretty well, which is to say, we started out with this kind of awkward-looking gate diagram, but the important thing to understand is all of those vertical bars represent two-qubit operations.
Those are all entanglement operations. And so by optimizing, we've taken that from six down to three. And because those are the operations that contribute most to my loss of coherence, that's a really, really big deal. So thanks for your attention. I will leave this up here as long as I can get away with.
You've already heard about OpenFermion. OpenFermion is a project that's been collaborated on by a bunch of people. ETH in Europe, they were a big contributor to this, and the University of Oxford, those being the two main European institutions involved, as well as a number of U.S. universities and national laboratories. It's out there. Play with it.
Then Cirq is also on the GitHub quantum repository. It contains its own little simulator. It's very self-contained. There's a startup page that's pretty trivial. You install the package and you can just follow the sort of steps that I did there. So thanks. I'm probably over time with all the technical hiccups, but hopefully the organizers will tolerate my taking a question or two.