
Long range failure-tolerant entanglement distribution


Automated media analysis

Speech transcript
Now, long-range entanglement distribution. Good, that's working; hold on a bit.
OK, so thank you very much to the organizers for running the conference. This talk is a little bit different from what was advertised; I will get to the advertised material in due course. I guess the advantage of coming at the end of the conference is that I get to see what everyone else is talking about, and I also get to talk to other people and see what they'd be interested in hearing about, so I've made some modifications and I'll tell the story in a little more detail. OK, so the basic question that I'm interested in is: how are we going to do quantum computing, and preferably fault-tolerant quantum computing, if we have entangling operations that are very prone to fail? What I mean by fail is not "generates an error"; I mean just outright failed, and you know that it failed, so it is a heralded failure. Entangling operations are very prone to fail, and when I say very prone, we'll talk about the actual probabilities, but think 70% or so, more than a 50% chance: when you try to entangle two qubits, your apparatus says "I tried, I failed, sorry, it's all gone wrong", and in that case you should also consider those two qubits to be corrupt, so if they previously had some entanglement invested in them with partners, that's all been lost and spoiled. So that's the model. On top of that, you've of course also got the usual kinds of errors, gate errors and measurement errors, which you don't know about and which presumably occur at a much lower rate than 70%. The new factor (many people have thought about this, of course, but it is perhaps not usually considered) is that we have a very high known failure rate.
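A minimal sketch of that error model (illustrative parameters, not from the talk; f is the heralded failure probability and eps the much smaller unheralded error rate):

    import random

    def attempt_entanglement(f=0.7, eps=1e-3):
        """One entangling attempt under the talk's error model (illustrative numbers).

        Returns (heralded_success, silent_error). With probability f the apparatus
        announces failure, and both qubits involved should be treated as corrupted,
        so any entanglement previously invested in them is lost. Otherwise the
        operation succeeds, but with small probability eps it silently introduces
        an unheralded error that no herald will reveal.
        """
        if random.random() < f:
            return False, False
        return True, random.random() < eps

    trials = 100_000
    heralded_failures = sum(1 for _ in range(trials) if not attempt_entanglement()[0])
    print(f"observed heralded failure rate: {heralded_failures / trials:.3f}")

The distinction matters because the heralded event is announced by the apparatus and can be reacted to, whereas the unheralded errors are the ones that the threshold discussion later in the talk refers to.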
I always find it helpful to have some kind of physical example in mind when I'm thinking about these things, and this is by no means the only scenario that would give rise to this picture, but what I'm thinking of here is the following. You've got some kind of physical system, call it an atom, sitting in a cavity; on the right you can see a sort of cavity image, and light can come out of that physical structure, perhaps due to some conditional transition in the level structure that it has. It will always have a complex level structure, but it might contain something like this lambda structure: two conveniently low-lying levels, which are our qubit states, and some optically excited state that can only be reached from state 1. So we can interrogate it, ask it what state it's in, by shining in light that would excite it to the optically excited state if it is in state 1, and then watching for the photon that comes out; we can even repeat that. That's how we might measure it. So that structure, an atom in a box, is like a module of our hardware; that's basically the unit of our quantum computer. And how are we going to entangle different modules with each other? Well, I'm sure pretty much everyone in the room is familiar with the path-erasure type tricks that we can use to do joint measurements on two such modules simultaneously in order to entangle them, and in fact the most common kind of operation you end up with from that kind of scheme is a parity projection on your qubits, and that's what I'm going to be assuming later on.
More generally, that is one example of why we might expect this behaviour, because if we lose a photon in that process the entanglement operation will go wrong, and we should know that it has happened, because we will fail to see photons in our measuring device. The kind of architecture we might end up with is something like this, very schematically of course: you have a bunch of modules (you just plug more in if you want to), each module contains one atom, the one qubit, and you have some kind of switching device that allows you to choose which two of them to entangle with one another. In fact I'm going to assume parallelism: because it's optical, light passes through light, so I can try to entangle one of these guys with another while simultaneously attempting a different pairing, and in fact I can pair everybody up and have a go at entangling them all at once. So this is the kind of thing I'm talking about. What we have there in the inset (I don't know which particular device it is; it was made by Bell Labs or somewhere like that) is a tiny little mirror that can be re-oriented, and that's the kind of thing you might use to build such a switching device; you want the light to pass through it without being measured.
Very briefly, and for this talk I'm not going to assume this, but if we did have more structure within each module, say two qubits within each module, then it would at least be fairly easy to see how we might do quantum computing even though the entanglement operation is very likely to fail. The kind of thing we might do is nominate one qubit within each location to be in charge of getting entangled with other people, and another qubit in each location to be not involved in that process. Here the green guys are the brokers: a broker gets the chance to get entangled, you entangle it eventually, perhaps after many attempts, with the other module that you're shooting for, and when it finally gets entanglement you can transfer it down onto the quiet client qubits. I might come back to what the error rates are in that structure, but at least you can see that that approach would work in principle. However, we haven't got that here: we've only got one qubit in each location. I'll probably just skip through this diagram, because I realize that I started a little bit late, and let me go on.
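A rough bookkeeping sketch of the broker-client idea (hypothetical, not the exact protocol behind the slide): the broker pair simply retries the probabilistic link until it heralds success, and only then is the entanglement moved onto the protected client qubits.

    import random

    def broker_attempts_until_success(p_success, rng=random.random):
        """Number of heralded attempts a pair of 'broker' qubits makes before one
        entangling operation succeeds; the client qubits are untouched until then."""
        attempts = 1
        while rng() >= p_success:
            attempts += 1
        # at this point a local, deterministic operation would swap the fresh
        # entanglement from the brokers down onto the client qubits
        return attempts

    samples = [broker_attempts_until_success(0.1) for _ in range(10_000)]
    print(sum(samples) / len(samples))   # about 1 / p_success = 10 attempts on average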
The scenario I'm interested in is when the probability of successfully creating entanglement is significantly less than 50%, so the failure rate is higher than a half. How high might it be in reality, with the approaches available at the moment? In experiments that have been achieved, the failure rate is enormous. This is one of the earliest papers that got me excited, a 2007 paper from Chris Monroe's group. Because they need to capture two photons in order to have the entanglement operation heralded as a success, you have to take the success rate for retaining a single photon through the apparatus and square it, and they end up with something like a few successes per billion attempts. That's obviously an appalling success rate, and it would be impossible to do anything useful at such rates, but it does at least show that we shouldn't be assuming, for the foreseeable future, lovely high success rates like 80%; we need to look into the regime of low success rates and see how things might work.
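The "square it" argument is just the following arithmetic; the numbers here are placeholders rather than the figures from the Monroe group paper:

    # If a single photon survives collection, transmission and detection with
    # probability p_photon, a scheme that must herald on two photons succeeds
    # with probability of order p_photon squared.
    p_photon = 1e-4                      # hypothetical per-photon retention probability
    p_success = p_photon ** 2
    print(f"heralded success probability per attempt: {p_success:.1e}")
    print(f"expected attempts per success: {1 / p_success:.1e}")

Anything in that neighbourhood puts you in the per-billion-attempts regime mentioned above, which is why the failure probability is treated as the dominant parameter for the rest of the talk.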
work and the sensors another very interesting approach have similar problems which I won't go into if anything worse so
alright let's imagine we got something like a 90 % failure at how we going to in approach the problem of quantum computing in a way that that the errors that the genuine unheralded errors don't build up thick
So the kind of thing we might do is this. Let's say I want to make an entangled state of many of my qubits that is a useful state, such as a two-dimensional cluster state like the one shown. I am going to make it from building blocks, which I will obtain by brute force if necessary: I'm going to build building-block structures, each of which contains a great many qubits. When I put up these diagrams I'm using graph-state notation, which I imagine most of you know: a blob is a qubit, and a line between two blobs is a phase gate between those two qubits. So the kind of thing I might do is make these building-block structures, which have a huge number of qubits in them, arrange them so that there is one such structure for each eventual qubit that I want to have in my cluster state, and then try to use them up. The way I use them up is as follows. If we look at these four structures, highlighted by a block in the centre, what we can see is that we have, as it were, many dangling bonds on the ends of our graph structures. We can attempt to fuse each one of those leaves with a partner leaf in the next structure along, we can attempt them all in parallel, and we have so many leaves that, even though we have a high failure rate, we can expect that we will probably succeed in making at least one bridge between these two structures. OK, so that's the kind of approach we might take.
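The reason many dangling bonds help is the standard at-least-one-success calculation; a small sketch with illustrative numbers:

    import math

    def bridge_probability(f, n):
        """Probability that at least one of n parallel fusion attempts succeeds,
        when each attempt independently suffers a heralded failure with probability f."""
        return 1 - f ** n

    def leaves_needed(f, target=0.99):
        """Smallest number of parallel attempts giving at least `target`
        probability of forming one bridge between neighbouring building blocks."""
        return math.ceil(math.log(1 - target) / math.log(f))

    print(bridge_probability(0.9, 44))   # ~0.99 even with a 90% per-attempt failure rate
    print(leaves_needed(0.9))            # -> 44 leaf pairs per bridge, in this illustration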
OK, so the first question would be: what kind of building-block structure should we employ? Various ones have been looked at in the literature, for example linear structures, and Raussendorf used these cross shapes. What I'm going to advocate is these binary-tree structures, which we refer to in our paper as snowflakes, just because we like to draw and picture them that way, and in a minute I'll show you why the snowflake is actually the most natural and sensible building-block object to use. How would we actually make them? If we do a parity projection on two qubits, then up to local operations we get this cross state; if we then try to connect two of those, we get this three-pointed star; if we connect two stars, we start to get a tree; and if we connect the cores of two trees, we get a larger tree. So they grow very efficiently, and that's the first nice property of these star structures, the snowflakes.
What's really important is that they give us a very large number of dangling bonds. These dangling bonds are going to be used for fusing snowflakes with one another, and the binary tree gives us the largest possible number of them for a given half-length through the structure. In a minute we'll see that if we were able to successfully connect this particular qubit to another snowflake, and this other particular qubit to a further snowflake, we would then want to discard all of the extra structure. Because it's a binary tree, a lot of that structure, and the errors in this patch, this patch and this patch, will not find their way onto the part that we're actually keeping, and the part we keep is only of logarithmic length. That is very important in getting sensible performance from our structure.
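A schematic tally of that trade-off (rough bookkeeping, not the paper's exact qubit counts): each round of fusing equal-sized trees roughly doubles the qubit and leaf counts while the retained path grows by only one step.

    def snowflake_growth(rounds):
        """Schematic growth of a binary-tree 'snowflake' built by repeatedly
        fusing pairs of equal-sized trees. Leaf counts grow exponentially
        (lots of chances to bridge to a neighbour) while the path length that
        is eventually kept grows only logarithmically (few places for errors
        to accumulate on the retained qubits)."""
        qubits, leaves, depth = 2, 2, 1
        for r in range(1, rounds + 1):
            qubits *= 2
            leaves *= 2
            depth += 1
            print(f"round {r}: ~{qubits} qubits, ~{leaves} dangling bonds, path depth ~{depth}")

    snowflake_growth(5)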
So remember, this is very generally the kind of machine we might imagine. The properties I will need from it for my analysis are that it can do operations in parallel, and let's say that I'm arbitrarily allowed to connect any qubit with any other; that's going to be more than I need, but it keeps things simple. So the kind of thing that goes on inside my machine is this. At a particular time step I will have a whole bunch of qubits which are not entangled with any others, a whole bunch of qubits in pairs, a whole bunch of qubits in groups of four, and so on. What I will do is schedule, at that time step, every one of the unentangled guys to pair up, every one of the objects of size two to pair up, and so forth, and I will then attempt all of those operations in parallel in one time step. I will get some successes, which allow me to promote entities up one rank, and I will of course get some failures, more commonly failures than successes. I will take a very aggressive strategy: even if I have managed to make a lovely, quite big thing, which in a sense represents a lot of investment, and I then try to entangle it with another one and fail, I've actually still got quite a lot of structure there, but what I'm going to do is nevertheless bin it all: break the whole thing down, reset all the qubits and dump them back into the unentangled pool. The reason I'm going to do that is to crack down as aggressively as I can on the rate at which errors build up, and I'll show you presently what difference it makes. So we have this painful process of promoting ourselves to larger and larger snowflakes.
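A toy version of that aggressive schedule, under the simplifying assumption that a failed fusion costs both halves outright (the paper's scheduler and its recycling variant differ in detail):

    import random

    def attempts_to_reach_rank(target_rank, p_success, rng=random.random):
        """Count fusion attempts needed to build one rank-`target_rank` object when
        a rank-k object is made by fusing two rank-(k-1) objects, each fusion
        succeeds with probability p_success, and a heralded failure throws both
        halves back to rank 0 (the talk's 'bin it all' strategy, simplified)."""
        attempts = 0

        def build(rank):
            nonlocal attempts
            if rank == 0:
                return
            while True:
                build(rank - 1)
                build(rank - 1)
                attempts += 1
                if rng() < p_success:
                    return
                # failure: both freshly built halves are discarded and rebuilt

        build(target_rank)
        return attempts

    print(attempts_to_reach_rank(4, p_success=0.3))

Under this accounting the expected number of attempts grows roughly like (2/p)^k with the rank k, which is one way of seeing both why massive parallelism across many snowflakes is needed and why the overhead climbs steeply as the success probability p falls.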
What I'm going to assume is that eventually we get to snowflakes that are sufficiently large that, if I try to pair two of them up and connect this guy to this guy while simultaneously trying to connect that guy to that guy, then for the particular failure rate that I have I can expect, with greater than 50% probability or whatever probability I decide I need, that I will successfully create a bridge between these two structures, like this. That's the basic idea; this other thing here is just a little trick for making the last stage more efficient.
So then we have the structures here, snowflakes drawn slightly differently, and we try to connect them up in whatever topology we're interested in. It's OK for some of these connections to fail: even though we get multiple chances, sometimes all of our chances will fail, and that's all right, because we can use results like percolation theory to show that if we end up with a reasonably well connected cluster state, we can still do quantum computing with it.
And here again is the point that when we come to clean the structure up, to get to the cluster state that we really want, the path we keep is only logarithmic in the size of the snowflake. Right, so now let me show you what kind of performance we see. The first point here is that it doesn't really make much difference whether we have recycling, recycling being the process of not chucking things away when we get our first failure; in fact we won't recycle, we will just bin things. Let me get through this and show you the next point, which is quite interesting. Basically, in order for the machine to operate efficiently and keep all its errors bounded as a logarithmic function of essentially one over the success rate, the fundamental probability that we can successfully make entanglement, we need a big machine, because we need to be making lots of snowflakes at the same time, so that whenever we are waiting for a partner to emerge for a snowflake of a given size, we always have a guy of that size ready at the very next step to pair him up with. In order to do that, our computer has to be of a particular size. So we have a rather interesting relationship between the probability of successfully making entanglement, which is presumably a very low-level experimental parameter, and the size of the machine for which this whole idea works: the lower the success rate, the bigger the machine needs to be. I can say more about that later if there's interest. OK, I have about three minutes, so let me get to the point which I think will be of most interest to you. All of this, the idea of making snowflakes and joining snowflakes up and so forth, is basically a route to showing that I can create a useful structure with only a logarithmic number of errors. But what I haven't given you yet is any kind of threshold, to see whether this is going to be a useful approach or how horrible things will be. Let's get a threshold.
Now, instead of having a target object which is a simple cluster state, let's try to get one of these topologically protected states, one of these Raussendorf-style states. In principle that seems pretty straightforward; after all, it's a similar sort of challenge of connecting each building-block object to four other objects. But the trick that allowed us to complete the analysis before was that we had a percolation result.
In the two-dimensional case we know that if we have a certain number of missing edges in our two-dimensional cluster state, we nevertheless have a resource that's good for quantum computing. We need an equivalent result here if we're going to use all the ideas I've developed so far in order to obtain an actual threshold-based result. So what can we do? We need to be able to say how many missing edges we can tolerate within a structure like that. Now, that result was obtained just last year by Sean and Tom, who are both in the audience here; or rather, they obtained a result for missing nodes, and I think you heard Sean talk about it yesterday. Their result says that if you remove certain nodes, that is qubits, from the structure, then it will still operate: you can remove up to 25 per cent of them and it will still work. What I require is a tolerance for missing edges, but that is in fact straightforward to translate, because what we need to do, if I can go backwards for a moment, is identify each missing edge (remember, these failures are heralded, so I know which edges are missing) and then basically delete both of the qubits at either end of that missing edge. That turns the question into one about missing qubits, so this is the equivalent of the percolation result for my purposes, and it allows me to finish the argument. Let me show you how I do that.
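A back-of-envelope way to see how the translation interacts with the 25 per cent figure (a rough estimate that ignores correlations; it is not a calculation from the talk): if each bond of the target lattice is backed by n independent fusion attempts that each fail with probability f, the bond is missing with probability f^n, and deleting both endpoints of every missing bond gives an effective node-loss rate that must stay below the tolerance.

    def effective_node_loss(f, n, degree=4):
        """Rough translation of heralded bond failure into node loss.
        A bond is missing only if all n fusion attempts assigned to it fail
        (probability f**n); a node is treated as deleted if any of its `degree`
        incident bonds is missing, since both endpoints of a missing bond are
        discarded. Independence is assumed purely for illustration."""
        p_bond_missing = f ** n
        return 1 - (1 - p_bond_missing) ** degree

    # With a 90% per-attempt failure rate, how many attempts per bond are needed
    # to keep the effective node loss under the quoted 25% tolerance?
    for n in (10, 20, 30):
        print(n, round(effective_node_loss(0.9, n), 3))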
I take my snowflake concept again: I build the snowflakes, identify each one with a node of the target structure, and then try in parallel to connect them up as shown here. I know that some of those connections will fail, but as long as the rate of failure is below, essentially, the translated 25 per cent result, the eventual entity will still offer me fault tolerance. So I can simply inherit the result that those previous authors worked out, do the numerics, and generate a threshold for you. This is essentially the key figure of the talk, so let me tell you what it means. On this axis we have the rate at which I fail when I attempt entanglement: up here I always fail, and down here is the fairly trivial case with only 25 or 20 per cent failures. On the other axis I'm setting all the other kinds of error in the device equal to one another, the single-qubit gate errors and the measurement errors, setting those equal to each other and calling them p_G, and this is what we then obtain. This curve is using the snowflake, which as you can see is very superior to using the star or the cross, because of the logarithmically short path length that it has. What we see is that we can operate out to around 90, perhaps even 95, per cent failure rates, and in that regime the kind of actual error rates we can tolerate are two or three times ten to the minus four; not a great number, but also not an insane number, considering that we are tolerating an enormously high failure rate. I should also mention that it doesn't make much difference if you put in reasonable memory errors, with qubits becoming corrupted all the time; that doesn't spoil the situation for you either. And loss is of course very important. The real problem that stops you getting to the very high nineties is that the resource overhead explodes, so if you're looking in the 80 per cent regime you can see that your overhead is something like a hundred qubits, which is a lot, but perhaps in solid-state-type scenarios, where your qubits are not too expensive, it isn't completely crazy. As far as we're aware, this was the first result that related failure rates to actual error rates and obtained a threshold for this style of scheme. At the same time, I should mention that these authors, who are also present at the conference, obtained almost exactly the same threshold by almost exactly the same techniques; we were not aware that they were working on this, so it's one of those strange situations where two papers emerged almost simultaneously. OK, I'll stop there; I could also tell you about communication thresholds, but I'll just leave it on that slide.
You can of course obtain higher numbers, around 15 per cent, if you use the same tricks for communication. OK, thank you. [Chair] Time for maybe one or two questions. [Question] Trying to put together some of your numbers: you've got overheads, at about 90 per cent failure, of ten to the three or ten to the four, and you've presented this as a general scheme for quantum computing rather than for the communication application that was announced; is this the one you would like to see? [Benjamin] Yes. [Question] And earlier on there was one slide where you were saying these types of schemes would work at 90 per cent failure with something like a computer size of ten to the eleven qubits; I'm wondering what the number of qubits in the eventual lattice is that you've got. [Benjamin] Yeah, OK, so the trick is that when you're building these snowflakes... let me go all the way back, if I can, to show you how we're building the snowflakes. There, that's the one I want.
So what we see here is how we're going to build the structures. It's very important, if we're going to crack down on errors, that we shouldn't let anything become non-logarithmic, because the basic result, the nice result, is that all the errors are bounded by a logarithmic function of our failure probability. In order to stop memory errors from becoming a problem, the situation shown in this schematic must basically not be allowed to happen: what I've got here, as you can see, is one of these guys who has no partner with which to try to become entangled, so he would have to wait until another partner emerges in order to try to create the next generation. If I allow that to happen, then I will spoil my logarithmic scaling. Therefore my overall computer size must be such that I have a large number of snowflakes of every size, including the final size, emerging all the time, and that's where you saw the large numbers: basically, the larger my computer, the more snowflakes I'm going to need, because I'm trying to create this large two-dimensional cluster state, or other geometry, in order to do my computation; and also, provided that I'm attempting a large enough scale of computation, I know that I will actually have plenty of snowflakes of any given size, as a function of the failure probability. We can talk about that more at the end if you like, but it is a rather strange relationship between how bad your hardware is and basically how large a computation you attempt to undertake. [Chair] Let's thank Simon again; we're roughly ten minutes behind. [...] ...that depends on the type of

Metadata

Formal Metadata

Title Long range failure-tolerant entanglement distribution
Series Title Second International Conference on Quantum Error Correction (QEC11)
Author Benjamin, Simon
License CC Attribution - NonCommercial - NoDerivatives 3.0 Germany:
You may use, reproduce, distribute and make the work or its content publicly available in unaltered form for any legal, non-commercial purpose, provided that you name the author/rights holder in the manner specified by them.
DOI 10.5446/35311
Publisher University of Southern California (USC)
Release Year 2011
Language English

Content Metadata

Subject Area Computer Science, Mathematics, Physics
Abstract We introduce a protocol to distribute entanglement between remote parties. Our protocol is based on a chain of repeater stations, and exploits topological encoding to tolerate very high levels of defect and error. The repeater stations may employ probabilistic entanglement operations which usually fail; ours is the first protocol to explicitly allow for technologies of this kind. Given an error rate between stations in excess of 10%, arbitrarily long range, high fidelity entanglement distribution is possible even if the heralded failure rate within the stations is as high as 99%, provided that unheralded errors are low (order 0.01%).
