
Lecture Mathematics: The Value of Errors in Proofs


Formal Metadata

Title
Lecture Mathematics: The Value of Errors in Proofs
Series Title
Number of Parts
10
Author
License
No Open Access License:
German copyright law applies. This film may be used free of charge for personal purposes, but it may not be made available on the internet or passed on to third parties.
Identifiers
Publisher
Publication Year
Language

Content Metadata

Subject Area
Genre
Abstract
The Value of Errors in Proofs (a fascinating journey from Turing's seminal 1936 R ≠ RE to the 2020 breakthrough of MIP* = RE)

Last year, a group of theoretical computer scientists posted a paper on the arXiv with the strange-looking title "MIP* = RE", impacting and surprising not only complexity theory but also some areas of math and physics. Specifically, it resolved, in the negative, the "Connes' embedding conjecture" in the area of von Neumann algebras, and the "Tsirelson problem" in quantum information theory. You can find the paper here: https://arxiv.org/abs/2001.04383

As it happens, both acronyms MIP* and RE represent proof systems, of a very different nature. To explain them, we'll take a meandering journey through the classical and modern definitions of proof. I hope to explain how the methodology of computational complexity theory, especially modeling and classification (of both problems and proofs) by algorithmic efficiency, naturally leads to the generation of new such notions and results (and more acronyms, like NP). A special focus will be on notions of proof which allow interaction, randomness, and errors, and their surprising power and magical properties. The talk does not assume any special background.

Avi Wigderson – Nevanlinna Prize 1994, Abel Prize 2021

The 8th Heidelberg Laureate Forum took place from September 20–23, 2021. #HLF21

The opinions expressed in the videos do not necessarily reflect the views of the Heidelberg Laureate Forum Foundation or any other person or associated institution involved in the making and distribution of the videos.
Transcript: English (automatically generated)
Welcome to the Laureate Lectures. This year we have one lecture per discipline, so one in mathematics, one in computer science. The lectures will be 45 minutes live with 15 minutes Q&A following. I would like to remind you that already during the lecture you can submit questions in the conference tool just next to the video. You have Slido and you can write
your question and also upvote questions other people ask. So it's now my pleasure to welcome Avi Wigderson, the recipient of the Abel Prize 2021, jointly with László Lovász, for their foundational contributions to theoretical computer science and discrete mathematics
and their leading role in shaping them into central fields of modern mathematics. Avi has actually been at the Heidelberg Laureate Forum before because he is also the recipient of the Nevanlinna Prize 1994 for his work on computational complexity.
Today Avi will tell us about the value of errors in proofs and take us on a fascinating journey from Alan Turing to recent scientific breakthroughs. It's wonderful to have you with us Avi. Please the floor is yours. I'm not seeing anyone but I hope you're all there and thanks
for coming. I want to tell you about the value of errors in proofs. I will explain everything in blue, in particular the quantum nature of this recent breakthrough, but let's start at the beginning. So the plan is simple. I want to tell you about the value of introducing
errors into proofs in several fashions. It impacts science and technology a great deal; I will not talk too much about this, I'll just mention it. It has a very conceptual impact: I'll talk about some paradoxical properties that proof systems can have and some impact on math, and get to this
breakthrough at the very end of the talk. But before telling you about errors in proofs, I have to talk about errors in computations and before that I'll discuss proofs and computations. More than everything, I really want to talk about the value of
the computational complexity methodology of modeling and organizing problems, classes of problems, by algorithmic efficiency. You'll see it throughout the talk. It's a story of the evolution of proof systems according to these principles. Any of you who want to
read more about this or read more about computational complexity are welcome to look at my book, which is free online. So let me start with proofs and computations. Just to give you two points in a very long history: the first is from, you know, a
couple of millennia ago. The Elements of Euclid tell you how to derive many theorems in plane geometry by deduction from five simple axioms. So these are proofs of theorems. But if you look at
them, all of them are actually constructions of planar point sets using straightedge and compass, so they are actually computations with very simple operations. So here in plane geometry, proofs and computations coincide. If we jump to the turn of the 20th century, we get to Hilbert's dream
that I'm sure many know about. He believed that truth in mathematics is the same as provability, and he also believed that everything provable is computable. Well, this dream was shattered, first by Gödel's incompleteness theorem, showing that the first equation is wrong,
and then by Turing, showing that the second is wrong. And I want to talk about this second one, about computation and proofs. The way Turing did it, his first step, was actually to define algorithms using these Turing machines. As we all know, that led to the computer revolution.
People started building computers immediately afterwards. I should also point out that this paper was written when he was a graduate student, so it should inspire all of you graduate students and postdocs. But I don't want to talk about this, I want to talk about theory.
And so what do algorithms compute? They compute in this talk decision problems. Every set S, a set of numbers, a set of graphs, a set of sequences defines a decision problem. I give you a number, a sequence, a graph, and ask you is it in this set? That's what we want to
understand. And then you can define two classes, which Turing did: R, all sets computable by finite algorithms, that's the computation side, and RE, the sets provable to finite algorithms, or in other words verifiable by finite algorithms. And his major theorem
was that these two are different. That was shattering the second Hilbert dream. We in computational complexity care about efficient algorithms. So many of you have heard of P and NP, and they have similar definitions. P is a, you know, collection of sets
computable by efficient algorithms, and NP similarly says provable to efficient algorithms. Efficient, for this talk and, you know, in the most common use, means polynomial time in the size of the data x. There are other definitions that are studied but we'll stick with
that. Of course here we don't know the answer whether these two sets are the same or different, that's a major question of computer science and in fact of mathematics. I want to start by giving examples of, you know, just to start understanding both
intuitively and formally the notion of proofs, arguments, proof systems, provers, verifiers, and so on. So basically, what we will explore is: what is the nature of a convincing argument? So this is some claim that some element is in some set. And there are two
parties here, a verifier that's really eager to know the answer but is limited, perhaps cannot compute it on its own, and a prover who is trying to help by supplying an argument but the verifier, you know, is not going to trust the prover. The prover is infinitely clever, can do anything but, you know, you want to verify that he's not just fooling you.
Here's an example, and the examples that follow will all be examples of proof systems. Here the claims are of the nature: a certain number is a composite number. An argument for it will be two smaller numbers, and the verification is just, you know,
multiplying these two numbers and checking that they give you the input, and rejecting or accepting accordingly. Of course that's a very efficient algorithm, just multiplying two numbers in ordinary arithmetic, and of course this system is general, it works for any number, not just this particular one here.
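As an illustration, a minimal sketch of such a verifier in Python (the function name is made up, not from the talk):

```python
def verify_composite(n: int, a: int, b: int) -> bool:
    """Verifier for the claim 'n is a composite number'.

    The argument is a pair of smaller numbers (a, b); the check is a single
    multiplication, so verification is very efficient even though finding
    such a factorization may be hard.
    """
    return 1 < a < n and 1 < b < n and a * b == n

# 91 = 7 * 13, so this argument convinces the verifier.
assert verify_composite(91, 7, 13)
# A bogus argument for the prime 97 is rejected (soundness: no such pair exists).
assert not verify_composite(97, 7, 13)
```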
And this proof system produces a set of theorems, everything acceptable by this verification algorithm, which in this case is the set of composite numbers. I want to stress that in proof systems, generating the argument, the complexity of that is, you know, not interesting at all, only the complexity of verification. We know from,
you know, the fact that the world assumes factoring is difficult that maybe getting the argument is hard. We don't care about this, we care about verification being efficient. Here's another example, Sudoku puzzles. A claim is that this puzzle is solvable,
an argument will be just a filling of the empty squares, and verification will be just checking that it, you know, complies with the Sudoku rules, again a very simple, efficient algorithm, and the set of theorems produced by this proof system is the solvable Sudoku puzzles.
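A minimal sketch of this check in Python, assuming the standard 9x9 rules (comparing the filled grid with the original puzzle's given clues is the same kind of trivial check and is omitted):

```python
def verify_sudoku_solution(grid) -> bool:
    """Check that a completely filled 9x9 grid obeys the Sudoku rules:
    every row, every column and every 3x3 box contains the digits 1..9."""
    digits = set(range(1, 10))
    rows_ok = all(set(row) == digits for row in grid)
    cols_ok = all({grid[r][c] for r in range(9)} == digits for c in range(9))
    boxes_ok = all(
        {grid[3 * br + i][3 * bc + j] for i in range(3) for j in range(3)} == digits
        for br in range(3) for bc in range(3)
    )
    return rows_ok and cols_ok and boxes_ok
```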
This is general and not just general for such small examples, of course you can generalize the Sudoku rules to any size board. So also here we have infinitely many theorems and infinitely many non-theorems. Back to familiar deductive proof systems like
plane geometry and Euclid, we have Peano arithmetic. In this deductive system we have a bunch of axioms, we have deduction rules like modus ponens, and an argument for a claim, which is a formula or expression over integers, is just a sequence of other formulas,
where the last one is supposed to be the claim, and the verification again is very simple and efficient. You just check that each formula in this sequence is either an axiom or follows from previous ones by the deduction rule. And in this case we have, you know, maybe more interesting theorems than Sudoku puzzles, like that there are infinitely many primes or
Fermat's last theorem and so on, but it's of the same nature. So let's generalize now and see what is essential about proof systems. They have two properties that are critical: completeness, that true claims have proofs, and soundness, that false claims don't have proofs,
you can't be fooled. And the arguments are easy to check, it's easy to distinguish convincing and faulty arguments by this verifier. Again, efficient will mean polynomial time in the length of the claim. And capturing this formally is what computer scientists, complexity theorists,
like to do, and here it is, it was done in the 70s by Cook and Reckhow: what is a proof system? Of course, everybody knows that in math there are zillions of proof systems; we want to have just one definition. So as we said, the critical object there is a verifier, which is
supposed to be an efficient algorithm. It takes two inputs, a claim and an argument, and should satisfy these two conditions. Completeness, if the claim is true, then some argument makes it accept.
This is the happy case where the claim becomes a theorem, the argument becomes a proof. And soundness: if the claim is false, then for every argument, it doesn't matter which, the verifier will reject. For any such proof system, like the ones we saw in the examples, we collect a set of theorems, the set of whatever the verifier accepts; we call it T sub V of this
verifier. And it's a very simple theorem to prove that what you get when you look at all possible deterministic verifiers, this collection of sets is actually the class NP.
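In symbols, roughly (a sketch in standard notation, where poly denotes some fixed polynomial bound on the length of the argument y):

\[
T_V = \{\, x \;:\; \exists\, y,\ |y| \le \mathrm{poly}(|x|),\ V(x,y) \text{ accepts} \,\},
\qquad
\mathrm{NP} = \{\, T_V \;:\; V \text{ an efficient deterministic verifier} \,\}.
\]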
So if you haven't seen a formal definition of NP, here is one. And the sets there include the solvable Sudoku puzzles and the composite numbers and the sets provable in Peano arithmetic and so on. The class P sitting inside it
is the class of all problems for which you don't need the argument: you can test the claim on your own in deterministic polynomial time. Okay, that's classical proof systems. Now we are going to introduce errors into proof systems.
So before that, I want to show you, and you probably know how they are introduced into computation. So one slide background, probabilistic computation and errors. The basic assumption is that nature is benevolent and gives us access to randomness, to unpredictability. And if we have it, let's let algorithms use,
make random choices. So what's a probabilistic algorithm? Well, a deterministic algorithm is one that never errs, right? It computes a function if, for every input, what it outputs
is the function value. A probabilistic algorithm using random choices outputs a random variable. So what we demand instead is that for every input, it will output the correct value with high probability. High, let's say, probability two-thirds. So we introduce errors in
algorithms here. We allow the error to be one-third, and you may complain that this error is too high, but the value of this definition is that errors can be reduced arbitrarily just by repetition. If you have an algorithm with one-third error, you can run it k times, k can be anything
you want, and take a majority vote of the answers. This reduces the error to being exponentially smaller in k. So that's a robust definition. What's the value of errors in algorithms? Well, we know, it seems to solve many, many more problems than deterministic algorithms
as far as we know. That's how it seems at least to us. And of course the rationale for allowing it is, well, we assume to have access to this randomness and, you know, we tolerate uncertainty everywhere in life, so why not in algorithms? That seems to have been a very important step. Well, ancient of course, but formalizing it in the last 50 years
has been an extremely important development in using computers, allowing algorithms in them. What we are going to do now is import this definition from
algorithms, from computation to proof systems. So we are going to introduce errors into proofs. First of all, this was done in the seminal papers in 85. Given the way we have defined proof systems, it's very easy to introduce randomness to them
because they are defined by an algorithm, the verifier. So we simply allow the algorithm, the verification algorithm, to be probabilistic, and we allow these conditions, completeness and soundness to be satisfied with high probability. There is a small chance of error, and again you can reduce it as much as you want by repetition.
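To make the repetition argument concrete, a standard estimate (a sketch, not from the slides): if each of k independent runs errs with probability at most 1/3, the majority answer is wrong only if at least half of the runs err, so by a Chernoff-type bound

\[
\Pr[\text{majority answer is wrong}] \;\le\; \Pr\!\Big[\#\{\text{erring runs}\} \ge \tfrac{k}{2}\Big] \;\le\; e^{-ck}
\]

for some constant c > 0; the error indeed drops exponentially in the number of repetitions, both for algorithms and for the probabilistic verifiers defined here.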
So that's a probabilistic proof system. Every verifier, every probabilistic algorithm defines one. We have introduced errors in proofs as I said in the same way, but somehow psychologically at least it seems like a very different act. Proofs are holy in mathematics,
errors are terrible, in proofs we try to eradicate them. So that's a very bold action to make this definition, and we'll see the value of it throughout the talk.
In the same way we can define the sets computable or the sets acceptable by verifiers, so a set of theorems of this system for every v. And now we call IP for interactive probabilistic proofs, the collection of sets provable in such systems. I mentioned interactive,
I'm going to explain in the next session because the interaction between prover and verifier is slightly more general here. So what is it? What is an interactive proof? So remember in NP, in the classical deterministic case, we had an argument sent by the prover to the verifier,
it was a one-way interaction. In IP, which is probabilistic, we allow actual communication. So it's not just the prover sends an argument to the verifier, the verifier can ask as many
questions as he or she wants from the prover and only then decide whether to accept or reject. So there is an interaction here. You may wonder, first of all, why not allow it in the deterministic case? Well then it doesn't matter, it doesn't help or doesn't change the value.
You may wonder whether this is a reasonable definition, to allow interaction between prover and verifier, but given that we all experience it in the classroom, in seminars, I mean, interaction between mathematicians is in the very nature of our work, it definitely seems a reasonable class of proof systems or arguments.
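Schematically, and purely as an illustration of the communication pattern (the helpers prover and final_check below are placeholders, not anything from the talk), an interactive verifier looks like this:

```python
import random

def interactive_verify(claim, prover, final_check, rounds=10):
    """Skeleton of an interactive proof.

    The verifier tosses coins to pick its questions, the all-powerful but
    untrusted prover answers, and only the final decision procedure needs
    to be efficient.  Completeness and soundness are then required to hold
    with high probability over the verifier's coin tosses.
    """
    transcript = []
    for _ in range(rounds):
        question = random.getrandbits(32)             # a random challenge
        answer = prover(claim, question, transcript)  # prover may be adversarial
        transcript.append((question, answer))
    return final_check(claim, transcript)             # efficient accept/reject
```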
And what I want to stress is that this definition is a revolutionary scientific notion. Since its introduction, it's had a huge impact. I will not go into too many details on the
scientific and technological impact, I'll tell you more about the first two. And in particular, how it leads to proofs that have paradoxical properties that cannot be attained by classical deterministic proofs. So errors give us something new. In particular, I'll tell you
briefly about zero-knowledge proofs, which are convincing but do not convey information, and probabilistically checkable proofs, where you hardly need to read the argument in order to know whether it's correct or not. So that seems ridiculous, and we'll see it.
But I mostly want to talk about this journey that led from zero-knowledge proofs to PCP, which seems unrelated, and also how we got from interactive proofs to multi-prover interactive proofs to the final quantum interactive proofs that I want to end with. So let's talk about zero-knowledge proof systems. Zero-knowledge proof systems are
the same as proof systems, only you add another requirement, the requirement that if the verifier accepts, namely believes the claim, it learns absolutely nothing else but the fact that it is
true, despite the long interaction and the many questions it asks. And this seems preposterous if you've never seen it. I want to stress that it's not trivial to define, but the intuitive meaning is enough for us. And, you know, you should ask yourself if you haven't seen it before, whether such a thing is possible. Is it possible to be convinced by some
argument that gives you no information? You can think about particular claims and if anybody could convince you that they can factor a large integer or prove the Riemann hypothesis without giving you any information. But, you know, life as it is, there are surprises. And a year later
we proved with Goldreich and Micali that under the most standard assumption of cryptography, that one-way functions exist, namely, for example, that factoring is difficult,
NP, all of NP has zero-knowledge proofs, which in plain words says that every proof you have, absolutely every proof of any mathematical statement you have, can be turned efficiently into a zero-knowledge proof. Everything can be proved in zero-knowledge. This had major impacts,
both theoretical and practical, that I'm not going to, as I said, I want to talk about how it led to new proof systems, so let me do that now. Back to the theorem, I stress that we are using the assumption of cryptography and when we did that we wondered whether this assumption is necessary. So whether you need cryptography in order to have zero-knowledge proofs
and the answer like many things in life is yes or no. I have a theorem with Ostrovsky that in this model of IP it is necessary, you must have cryptography, you must have such an assumption. But, you know, as we often do in math, you can change the model and in another
model the answer is no. So let me move to this new model. That was the very reason for introducing an extension of IP, multi-prover IP, with many provers, and I'll focus on two provers.
This was done in this paper with Ben-Or, Goldwasser and Kilian. It's the same game, the verifier wants to know something, but now he's interacting with two provers who are physically separated, and that's critical. The verifier can ask them both questions and they don't see the questions
to the other prover. They may have, you know, conferred in advance to plan a strategy, but once the conversation with the verifier starts they are in different, isolated places. And still the same thing: completeness and soundness have
to happen with high probability. So it's another type of proof system and in this proof system we showed that what we wanted, zero knowledge, does not require any assumptions. So somehow the physical separation replaces the cryptographic assumptions used for the
single prover case. This was a motivation for introducing MIP and this was, you know, we were quite satisfied and that's roughly where I left this field just before it started to explode. So let me tell you about this explosion. You may ask beforehand whether it's a reasonable
model, like we asked with IP. There are several answers. Certainly, police use this when interrogating, you know, several suspects who are suspected of committing a crime together,
separating them and interrogating them. Another way to see that it's reasonable is the fact that the institutions we were at actually decided to invest money in patenting this. But anyway, mathematically it's extremely interesting and very rich, as we'll see. So let me proceed. What we knew at this point was that we have a collection, sort of a hierarchy,
of proof systems, from IP to 2-IP. And at the time there were only trivial inclusions for the powers, the computational power of what they can prove, the type of claims they can prove. Things in IP were
always computable in polynomial space. Things in 2-IP were always computable in non-deterministic exponential time. These are very high classes, and nobody believed that they are even close. But, you know, two years later sort of an avalanche of characterizations arrived.
I'm not going to tell you the meaning of them; they have conceptual meaning. But I want to stress first of all that they characterize these classes. IP actually equals polynomial space, PSPACE. Multi-prover IP equals non-deterministic exponential time, NEXP. And the final
one, the PCP theorem, I will tell you about shortly. I want to stress also that in these, you know, theorems there are no assumptions, no cryptographic assumptions. So what is this PCP theorem? Again, the acronym stands for
probabilistically checkable proof. These are non-interactive proof systems, just like NP. So we have again the prover and verifier. They have a claim at hand, and the prover sends an argument to the verifier. And the added spice, the added condition, is that the verifier
is allowed to read only, let's say, 20 bits of the argument, no matter how long it is. Just to demonstrate it, you know, it seems ridiculous that anybody could discover a bug in a hundred-page proof this way. And you can imagine that, you know, somebody
proves the Riemann hypothesis, probably writing a mountain of books with a thousand pages containing the argument for it. And we would like to check it looking only at 20 bits. So what can be proved here? Well, again, despite how ridiculous it is, the PCP theorem
says that everything in NP has such a probabilistically checkable proof. Every proof can be, of course not the original proof maybe, like with zero knowledge,
but if you have any proof of the Riemann hypothesis, it can be turned efficiently into a, you know, slightly longer proof, which can be checked in exactly this way, just by reading a constant number of bits, with high probability. If this were really practical, it would be a dream for refereeing.
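To illustrate only the access pattern of such a verifier (a toy sketch; the whole content of the PCP theorem is the encoding that makes so few queries sound, which is not shown here, and local_test is a placeholder):

```python
import random

def pcp_style_check(encoded_proof: str, local_test, queries: int = 20) -> bool:
    """Read a constant number of randomly chosen positions of a (suitably
    encoded) proof and run a local consistency test on just those bits.

    For this to catch false claims with constant probability, the proof must
    first be re-encoded as the PCP theorem prescribes; a raw, unencoded proof
    would not be sound against such spot checks.
    """
    positions = [random.randrange(len(encoded_proof)) for _ in range(queries)]
    bits = [encoded_proof[i] for i in positions]
    return local_test(positions, bits)
```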
I mean, we would do it in seconds and not refuse to referee everything. Of course, the real value is amazingly deep and important contributions to optimization, to coding theory, to complexity theory, which I will not talk about.
And part of it can actually be made efficient enough to be used in technological products that you are all using, in fact. But I want to go back to talk about the extension, the quantum extension, and the last breakthrough that I advertised in the title.
So remember that we understood completely, we have characterized the computational power of these new proof systems with errors, interactive proofs with one prover and multiple provers.
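In symbols, the classical characterizations recalled here line up as

\[
\mathrm{NP} \;\subseteq\; \mathrm{IP} = \mathrm{PSPACE} \;\subseteq\; \mathrm{MIP} = \mathrm{NEXP},
\]

and the question for the rest of the talk is what happens at the right-hand end once quantum resources enter the picture.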
And this is if we allow randomness and error. Now, beyond randomness, we are going to add quantum power. So again, as I am introducing quantumness into proof systems, I want to
first tell you about quantum computation, as I did with randomness. So let's have a one-slide tutorial on quantum computing. Again, you start from the same observation: nature seems to provide us access to
quantum phenomena. Quantum mechanics seem to work in our world. And so again, why not use it in computation? This was suggested independently in the West and in the East. Let's build quantum computers that manipulate their objects
according to quantum mechanics and manipulate superpositions using unitary operations, manipulate qubits. It took time to formalize this notion; it's not as simple as formalizing normal computers and algorithms. But it was done, and now, just like we have
BPP for probabilistic algorithms, we also have BQP for efficient quantum algorithms. And so it went for a while until a bombshell landed, when Peter Shor in '94
found out that quantum algorithms can efficiently solve both factoring integers and discrete logarithms. And of course, as you know, these are not arbitrary problems. These are the very problems whose assumed hardness underlies all security systems for online
shopping, everything in the current world. And so there have been frenzied attempts since then to develop many things. First of all, to try to build quantum computers, build this technology. So far billions have been invested in this and some progress has been made.
There are ongoing attempts to develop post-quantum cryptography, cryptography based on new hardness assumptions that will be able to withstand quantum computing if it becomes available. People try to invent new quantum algorithms for other problems, maybe for
NP. Of course, there is no such progress so far; very few new problems have quantum algorithms. And the one I want to focus on is the development of new models in which we have
the quantum power. And quantum proof systems are the ones I want to talk about. So what happens with quantum? In the same way, we add quantum power to the verifier; the prover, of course, is infinitely powerful, so you don't need to add anything there. So let me explain this.
I'll tell you the history. In 1999, quantumness was introduced into the one-prover proof systems, and this is denoted by adding a star for quantum. And again, with the
methodology of trying to understand the power of proof systems computationally, which complexity theorists do by instinct by now, it took about a decade to show that actually adding the quantum power in interactive proofs with one prover does not add any power. It's
the same as the classical interactive proofs, just PSPACE. Then people introduced quantumness to multi-prover proof systems, MIP star. And here the story was even more amazing. Again, it took some
time, almost a decade to just prove that it's at least as strong as the classical MIP notion because both the verifier and the prover get extra power here. But this was done. Everybody expected that again there'll be equality, that quantumness does not add power. But
like seven years later, it turned out that yeah, you can do more. You can do non-deterministic doubly exponential time, which is exponentially more powerful than just non-deterministic exponential time. So it became clear that MIP star is stronger than MIP. And the culmination of this was in this breakthrough of Ji et al., that MIP star equals RE, namely, if you remember from
the first slide, RE is a class Turing defined, the recursively enumerable sets, which contain uncomputable problems. So quantum proof systems can prove uncomputable problems.
Totally surprising everybody. But the surprise within, let's say, computational complexity, theoretical computer science, maybe quantum information theory, was not the end of the story, because this result is so deep that it implies many other
results in mathematics and physics. It provided counterexamples to very famous conjectures: in quantum information theory to the Tsirelson problem, in von Neumann algebras to the Connes embedding conjecture, in group theory to the Kirchberg conjecture. And I'm sure many more
will be found. I'm not going to explain this, but just saying something. The nature of the result deals with the power of measurements on large Hilbert spaces. And in one word,
that's a connection to these various subjects. I want to explain more about the nature of these quantum interactive proofs with many provers, and how they relate to classical questions, almost a century old, in the foundations of quantum mechanics. So for this, it turns out
that it's sufficient, and it's sufficient in fact in general, to look at just one-round proof systems, just one question and answer, which leads to the point of view of proof systems as games. What do I mean by a game? So here let's go back to the classical 2-prover setting.
We have this picture: the verifier asks one question of each prover, gets one answer, and then decides to accept or reject. Well, what do we mean by decide? I mean,
you know, okay, the questions are generated at random, but at the end, you know, the verifier has a function and evaluates it, and whether to accept or reject the claim is decided by whether this number is large or small. So let's define the value of this game to be the maximum probability that this verifier accepts, you know,
given the questions and answers, under the best strategy that the provers could have devised ahead of time. Okay, so that's a number, a number between zero and one. It's a probability.
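In symbols, roughly (a sketch: q1 and q2 are the verifier's questions, drawn from its question distribution, and the maximum is over the provers' predetermined answer functions f1 and f2):

\[
\mathrm{val}(G) \;=\; \max_{f_1, f_2}\ \Pr_{q_1, q_2}\big[\, V\big(q_1, q_2, f_1(q_1), f_2(q_2)\big) \text{ accepts} \,\big].
\]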
In contrast, by the way, this doesn't change even if the provers have, you know, access to a common random string, not communicating, but just access. In the analog, in the quantum analog, it's the same picture only that now the provers have
access to a quantum state, to an entanglement. I realize not everybody knows what it is, so I'm sorry about that, but the quantum analog of a random string is a quantum state. And we can define another value, you know, what's the maximum probability that the
provers can convince the verifier, now with this extra power, to accept the answers they give to the questions. In computer science, we study, you know, just the complexity, the computational power, of the system, so just how hard it is to approximate these values. That's another view of what we've seen, and we've seen these characterizations.
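And correspondingly for the entangled case (again only a sketch): the provers share a state |ψ⟩ and answer by local measurements {A^{q1}_{a1}} and {B^{q2}_{a2}} on their own halves, giving

\[
\mathrm{val}^{*}(G) \;=\; \sup_{|\psi\rangle, A, B}\ \sum_{q_1, q_2} \pi(q_1, q_2) \sum_{a_1, a_2} \langle\psi|\, A^{q_1}_{a_1} \otimes B^{q_2}_{a_2} \,|\psi\rangle\; V(q_1, q_2, a_1, a_2),
\]

where π is the distribution on question pairs and V(·) ∈ {0,1} indicates acceptance. Which limits of such strategies one allows is exactly the kind of subtlety behind the Tsirelson problem mentioned in the abstract.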
But these one-round, you know, proof systems or games, if you, you know, stare at them, and this was already understood when MIP star was defined, they have appeared in physics many, many decades ago. Here at the Institute where I am, Einstein with his postdocs Podolsky
and Rosen in 1935 introduced his famous, you know, Gedanken experiment to test quantum mechanics. If you squint at this picture here, you may see a two-prover system. You may see two provers,
these detectors at both sides, and the claim is just, you know, about the source generating two photons traveling at the speed of light in opposite directions after being generated by a single event, so they are entangled. And they proposed it as a way to test quantum mechanics.
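The most famous concrete instance of such a test is the CHSH game of Clauser, Horne, Shimony and Holt: each prover receives a uniformly random bit, replies with a bit, and the pair wins if the XOR of the answers equals the AND of the questions. Its classical value is 3/4, while sharing an entangled state raises the value to cos²(π/8) ≈ 0.85. A small brute-force check of the classical value, as a sketch:

```python
from itertools import product

def chsh_classical_value() -> float:
    """Brute-force the classical value of the CHSH game.

    A deterministic strategy is a pair of answer bits for each question bit;
    shared randomness cannot beat the best deterministic strategy, so it is
    enough to enumerate all 16 strategy pairs over the 4 question pairs.
    """
    best = 0.0
    for a0, a1, b0, b1 in product((0, 1), repeat=4):
        wins = sum(
            ((a0 if x == 0 else a1) ^ (b0 if y == 0 else b1)) == (x & y)
            for x, y in product((0, 1), repeat=2)
        )
        best = max(best, wins / 4)
    return best

print(chsh_classical_value())  # 0.75; entangled provers reach about 0.85
```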
In the language of physics, the top case is called the classical or local game, and, you know, the random thing is called hidden variables; these are the hidden-variable models.
In the quantum case, it's called a non-local game, and what the provers use may be, you know, called the spooky action at a distance that Einstein didn't believe in, because it seems to violate the speed-of-light limitation. And there were many experiments designed in which,
you know, the quantum value was bigger than the classical value to show that, you know, it doesn't make sense or you at least have to explain it. And that was really the question, is quantum mechanics complete? That was the challenge in these experiments. And a sequence of very important influential works designed more and
more examples within the realm of physics and quantum information theory, more examples in which indeed the quantum value was larger. And eventually it led to, you know, such efficient tests that they could be implemented and it could actually be confirmed, in the field, in the world,
that quantum mechanics, the predictions of it at least, work. You know, the meaning is still debated even today. On the computer science side, the paper I told you about that introduced MIP star looks at things the way we always look at them in complexity: it suggested to study together, as a class, all
non-local games, not necessarily those that are implementable or those that come from physical considerations, just in the abstract, as we did with other proof systems. And it is critical that this allows the use of the tools of complexity theory, of
reductions and amplifications and completeness, and actually some tools that were used before for some of the previous results, PCPs and codes, in order to arrive at this result of MIP star equals RE, namely understanding the full power of these two-prover quantum proof
systems and via that arriving at these amazing consequences in mathematics. To summarize, what have we seen? We've seen a lot, and maybe at a high speed.
We've seen lots and lots of proof systems, and we've seen the evolution of the nature of proof systems, especially after we allow randomness and errors and then quantumness into
proof systems, but stressing that in all of them, the definition insisted that the verification process is efficient. So even for this latest result, verifying an uncomputable function is done by an efficient verifier. We saw that
allowing interaction and randomness and errors in proofs leads to proof systems with paradoxical properties, like zero-knowledge proofs or these probabilistically checkable proofs that do not require you to even read most of the argument
given. Something I'd like to stress again and again: theoretical definitions, starting from Turing's definition of an algorithm, this Turing machine, and many others we've seen, both predate and enable practical developments, and then of course also scientific and mathematical
and societal ones. I hope the importance of the complexity-theoretic methodology was clear: modeling classes of problems according to computational complexity and algorithmic efficiency,
and the use of these tools that recur all the time, reductions and completeness and classification, in building up these theories and these characterizations.
Finally, something I didn't get a chance to talk much about, but there is a lot of interaction between different subdisciplines. You saw it mainly in the last slide about the impact of this result, MIP star equal re on physics and mathematics, but I also mentioned
earlier the connections to coding theory and to optimization, although I didn't explain it. So this is a very, I would say, interactive discipline. The algorithmic point of view
connects to lots and lots of other fields, and this cross-collaboration just amplifies both the use and the progress of these fields. So again, let me end up by noting that you can always look for more in this book I wrote a couple of years ago on complexity theory
and it's available freely on my website. Thank you very much. Thank you Avi, thank you very much for your wonderful and interesting lecture. We will start off with some questions. So some of the young researchers already asked
questions during your talk, and there are two questions that go in the direction of applications. So let me start with one of them. So it's the question. I don't hear you well, Anna. I don't hear you well. Can you, do you hear me better now? Yeah. Okay, good. So let me start with one question which goes in the direction of applications.
And so there was one question: what do you think are the areas where these probabilistic proofs will be mostly applied or find most applications? So we keep being surprised. When we developed zero-knowledge proofs, we were sure that
they are too complex to be implemented anywhere. And it took maybe 30 years, but we were proven wrong and people could implement them. There are so many uses already, applications of these probabilistic proofs. I mentioned one that everybody's familiar with,
just doesn't know that that's where it came from, you know, or that it's used: cloud computing. I mean, you know, your cell phone does so many things for you. It computes so many things, but of course it doesn't really do it, right? I mean,
it's not done on your cell phone, it's done somewhere in the cloud on some powerful server. Well, how do you, or your, you know, your cell phone, know that the computation that was done was correct? I mean, how do you verify, in cases where it is important, let's say, that it treats your login, your password as it should, and so on.
These are interactive proof techniques. I mean, that's a very basic use of interactive proofs. Interactive proofs are used now routinely in a variety of digital currencies that are available.
They heavily use ideas from PCP technology, for example. So there are so many and I wouldn't be surprised if we're seeing newer and newer ones. It's a very basic fundamental concept and allowing errors in this type of proof is, of course, extremely natural. It's not that you
will not know the value of the memory processes, whether it's a jump. Of course, we know that in real life there are also errors, and sometimes they are not discovered. But here, errors are actually robust and can be reduced arbitrarily, so there's value in that.
One great question, which goes in a slightly different direction: would you say that the current state of the art of PCP proofs is somehow enough for applications, or should it develop further to somehow bring out more applications?
So here's another question from the young researchers. We now have quantum computers with about 66 qubits. How strong must a quantum computer be to realistically implement quantum
proof systems for typical problems where you would like to apply them?
Yes, let me come to one other point of your lecture, which you highlighted also in the summary and which for me really is a wonderful example in your talk about the interaction
between the different fields, and also between questions coming more from computer science and somehow then having an influence on mathematics, and even giving counterexamples to some long-standing conjectures in mathematics. So in this, I mean, from the result,
this MIP star equals RE. Do you expect more applications within mathematics?
Several of this nature, but let me tell you one. Mathematics has lots of existence theorems. You prove that something exists. The complexity-theoretic message is: okay, it exists. How long
does it take to find it? When you just ask this question, you are led to doing far deeper mathematics on the same question that you looked at. And this was done by mathematicians even before complexity theory existed. But now, with the interaction
of complexity theory and math, we see applications in algebra, in topology, in group theory, in geometry. Anywhere you look, there are connections like this. So this
MIP star equals RE is just one particularly striking example of the connection between the algorithmic point of view, here in the context of proof systems, and mathematics. Yes. So I have one other question, which addresses the axioms you made. So for example,
already in the non-quantum probabilistic proof systems, you started with the axiom that randomness is always there given to us by nature. But of course it's not that easy, I think, in general. So is there, could you comment on that perhaps?
Yes, it's a beautiful field. It's probably the area to which I devoted most of my time in the last few decades: understanding the power of randomness, and what we do if nature does not
provide us with randomness at all, or maybe provides us not with pure randomness but rather with weak randomness, with just some entropy, but not nicely structured like independent unbiased coin flips. And there are beautiful theories in computational complexity which tell
you what you can do in this situation. So I'll tell you one theorem, because I like it and I was involved with it, a result with Russell Impagliazzo. If you believe that
P is different than NP, okay, there is an assumption of this nature. If you believe it, then randomness is not needed in probabilistic algorithms. Probabilistic algorithms can be replaced by deterministic ones, without error, that are as efficient. So we have connections
between randomness and computational difficulty and that is just one example. So we have a lot to say about randomness, but yeah, it's certainly an assumption for probabilistic algorithms that you have access to such pure random coins and you are absolutely right that there is no
real reason to assume it, which is why we study what we can do without it. Yes, so thank you. So there's one other question raised in the chat: I mean, the applications you described of probabilistic proofs are sometimes really important and
touch our society. So do you think that there's some responsibility also in developing those, or a special responsibility researchers should take? There is always a responsibility, with everything you discover. I'm sort of a believer that technology and ideas cannot be stopped, and the important role is for society to make
good use of technologies. Of course, this is much more benign than nuclear power and nuclear bombs and so on, but I agree completely that responsibility is important. I want to maybe mention one
area which demonstrates the responsibility of computer scientists in this development. Now that artificial intelligence is entering all parts of society, and in particular decision-making
by judges, by bank officers, by parole officers, by, you know, practically everyone, it matters that these algorithms will be fair, and the very definition of fairness, which is something that of course philosophers and the social, you know, academic world look at, is getting renewed through
definitions that are coming from the algorithmic point of view. So lots of my colleagues are doing that. So I think you see the responsibility is taken automatically, and probably partly because it's also intellectually challenging and interesting, and
beautiful things already came out. A technical question: when you went from one prover to having multiple provers, that really changed the game in a sense. So one young researcher is asking whether you could also change the game by introducing multiple verifiers, or would it just stay the same?
Yeah, so the question is what would you define as the, you know, the set of theorems in the system, the fact that, you know, you can ask this question and you have to say, well, is it something that both verifiers are convinced of or only one of the two and so on. But
it's not clear to me that this will add something new. I can't say for sure about this, but, you know, it depends what you want the verifiers to do. It's clear what more provers are doing, they can do different things. A verifier, all the verifier wants to know
is whether the claim is true. I would assume that more verifiers would want to know the same. I mean, of course, you can just take the existing definitions and allow more verifiers in them and it won't change. But maybe you are thinking about different definitions. I don't know.
Yes, thank you very much. So perhaps in the last minute we have, can you give us a glimpse into what is somehow the most important or interesting problem you are currently thinking about? P versus NP. I've been thinking about this all the time. I think about it in many different ways,
and in the last five years I have been spending time trying to understand, or hopefully prove, but I don't know, an algebraic version of it. This is called the VP versus VNP question.
So, trying to prove that in the algebraic setting, when you do addition and multiplication as opposed to Boolean operations, P is different than NP. And this has amazing connections to
invariant theory, at least the route that we are taking, amazing connections with invariant theory and algebraic geometry and really cool stuff. So I'm learning a lot. That's a summary of it. Thank you very much. And let me just say that, I mean, to the young researchers, you have the opportunity to sign up for Laureate group discussion with Avi and you will also see Avi
again in a Laureate discussion later this week. So if you still have questions, you can try to catch him and ask your questions in these other formats. Thank you very much, Avi. It was wonderful to have you and see you later this week.