
3/3 Operator limits of random matrices

Transcript: English (automatically generated)
I invited everybody to vote what to talk about, and the votes came in, so I'm going to talk
about the bulk limit this time. And this lecture is going to be in nature slightly different from the previous two. We're not going to start from random matrices, but rather we're going to go from top down. So I'm going to talk about these operators first, and I'm going to tell you roughly how you get to them. There's going to be a little bit more storytelling than
before. I'll try still to give you some math, at least. So I think the first thing that I have to talk about is the hyperbolic plane. And that's just because I gave you all this nice story about
how you care about geometry and random matrices, and then people were complaining that the only geometry they got was the geometry of the line. So let's talk about the hyperbolic plane, kind of unrelated to everything else. So you know that the hyperbolic
plane has a long story, starting with the parallel postulate of Euclid, which everybody tried to prove, and nobody could. In fact, Aquinas, you probably know, said that even God
couldn't do such a thing, that is, couldn't make the sum of the angles of a triangle anything other than 180 degrees. And it went all the way until the 1800s, when three people independently found a geometry where this thing doesn't hold. So this is Gauss, Lobachevsky, and Bolyai, and this is the hyperbolic plane. So as you know, there are two nice models. I mean,
there are several nice models, but the ones I'm going to talk about are the Poincare half plane model and the Poincare disk model. And, you know, this is the Poincare disk model.
This is, you know, the hyperbolic plane; just think of it as a manifold where, if you have something here of length epsilon in the Euclidean plane at distance r from the center, then the hyperbolic length here is epsilon over 1 minus r squared. So
this is a disk of radius 1, and things that are short here in the Euclidean sense are longer as you get to the boundary. And the same thing is true here: if the height is y, then the length element is 1 over y. So as you get to the boundary, things get longer.
And I don't know, the standard thing that you probably have seen is these Whitney squares here, which is a nice way to think about the hyperbolic plane. So if you have squares here and you have
squares of half the size, and then you have here a quarter of the size and so on. And all of these squares are actually isomorphic. In fact, this is a transitive lattice: if you pick any point and another point,
there is an automorphism of the hyperbolic plane that takes one to the other and keeps this square structure. So this sort of tells you, for example, that if you have a point here, say i in the hyperbolic plane, and something which is at Euclidean distance epsilon from the boundary, then its distance from i is about log epsilon.
You can just see it from the squares, for example. So this distance here, if this is Euclidean distance epsilon, then the hyperbolic distance from i is about log epsilon; in fact, it's exactly log epsilon. So this distance here, d_H, is log epsilon.
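As a quick aside, here is the standard computation behind that statement, written out explicitly (the factor 2 in the disk metric is one common normalization and plays no role in what follows):

```latex
\text{Half-plane model: } ds=\frac{|dz|}{y},\qquad
d_{\mathbb{H}}(i,\,i\epsilon)=\int_{\epsilon}^{1}\frac{dy}{y}=\log\frac{1}{\epsilon};
\qquad
\text{disk model: } ds=\frac{2\,|dz|}{1-|z|^{2}} .
```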
OK. The other thing that you know is that rotations, translations, and isometries of all kinds of the hyperbolic plane all correspond to Möbius transformations, so linear fractional transformations that fix, of course, the corresponding object,
so the disk or the line. And those form various groups. This one is SO(2,R), and this one is SU(1,1), these groups of linear fractional transformations.
So there is also a natural notion of boundary, which is kind of obvious, right? So this is the hyperbolic plane, and this is the boundary; you know what it means to converge to a boundary point. So in this model, the boundary is completely obvious. Sorry, can I just ask? Yes. You said it's log epsilon, but if epsilon is small, then this is negative. OK, thanks, yeah, thank you. So yes, it should really be minus log epsilon; log epsilon itself is negative.
So OK, let's see what's happening here. There is also an interesting thing. Suppose that there is a sun in the hyperbolic world. So what does the sun do?
OK, so that's days and nights. I don't know, whatever you like. Then there is this strange thing, that you actually have different times depending on where you are. So for example,
if you're here, you see the angle of the sun changing differently than if you are here. And this is, again, different from the Euclidean setting. OK. So in other words, there is another thing you can do, and this is what we're going to do, so I'll tell you, which is that you have a boundary point. And you're interested in,
so let's say you have a boundary point, you're interested in rotating the boundary point about some center of rotation at some speed. OK. So if this center is actually the center of the disk, then this is just the trivial rotation. OK. But if the center is somewhere else,
then you can figure out what this rotation is by conjugating it: sending the center to the center of the disk, doing the trivial rotation, and then sending it back by this Möbius transformation. And you know, these Möbius transformations leave Brownian motion, for example, invariant up to time change. OK. So this gives you a way of understanding how fast
the rotation goes. And what happens is that before this change of center, you cover unit harmonic measure in unit time, harmonic measure for Brownian motion. And harmonic measure is just the
hitting measure on the boundary. And that's going to be true also if you're rotating from somewhere else, but the harmonic measure is different. OK. So in fact, the actual speed, the Euclidean speed, is going to be the inverse of the harmonic measure. So if you're close to here, it goes slower. If you rotate from here, then it goes much faster. OK. So that's just the geometry, that's hyperbolic geometry. And I'm
going to talk about an object called the hyperbolic carousel, which we introduced with Benedek
Valkó in, I think, 2007. And it's just something very simple. So you have a path in the hyperbolic plane,
OK, which we call BT. So this is a path. It doesn't have to be continuous. I still call it a path. And then you have a point on the boundary, which you call, I don't know, gamma t.
OK. And you do the following operation. You don't know, I'll tell you later why, but let me tell you what you do. So you rotate this point gamma t with center BT at speed lambda. OK. So I write this down. OK.
So if you're in this Poincaré disk model, and you write everything in the Poincaré coordinates, then you can write an ODE. So let me write that. Gamma prime of t is lambda times, well, you have to put in this inverse harmonic measure; the way it looks is the distance between gamma and B_t, squared, divided by 1 minus the absolute value of B_t squared.
So that's the ODE that gamma satisfies. No, this is Euclidean distance. But that's not important.
So this is written in Euclidean coordinates, but this whole thing is defined intrinsically. We don't care about what's going on. So then I'm going to define another point on the boundary.
So let's call this u0 and u1. And let's say that t is in the interval 0, 1. So you run all these things until time 1. And you're going to define n lambda. So it's a function of lambda,
OK, which is just the number of times you pass u1. And the thing doing the passing is gamma of t.
So you do this rotation around your path. You may go around several times, and you just count how many times you have passed this point u1. OK, so what is it? It's some kind of increasing function, right? It's a step function, an integer-valued step function.
Is it counted with a sign, is it algebraic? You could pass in one sense or the other. No, this one is always going in the same direction, so there is no issue like that, OK? It's increasing, right? Well, you first pick lambda, and you rotate at speed lambda. Lambda could
also be negative, so you can extend this to the negative direction as well, OK? Something like this, the same way. So gamma is a path, it's given to you, it's a parameter?
And what does gamma prime t define? Oh, sorry, B is a path; B is given to you. Gamma is determined by this, so it's gamma that depends on lambda, OK, and on t. Is that gamma prime or gamma? Gamma prime. So the change in gamma, how much this angle changes,
is lambda times this speed factor, which is the inverse harmonic measure. So it's a definition for gamma prime? Yes, this is the derivative of gamma. So this is an ODE, and you solve the ODE.
I just wrote what I told you in words in math, OK? That's all, there is nothing more to it. OK, so what you get here is this triple, which is (B, u0, u1), right? B is a path, and these are two points on the boundary. And to this triple you associate this n lambda, which is a counting function for some set of points,
OK? So at the points where it jumps, you have points, right? So this is the same thing as some set of points, capital Lambda, right?
So this is called a hyperbolic carousel. You have a path and two points on the boundary, and it gives you a bunch of points. Sorry, so this is the denominator, the Euclidean distance or the...? This is all in Euclidean coordinates, the Poincaré disk coordinates. OK. But it's written here so that it's very explicit. As you see, the explanation shows
you that this is intrinsic. You can define it anywhere, it doesn't matter. Do it in the half plane if you want. Actually, we're going to do it in the half plane.
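To make the definition concrete, here is a minimal numerical sketch of the carousel, not from the lecture: the driving path B, the target point, and the Euler discretization are placeholder choices, and the ODE is written with the boundary point as e^{i gamma(t)}, the form the speaker corrects to later. It just integrates the angle and counts how many times it passes the target.

```python
import numpy as np

def carousel_count(lam, B, u0_angle=np.pi, u1_angle=np.pi, n_steps=20000):
    """Integrate the carousel ODE  gamma'(t) = lam * |e^{i gamma} - B(t)|^2 / (1 - |B(t)|^2)
    on [0, 1] with forward Euler, and count how many times the boundary point
    e^{i gamma(t)} passes the target point e^{i u1_angle}.  B(t) must satisfy |B(t)| < 1."""
    dt = 1.0 / n_steps
    gamma = u0_angle
    passes = 0
    for k in range(n_steps):
        b = B(k * dt)
        speed = lam * abs(np.exp(1j * gamma) - b) ** 2 / (1.0 - abs(b) ** 2)
        new_gamma = gamma + speed * dt
        # gamma is nondecreasing for lam >= 0, so passes are upward crossings of u1 + 2*pi*k
        passes += int((new_gamma - u1_angle) // (2 * np.pi)) - int((gamma - u1_angle) // (2 * np.pi))
        gamma = new_gamma
    return passes

# Toy check: a path frozen at the center of the disk gives the trivial rotation at speed lam,
# so the count should be roughly lam / (2*pi).
print(carousel_count(lam=13.0, B=lambda t: 0j))
```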
OK, so I'm going to give you various examples of this. So this is, we're going to build lots of examples. But first, let me try to write this in terms of automorphisms, because, of course, the nice way, the really nice way to handle it is in terms of automorphisms of the hyperbolic plane. And so let's go into the half plane model,
like this. So this is the Poincare half plane. So we're going to write b of t in this world.
So it's the upper half plane. We're going to write it as xt plus iyt. So you get from here to there by a Cayley transform. So it's another LFT that takes this to the half plane.
So let's see how we can write this in terms of matrices. So we're going to take the boundary point gamma. We associate it to some function f of t, which is going to be actually a vector.
So f1 of t, f2 of t. It's going to be a two-vector, because that's the way you represent the Möbius transformations in terms of matrices: points in the complex plane are going to be two-vectors, and they correspond to the ratio. So this corresponds to f1 over f2
in the complex plane. But this is on the boundary, so this ratio actually happens to be real, because this is a boundary point, and the boundary here is exactly the real line. That makes things a little bit nicer. So this is gamma.
And so let me see how you make this into a rotation. Let me just give you one simple thing. So look at the ODE f prime equals one half times the matrix with rows (0, 1) and (-1, 0), times f.
Let's just look at this ODE. So what this does, and I'm not going to go into the details of this, so let's check. So let's actually do an exercise. This ODE is the rotation at speed one
about the point i. So there is the point i in the upper half plane, and this exactly corresponds to rotation at speed one about that. You could simply deduce this by writing the rotation in
the Poincaré disk model and just conjugating it to the half plane. Is this good so far? Okay, so that's the rotation.
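As a sanity check of the exercise, here is the computation in the half-plane model (a sketch, using the usual convention that a 2-by-2 matrix acts as a linear fractional transformation): the flow of this ODE fixes i.

```latex
e^{tJ_0/2}=\begin{pmatrix}\cos(t/2)&\sin(t/2)\\-\sin(t/2)&\cos(t/2)\end{pmatrix},
\qquad J_0=\begin{pmatrix}0&1\\-1&0\end{pmatrix},
\qquad
\frac{\cos(t/2)\,i+\sin(t/2)}{-\sin(t/2)\,i+\cos(t/2)}=i .
```

So the flow is a hyperbolic rotation about i, and boundary points move at (inverse) Cauchy speed, exactly as described below.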
The first example, wasn't it rotation with center b? Yes, but this one has center i; we're getting there, we're getting there. This is just an exercise. Do you mean it's a Euclidean rotation or a Poincaré one?
No, not a Euclidean rotation. This is the hyperbolic rotation at speed one about i. It corresponds to the Euclidean rotation: it's the same as the Euclidean rotation in the Poincaré disk model, but in the half plane it's not. In the half plane these things move at Cauchy speed basically, or inverse Cauchy speed. So you just write the Cauchy
density and the inverse of that, because that's the harmonic measure. So the inverse of that will be the speed. So things that are far away from i will move very fast on the boundary and so on. Is it true that the rotation, if I start from the real lines, will be staying on the real lines? Yes, because the boundary is kept in these rotations always. And if you start somewhere
else, you're going to go in a circle, because circles are actually also circles in the hyperbolic plane. But the Euclidean center is not i. This is a circle. OK, so now let's introduce this matrix valued function, which is actually very simple. So it's
the matrix with first row (1, -x of t) and second row (0, y of t). So we just made an affine matrix out of x and y. And it has the property that if I take X, think of it as a Möbius transformation,
and apply it to the point x plus iy, so to the vector (x plus iy, 1), I suppress the t,
then you get, let me put a dot here so that this is matrix multiplication, but it also corresponds to the hyperbolic action, then you get, I think, some constant times the vector (i, 1). So it takes the point x plus iy to the point i
in h, as an action on the hyperbolic plane. So why is this good? Because this will allow you to write the evolution of gamma. So what is going to be the evolution of gamma?
It's going to be the evolution of f, right? So it's going to be: f prime is equal to, let me put the lambda over 2, because of that one half, times this matrix (0, 1; -1, 0). And you have to conjugate it by X,
because X will take this point to i. And if you conjugate it by X, then the rotation is going to be exactly about that point, and not about i. So you just put here X inverse and X. So this is another equation for the same thing,
except in terms of matrices. Excuse me? Shouldn't it be X, then X inverse? I think it's, well, I hope not.
Do you think it's this one? It's possible, yeah. Let's see. Maybe it's this.
OK, so let me double check what I have in my notes. I wrote it like this, yeah. OK. So now here is one thing. So if you look at this matrix, this matrix you can write as follows just because of the properties of
2 by 2 matrices of this matrix x. You can write it as x transpose x over the determinant of x, which is, by the way, y. So I'm just going to write it so that this is just y.
Times the matrix 0, 1, minus 1, 0. Just some identity. Check it out. And I'm going to call this matrix R.
So now I've got the following equation, let me write it again: f prime is equal to lambda times R over 2, times the matrix (0, 1; -1, 0), times f. And this R is non-negative definite,
in fact positive definite in our case. So we did this computation, and we got to a place, which is actually very nice. Because this object that you see here
is called a canonical system. So that is exactly what a canonical system is.
So let me tell you what canonical system is. So this is this is something that comes from scattering theory. And basically, there's a history of understanding various generalizations of scattering theory.
In some sense, this is a continuous analog of a tri-diagonal matrix. So that's one thing I could say. It doesn't look like it, but in fact, you can put tri-diagonal matrices into this form. And this is somehow the nicest generalization that there is.
And the theory of these things was worked out by de Branges, in this beautiful book from the 60s called Hilbert Spaces of Entire Functions. So he's the one who
unified the theory of such objects. And I'm not going to tell you the details of this, but the one thing that's important is that this book is actually perfect.
There are things of de Branges that are not perfect, but this one is almost perfect. It's not easy to read: the language is very simple, but almost everything is done in exercises, so that's one problem. And actually, if you want to learn about canonical systems,
there is a beautiful, very recent review by Romanov, which basically takes de Branges's book and explains it to people who have finite patience. So this is a canonical system.
And what did de Branges use canonical systems for? He basically tried to use them to prove the Riemann hypothesis. This was later, after this theory was completely developed. So basically, he has some papers where he says, well, you can set up such a canonical system.
To this canonical system, you can associate eigenvalues, I'll tell you in a second how. And you can set this up in a way that the eigenvalues are exactly the non-trivial zeros of the Riemann zeta function.
Transformed to the real line. So that didn't work, at least so far. But then, of course, what's the random analogue of the zeros of the zeta function? Well, that's the random matrix eigenvalues, the bulk limit of the GUE.
So even though nobody knows if you can put the Riemann zeta zeros here, you may ask if you can put the GUE in here, in this kind of setup. And the answer is yes. And that's what I'm
showing you. What does this have to do with an operator? Well, let me see if I did this right. Maybe the R over 2 goes here. Sorry, I think... yes. OK, so this is how it goes.
So this is a canonical system. And it contains many things, again. I'll show you how it contains
unitary matrices in some sense. It also contains tridiagonal matrices. As I said, it contains Schrödinger operators with a potential; you can put them in this form. And then it contains the Dirac operators. And the Dirac operators are the case
when this R is invertible. So R is always non-negative definite; it may have zero eigenvalues, but if it has no zero eigenvalues, then it's invertible. And then I can put this in the following form. So let's see what I do. Take the inverse of this matrix, which is (0, -1; 1, 0), and then you take the inverse of R over
2, so you get 2 R inverse. And I write: 2 R inverse times the matrix (0, -1; 1, 0), applied to del_t f, equals lambda f.
This is what I got from here. So what is this? So this we can call tau. This is an operator. And this is the eigenvalue equation of the operator.
It says: I take some function f, I apply to it some operator, and I get back lambda times f. So what we have produced is: given this hyperbolic carousel, we have produced an operator
whose eigenvalues are exactly those points. And this is actually the way that this was done historically. We came up with this carousel 10 years ago, and then we found we can do it as an operator much later. And here is how you do it. It's pretty simple.
So again, what kind of operator is this? You take a vector-valued function on the interval (0, 1), you may want to leave this open at 1, I'll tell you why, with values in R squared. OK. And there have to be some boundary conditions, which correspond to the starting and ending
points there. So it tells you that f at 0 is parallel to some vector, say (u0, 1) or something, and f at 1 is parallel to (u1, 1). So those are the boundary conditions.
These are f at time 0 and f at time 1; they're vectors. And then you differentiate f and apply this matrix to that vector-valued function; you get a new vector-valued function, and you check whether it's equal to lambda f. And this R, again, depends on the parameter t there.
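Collecting the pieces from the board into one display (same symbols and normalization as written above; this is just a restatement of the transcript, not an independent derivation):

```latex
\tau f := 2\,R(t)^{-1}\begin{pmatrix}0&-1\\1&0\end{pmatrix}\partial_t f=\lambda f,
\qquad
R(t)=\frac{X(t)^{\mathsf T}X(t)}{\det X(t)},\quad
X(t)=\begin{pmatrix}1&-x(t)\\0&y(t)\end{pmatrix},
\\[4pt]
f:(0,1)\to\mathbb{R}^2,\qquad
f(0)\parallel\begin{pmatrix}u_0\\1\end{pmatrix},\qquad
f(1)\parallel\begin{pmatrix}u_1\\1\end{pmatrix}.
```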
So OK. So we have some kind of identification here: from a path you get the point process, and you get an operator. So do you assume that f is continuous? Well, yeah, it should be differentiable.
OK. And at 1, is it a limit? Excuse me? Because f is not defined at 1. Yeah, so at 1 there are some technical issues; I'll probably not tell you about this. Yes? About the time. So you say it's from 0 to 1, but in this picture of n lambda, where's the time there, on the upper board?
The time there is...? In n lambda, what you draw on the right side, the upper board. Right. So in n lambda, the time is gone. You get the n lambda at a fixed lambda by running this whole process through the entire time. So when you run this process, you get a number, how many times you have passed,
and that will be your n lambda for that lambda. And if you want to compute it for another lambda, you do it again with a larger lambda. But is this now time to infinity, or time to...? Yeah, you can do it for any lambda, so this is defined on the whole real line. Now here, you fix the time and you vary lambda,
whereas before, you would fix lambda and vary the time. Well, you can do, you know... so let's clarify this. So how do you check whether lambda is an eigenvalue?
So what you can do is you can just start F with this initial condition and solve this ODE, right? It's an ODE if you write it like that. And then you see what happens with this ODE. Well, this thing F is going to go around the boundary of the hyperbolic plane. And if it ends up at U1, okay, then you're happy.
That means I have an eigenvalue. So that's what you can see from here. But in fact, there is an oscillation theory, a Sturm-Liouville-type theory, which says that you can say actually more. And the more that you can say is exactly what I put up there: the number of times you have passed in this rotation will tell you how many eigenvalues
there are between zero and your lambda. And it's kind of obvious if you think about it; it just follows from a continuity argument, which I won't do here. So these two lambdas are the same?
These are the same lambda, yeah. So the eigenvalues of tau are exactly the points in capital Lambda, the places where n lambda jumps. It's clear: you can always solve this for F, F always exists, and the eigenvalues are exactly where the right boundary condition is satisfied.
we can solve this for F. And that's when the right boundary condition is satisfied. So let's do some examples. What if you just set R to be the identity?
Okay. Or, this is the same as setting x plus iy equal to i. So you're just rotating about the point i. Okay, so let's just see what you have, right? So then R_t inverse is just
the identity. So which equation should we use? Maybe this one, right? So you have f1 prime equal to minus f2... no, actually plus f2. Sorry,
okay, I do it like that. f1 prime is lambda over 2 times f2, right, from reading the first row of this. And f2 prime is equal to
minus lambda over 2 times f1. Okay. So what's the solution? Let's see. The derivative of cosine is
minus sine, and the derivative of sine is cosine, right? So let's take f1 to be,
we're going to set the boundary conditions to work for us, so f1 is sine of lambda t over 2, and f2 is cosine of lambda t over 2. And if you set the right boundary conditions,
so let's say that U0, so the left boundary condition should be, let's set it, 0, 1, okay? That corresponds to U0 equal to infinity, but that's fine. And the right boundary condition, you can also set 0, 1, okay? U1 equal to infinity.
And what do you get? Well, you get that the eigenvalues, lambda k, are 2 pi k. Just solve: that's because when you plug in t equals 1 here, you should get
this vector parallel to that one. So this corresponds to the point process 0,
2 pi, 4 pi, and so on. So if you apply a random shift to this process,
uniform over a period of 2 pi, then we'll call the resulting process sine infinity. It's rigid; you'll see why.
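A tiny numerical check of this example, as a sketch (the tolerances and brackets are arbitrary choices): solve f' = (lambda/2) [[0,1],[-1,0]] f with f(0) = (0,1) and look for the lambdas where f(1) is again parallel to (0,1), that is, f1(1) = 0; they should come out as 2 pi k.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def f1_at_one(lam):
    """Shoot the ODE f' = (lam/2) * [[0, 1], [-1, 0]] f from f(0) = (0, 1); return f1(1)."""
    rhs = lambda t, f: 0.5 * lam * np.array([f[1], -f[0]])
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 1.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

# the exact solution is f1(t) = sin(lam*t/2), so the zeros of f1(1) are lam = 2*pi*k
for k in range(1, 4):
    lam_k = brentq(f1_at_one, 2 * np.pi * k - 1.0, 2 * np.pi * k + 1.0)
    print(k, lam_k, 2 * np.pi * k)
```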
So that's one example. One very stupid question. What do you pass in this model, the line from u1 to some point b_t, or what do you mean by passing u1 in n lambda?
So you look at gamma t; it moves around the boundary, and it always goes in one direction. Ah, at infinity, okay. Yeah, and u1 is a point on the boundary, and you just count how many times you hit it.
That's it. You can write it in terms of an arg; you solve this ODE there. Actually, I made a mistake here: this should really be e to the i gamma t. So what you actually write is the Euclidean coordinate,
and gamma is the angle. That's the correct version. So that explains why you're asking this question. All right, so let me look at example two. And this is unitary matrices, or more generally,
measures supported on n points. So probability measures on n points of the unit circle.
Let's see. So this story is an analogue of what we did in the beginning of these lectures,
where we had n points on the real line, which corresponded to the spectral measure of a matrix, of a self-adjoint or symmetric matrix. Here you take a unitary matrix, so U is in, let's see, the unitary group U(n), and you take a vector e; then U has a spectral measure at the vector e.
And that's a probability measure on the circle,
which is supported on n points. So what does this have to do with what we're doing? It has to do with the following. So if the path, sorry, that's x plus i t,
is constant. So we already did constant at i; constant at any other point is actually the
same, just because you can conjugate the whole picture to send the point to i. But let's say it is just piecewise constant, so constant on the intervals from k over n to k plus one over n, something like this.
Is that y, x plus i y t? x plus i y t, yeah. So the path, we make it constant on these intervals: take the interval, divide it up like this. So again, this is a path, right? It's in the hyperbolic plane,
and it's a path that is some kind of step process. It's not continuous, it's just piecewise constant. So then there is a theorem, which is, again, from our new paper.
And it's actually very simple. So then the eigenvalues... so if we have,
so consider a measure. What kind of measure? It's a sum over i from 1 to n of q_i delta_{lambda_i}, where the lambda_i are on the boundary of the disk.
So this is that kind of measure. So if the increments, and I'll tell you this more precisely,
of x plus iy, sorry, of the path,
are given by the Verblunsky coefficients, I'll explain in a second, of mu, or maybe call it sigma, so this is the measure sigma,
then the eigenvalues of tau are exactly... so okay, let me parameterize this like this.
So write the support points as e to the i lambda_i, with lambda_i real; then the eigenvalues of tau are n lambda_i plus 2 pi Z, where i
runs from 1 to n. Okay, so what does this say? If I have a measure on the boundary of the disk,
just like in the previous case, which is a probability measure,
then there exists for it an operator such that the eigenvalues of that operator are almost exactly these points. Not exactly these points, but their lifting. Okay, so you lift these points to the real line and you repeat them periodically; that's why the plus 2 pi Z, okay? And you also do it so that the average spacing will be 2 pi.
n lambda_i plus 2 pi Z? Yes, n lambda_i plus 2 pi Z. So n lambda_i is just some number, right? And then you take all of its shifts. And then you range over all i, so over all the eigenvalues.
So really what you do is you take the eigenvalues, you lift them by the covering map, and then you stretch them out by n so that the average spacing is 2 pi. Okay, that's all. Okay, so I told you how these two things are related,
and I have to tell you what the Verblunsky coefficients are, and what I mean by increments. Okay, so the Verblunsky coefficients are the coefficients in the Szegő recursion. The Szegő recursion is the following. You want to figure out what the orthogonal polynomials are for this measure.
Okay, so orthogonal polynomials are just the ordinary things: you want polynomials that are orthogonal with respect to the measure, the ith one of degree i minus 1. They're basically uniquely defined up to normalization, and they satisfy a certain recursion.
Okay, this is like on the real line, if you have seen that; there the recursion is given by the Jacobi matrix. We didn't discuss that, but it's true. Here the recursion is not given by a tridiagonal matrix, it's just a two-term recursion. Okay, so it's given by sort of two-by-two matrices.
Okay, and in those matrices there is only one free number; every matrix has one number, and it's called alpha. That alpha is the Verblunsky coefficient. Okay, so it's a complex number. So the Verblunsky coefficients are, let's look at,
so you know, the Szegő recursion is really a beautiful story, but I don't have time to tell it completely. But you have alpha 0 through alpha n minus 2, which are in the interior of the disk, and then you have an alpha n minus 1, which is on the boundary of the disk,
on the boundary of the disk. And that's what these things look like, complex numbers of this kind. Okay, and as you can see, again, this data is 2n minus 1 dimensional, just like the other data, because the q_i sum to 1. And there is a one-to-one correspondence, and if you want to learn about it,
you know, Barry Simon has a 2,000 page book, so I'm not kidding. Maybe just 1,500, but it's long, and it's beautiful. There's a huge, huge theory about how this thing works. Okay, but, and then the increments, you know,
so the increments here will have to be understood in terms of matrix products. Okay, so you multiply those matrices together, and then you see how they act on the upper half plane, and then you get the increments of the walk. So that's the theorem.
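For reference, since it is only described in words above, the Szegő recursion in its standard form (standard OPUC notation, not copied from the board) is

```latex
\Phi_{k+1}(z)=z\,\Phi_k(z)-\overline{\alpha}_k\,\Phi_k^{*}(z),
\qquad
\Phi_k^{*}(z)=z^{k}\,\overline{\Phi_k(1/\bar z)},\qquad k=0,1,\dots,n-1,
```

where the alpha_k are the Verblunsky coefficients: |alpha_k| < 1 for k up to n-2, and |alpha_{n-1}| = 1 when the measure is supported on exactly n points.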
So every unitary matrix has some eigenvalue distribution, or rather a spectral measure, and you can associate to it this kind of operator.
Okay, so that's example two. Okay, so example three. Okay.
So you took the same boundary conditions as before, from 0, 1 to 0, 1? So the boundary condition actually is given by this guy. You start at 1 or something, and then you end at this. You start at, yeah, you start at 1, or start at 0, and you end at that 1,
but you have to transform it to the real line. So, okay, so example three is just hyperbolic Brownian motion. Okay, so you take b, okay, so you can take bt to be hyperbolic Brownian motion.
Okay, so let's write b as x plus iy, and b satisfies the SDE db equals (Im b) dz, okay, with some starting point b0.
Okay, so this is hyperbolic Brownian motion. Here z is an ordinary complex Brownian motion, so its real part and imaginary part are independent standard real Brownian motions. You solve this SDE; when the imaginary part is small, you move slower,
because small Euclidean distances there actually mean large distances in the hyperbolic plane. So this is intrinsic in the hyperbolic plane, and this thing is called hyperbolic Brownian motion. Okay, so this is the hyperbolic Brownian motion.
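Here is a minimal Euler scheme for this SDE, as a sketch (step size, horizon, and starting point are arbitrary choices; for very coarse steps the discrete scheme can leave the upper half plane even though the SDE itself cannot):

```python
import numpy as np

def hyperbolic_bm(t_max=1.0, n_steps=10000, b0=1j, sigma=1.0, seed=0):
    """Euler-Maruyama for  db = sigma * Im(b) dz,  with z a standard complex Brownian motion
    (independent real and imaginary parts).  Returns the sampled path in the upper half plane."""
    rng = np.random.default_rng(seed)
    dt = t_max / n_steps
    path = np.empty(n_steps + 1, dtype=complex)
    path[0] = b0
    for k in range(n_steps):
        dz = np.sqrt(dt) * (rng.standard_normal() + 1j * rng.standard_normal())
        path[k + 1] = path[k] + sigma * path[k].imag * dz
    return path

b = hyperbolic_bm()
print(b[-1], b.imag.min())   # endpoint, and how close the path came to the boundary
```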
You set some boundary conditions. You run it to a finite time, say time 1, and you can put in a variance if you want, some sigma, so you can put in a standard deviation. Okay, then you have the following theorem, and this is due to Kritchevski.
So it connects a little bit to Simone's talk. So if you look at the random Schrodinger operator on one dimension, okay, and you put some potential here, v1, v2. Let's put the sigma, which is some standard deviation,
and let's say that expectation of vi is 0, and vi are iid, and you also want the variance of vi equal to 1, so that sigma is going to...
Okay, so it's n by n, and let's call this H_n. So it's the standard one dimensional random Schrödinger operator. So you take this guy.
To write you some formula, I'd like to be precise. Right, so you pick... So the spectrum of this is roughly from minus 2 to 2, okay, because... So sigma is going to be small.
In fact, sigma is going to go to 0: sigma should be some sigma tilde times 1 over the square root of n.
Okay, so let's say that I'm going to look at the spectrum of this operator at some energy level e, okay, which is inside (minus 2, 2). So what am I going to do? I look at H_n, I subtract e, okay. I have to blow it up by a factor of n if I want to see a point process,
because there are n eigenvalues, right? So the spacing is about 1 over n; they live in this interval. And then there is another scaling factor rho, which is just some function of e: it's 1 over (1 minus e squared over 4).
And let's say that the eigenvalues of this scaled operator, let's call them Lambda_n. Okay. So then the theorem says that Lambda_n converges to the eigenvalues
of this tau, which corresponds to this Brownian motion, hyperbolic Brownian motion,
with, let's call it the limiting sigma, sigma infinity, equal to sigma tilde times rho.
And it's almost true: there is some shift here, sorry. You have to put here a shift which depends on n, which I call alpha_n. This is just some number, alpha_n is in [0, 2 pi) for every n, it's just some deterministic sequence, okay. And I could tell you exactly what that sequence is, but you don't want to know.
Okay, and the eigenvalues of this tau we call the Schrödinger tau process, Schrödinger sub tau. There is a parameter tau, which is this variance squared.
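A hedged numerical sketch of the objects in this theorem, not of the theorem itself: it builds the n-by-n matrix H_n described above (off-diagonal ones, diagonal sigma v_i with sigma = sigma_tilde / sqrt(n)) and returns the recentered, rescaled eigenvalues. The choices of E, sigma_tilde, and the potential distribution are placeholders, and rho is written exactly as stated in the lecture.

```python
import numpy as np

def scaled_schroedinger_eigenvalues(n=2000, E=1.0, sigma_tilde=1.0, seed=0):
    """Build H_n = tridiag(1, sigma * v_i, 1) with iid v_i of mean 0 and variance 1,
    sigma = sigma_tilde / sqrt(n), and return  rho * n * (eig(H_n) - E)
    with rho = 1 / (1 - E^2 / 4), as in the scaling described in the lecture."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)                       # iid potential, mean 0, variance 1
    sigma = sigma_tilde / np.sqrt(n)
    H = np.diag(sigma * v) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    rho = 1.0 / (1.0 - E ** 2 / 4.0)
    return rho * n * (np.linalg.eigvalsh(H) - E)

pts = scaled_schroedinger_eigenvalues()
print(pts[np.abs(pts) < 30])   # a local window of the (approximate) Schroedinger point process
```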
Yes? What happens if you, instead of taking constant sigma, right, you scale it? You scale it. No, no, no, I don't, yeah, I'm thinking something not homogeneous in the sense that you take sigma depending on the j. Yes, yes, yes, you can do that. You can do that, and you get the same result out of it?
No, so if you make this decrease... Exactly, make it decrease like one over square root of j, one over square root of j? Yes, if you do it, then you get beta ensembles. And which beta ensemble depends on actually the constant in front of the sigma, and where you look in the spectrum.
It depends on e and sigma, it depends on both things, yeah. That's in this paper, so the answer is in that paper, there's no... It's not, yes. Okay, so you have this nice thing where you have this, it's not a random matrix ensemble,
although I haven't told you that, right? It could be that the Schrödinger process is some random matrix ensemble. But it's a random process, okay? It is actually translation invariant by multiples of 2 pi, that's easy to check. But it's not translation invariant by every number; it's not translation invariant by, say, one.
It sort of remembers the original locations of the eigenvalues. This noise that you had, it still remembers. I really don't have enough time.
And let's do example four. And now you write dB.
So the Brownian motion is going to be: dB equals 2 over root beta, times 1 over the square root of 1 minus t, times (Im B) dz.
Okay, so what is this doing? If you don't put this factor here, then it's just ordinary hyperbolic Brownian motion. If you put this factor here, it just scales things:
the variance is going to be scaled depending on beta and time. In particular, the square of this factor is not integrable. So that means that this Brownian motion, by time one, is actually going to get to infinity. So it's running in this funny time.
This is actually, we call this, logarithmic time. There's a reason why this is extremely natural. So you're going to have a Brownian motion that goes to infinity. And then the right boundary condition is a little bit irrelevant:
in fact, we're going to take the limit point of B at infinity as the right boundary condition. Okay. And in this example, the eigenvalues of tau are called the sine beta process.
Okay. And again, this is a definition, if you like, for beta, not one of the classical values, but when beta is one of the classical values, it's a theorem.
Okay, so the sine 2 process, the sine kernel process, actually has the same distribution as the eigenvalues of this particular tau. So example five is actually a continuation of example
two, remember, unitary matrices. And this is the result of Killip and Nenciu,
which started from the work of Dumitriu and Edelman, and looked at the circular beta ensembles, C beta E. So this corresponds to the measure where the joint density of the eigenvalues is the product over i less than j of the absolute value of lambda i minus lambda j,
now these are on the circle, to the power beta, with respect to length measure. And you can put weights, which are Dirichlet, just as before, with parameter beta over 2.
And so then you have some random measure on the circle, where the locations of the eigenvalues are distributed like this, the weights are distributed like that, and everything else is independent. And Killip and Nenciu tell you, in this case, what the Verblunsky coefficients are. So the last one is just uniform.
On the circle, it's just uniform. But the others, let's see if I can get this right. So alpha k, say. First of all, it's rotationally invariant:
remember, it's some random variable in the unit disk, and it's invariant under rotations. And secondly, alpha k absolute value squared has a Beta distribution.
And let me see. So the first parameter is 1, and the other one is, I think, something like beta over 2 times k, maybe k plus 1.
I may have this right or wrong, but I think this is roughly correct. So what are these? These are just random variables on the disk.
This Beta distribution pushes the variable close to 0. So what this tells you is that the variance is about, let's see if I can do it right,
something like 2 over beta times (k plus 1), no, the inverse... sorry, is this correct at all? OK, about 2 over (beta times (k plus 1)).
Cool. So the variance is going to 0. Did I do this correctly?
No, sorry, I think I have to write n minus k instead of k. So the variance is growing. OK, so this is what the alpha k's are.
So let's look at what the path is. Remember, we want to understand the path x plus iy that corresponds to the C beta E ensemble eigenvalues. So the Verblunsky coefficients have this very nice rotationally invariant distribution.
So what does that mean for the path? I'll tell you how you can construct it. It's actually going to be a random walk: the path x plus iy for these unitary ensembles is a random walk, made into a piecewise constant function.
And what is the random walk? Well, you just pick a radius according to this distribution. You have to convert it to hyperbolic length. Look at the circle around you with that radius and jump to a uniform point of that. And this is a hyperbolic circle,
so it's not the Euclidean circle. And then you do it again. And then you do it again. But the variance of the radius is actually getting larger as you go ahead. So you have a hyperbolic random walk with changing variance.
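Here is a hedged sketch of how one would sample these Verblunsky coefficients, following the description recalled above. The exact Beta parameters below, Beta(1, (beta/2)(n-k-1)), are my reading of the Killip-Nenciu statement and should be checked against their paper.

```python
import numpy as np

def cbeta_verblunsky(n, beta, seed=0):
    """Sample Verblunsky coefficients alpha_0, ..., alpha_{n-1} for the circular beta
    ensemble as recalled in the lecture: independent, rotationally invariant,
    |alpha_k|^2 ~ Beta(1, (beta/2)*(n-k-1)) for k < n-1, and alpha_{n-1} uniform on the circle."""
    rng = np.random.default_rng(seed)
    alphas = np.empty(n, dtype=complex)
    for k in range(n - 1):
        r2 = rng.beta(1.0, 0.5 * beta * (n - k - 1))       # modulus squared
        alphas[k] = np.sqrt(r2) * np.exp(2j * np.pi * rng.random())
    alphas[n - 1] = np.exp(2j * np.pi * rng.random())      # last one has modulus 1
    return alphas

a = cbeta_verblunsky(n=8, beta=2.0)
print(np.abs(a))   # the moduli typically grow with k: the variance of the step increases
```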
So why is this interesting?
Well, you already see the convergence, right? If you look at the C beta E, it's actually just the operator that corresponds to the hyperbolic random walk. So you take a limit of the hyperbolic random walk. What do you get? You get a hyperbolic Brownian motion with changing variance.
So this root of 1 minus t comes from this changing variance here. And that actually proves this. I mean, you have to do some tail estimates, but that's it. So that's the proof. So let me tell you a strong version of this theorem. So with this you prove that C beta E converges to the sine beta process?
Yes. So the theorem actually... OK, so the fact that the eigenvalues converge to a process like this, that's a result of Killip and Stoiciu. OK, that's also from 2006 or so.
But in fact, we now have a convergence on the operator level. And here is what it says. So you look at tau_n, which is the C beta E operator, right, the one that you constructed up there, and you look at its inverse.
So this is a differential operator; its inverse is going to be an integral-type operator with a kernel. OK, so it's actually going to be Hilbert-Schmidt, a very nice operator. I could write it down for you explicitly. And you look also at the inverse of the sine beta operator.
OK, and look at this norm. OK, and which norm? Actually, you can look at the Hilbert-Schmidt norm. OK, and the theorem is that this is less than or equal to, for large n,
you know, with high probability, less than or equal to 1 over n times n to the epsilon, for all epsilon.
Oh, and this is squared. OK, so in what sense is this? Well, of course, these are two random objects. So we have to sort of put them on the same probability space. So this says that there exists a coupling. OK, so let me tell you how strong this theorem is.
So that's just one conclusion.
OK, so you look at lambda_n of k minus lambda_k. OK, so these are the kth eigenvalues, indexed so that the first one to the right of 0 is number 1, and so on.
This is the kth eigenvalue of the sine beta process, and this is the kth eigenvalue of the lifted process. OK, so you can look at the sup of this over all k less than or equal to n to the 1/4 minus epsilon.
OK, so you look at the maximum distance among the first n to the 1/4 eigenvalues, and this goes to 0 in probability in this coupling. OK, so not only do you have convergence of the spectrum, you have a much, much stronger rate-of-convergence bound.
You can go all the way up to n to the 1/4 eigenvalues, and even those are going to be close. Getting from there to there is just an exercise, using the Hilbert-Schmidt norm bound and a bit of algebra. So for beta equals 2, the best result that I knew before
was actually a recent result of Veda... oh, actually, it was Joseph Najnudel and some co-authors, I don't know exactly, where they had the same thing for n to the 1/6. So even for beta equals 2, as far as I know, this is very strong.
OK. Yes? What about the joint law of this alpha k? The joint law?
Oh, they're independent. Yes, they're independent. I still have another question: the alpha k in the Schrödinger tau process is not the same alpha? Was there an alpha? No, no, that alpha n, sorry, that was bad notation. That's just some shift, because you have to take care of some periodicity issues, that's why.
OK. So I want to do one last piece of math, which is a computation using the Schrodinger tau process.
So what you're really interested in... when we first identified this process, we could prove various things about it: CLTs and laws of large numbers, all kinds of things. Gap probabilities, for example: what is the chance that you have a large gap? You can prove all those things using this representation. But the one thing that we were most interested in is whether this corresponds to a beta ensemble.
OK? And if it does, then which beta? So let's try to identify it. So we want to understand the probability that there exists two eigenvalues in zero epsilon. Right?
So remember, for a beta ensemble this should be epsilon squared, that's just for two eigenvalues to be there if they were Poisson, and then there's another epsilon to the beta for the repulsion term. So it should be epsilon to the 2 plus beta for a beta ensemble. So we just want to identify this exponent. So let's do a computation for this. I'm going to give you an upper bound.
So how do we compute this? Remember, we run this carousel, but we rotate extremely slowly: we rotate with speed epsilon and see how many times it passes the target point. So it has to pass the target point at least twice.
So at least I can say that this is at most the probability that the carousel does a full circle: if it has to pass the same point twice, it has to do a full circle.
And let's see. But we're doing a rotation at speed epsilon. How can it do a full circle? Well, let's look at the geometry, right? We're rotating here about some point at speed epsilon. So really, the speed should be boosted by a factor of about 2 pi over epsilon so that we go all the way around.
At least 2 pi over epsilon. So that means I have to make up for this rotation speed epsilon by the path going far away. So in fact, the path has to get about epsilon close to the boundary, so that the boundary point moves at about unit speed.
So if you look at that formula, which you have up there, you see the speed there has a 1 minus b squared in the denominator.
So if the path gets epsilon close to the boundary, then 1 over (1 minus b squared) will be about 1 over epsilon. So that will actually compensate for epsilon being small. Is this clear? The speed factor should go up, at some point, to about 1 over epsilon, and that is controlled by 1 minus b squared: the top thing there is bounded, so the only way that can happen is if 1 minus b squared is about epsilon.
So this is, of course, in Euclidean distance. So in hyperbolic distance, what does this mean? This means that the path has to be about minus log epsilon away from the center.
Epsilon close to the boundary is about minus log epsilon away from the center. But this is a Brownian motion run for unit time. So basically its tails, these distance tails, are like the tails of a normal distribution.
Hyperbolicity here doesn't matter. So what is this? It's less than or equal to e to the minus some constant, which depends on the variance, times the distance squared, which is (log epsilon) squared.
So what is this? This is equal to epsilon to the minus c log epsilon, or maybe write it like this: epsilon to the c log(1 over epsilon).
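Putting the heuristic together in one line (c is a constant depending on the variance of the Brownian motion; this is only the upper-bound sketch described above):

```latex
\mathbb{P}\big(\text{two eigenvalues in }[0,\epsilon]\big)
\le\mathbb{P}(\text{full circle})
\le\mathbb{P}\Big(\sup_{t\le 1} d_{\mathbb{H}}(B_t,\,\text{center})\gtrsim\log\tfrac1\epsilon\Big)
\le e^{-c(\log\epsilon)^2}=\epsilon^{\,c\log\frac1\epsilon}.
```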
So you compare this to that, right? So for the beta ensembles, the repulsion is 2 plus beta. For these ensembles, they're actually beta is infinite in some sense. So the repulsion is much, much stronger than in the beta ensembles.
So it's a much, much more rigid ensemble. And you can see we did all this for just looking at this picture. And heuristics. OK. I have seven, I have six minutes.
Actually a bit more because it started late, right? All right, so I can finish if I wanted to do another computation. One more computation. So this actually comes with a story.
And because I started with talking about beta ensembles, I think it's kind of appropriate to finish this series of lectures with a story about beta ensembles. And the story is about Dyson, who in 1962, just like the golden year of beta ensembles,
Dyson has three fantastic papers. Every single one is still important. And one of them is the introduction of beta ensembles. The other one is Dyson's Brownian motion, by the way. And the third one is about the invariant ensemble. So what happened in that paper?
So it was known, even Wigner could calculate, that the chance of having a large gap between two eigenvalues in GOE, for example, is not like for a Poisson process, right? For Poisson, that chance is exponentially small in the size of the gap. But here, in this case, it's exponentially small in the square of the size of the gap.
And so even Wigner was aware of this. But Dyson was much, much braver. And of course, he's a physicist. But he gave you a formula, and this is what it looked like, in the scaling that we have: it's exponential of minus beta over 64 times lambda squared.
Plus... so lambda squared is the main term. So again, this is the probability that, let's say, tau beta, or sine beta,
has no eigenvalue in [0, lambda]. So let me write it like this. This is what Dyson said. Then there is a linear term.
And there is a polynomial term, lambda to the power gamma beta. OK? And Dyson said that this gamma beta is equal to one quarter times (beta over 2 plus 2 over beta plus 6).
Yes, OK, sorry. So let's put it like this, OK? This is when lambda goes to infinity. In fact, there is a constant here.
So you can put here a constant plus. So you know, this is a physics story in some sense. So many people say there is a proof. That means they're convinced that it's true, and they have very good arguments, right? But in physics, there's a hierarchy of arguments. Some arguments matter more than some others, because they may be more rigorous,
or people give more credit to it. So in 1973, Mehta and des Cloizeaux actually computed these values for just the special betas, beta equals 1, 2, and 4. They computed this gamma beta with more precise methods. I think they're still physics methods,
but they're more precise. And they figured out that this formula is wrong. So at that point, there was no guess: we knew that it's not that, and you could have guessed some formula, but they only knew the values for 1, 2, and 4. OK.
So using these methods that we have here and SDEs, we actually proved that this formula is true. You just have to put a minus 3 here.
So that's now a theorem. And this is, I don't know, 2010 maybe, something like that. And so the last thing I want to tell you is how you prove this. Not the gamma, but the leading term.
So the idea is the following. Remember, you have this boundary point, you have this hyperbolic Brownian motion, and this guy is moving around like this. I'm going to see if it makes a full circle.
The problem with doing computations with this is that there are a lot of things, it's too much to take care of. So you would like to have some quantity which evolves by itself, so you can follow it; you don't have to follow lots of things at the same time. And there is a quantity like that, which is the hyperbolic angle: the hyperbolic angle at b_t between the point u1 and gamma t.
So this angle. So we call this alpha t. And you can write down this hyperbolic angle. And it satisfies an SDE. In fact, it's better to do this in...
I hope I have my SDE. Yes. In fact, you see in the Brownian motion, there is a time dependent parameter. It's good to scale it out and put it somewhere else. So let me do it like this.
So you write the SDE for alpha in standard time, so not logarithmic time. It looks like this. So d alpha is lambda times... and now there is another function f, which is not the same f as before.
Anyway, I just write it like this. So there is a drift, lambda times f of t, dt, f depends on time, plus 2 sine of alpha over 2, dB, where this B is just a standard Brownian motion.
And it actually turns out that alpha, for fixed lambda, converges to a multiple of 2 pi almost surely,
as t goes to infinity. This is now in the time scale [0, infinity), because of this time change. This is actually trivial; this is just a fact about SDEs.
Because let's see what happens here. So this is an SDE on the real line. What's happening? There's some noise term, which has some bounded variance, this sine here. And then there is a drift term.
Oh, I didn't tell you what f(t) is. So f of t is actually just the density of an exponential random variable with parameter beta over 4, so beta over 4 times e to the minus beta t over 4. So this f of t is exponentially decaying, and it depends on beta.
So there is a drift whose total integral is lambda. That's it, this is not much of a drift. And then there is this noise term. So it's essentially a martingale, apart from the drift. And it's non-negative also.
It's another interesting thing. Because it cannot cross down to minus infinity. Because then when you get to 0, alpha gets to 0, this variance goes to 0. So you will never be able to cross 0. So it's a non-negative martingale. So it has a limit. It's not a martingale because there is a drift. But this drift has integral, which is finite. So it's essentially a martingale.
All of this can be easily proved. So it does have a limit almost surely. And it's also easy to check that the limit can only be an integer multiple of 2 pi. Because otherwise, there is still some variance here. It will still keep buzzing. The only way the variance disappears, which it has to if it has a limit,
is if this is a multiple of 2 pi. And actually, which multiple it is, that's the theorem: it's just n lambda, the number of eigenvalues in the interval. So how do you compute this gap probability? OK. So here is your f, and here is 2 pi.
So it could converge up here, or it could converge to 0. It starts at 0, has a positive drift, and it buzzes around. And what we want is that it never reaches 2 pi, because if it reaches 2 pi, then, in the same way, it can never go back below 2 pi.
That's for the same reason as for alpha at 0: there is a drift that can push it upwards, but downwards it can't go, because the variance vanishes when it gets to multiples of 2 pi. So what are we computing?
We have a large lambda, because we want to understand a large gap. So we want to understand the number of eigenvalues in a huge interval, and we want it to be 0. So this Brownian motion, or this SDE, which has a gigantic drift, has to be confined in an interval.
So how does that happen? Well, the way that happens, of course, it's more easy for it to change its drift if the variance is large. Okay? If the variance is small, then of course, if the variance is 0,
it will follow this drift no matter what. If you add some noise term, it can deviate from the drift, and the more variance you add, the more it can deviate. So actually, you can just forget this sine of alpha over 2 and bound it by its maximum. So the process is probably going to stay in the middle, to be able to kill off this drift. And how much does it cost to kill off this drift?
Well, you know the change of measure for Brownian motion, right? Brownian motion with drift is absolutely continuous with respect to Brownian motion without drift, at least up to a finite time. And what's the change of measure here? Well, the probability that it follows some drift that it shouldn't is just exponential of minus, right,
the L2 norm of the drift squared, so of lambda f, squared, divided by twice the variance. The noise coefficient there is 2, so the variance is 4, and you divide by 8. All right, that's how much it is.
That's how hard it is for this Brownian motion to compensate for a drift, so that it doesn't go out of this strip. These are just standard facts about Brownian motion. And if you compute that, you get exactly exponential of minus beta over 64 times lambda squared.
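The arithmetic behind that constant, at the heuristic level just described (drift lambda f(t) with f(t) = (beta/4) e^{-beta t / 4}, and the variance of the noise bounded by 2 squared = 4):

```latex
\exp\!\Big(-\int_0^\infty\frac{(\lambda f(t))^2}{2\cdot 4}\,dt\Big)
=\exp\!\Big(-\frac{\lambda^2}{8}\int_0^\infty\frac{\beta^2}{16}e^{-\beta t/2}\,dt\Big)
=\exp\!\Big(-\frac{\lambda^2}{8}\cdot\frac{\beta}{8}\Big)
=e^{-\beta\lambda^2/64}.
```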
And you can get all the way to that gamma if you do this thing more precisely, quite a bit more precisely. Okay, well, thanks very much.
There'll be exercises too.
Is this how you compute the minus 3? Actually, can you do it without the symmetry of the matrices? This is very rigid, so your random matrices have a lot of symmetry.
Yes, so here is how I think about it. So you call universality the fact that if you have two matrices that are of size n, but have different distributions, then they are close in eigenvalue distribution.
The kind of thing that this does is: you have one invariant ensemble, and it shows that if you do it with n and n plus 1, or n and 2n, then the eigenvalue distributions will be similar. So it's that kind of thing. That's why you have a limit, right? Otherwise, universality wouldn't tell you that you have a sine process. It would just tell you that you have some matrix which has eigenvalues.
You replace it with the same matrix with Gaussians, and they look the same. So this theory is complementary to universality. Even though you can prove universality in certain cases, but it's really not the point. The point is to identify the limit as a probabilistic object that you can say things about using your usual tools, not just analysis.
So it's more that. In the 70s, probability almost died because people were trying to prove various harder and harder versions of the central limit theorem.
Fortunately, statistical physics saved it. So let's hope that the same doesn't happen with universality. I have a question here. Using this sine beta operator, you managed to analyze the eigenvalues of the...
I mean, can you make a connection between the eigenvalues which are close to unity, right, the eigenvalues on the circle which are close to unity? How about the ones in the bulk? That's the bulk; it's invariant. By rotation invariance? Yes, there's nothing else.
But you can ask the same thing about GUE, right? And in fact, in this carousel paper, we proved that the GUE and the beta-Hermite ensembles converge to this in the bulk. But the story is nicer with unitaries, that's why. It's simpler in that case.
Yeah, I have a simple question. Does this hyperbolic disk in your talk have some relation with hyperbolic space? It's the same hyperbolic space. I don't know.
You're using the same symmetry. You're using the symmetry of some of the formulas you're using here. Yeah. You're doing some symmetries of this hyperbolic length.
So it's possible that for some... You could relate some random walk on the line, maybe, or something, so reinforced random walk on the line, or some weighted line, or something, to this. It's possible. So there's not a direct physical relation?
No. I mean, you know... how should I say? I mean, a random walk is a Gaussian free field on the line, right?
But... and so in that talk... So you have a random walk here, which is some version of a Gaussian free field on the line, so you can say that. And in that talk, there are also Gaussian fields on the line, and the fields there were not exactly Gaussian, but hyperbolic. So in that sense, again, this is the case when the operator lives on a line, but I think it's not very interesting for reinforced random walk.
Then maybe there is some more direct connection.