Statistics of randomized Laplace eigenfunctions
Formal Metadata
Number of Parts: 13
License: CC Attribution 3.0 Unported
DOI: 10.5446/48158
Transcript: English (auto-generated)
00:16
I'll be speaking about Laplace eigenfunctions. So this talk falls in the realm of spectral geometry.
00:23
But I'm not going to be talking about eigenvalues, though. So this is not a talk about spectral rigidity. It's more a talk about how eigenfunctions of the Laplacian behave. So I'm going to start introducing the setting. So we are going to work on a compact Riemannian manifold. We are going to assume it has no boundary.
00:41
And throughout the talk, I'm going to write little n for the dimension of the manifold. Now, the manifold is compact. So you have the Laplace operator acting on it. And the spectrum of the Laplacian is discrete. So if this is the standard Laplacian,
01:01
I put a minus sign in front to turn it into a positive definite operator. We are going to denote by phi lambda j the eigenfunctions. All the eigenvalues are positive, and I'm going to be writing lambda j squared for the eigenvalues. And because, again, the manifold is compact,
01:21
you can form an orthonormal basis of L2 of the manifold, the L2 taken with respect to the Riemannian volume form, by using these eigenfunctions, which, throughout the talk, I'm going to take to be normalized. So the reason why I'm interested in studying eigenfunctions is because they encode all the dynamic and geometric
01:42
properties of the underlying manifold. From a quantum mechanics point of view, if you want to understand the probability that a quantum particle belongs to some region A in space, what you do is you grab the square of the modulus of the eigenfunction and you integrate it over A. And then you get the probability of your particle being in that region of space.
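In symbols, the rule just invoked (the Born rule, for an L2-normalized eigenfunction phi lambda) reads:

```latex
\Pr(\text{particle lies in } A) \;=\; \int_A |\varphi_\lambda(x)|^2 \, d\mathrm{vol}_g(x),
\qquad \int_M |\varphi_\lambda|^2 \, d\mathrm{vol}_g \;=\; 1 .
```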
02:02
So eigenfunctions, they carry all this information about the underlying manifold. And they reflect a lot of what's happening with the dynamics of the geodesic flow. So just to illustrate that point, I have this picture here. You have a disk and a cardioid. And in red what you see is the trajectory of a geodesic in each of these two surfaces.
02:25
And, well, there is a big difference here. This one looks quite chaotic. And in these four plots, you have the density plots of four different eigenfunctions. The eigenvalue is growing in this direction. And what you're seeing here is the plot
02:41
of the modulus squared of the eigenfunction. So you're seeing this function plotted. Black means that the modulus is high, while white means that you're getting a zero there. So, for example, in these pictures here, it looks like the probability of my particle being near the center, it's zero or very small.
03:03
For example, in these two, my particle is going to be concentrated near the boundary. While here where the geodesic flow is highly chaotic, it looks like the region is becoming evenly grayish, which would mean that the probability of finding the quantum particle in any region A
03:23
in this cardioid would be comparable to the area of the region. So this is one way in which eigenfunctions keep track of what's happening with the dynamics of the underlying manifold. And this talk is going to focus on two aspects of eigenfunctions, the critical points
03:42
and the zero sets of the eigenfunctions. You can think of critical points, if you think of maxima and minima, as the places where this modulus squared is the greatest. So those are the most likely places for your quantum particles to be found at, while zero sets are going to be the least likely places for these quantum particles to be at.
04:03
And what I wanted to do before starting to talk about which kind of questions we are going to ask was to give you some pictures of what zero sets look like. And the standard thing is to show you a video of the Chladni plates experiment. So here in this video, what you're going to see
04:22
is a metal membrane that's placed on top of a speaker. This speaker is connected here to a frequency generator. And what you're going to see is that for different frequencies, this membrane is going to start vibrating. And what they just did was to put very thin grains
04:40
of sand on top of a plate. So the frequencies are appearing here in this corner. You'll see, yeah. And what's happening is for each different frequency, these frequencies are associated to standing waves. So they actually correspond to eigenfunctions. So the wave function with which this membrane is vibrating,
05:01
it's an eigenfunction whose eigenvalue lambda is this one here in the corner. And what happens is the membrane is vibrating, and the places where it doesn't vibrate at all, those places are attracting the grains of sand. So those places are the zeros of these eigenfunctions. So what you're seeing is just different zero set configurations of eigenfunctions for these different values of lambda.
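As a sketch of the mathematics behind these figures: on an idealized square membrane, one can superpose two Laplace eigenfunctions sharing an eigenvalue and plot the zero set. (This is only a caricature; a real Chladni plate obeys a fourth-order plate equation, and the mode numbers m, n below are illustrative choices.)

```python
import numpy as np
import matplotlib.pyplot as plt

# Two eigenfunctions of the Laplacian on the unit square that share the
# eigenvalue pi^2 (m^2 + n^2); any linear combination is again an
# eigenfunction, and its zero set mimics a Chladni figure.
m, n = 5, 2
x = np.linspace(0.0, 1.0, 400)
X, Y = np.meshgrid(x, x)
f = (np.cos(m * np.pi * X) * np.cos(n * np.pi * Y)
     + np.cos(n * np.pi * X) * np.cos(m * np.pi * Y))

plt.contour(X, Y, f, levels=[0.0], colors="k")  # the nodal set {f = 0}
plt.gca().set_aspect("equal")
plt.show()
```

Varying m and n reproduces the changing patterns seen in the video as the frequency is swept.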
05:23
And you can see that as the frequency gets larger and larger, the configurations become much more complicated. This was an experiment done first in, I think it's 1680, by Hooke. He did it at the time with a metal plate and a violin bow.
05:41
And then it was replicated 100 years later by Chladni. And since then it's been known as the Chladni plates experiment; he was the first one to record at least 100 configurations of the zero sets. And what we are going to do throughout this talk is,
06:00
well, when we are in two dimensions, is to try to understand what the structure of the zero set is. We are going to talk about how these components are going to be nested within each other. If you look at the complement of the zero set, its components are called nodal domains. We are going to try to understand what the connectivity of these components is.
06:23
And what I want you to keep in mind is that we are going to be working in this limit where the frequency lambda is going to infinity. So just as a reminder, so lambda J squared is the energy, lambda J is what I refer to as the frequency.
06:43
Okay, so these are pictures of zero sets on different surfaces, so that you don't only have the picture of what happens with a square. This is a quarter stadium, this is a torus, this is a flat square torus, and here you have the round sphere. The lines up here are the zero sets of a high-frequency eigenfunction.
07:02
And in the two bottom pictures, what you see is the complement. So in black, you see where the eigenfunctions take positive values, and in white, you see where the eigenfunctions take negative values. So the zero set in these bottom pictures is just the lines dividing black from white. And what I want to do first is to tell you
07:22
what we know about these zero sets. And I'm going to do this in the case of surfaces, which is where we know the most. So for zero sets, you can prove that they are rectifiable. So you can actually measure their length.
07:41
And there is this conjecture by Yau that says that the measure of the zero set should grow like a constant times lambda when lambda gets large. So this conjecture was open for a long time, and very recently Logunov proved this lower bound here. The conjecture says that there should be also a constant up here, but we are nowhere close
08:01
to getting that constant at the moment. What we also know about the zero set is how it spreads across the surface. So we know that there exists this constant C here, such that if you grab a ball of radius C over lambda, no matter where you place it on your surface, it will always intersect the zero set. So what that is saying is that if I start here
08:22
at the point x in my surface and I walk a distance of C over lambda, then I'm always going to see a sign change in my function, because these wave functions are oscillating at wavelengths that are comparable to one over lambda. What we also know is, if you take the complement of the zero set,
08:41
we know that the inner radius of the complement is bounded between two constants over lambda. So there exists this constant c, such that in every nodal domain you can fit a ball of radius c over lambda. So this is giving you a notion of how thick the complement of the zero set is.
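Collected in one place, the facts just stated, for a surface, with Z lambda the zero set and c, C constants depending only on the surface:

```latex
c\,\lambda \;\le\; \mathcal{H}^{1}(Z_\lambda) \;\le\; C\,\lambda
\quad \text{(Yau's conjecture; the lower bound is Logunov's theorem)},
\qquad
Z_\lambda \cap B\!\big(x, \tfrac{C}{\lambda}\big) \neq \emptyset \ \text{ for every } x,
\qquad
\frac{c}{\lambda} \;\le\; \operatorname{inradius}(\Omega) \;\le\; \frac{C}{\lambda} \ \text{ for every nodal domain } \Omega .
```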
09:02
And finally, what we are also interested in is understanding the number of components of the zero set. In all the pictures that I've been showing you, it looks like the number of components grows to infinity as the frequency grows. We have no proof of that at the moment.
09:21
In very specific cases, we can show that the number of components goes to infinity, but it's actually quite hard to do. And we believe that it should grow like lambda squared if you are on a surface, or like lambda to the n if you are on a compact manifold of dimension n. But the only thing that we knew, and this is a standard result,
09:40
is Courant's nodal domain theorem, which tells you that at most you will have a constant times lambda squared nodal domains. Okay, so this is what we know on surfaces. On general manifolds, we know even less. And what I'm going to do in my next slide is to tell you the questions around which this talk is centered.
10:03
So we are going to discuss the number of critical points of the eigenfunctions. When you divide this by lambda to the n, we think that this quotient should in most cases remain bounded above and below by two constants. So the number of critical points should grow like lambda to the n, n being the dimension of the manifold.
10:22
But there are no results that prove anything like that at the moment. The measure of the zero set: as I was saying before, this quotient should remain bounded above and below by two constants. The number of components of the zero set: this is going to be the first part of the talk.
10:42
And then it's going to get a little bit crazier, and we are going to talk about the diffeomorphism types of the components of the zero set and also about the nesting configurations of these components. Okay, so the first thing that you have to do if you're trying to attack these questions is to realize that very little is known.
11:02
So one of the things that you can do is to try to randomize the problem, and instead of answering what happens with these quotients or these quantities for an actual eigenfunction, you can ask what happens for a random eigenfunction. So, suppose you're working on the n-sphere
11:21
or on the n-torus, okay, where multiplicities are high, and now fix an eigenvalue. And what you will do is to consider a linear combination of eigenfunctions whose frequencies are equal to the frequency that you started with, okay?
11:42
So what this is, is just an eigenfunction with eigenvalue lambda squared, that's what you just did. But if you pick these coefficients at random, so we are going to allow these coefficients to be independent standard Gaussians,
12:02
then what this becomes is a random eigenfunction with eigenvalue lambda squared. There it is. Okay, so here in the slide, I have a normalizing constant, one over the square root of N lambda, N lambda being the number of frequencies equal to lambda, so the multiplicity of the eigenvalue.
12:23
This is just a normalizing constant, so don't pay attention to it. The point is that you now have these random eigenfunctions, and on the round sphere they are called random spherical harmonics; on the flat torus they are called arithmetic random waves.
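Here is a minimal numerical sketch of this construction on the round two-sphere, using SciPy's spherical harmonics (the degree ell plays the role of the frequency: the eigenvalue is ell(ell+1) and the multiplicity is N = 2 ell + 1; the function and argument names are mine):

```python
import numpy as np
from scipy.special import sph_harm

def random_spherical_harmonic(ell, azimuth, polar, rng=None):
    """Gaussian linear combination of the 2*ell + 1 real spherical
    harmonics of degree ell, normalized by 1/sqrt(N), evaluated at
    angles `azimuth` in [0, 2*pi) and `polar` in [0, pi]."""
    rng = np.random.default_rng(rng)
    a = rng.standard_normal(2 * ell + 1)      # i.i.d. N(0, 1) coefficients
    f = a[0] * sph_harm(0, ell, azimuth, polar).real
    for m in range(1, ell + 1):
        Y = sph_harm(m, ell, azimuth, polar)  # complex Y_ell^m
        f = f + np.sqrt(2.0) * (a[2 * m - 1] * Y.real + a[2 * m] * Y.imag)
    return f / np.sqrt(2 * ell + 1)
```

Evaluating this on a grid at large ell and plotting the sign of the result gives pictures like the sphere panels shown earlier.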
12:42
And the idea is to try to answer these questions, but now for these random eigenfunctions. In the case of the n-sphere and the n-torus, eigenfunctions can be computed explicitly. We know what they look like, we know their eigenvalues.
13:02
So you can actually say a lot, and there has been a lot of work done in this direction. So for example, on the two-sphere, there is a series of works by Nicolaescu, Cammarota, Marinucci and Wigman that prove that the number of critical points will converge to a constant in probability. So what this means is that the mean
13:21
is going to a constant, and they can actually show that the variance goes to zero, and in this last paper they actually get a nice rate of decay for the variance. So for the number of critical points, indeed, these quotients are converging to a constant. It's not only that you get it bounded above and below
13:40
by two constants, but you actually get convergence in this random realm. For the measure of the zero set on the two-sphere, the expected value of this quotient was computed, and then you have Wigman controlling the variance. So they can also show that the variance not only goes to zero, but they actually give really good rates of decay,
14:02
and you get convergence in probability of these quotients to a constant. So Yau's conjecture is holding in this random realm. So these first two questions are what we call local quantities, because, if you want to understand the number of critical points
14:22
or the size of the zero set, you can start with your manifold and you can chop it up into neighborhoods that have sizes comparable to one over lambda, then compute the number of critical points in these tiny balls, and then just add them up. However, you cannot do this with the number of components of the zero set,
14:41
because if I have two tiny balls, I may have components that are going from one ball to the other, and you don't want to overcount. And actually these components will definitely go from one ball to the other. So this is a much harder quantity to study, and there is the work of Nazarov and Sodin, who first did it on the two-sphere,
15:02
and then they were able to generalize this, where they show that, in mean, the number of components of the zero set is converging to a constant. Getting the variance is much harder, exactly because this is not a local quantity, so they only get convergence in mean. And the idea of what I want to do now
15:22
is to start working on these questions, but on a general compact Riemannian manifold, where you don't have formulas for the eigenfunctions or the eigenvalues. So that's where we are headed. The problem is though, that if you want to work on a general manifold,
15:41
if you fix the base manifold, and you look at the space of all Riemannian metrics that you can put on it, generically all the eigenvalues are going to be simple. So each eigenspace only has one eigenfunction, so you cannot do linear combinations of eigenfunctions within an eigenspace. So you will have to change this definition,
16:01
this doesn't make sense anymore, you don't get anything random here. So one thing that you can do is to fix an epsilon, and instead of working with frequencies that are exactly equal to lambda, you can work with frequencies that are in a window from lambda to lambda plus epsilon, now.
16:21
So what you're doing is you're incorporating more eigenfunctions into your random linear combination mix by working now with a window from lambda to lambda plus epsilon. So what you get when you do something like this, when you're working with these random linear combinations,
16:43
it's a function that has a frequency that's concentrated near the lambda that you pick, but it's no longer an eigenfunction, right? I'm mixing different eigenspaces, I'm mixing frequencies from lambda to lambda plus epsilon.
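Written out, the modified ensemble is (a sketch of the definition; the a_j are i.i.d. standard Gaussians and N lambda is the number of frequencies in the window):

```latex
\psi_\lambda \;=\; \frac{1}{\sqrt{N_\lambda}} \sum_{\lambda_j \in (\lambda,\, \lambda+\varepsilon]} a_j\, \varphi_{\lambda_j},
\qquad a_j \sim \mathcal{N}(0,1) \ \text{i.i.d.},
\qquad N_\lambda \;=\; \#\{\, j : \lambda_j \in (\lambda, \lambda+\varepsilon]\, \}.
```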
17:01
So that's one of the things that you have to keep in mind from now on. However, we do believe that they should behave like eigenfunctions, that these are an honest model for what eigenfunctions look like, and this is the content of the random wave conjecture by Berry that what it says is that if you're working on a manifold where the geodesic flow is chaotic enough,
17:22
what happens is that the statistics of these waves (which are called monochromatic random waves, by the way) should be the same as those of actual eigenfunctions whose frequencies are in these windows.
17:40
So what you have here, for example, is just a histogram for the value distribution of an actual eigenfunction, whose frequency is 500, on an arithmetic surface. Those are the points that are plotted, and you can see that it actually adjusts to a Gaussian distribution.
18:01
And there is some numerical evidence towards this conjecture, but nothing like this has been proved. But just keep in mind that we do believe that these are a good model for how eigenfunctions should behave. And so, okay, suppose we want to answer these questions for these random waves.
18:22
And now the question that you need to ask yourself is: what do you need in order to study the number of critical points or the size of the zero sets? So these waves that we are considering here, since they have Gaussian coefficients, they are Gaussian fields on the manifold.
18:41
And there is this theorem by Kolmogorov that says that Gaussian fields are completely characterized by the two-point correlation function. So if you know the two-point correlation function for your field, you completely understand how the field behaves. So you can compute any quantity related to the field. So the two-point correlation function is exactly this.
19:04
So what you do is you fix two points, X and Y on your manifold, and you compute the expected value of the product of the value of your wave at X times the value of your wave at Y. So you are trying to understand how these two values are correlated to each other depending on what X and Y are.
19:22
And because the Gaussian variables that we are considering have mean zero, variance one, and are independent, it turns out that this is exactly what you get as your correlation function. So you get the sum of the cross products of the eigenfunctions whose frequencies are in this window.
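Concretely, with the normalization above, the computation just described gives

```latex
K_\lambda(x,y) \;:=\; \mathbb{E}\big[\psi_\lambda(x)\, \psi_\lambda(y)\big]
\;=\; \frac{1}{N_\lambda} \sum_{\lambda_j \in (\lambda,\, \lambda+\varepsilon]} \varphi_{\lambda_j}(x)\, \varphi_{\lambda_j}(y),
```

which, up to the normalizing factor, is the kernel of the spectral projector onto the eigenspaces with frequencies in the window.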
19:41
So if you're familiar with Weyl's law, this is an object that appears a lot when computing the number of eigenvalues, say, for example, in a window from lambda to lambda plus epsilon, only that there what you do is you deal with this object on the diagonal and you integrate over the manifold with respect to x.
20:01
But here, if you want to understand these waves, you cannot evaluate on the diagonal and you cannot integrate x out. You actually have to deal with these sums of cross products. What this is is the kernel of the projection operator from L2 of the manifold onto the direct sum of eigenspaces whose frequencies are in these windows from lambda to lambda plus epsilon.
20:22
So this is just the kernel of that operator. Yes? Are you thinking of epsilon as small? That's a fixed small number. Okay. How many eigenvalues do you expect to find in that window? Excellent. So now, it really depends on the geometry of the manifold
20:41
that you're working with. From now on, I'm going to work under an assumption that says that there has to be at least one point in your manifold for which the set of geodesic loops that start at that point and close up at it has measure zero. Under that assumption, for that fixed epsilon, you have
21:02
roughly epsilon lambda to the n minus one eigenfunctions in that window. So there are a lot of them. What you need to do, as I was saying, if you want to understand this two-point correlation function, is to understand these spectral projection operators.
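The count quoted in this answer is the two-term Weyl asymptotics: under the geodesic-loop condition stated below (and with c_n a dimensional constant), one has, schematically,

```latex
N_\lambda \;=\; \#\{\, j : \lambda_j \in (\lambda, \lambda+\varepsilon]\,\}
\;=\; c_n \operatorname{vol}(M)\big((\lambda+\varepsilon)^n - \lambda^n\big) + o(\lambda^{n-1})
\;\sim\; n\, c_n \operatorname{vol}(M)\, \varepsilon\, \lambda^{n-1}.
```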
21:21
If you want to understand what actually happens with the zero set and the number of critical points, you need to understand this two-point correlation function at one-over-lambda scales, because that's the scale at which the eigenfunction oscillates. So the largest correlations are going to happen at those scales.
21:40
So here is what you need to be able to do. If this is your manifold, and this is a point x that you fix on the manifold, you identify it with its tangent space;
22:00
what we need to do is to work with vectors here that are of the form u over lambda, v over lambda, and map them here via the exponential map. So I need to be able to evaluate my two-point correlation function at these points. So one over lambda close to a fixed point x.
22:21
So what we really need to control is the two-point correlation function evaluated at the image of u over lambda and the image of v over lambda. This is the quantity that I need to control as lambda goes to infinity.
22:44
Are there any questions so far? So now, as I was saying, to make sure that I have eigenfunctions and to actually be able to prove any result,
23:02
I need to work under the following condition. So fix the point x on your manifold and now look at the space of all initial velocities that generate closed geodesic loops. So it doesn't have to close smoothly, it's just closed.
23:20
And you look at the set of all possible initial velocities. So you're working in S x star M. And you equip this with its natural measure. The condition that we are going to be working with is that the measure of the set of initial velocities that generate geodesic loops has to be zero.
23:43
And under that assumption, what we proved together with Boris Hanin is that we have a limiting function for this rescaled two-point correlation function of psi lambda. So the assumption that we have to work under is that if you fix the point x,
24:03
the measure of geodesic loops that close up at x has to be zero. And under that assumption, we can control this (the lambda to the n minus one shouldn't be there, cross it out). The limit of this rescaled covariance function is a function that is also going to be the two-point correlation function of a field.
24:22
So what I'm going to do now is to explain what this Psi infinity field is. So what you get is that this is converging to epsilon times the two-point correlation function for a Gaussian random field in Rn. So this Psi infinity field
24:42
is what's called a superposition of random planar waves. What it satisfies is that it's an eigenfunction for the Euclidean Laplacian with eigenvalue one. And since it's a Gaussian field,
25:02
it's completely characterized by the two-point correlation function. You can actually define it in terms of the two-point correlation function. And the two-point correlation function is exactly this thing that we have here on the right-hand side. So you integrate over the n minus one sphere e to the i times u minus v paired with w. So what you have here on the right-hand side is actually the Fourier transform
25:23
of the spherical measure. So this, evaluated at (u, v), is simply the Fourier transform of the spherical measure evaluated at u minus v. So what we are getting is that
25:41
no matter what the geometry or the topology of the manifold that you start with is, when you rescale the two-point correlation function like this, you always get the same limit. And this limit only depends on the dimension of the manifold. You're integrating over the S n minus one sphere. And that's the only thing that you remember
26:02
about the starting manifold. So this result is true as long as you have this hypothesis. However, I strongly think that it should always be true. To prove this result, we use micro-local analysis. And so we have to work with the wave operator,
26:22
with the wave kernel, and we have to track the singularities, which propagate along geodesics. And that's why we need this condition on the set of geodesic loops. But I really think it's a problem of our proof that we have to enforce this condition. I think it should really be true on any manifold. It is true on the sphere and it's true on the torus.
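In formulas, the statement being described is, schematically (normalizing constants suppressed): for a fixed x satisfying the loop condition,

```latex
K_\lambda\big(\exp_x(u/\lambda),\, \exp_x(v/\lambda)\big)
\;\xrightarrow[\lambda \to \infty]{}\;
C \int_{S^{n-1}} e^{\, i \langle u - v,\, w\rangle} \, d\sigma(w),
```

the Fourier transform of the spherical measure; in dimension n = 2 this is, up to normalization, the Bessel function J_0(|u - v|).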
26:43
And the fact that it's true is what allowed all these people to get these results on the n-sphere and on the n-torus for the number of critical points and the size of the zero set. So this convergence holds in the C-infinity topology. So you can take as many derivatives in u and v on both sides as you want,
27:01
and you still get the limit. And it holds uniformly for u and v inside a ball in the tangent space of constant radius R. Another way of reading this result is that if you start with your wave and you rescale it, so you fix the point x,
27:21
and you rescale it at one-over-lambda scales about x. So now you think of this as a function in Rn. So this is a function of u. And as a random variable in Rn, it converges in distribution to these fields, these random planar waves in Rn.
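Here is a minimal sketch of sampling an approximation of this limiting field in R^2, as a finite superposition of random planar waves with unit frequency (as the number of waves grows, the covariance tends to J_0(|u - v|)); the function name is mine:

```python
import numpy as np

def monochromatic_random_wave(X, Y, n_waves=200, rng=None):
    """Approximate sample of the monochromatic random wave in R^2:
    a superposition of unit-frequency plane waves with directions
    uniform on the circle and i.i.d. Gaussian amplitudes."""
    rng = np.random.default_rng(rng)
    f = np.zeros_like(X, dtype=float)
    for theta, a, b in zip(rng.uniform(0.0, 2.0 * np.pi, n_waves),
                           rng.standard_normal(n_waves),
                           rng.standard_normal(n_waves)):
        phase = np.cos(theta) * X + np.sin(theta) * Y
        f += a * np.cos(phase) + b * np.sin(phase)
    return f / np.sqrt(n_waves)
```

Each summand satisfies Delta f = -f, so the sample does too; plotting the sign of the output on a grid reproduces black-and-white pictures like the ones shown earlier.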
27:41
That's what this result says, because you have convergence of the two-point correlation function so all the moments converge. So these random waves really behave like these limiting guys that we have in Rn. And just so that you have an idea of what the heuristics are behind this statement,
28:00
this is not a proof, and actually it has nothing to do with the proof. But what happens is that if you grab an eigenfunction, an actual eigenfunction of the Laplacian, and you fix your point X, and you rescale this function about X, then if you hit this with the Euclidean Laplacian
28:23
plus some lower-order differential operators that I do not want to define, then what you recover is the rescaled eigenfunction itself. So what's happening is that, to leading order in lambda,
28:41
the rescaled eigenfunctions behave like eigenfunctions of the Euclidean Laplacian with eigenvalue one, which is the property that these limiting guys have. So that's what's happening behind the scenes. And that statement holds on any manifold. So that's why we are getting this universal limit
29:02
that forgets the metric G or the topology of the manifold. Okay, so are there any questions about this statement? What I'll do now is to show you how you can apply this result. So for example, if you try to count the number of critical points
29:22
or measure the size of the zero set, we can prove that in mean they converge to a constant. These constants a n and b n only depend on the dimension. So they are the same for every compact Riemannian manifold of dimension n, as long as you work under this assumption: what you need is that for almost every point
29:41
on your manifold, the measure of closed geodesic loops has to be zero. So yeah, that's the assumption. And if you want to control the variance, you actually need to ask for something else. Because when you control the variance, you need to control your two-point correlation function
30:01
at points that are very far apart. For example, picture the sphere: the values of your function at the north pole should be super correlated to the values of the function at the south pole. So it's not always true that only the one-over-lambda scales matter. You have to be careful of those things.
30:23
So if we work under the assumption that, for almost every pair of points x and y on your manifold, the set of initial velocities of geodesic arcs joining these two points has measure zero, then we can control the variance.
30:42
And not only do we have that it decays to zero, but we can also control the rate. So we have convergence in probability of these two quotients to these constants a n and b n. So now for the second half of the talk,
31:01
I'm going to start talking about the components of the zero set: their number, their diffeomorphism types, and how they are nested. So this is the zero set, actually
31:20
not for psi lambda, but for this guy here, psi infinity, in R3, where we just restricted it to a box. This picture is by Alex Barnett. And so when you're working in dimension three, your zero set has dimension two. So it's going to be a surface, or actually a collection of connected surfaces.
31:43
And the limiting guy, the zero set, looks like this. So you can really not tell what's going on there. The first question was, can I count the number of components of the zero set? And this question was answered by Nazarov and Sodin.
32:01
And in mean, if you are working under the assumptions that Boris and I found, you can show that the number of components will grow like a constant times lambda to the n. And what you can ask after that is, well, what happens if, instead of just counting the number of components,
32:21
what I want to do is to count the number of components with a given diffeomorphism type. Can I say that 90% of my components are always going to be spheres? That's the type of question that we are asking. So let me tell you how we are going to go about thinking of this problem. So suppose you have a realization of C lambda
32:41
and these are the components of your zero set. So the manifold has dimension three, the components are a collection of surfaces, and you are going to organize these surfaces according to their genus. So here I have 10 components in total: five of them are spheres, three of them are tori, nothing of genus two, and two components of genus three.
33:02
So what you are going to do is to collect that information into a measure, into a probability measure. What this probability measure tells you is the frequency with which each diffeomorphism type is appearing. So five of those 10 times, I get the sphere, three out of 10 times, I get the torus. And the question is, as lambda grows, is there going to be a universal distribution
33:23
for my diffeomorphism types? Is there going to be a law that tells me for lambda large enough, 90 percent of the components will always be spheres, five percent are going to be tori and so on. Yes? So I mean, the random kind of linear combination
33:40
zero is a regular value? Again? Oh yeah, with probability one, it's a regular value. So these are going to be smooth manifolds. But the collection of diffeomorphism types of course depends on the choice of the a j here? Yes, definitely. So these are probability measures,
34:01
but they are random probability measures. So for each realization of C lambda, you have a different distribution of the diffeomorphism types. So let me actually define these measures. So they are probability measures, so they give you back a number between zero and one. And the domain of these measures
34:20
is the space of diffeomorphism types. So if you grab a component of your zero set, it's going to be a compact manifold. It will have dimension n minus one. It will have no boundary. With probability one, you can show that it's going to be smooth. And with high probability, you can show that it can be embedded in Rn. So this is the collection of components
34:42
that we are looking at. And we're going to quotient this by the space of diffeomorphism types. So that's the domain of the measure. To each diffeomorphism type, I associate the frequency with which it appears among the components of a zero set. So in this example, I have one over 10,
35:00
10 being the total number of components, and then I'm putting a delta mass every time a diffeomorphism type is hit. That's what the measure looks like in this example. So I have five times the delta mass of genus zero, three times the delta mass of genus one, and so on. And that's in general how you construct the measure. So if I write capital C for
35:21
the collection of all the components of my zero set, then the measure looks like one over the total number of components times a sum of delta masses, where I add a delta mass at the diffeomorphism type of each component of my zero set. Okay, so the question is, again,
35:41
in the limit as lambda grows, do I have a limiting probability measure that will encode what these diffeomorphism types look like? And the answer is yes. This is a result by Peter Sarnak and Igor Wigman. So what they show is the existence of this limiting guy, mu infinity, to which the mu C lambdas are converging.
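A compact way to write the construction just described, as a sketch (here [c] denotes the diffeomorphism class of a component c, and C(psi lambda) the collection of components of the zero set):

```latex
\mu_{\psi_\lambda} \;=\; \frac{1}{|\mathcal{C}(\psi_\lambda)|} \sum_{c \,\in\, \mathcal{C}(\psi_\lambda)} \delta_{[c]} \,,
```

a random probability measure on the (discrete) space of diffeomorphism types; the convergence asserted in the theorem is that, for any finite set of types, the measures agree with mu infinity up to epsilon with probability tending to one.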
36:02
And as for the way in which they converge: the space of diffeomorphism types is discrete, so to measure the difference between these probability measures, you simply evaluate them on finite subsets of their domain and compute the difference. So what this statement says is that for any fixed epsilon,
36:24
any small epsilon that you pick, the probability that the difference between the two measures be bigger than this epsilon is going to go to zero as lambda grows to infinity. So in that sense, these measures are the limiting guys. And the assumption under which you need to be working
36:44
is that for almost every point on your manifold, the measure of closed geodesic loops has to be zero, simply because you need this convergence of the two-point correlation function to prove any result like this. Okay, so now there is a limiting distribution.
37:04
I have to say two things, though. From the proof of Sarnak and Wigman, you cannot track what this measure looks like. This is an existence proof. They prove the existence of the limiting measure, but you cannot keep track of how this measure is built. So you cannot say that 90% of the components are going to be spheres.
37:20
You cannot hope for anything like that. The second problem is the following. We have no clue what the domains look like for high dimensions. So if the manifold has dimension three, the zero set is a collection of surfaces, so you can definitely organize them by their genus. So in that case, you do know the domain of the measure.
37:42
But if your manifold has dimension four or higher, then the zero set is a collection of components of dimension three or higher, and we really don't know what the space of diffeomorphism types of such manifolds looks like. So we really don't understand the domain of these measures. So that's part of the problem that one has.
38:02
But despite that, what we did with Peter Sarnak is to show that the support of the measure is the entire space. So even though we have no clue what this measure looks like, if you give me a diffeomorphism type, that diffeomorphism type will occur with strictly positive probability in the zero set of your random waves for lambda large enough.
38:23
That's what's happening. You're observing all the possible diffeomorphism types once the frequency is large enough. Okay, so what I'm going to do next is to discuss the similar problem of the nesting of the components.
38:41
So are there any questions about this statement? Yeah. Like how, well I guess super hard, but how computable would it be to find the limit from an example? Like from an example? No, at the moment we really don't know
39:00
how to find any lower bounds on this probability. If you restrict to dimension three, where you have surfaces and you have your limiting measure, what do you know about this measure?
39:22
We have Alex Barnett running numerical experiments. What he's observing is that there is going to be a giant component, and then tiny components around it that will have these different diffeomorphism types. But actually, in his numerical experiments, he's only been able to see spheres.
39:40
So it's very likely that the probabilities of seeing all the other diffeomorphism types are super, super tiny. So, I mean, at this moment in time, the computers cannot give us any information. Yes? Maybe a statement about the Gaussian field in Rn, right?
40:01
Does the theorem reduce to the... Yes. All these statements reduce to Rn in the end. So if you want to understand this limiting distribution, what you need to understand is this Gaussian field psi infinity. You need to understand the diffeomorphism types of its zero set components. And what happens with this guy is that it satisfies this equation. So it's quite rigid.
40:22
It's an eigenfunction for the Laplacian with eigenvalue one. So it has a lot of structure to it. OK. So now for the nesting of the components: if these are the components of your zero set, the way in which we are going to record the nesting is using a finite rooted tree.
40:41
So for this tree, the root of the tree corresponds to the big nodal domain. Each of the nodes is a nodal domain. So it splits into three pieces: this nodal domain here, that one, and that one. And then, for example, this one splits further into three pieces: this, this, and that.
41:01
So each nodal domain is one of the nodes in my tree. And you put an edge joining two nodes every time you have a component of the zero set separating the nodal domains. So you can actually record the nestings of your zero set components using finite rooted trees. And the way in which we are going to record the proportions of the different nestings within your zero set
41:23
is the following. So to each component of the zero set, say this yellow one, you look at the edge that's associated to it in the tree. And once you remove that edge from the tree, it's going to split the tree into two pieces. And you just grab the smallest one. So we are defining this map that, to this yellow component
41:43
of the zero set, associates this small subtree here. Or, for example, to this blue component here, the one associated to this edge, you associate the leaf of the tree. And the way in which you build the probability measure
42:02
that's associated to the different nestings is simply: on the space of finite rooted trees, you put a delta mass every time one of these subtree configurations is hit among the components of your zero set. So it's exactly the same construction as before, only that now what you're recording, instead of diffeomorphism types,
42:20
are these nesting configurations. And the question again is, in the limit as lambda grows to infinity, will there be a universal distribution of the nesting configurations? And the answer in this case, again, is yes, there is. In the same paper, Peter Sarnak and Igor Wigman proved the existence of this limiting guy,
42:42
nu infinity, to which the nu C lambdas will converge. So there is a universal distribution of the nesting configurations of the zero set. Once lambda is large enough, there will always be a fixed proportion of components that are going to be isolated.
43:00
And then a fixed proportion of components that will be a bubble inside another bubble, and so on. And what we proved with Peter Sarnak is that the support of the measure is the entire space of trees. So if you give me any nesting configuration, we can show that for lambda large enough,
43:21
that nesting configuration is going to occur with a strictly positive probability. That's what we were able to show. And what these statements, the proofs of these statements, what they reduce to is actually to working in Rn. And what you have to do is to find solutions to this equation
43:42
whose zero set contains a collection of components that are nested according to any tree that you hand me, or a solution to this equation whose zero set has a component with a diffeomorphism type that you like. So that's what the proofs of these statements
44:02
on the supports of the measure really are about: working in Rn, finding solutions to this equation for which you can make sure that the zero set will have at least one component with the diffeomorphism type that you want, or at least a collection of components with the nesting configuration that you want.
44:24
Okay, and to finish the talk in my last slide, what I wanted to do is to show you the only setting in which we actually understand what the limiting probabilities look like because of numerical experiments. So this is when you work in dimension two. In dimension two, the diffeomorphism types
44:41
of the components of the zero set are boring, because you can prove that all of them, the components of the zero set, are going to be embedded circles. So what you can do instead, which is way more interesting, is to look at the complement of the zero set. So you have all these nodal domains, each of them with a different color, and you can study the connectivities of these components.
45:03
So you can count the number of holes that these components have. For example, this green component here has another one inside, so one hole for that one. This violet blob here, it looks like it has at least two holes, and so on. So you count the number of holes in each component,
45:21
and you ask, is there a universal distribution for the number of holes? The answer is yes. And Alex Barnett computed the limiting distribution in this case. So this is the only setup in which we know what the limiting measure looks like. What he's getting is that 91% of the components will have no holes.
45:42
About 5% of the components will have one hole, 1% will have two holes, and then the proportion of components with higher and higher connectivity starts decreasing super rapidly. That's what he's observing.
46:01
His table actually goes all the way up to connectivity 20. And the error, if I remember correctly, was in the fifth decimal place, so you don't even see it here. And this is really the only case in which we understand what these limiting probabilities are. In the other settings, the only thing we know is that the support is the full space. And that's it. That's the limit of our knowledge.
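For the curious: the connectivity statistic in Barnett's table can be mimicked on any field sampled on a 2-D grid with a few lines of NumPy/SciPy. A rough sketch (grid-based and ignoring boundary effects, so only an approximation of the true counts; the function names are mine):

```python
import numpy as np
from scipy import ndimage

def count_holes(region):
    """Holes of a boolean region on a 2-D grid: connected components
    of its complement that do not touch the grid boundary."""
    lab, n = ndimage.label(~region)
    on_border = np.unique(np.r_[lab[0], lab[-1], lab[:, 0], lab[:, -1]])
    on_border = on_border[on_border > 0]   # drop label 0 (the region itself)
    return n - on_border.size

def connectivity_histogram(f):
    """Empirical distribution of hole counts over the nodal domains
    (sign domains) of a sampled field f."""
    holes = []
    for sign_domain in (f > 0, f < 0):
        lab, n = ndimage.label(sign_domain)
        holes += [count_holes(lab == k) for k in range(1, n + 1)]
    if not holes:
        return np.array([])
    counts = np.bincount(holes)
    return counts / counts.sum()
```

Feeding in a sample of the monochromatic random wave from the earlier sketch gives an empirical version of the 91% / 5% / 1% table above.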
46:23
That's it. That's all I wanted to say. Thank you very much. Questions?
46:41
This function psi infinity, does it depend on the manifold or whatnot? No. No, it's the same one for all the manifolds. What psi infinity is, is the Gaussian random field in Rn that's defined by this two-point correlation function. So Gaussian fields are completely defined by a two-point correlation function.
47:01
So this Gaussian field is the one that has this two-point correlation function. So you just take the Fourier transform of the spherical measure. And you can also, I mean, physicists like to think of it as a superposition of random planar waves. They are equivalent. And the only information that you have is that they solve this equation.
47:22
But it's just a function in Rn and has nothing to do with the manifold. It's always the same limiting guy, but it's not connected to the manifold. It forgot it. Yes.
47:43
When you built your measure mu, counting the different diffeomorphism types, would it be meaningful to weight it by the average measure of each component? The size of the component?
48:00
The size. Yes, definitely. That's actually the right question to be looking at. So what Alex Barnett is observing in his numerical computations, in dimension three or higher (dimension two is completely different), is that he's always getting a massive component of the zero set. So it looks like there is this percolating component
48:22
of the zero set, no matter how many experiments he runs, that's eating up the whole space. And then he has these small isolated components that are then reflecting all the diffeomorphism types that we are seeing. But the main guy is this huge component that's taking most of the volume of the zero set,
48:42
most of the Hausdorff measure of the zero set. So yes, definitely, that would be the right question. And the answer would probably be that there is this guy that's actually driving the behavior of the zero set. But so far we cannot actually show that there is a large component. And dimension two?
49:00
And dimension two is different because this is connected to percolation. So the probability of being able to cross from the bottom side of the square, say, to the top is the same as crossing it the other way. So it's unlikely that you will have percolating components for the zero set. In dimension three it's different, because to go from one side to the other,
49:23
you would need to block it with something of dimension two, so it's much harder. So yeah, you do have this percolating component, but we have no proof of that. We are far away from a proof of that. And by the way, I should say, similar things can be done when, instead of working with these eigenfunctions,
49:43
you work with a problem that's slightly easier: you do the sum over frequencies from zero all the way up to lambda. So you're mimicking polynomials on the manifold, and you look at the zero sets. And the proofs there are much easier, and a lot more can be done.
50:00
So you can bound the probabilities from below and so on. This problem is much more rigid because you have an elliptic PDE that needs to be satisfied throughout. Both your experiments and your explanation suggest that the number of critical points
50:20
in a small region is kind of proportional to lambda to the power n times the volume of the small region. Can you show some kind of equidistribution? At small scales, you mean? Well, yeah, does the distribution of critical points converge to the Riemannian volume, or something like that?
50:43
Yeah, so for the critical points, it's hard. For the zero set, let me explain one thing. For the zero set, we can show that, okay, let me say it in words. So fix a ball of radius c over lambda, okay?
51:02
Fix a c and look at balls of radius c over lambda, and look at the zero set inside such a ball. So now think of that zero set and the Riemannian measure induced on it. Okay, let's call it d sigma lambda. What we can prove is that this d sigma lambda will converge in distribution to a d sigma infinity.
51:23
So at small scales, you have convergence of this measure. And what's crucial to get this convergence in distribution is that we know that in Rn, the zero sets of this equation, if you restrict to a bounded ball,
51:40
will have bounded measure. So the moments are going to be bounded, and we get convergence of all the moments. For the number of critical points, if you try to do that, we don't really know that the moments are going to be finite. We actually conjecture that, for high enough moments, they will be infinite. So we cannot get this convergence,
52:01
but we don't really know what happens with the moments. Yeah, but that's a really nice question. Yes. I think maybe related to that. So for a lot of the local things, there's a common limit distribution that doesn't depend on the manifold. And is that just because everything is, almost everything's starting to happen in really, really small balls that are almost like Euclidean space.
52:23
So the distribution is the one from... Yeah, so eigenfunctions are oscillating like crazy as lambda goes to infinity. So really, what's happening is that at those scales the picture really looks like that of Rn. Yes, that's what's happening. There was a question.