1/3 Spectrum of random graphs

Formal Metadata

Title: 1/3 Spectrum of random graphs
Number of Parts: 27
License: CC Attribution 3.0 Unported: You may use, adapt, copy, distribute, and make the work or its content publicly available, in unchanged or modified form, for any legal purpose, provided that you credit the author/rights holder in the manner specified.

Content Metadata

Subject Area: Mathematics
Genre: Lecture/Conference
Transcript: English (automatically generated)
So I thank the organizer for this very nice invitation; I'm very happy to start. We will speak about the spectrum of random graphs, and broadly speaking I will try to understand some of the subtle connections between the geometry of a graph and its spectrum. The spectrum of what? It will be the spectrum of the local operators which are naturally defined on a graph. Today I will introduce some basic notions and the setup. Tomorrow we will study something which we call spectral percolation: we will look at how the spectrum of a random instance of a graph behaves. And in the last course, I will speak about spectral gaps, outliers, and extremal eigenvalues of graphs.
Okay, so today: what will be the setup? You have a graph, so you have a set of vertices V, which is countable, and a set of edges E, which are pairs of vertices. Our graph will be locally finite, which means that for any vertex the degree is finite, where the degree is the number of edges at the vertex: deg(x) = Σ_{y ∈ V} 1{ {x,y} ∈ E }, the sum over all vertices y of the graph of the indicator that {x,y} is an edge. For simplicity of notation, my graph will be simple, meaning that there are no loops and no multiple edges (a multiple edge is two edges between the same x and y, and a loop is an edge from a vertex to itself), but you can add them by changing these notations slightly. So, again: my graph will be locally finite, the degree is finite at every vertex.
Okay, when you have a locally finite graph, you can define the adjacency operator, and many other local operators associated to your graph. So what is the adjacency operator? You have the space ℓ²(V), indexed by the vertices of the graph, and you define A, for any function ψ which is finitely supported, by (Aψ)(x) = Σ_{y ~ x} ψ(y): the sum of ψ(y) over the vertices y which are connected to x. In matrix-type notation, if (e_x) is the canonical orthonormal basis of ℓ²(V), you write ⟨e_x, A e_y⟩ = 1{x ~ y}, the indicator that x is connected to y. And Aψ is indeed in ℓ²(V) because the degree is finite; you can check that very easily. Another simple property: if you call A^k(x, y) the entries of the k-th power of A, then A^k(x, y) is simply the number of walks from x to y of length k. So the powers of the adjacency operator encode the walk-counting function of your graph. Should I define what a walk is? It is a sequence of vertices in which successive vertices are connected by an edge. (Question: sorry, I think you explained it a few minutes ago, but what is the difference between ℓ²(V) and the space where you defined A? Answer: the latter is the set of finitely supported vectors of ℓ²(V).) Okay, and since my graph is simple (meaning in particular that there is no orientation of the edges), A is symmetric.
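The identity A^k(x, y) = number of walks of length k is easy to test numerically. Here is a minimal sketch (an editorial illustration, not part of the lecture; numpy is assumed, and the small example graph is ours) comparing matrix powers with brute-force walk enumeration:

```python
import numpy as np
from itertools import product

# Small simple graph: a triangle with one pendant vertex (our own example).
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
n = 4
A = np.zeros((n, n), dtype=int)
for x, y in edges:
    A[x, y] = A[y, x] = 1  # edges are unoriented, so A is symmetric

k = 4
Ak = np.linalg.matrix_power(A, k)

def count_walks(x, y, k):
    # A walk is a sequence of vertices whose successive vertices are
    # connected by an edge; enumerate the k-1 intermediate vertices.
    return sum(
        all(A[p[i], p[i + 1]] for i in range(k))
        for mid in product(range(n), repeat=k - 1)
        for p in [(x,) + mid + (y,)]
    )

assert all(Ak[x, y] == count_walks(x, y, k) for x in range(n) for y in range(n))
print(Ak)  # entry (x, y): number of walks of length 4 from x to y
```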
Okay, so let us look at the spectral properties of A. Let's say, for example, that V is finite. Then you know that A admits an orthonormal basis of eigenvectors, so you can write A = Σ_k λ_k ψ_k ψ_k*, where the ψ_k form an orthonormal basis of eigenvectors and the λ_k are the eigenvalues of A. You could also write A = ∫ λ dE(λ), where E is a projection-valued measure called the resolution of the identity: E = Σ_k ψ_k ψ_k* δ_{λ_k}, a sum of Dirac masses at the λ_k. It is a spectral projection: if you evaluate E on a set I, E(I) is simply the orthogonal projection onto the vector space spanned by the ψ_k with eigenvalue in I. I write it in this complicated way, and you will see why: because in the general case, this resolution of the identity still makes sense, provided that your operator A is self-adjoint. Let me comment on that for one second. Being symmetric is not enough to be self-adjoint. For example, A could be bounded; maybe I should have written that: if the degree of x is bounded by d for all x in V, then the operator norm of A is bounded by d. So if the degree is bounded, your operator is a bounded symmetric operator, and it is self-adjoint. When the degrees are not uniformly bounded, there are examples of graphs whose adjacency operator does not have a unique self-adjoint extension; "essentially self-adjoint" means precisely that there is a unique self-adjoint extension. If A is essentially self-adjoint, then A is also unitarily equivalent to a multiplication operator, and it has a domain. I will not bother you with these notions if you are not familiar with them; we will not use them. You can still write A = ∫ λ dE(λ), where E is a resolution of the identity which I will not define: it is the same idea, a spectral projection onto the eigenspaces. If you have not seen that already,
it's fine; we will not use it. What we will use is what we call the spectral measure at a vector. So take a vector ψ. If your operator is essentially self-adjoint, there exists a unique measure, which I will denote μ_ψ, whose moments are the scalar products of ψ against the powers of A: ∫ λ^k dμ_ψ(λ) = ⟨ψ, A^k ψ⟩ for every k. Let's see in examples what this measure is. If you are familiar with the resolution of the identity, μ_ψ(I) will simply be the squared norm of the projection E(I)ψ of the vector ψ onto the space associated to I. So in the finite case, as we have seen, μ_ψ is simply Σ_k |⟨ψ_k, ψ⟩|² δ_{λ_k}.
(Question: do we really need ψ to be finitely supported? Answer: no; if you call D(A) the domain of A, this is true for any ψ in D(A). For example, when your operator is bounded, D(A) is all of ℓ²(V), so it holds for any ψ. But I want to sweep this issue with domains under the carpet.) Maybe I should have said also: if ψ is normalized, μ_ψ is a probability measure, and we will be interested in understanding this probability measure. Another point: from this definition, if you apply it to ψ = e_x, the moments of μ_{e_x} are simply the numbers of closed walks of length k from x to x. So this measure encodes the closed walks in the graph starting at x.
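A hedged numerical illustration of these two facts (our own sketch, assuming numpy; the example graph is the same toy graph as above): on a finite graph, μ_{e_x} has atoms |⟨ψ_j, e_x⟩|² at the eigenvalues, and its k-th moment equals the number of closed walks of length k at x.

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
n = 4
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1

lam, psi = np.linalg.eigh(A)  # columns of psi: orthonormal eigenvectors
x = 0
weights = psi[x, :] ** 2      # atoms |<psi_j, e_x>|^2 of mu_{e_x}; they sum to 1

for k in range(6):
    moment = np.sum(weights * lam ** k)                # k-th moment of mu_{e_x}
    closed_walks = np.linalg.matrix_power(A, k)[x, x]  # closed walks at x
    assert np.isclose(moment, closed_walks)
print("moments of mu_{e_x} = closed-walk counts at x")
```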
As I was saying, the moments are the powers of A evaluated against the vector, and from this you get the Cauchy–Stieltjes transform. If you take z with positive imaginary part, the Cauchy–Stieltjes transform of the measure μ_ψ, that is ∫ dμ_ψ(λ)/(λ − z), is simply the resolvent of A estimated at z, taken in scalar product against ψ: ⟨ψ, (A − z)^{-1} ψ⟩.
Okay, another comment. When V is finite, take the spatial average of the spectral measures of the basis vectors, that is, the average of the μ_{e_x} over the canonical basis, and apply the formula above. You switch the two sums, and since the e_x form an orthonormal basis and the ψ_k have unit norm, what you arrive at is simply (1/|V|) Σ_x μ_{e_x} = (1/|V|) Σ_k δ_{λ_k}: the empirical distribution of the eigenvalues. We also call it the average spectral measure, because we obtain it as the spatial average of the spectral measures of the basis vectors; in the physics community, its limit as the size of the graph goes to infinity is usually called the density of states. So in a finite graph, μ_{e_x} depends on the eigenvector basis and on the eigenvalues; but when you take the spatial average, a small miracle occurs: it no longer depends on the eigenvector basis. This spatial average is therefore much easier to understand than the spectral measures of individual vectors; we will see that many times in this course. In particular, for a finite graph we define μ_G as this object, which we call the average spectral measure of the graph.
We will be interested in this measure when the size of the graph is large, and on random instances of graphs. You can make some computations already. If you look at μ of the cycle of length n, it will be μ(C_n) = (1/n) Σ_{k=0}^{n−1} δ_{2 cos(2πk/n)}, because you can write the adjacency matrix as S + S*, where S is the permutation matrix of the shift; S and S* commute and are unitary matrices, so you can diagonalize them simultaneously. You can check that, and you can make simple computations like that on finite graphs. As n goes to infinity, this converges weakly to the arcsine distribution, (1/π) · (1/√(4 − x²)) · 1{|x| < 2} dx. You can do the finite line as an exercise: as n goes to infinity, its average spectral measure also converges to the arcsine distribution.
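A quick numerical confirmation (an editorial sketch assuming numpy, not part of the lecture): the eigenvalues of C_n are exactly the 2 cos(2πk/n), and their empirical CDF approaches the arcsine CDF F(t) = 1/2 + arcsin(t/2)/π.

```python
import numpy as np

n = 2000
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1  # cycle: shift S plus its adjoint

eig = np.sort(np.linalg.eigvalsh(A))
theory = np.sort(2 * np.cos(2 * np.pi * np.arange(n) / n))
print(np.max(np.abs(eig - theory)))          # ~1e-12: eigenvalues are 2cos(2*pi*k/n)

t = 1.0
empirical_cdf = np.mean(eig <= t)            # empirical distribution at t
arcsine_cdf = 0.5 + np.arcsin(t / 2) / np.pi # arcsine law on [-2, 2]
print(empirical_cdf, arcsine_cdf)            # close for large n
```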
Okay, so now what we want to do is to construct this on infinite graphs. On finite graphs, at least, we have constructed an object, the average spectral measure, which has the nice property of depending only on the eigenvalues and not on the eigenvectors; we want to do the same on infinite graphs. The first setting in which it is simple to do is transitive graphs, and I will do it for Cayley graphs. So what is a Cayley graph? You take Γ, a finitely generated group, and you take S, a symmetric set of generators, symmetric meaning that S^{-1} = S. I will use multiplicative notation for the group operation. Now you can define the Cayley graph of Γ associated to the set of generators S. It is a graph: the vertex set is the set of elements of the group, and for the edge set (let me see which convention I use) you take the set of pairs of the form {x, gx}, where g is in the symmetric set S and x is in Γ. For example, C_n, the cycle of length n, is the Cayley graph of Z/nZ with the generators +1 and −1.
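A tiny sketch of this construction (our own illustration, plain Python; the helper name is ours): building the edge set { {x, gx} : x ∈ Γ, g ∈ S } for Z/nZ with S = {+1, −1} recovers the cycle.

```python
def cayley_edges(elements, multiply, generators):
    # Edge set { {x, g*x} : x in the group, g in S }, as unordered pairs.
    edges = set()
    for x in elements:
        for g in generators:
            edges.add(frozenset({x, multiply(g, x)}))
    return edges

n = 5
edges = cayley_edges(range(n), lambda g, x: (g + x) % n, generators=[1, n - 1])
print(sorted(tuple(sorted(e)) for e in edges))
# [(0, 1), (0, 4), (1, 2), (2, 3), (3, 4)] -- exactly the cycle C_5
```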
Then you can write the adjacency operator as a convolution operator: A = Σ_{g ∈ S} λ(g), where λ is the left regular representation, so λ(g) is simply the multiplication operator λ(g) e_x = e_{gx}. So A is a convolution operator, and the adjacency operator is an element of the group von Neumann algebra: the algebra of operators generated by the λ(g) for all g in Γ, which is called the left group von Neumann algebra. It is also the algebra of operators on ℓ²(Γ) which are invariant under multiplication on the right (they commute with it, because multiplication on the left and multiplication on the right commute). Why do I mention that? Because then you can define what is called the Plancherel measure. For us, who are only interested in this adjacency operator, the Plancherel measure is simply the spectral measure of the vector at the identity, in fact of the vector e_x for any x: if you multiply all vertices by some x on the right, you have not changed the vertex set and you have not changed the edge set, so you get an isomorphic graph. Hence G is transitive, and in particular the number of closed walks does not depend on the point you take. This is what is called the Plancherel measure, and if your graph is finite, these two definitions coincide.
Okay, so of course now you can make some computations. μ of Z (the group Z with the addition, and the generators ±1) is simply this arcsine law. You can also look, for example, at μ of Z^d, whose Cayley graph is the Cartesian product of d copies of Z: it will be the classical convolution of d copies of the spectral measure of Z.
(Question: if we are not familiar with the spectral theory of infinite graphs, could we take as the definition of the spectral measure the measure whose moments are the numbers of closed walks, and say that there exists a measure with those moments? Answer: yes. Let me open a parenthesis: if the balls in your graph do not grow too fast, there is a unique measure whose moments count the closed walks, and that can be your definition. And to check formulas like these, you can just look at the generating function of closed walks on Z, or on Z^d; that's one way to prove them. Another way is to compute their Stieltjes transforms, because the Stieltjes transform is just a generating function of the closed walks. Question: but to prove, say assuming that the degrees are bounded, that there exists a measure whose moments are given by the walk counts, do you need some spectral theory, or can you do it by hand? Answer: no, you can do it by hand. The fact that the moments do not grow too fast is trivial, and there is some positivity that you have to prove, namely that the moment sequence is positive definite, but it's not that tough; it relies on the symmetry of the operator A.)
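For Z this moment description is very concrete: the number of closed walks of length 2k from 0 is C(2k, k) (choose which k of the 2k steps go right), and these are exactly the even moments of the arcsine law. A hedged check (our own sketch; scipy is assumed available):

```python
import numpy as np
from math import comb
from scipy.integrate import quad

# Even moments of the arcsine law (1/pi)/sqrt(4 - x^2) on [-2, 2],
# computed with the substitution x = 2*sin(t), under which the density
# becomes dt/pi on (-pi/2, pi/2); compare with closed-walk counts on Z.
for k in range(1, 6):
    moment, _ = quad(lambda t, k=k: (2 * np.sin(t)) ** (2 * k) / np.pi,
                     -np.pi / 2, np.pi / 2)
    print(k, round(moment), comb(2 * k, k))  # columns agree: 2, 6, 20, 70, 252
```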
Okay, so under the usual graph operations, like the Cartesian product, you will be able to compute: if you have a Cartesian product of two graphs, the spectral measure will be the usual convolution of the two spectral measures. You can also do other types of products. For example, Kesten computed the spectral measure of the infinite d-regular tree (I think it was in 1959), and there is a nice explicit formula. It is called the Kesten measure, or the Kesten–McKay measure: d √(4(d−1) − x²) / (2π (d² − x²)) · 1{|x| ≤ 2√(d−1)} dx.
One way to prove that is to count, looking at the generating function of closed walks. I recall for you the Stieltjes transform of μ_{e_x}: if you call m_k its k-th moment (a priori it depends on x, but here it does not), then ∫ dμ_{e_x}(λ)/(λ − z) = −Σ_{k ≥ 0} m_k / z^{k+1}, at least for |z| large enough. So the Stieltjes transform is just a generating function of the closed walks, and you can compute the generating function of closed walks on the infinite tree; you will find this formula. That's what Kesten did. The d-regular tree is also the free product of d copies of Z/2Z, and when you have a free product of groups, the spectral measure is not the usual convolution: it will be the free convolution, here the free convolution of d copies of the spectral measure of Z/2Z, if you know what a free convolution is. And μ of Z/2Z is simply (δ_{−1} + δ_{+1})/2, so this is the free convolution of d copies of a Bernoulli variable. If you don't know what a free convolution is, it's fine. Oh, I forgot something important: I have some lecture notes exactly on that, on my web page, where everything is hopefully crystal clear; you will find them under lecture notes, called "spectrum of graphs" or something like that. Similarly, the 2d-regular tree is the free product of d copies of Z, so if d is even you could also write the d-regular tree that way. For example, the arcsine law is the free convolution of two Bernoulli variables.
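A numerical illustration of the Kesten–McKay formula (editorial sketch; numpy and networkx, with its scipy dependency, are assumed available): the empirical spectral distribution of a large random d-regular graph is close to the density above.

```python
import numpy as np
import networkx as nx

d, n = 3, 2000
G = nx.random_regular_graph(d, n, seed=0)
eig = np.linalg.eigvalsh(nx.adjacency_matrix(G).toarray().astype(float))

def kesten_mckay(x, d):
    # density d*sqrt(4(d-1) - x^2) / (2*pi*(d^2 - x^2)) on |x| <= 2*sqrt(d-1)
    supp = np.abs(x) <= 2 * np.sqrt(d - 1)
    val = d * np.sqrt(np.maximum(4 * (d - 1) - x ** 2, 0)) / (2 * np.pi * (d ** 2 - x ** 2))
    return np.where(supp, val, 0.0)

hist, bin_edges = np.histogram(eig, bins=40, range=(-3, 3), density=True)
centers = (bin_edges[:-1] + bin_edges[1:]) / 2
print(np.c_[centers[:5], hist[:5], kesten_mckay(centers[:5], d)])  # close agreement
```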
Okay, I will not go much more into explicit computations. Maybe at the end, if I have time, I will do a fantastic computation. It is very nice because, as I said, an important point is that the spectral measure depends very strongly on the set of generators. It is not something which depends only on the group: it depends on the group and on the set of generators. So if I have time at the end, I will do this computation: I will define the lamplighter group, and with an exotic set of generators, Grigorchuk and Żuk (2001) proved that the spectral measure, the Plancherel measure, of this group with this exotic set of generators is purely atomic. So Grigorchuk and Żuk found an example of a purely atomic spectral measure on an infinite graph, obviously. And there is a very neat proof of this Grigorchuk–Żuk result, which shows that lamplighter groups with this exotic set of generators are related to percolation graphs. I hope that I will have time to make the computation explicit; otherwise we will do it in exercise. Okay, now I want to continue. I am more interested in random instances of graphs, so I want to extend this notion of average spectral measure to graphs
which have some stationarity, graphs which are unimodular. So let's start. So this was Cayley graphs; let me get the board back. By the way, on the lamplighter graph with another set of generators, the natural set of generators, you get a spectral measure which is continuous.
So, unimodular graphs. This will be another setting, a kind of general setup which will include the Cayley graphs and the finite graphs, in which we can make sense of an object, an average spectral measure, which in some sense does not depend on the eigenvector basis. So what is it? First I have to define quickly a rooted graph. What is that? It is a pair formed by a connected graph and a distinguished vertex which we call the root; I insist on the graph being connected. Then, as for usual graphs, you can define an equivalence relation between rooted graphs: two rooted graphs are equivalent if there exists a bijection σ from the vertex set V of G to the vertex set V′ of G′ such that σ(G) = G′, meaning that the edges of G are sent to the edges of G′ (where σ of an edge is defined simply as σ({x, y}) = {σ(x), σ(y)}), and which sends the root of G to the root of G′. When you have this notion of equivalence, an equivalence class is what is called an unlabeled rooted graph: what you have lost is just the names, the indices, of the vertices. And since you have a rooted graph, you have an origin, which is the root, so you can define a
local topology: a distance between two rooted graphs, or rather between equivalence classes of rooted graphs; I will be lousy and make a systematic confusion between a rooted graph and its unlabeled version. So let G* be the set of equivalence classes. If (G, o) and (G′, o′) are in G*, you define a distance between them by saying that they are close if large balls around their roots are isomorphic. For example, you can take as a distance Σ_k 2^{−k} · 1{(G, o)_k ≄ (G′, o′)_k}, where (G, o)_k is the notation for the rooted subgraph spanned by the vertices at distance at most k from the root, and ≄ means not isomorphic as rooted graphs. This defines a distance on G*, hence a local topology, local because it is determined by the finite-size balls around the roots. And G* with this distance is a Polish space: a complete separable metric space. So you are in a good setup to study probability measures on G*, equipped with the Lévy–Prokhorov distance, the distance consistent with the weak topology associated to this distance; P(G*) will again be a Polish space. An element ρ of P(G*) is a random rooted graph. Okay, so up to now nothing really happened.
This setup was proposed by Benjamini and Schramm in 2001. The important idea is that when I have a finite graph G, I can associate to it an element of P(G*) as follows. You take G, a finite graph, and you define U(G) (I will write it in two ways) as the law of the connected component of a uniformly distributed root: you root the graph at a uniform point x on the set of vertices of G, and you look at (G(x), x), where G(x) is the connected component of x in G. In formulas, U(G) = (1/|V|) Σ_{x ∈ V} δ_{(G(x), x)}. So you take a finite graph, and you associate to it the probability measure obtained by rooting it uniformly at random. You have lost some information on the graph. For example, imagine that your graph has seven vertices: a component with four vertices, a component with two, and an isolated vertex. Then U(G) puts mass 4/7 on Dirac masses at the first component with its possible roots, plus 2/7 on the second component rooted here, plus 1/7 on the isolated vertex. So you have lost the labels, but you have also lost some information
on the connected components. Okay, so these measures U(G) enjoy a very nice property: they are unimodular. So what does it mean for a probability measure on rooted graphs to be unimodular? I have defined G*; you can also define G**, which is the set of unlabeled doubly rooted graphs: a measurable function f on G** takes as entry a graph and an ordered pair of vertices, and is invariant under relabeling of the vertices. The measure ρ is unimodular if for any such function f, taken nonnegative, you have the following equality, where I write E_ρ to say that I take the expectation of something whose law is given by ρ: E_ρ[Σ_{x ∈ V} f(G, o, x)] = E_ρ[Σ_{x ∈ V} f(G, x, o)], where under ρ the rooted graph (G, o) has law ρ. This notion of unimodularity is a mass transport principle: if you think of f(G, u, v) as some mass sent from u to v, it says that the expected amount of mass that the root sends to the vertices of the graph is equal to the expected amount of mass that the root receives. And U(G) is unimodular:
you take a finite graph, and U(G) satisfies this. (Question: how do I read this notation? Answer: (G, o) is an element of G*; in this notation, (G, o) is a random rooted graph whose distribution is ρ. It is a well-defined object, so it has a vertex set V, and I sum over all vertices x of G. And f is defined on G**, but since my random rooted graph has vertices, I can evaluate f at the graph together with the pair formed by the root and a vertex. The pair formed by the graph and the root has distribution ρ. The notation is actually a little bit confusing.)
Let's check the claim that U(G) is unimodular. What is the expression on the left-hand side? There is a factor 1/|V|, because I sample the root uniformly at random: the left-hand side is (1/|V|) Σ_y Σ_{x ∈ G(y)} f(G(y), y, x). Indeed, when I sample the root y, my graph, I recall, is the connected component of the root: U(G) is the law of (G(o), o) with o uniform at random, and G(y) is the connected component of G which contains y. Note that G(y) is equal to G(x) whenever x is in the connected component of y. And what is on the right-hand side? You can rewrite it as (1/|V|) Σ_y Σ_{x ∈ G(y)} f(G(y), x, y). Both sides are sums of f over all ordered pairs of vertices lying in a common connected component, together with that component: just exchange the names of x and y in one of them and use that G(x) = G(y) for x and y in the same component. So the two expressions are equal, and U(G) is unimodular.
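This finite mass transport identity is easy to test numerically. A hedged sketch (our own example, numpy assumed; for a fixed finite graph the invariance of f under relabeling plays no role, so any array f works):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 7
edges = [(0, 1), (1, 2), (2, 3), (4, 5)]  # two components plus an isolated vertex

# Connected component labels via a simple union-find.
parent = list(range(n))
def find(a):
    while parent[a] != a:
        a = parent[a]
    return a
for u, v in edges:
    parent[find(u)] = find(v)
comp = [find(v) for v in range(n)]

f = rng.random((n, n))  # an arbitrary nonnegative "mass sent from u to v"

# E_rho sum_x f(G, o, x): average over the uniform root o of the sum over
# the vertices x in the root's component, and symmetrically for the RHS.
lhs = np.mean([sum(f[o, x] for x in range(n) if comp[x] == comp[o]) for o in range(n)])
rhs = np.mean([sum(f[x, o] for x in range(n) if comp[x] == comp[o]) for o in range(n)])
assert np.isclose(lhs, rhs)
print(lhs, rhs)  # the mass sent equals the mass received, on average
```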
In fact, this is essentially the only example of a unimodular measure that we know. Why do I say that? You define the set of measures U(G) for G finite, and you close it for this local weak topology; this is the set of sofic measures. It's a definition: let's say that ρ, a random rooted graph, is sofic if it is a local weak limit of finite graphs. The set of unimodular measures is closed and contains the measures U(G), so the set of sofic measures is contained in the set of unimodular measures. But we do not know an example of a unimodular measure which is not sofic; that's what I meant when I said this is the only example of unimodular graphs we know. Another definition: if (G_n) is a sequence of finite graphs, we say that G_n has Benjamini–Schramm limit ρ if U(G_n) converges in this local weak topology to ρ. Okay, "BS" stands for Benjamini–Schramm. And such a ρ is, by definition, a sofic measure.
Okay, some examples of Benjamini–Schramm limits, quickly. If you take C_n, the cycle, it has Benjamini–Schramm limit the Dirac mass at (Z, 0), the line rooted at 0: you take the root uniformly at random, and as n grows large you do not see the cycle around it, you just see Z. If you take Z^d and intersect it with a finite box, this will have Benjamini–Schramm limit the Dirac mass at (Z^d, 0). If you do percolation (site percolation, where you keep each site, or let's say bond percolation, where you keep each edge with probability p), then this is random, so almost surely: this random graph will have a Benjamini–Schramm limit, namely the law of the connected component which contains the origin in the percolation graph on Z^d with parameter p, rooted at the origin. If you take, for example, a binary tree of depth n, say a 3-regular tree cut at depth n, then the Benjamini–Schramm limit of T_n will not be the infinite regular tree, because when you sample a vertex uniformly at random, you are quite likely to be close to the leaves. The limit is something which is called the canopy graph; I think the word comes from Aizenman and Warzel. It is a random rooted tree that I invite you to describe.
In fact, many random graphs have infinite trees as limits. Let me just give one family of random graphs arising this way: the Erdős–Rényi graph with n vertices and parameter c/n, where you have n vertices and you put each edge independently with probability c/n. It will have, almost surely, a Benjamini–Schramm limit, which is a Galton–Watson tree with Poisson(c) offspring distribution. By the way, this implies that this random tree is unimodular, and it is a nice exercise to check directly that this tree is unimodular.
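A small sanity check on the first generation of this limit (editorial sketch, numpy assumed): the degree of a uniform root of G(n, c/n) is Binomial(n−1, c/n), which is close to the Poisson(c) offspring law of the limiting Galton–Watson tree.

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(1)
n, c = 5000, 2.0
# Degree of a fixed vertex of G(n, c/n): Binomial(n - 1, c / n).
degrees = rng.binomial(n - 1, c / n, size=20000)

for k in range(6):
    empirical = np.mean(degrees == k)
    poisson = exp(-c) * c ** k / factorial(k)
    print(k, round(float(empirical), 4), round(poisson, 4))  # two columns agree
```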
In fact, nearly all sequences of finite graphs have Benjamini–Schramm limits, or at least subsequential limits. For example (it's a lemma): the sequence (G_n) is precompact if for some function f with f(x)/x → +∞, the averages over the vertices of f of the degrees stay bounded. In other words, as soon as the degree sequence is uniformly integrable, your sequence is precompact for this Benjamini–Schramm topology. So all families of graphs that you will come up with whose degrees are not too wild have Benjamini–Schramm subsequential limits. Okay, so maybe now I come back to spectrum. Yes? No? Precompact means that its closure is compact. And a simple sufficient criterion for a sequence of finite graphs to be precompact in the sense of the Benjamini–Schramm topology is to have its degrees uniformly bounded. Did I lose you? It's okay? (Question: is it for any fast-growing function?) If there exists a function f which goes to infinity faster than x and which satisfies that bound, your sequence will be precompact; in other words, your sequence is precompact if the degree sequence is uniformly integrable. This is the de la Vallée Poussin criterion of uniform integrability.
Okay, so back to spectrum. Ah, I forgot to say something. Okay, too bad. Back to spectrum. There is this spectral measure: if ρ is in P(G*), you define μ_ρ as the expectation of the spectral measure at the root, μ_ρ = E_ρ[μ_{e_o}]. If, ρ-almost surely, the adjacency operator of G is essentially self-adjoint, then this object exists, I can take the average, and I define it as the spectral measure of ρ. So there are two claims. There is a theorem of Nelson from 1974 (I mean, it is a consequence of the theorem of Nelson in 1974) which gives that if ρ is unimodular, then ρ-almost surely A is essentially self-adjoint. So when you have a unimodular measure, your adjacency operator is always, up to taking a closure, self-adjoint, and this measure μ_ρ always exists. The reason behind this theorem is that when you have a unimodular measure, there is an analog of the group von Neumann algebra I mentioned that you can associate with it; but I will not say more about that, I don't have enough time. At least it is a very neat statement: it means that you are in a good framework. And this definition contains all the previous definitions. If G is finite, μ_{U(G)} is simply given by my spatial average formula: it is μ_G. And if G is a Cayley graph, the Dirac mass at G rooted at the identity is unimodular, and again μ of this Dirac mass is μ_G, the Plancherel measure. Okay, so another theorem:
if (G_n) is a sequence of finite graphs which has Benjamini–Schramm limit ρ, then the Kolmogorov–Smirnov distance between μ_{G_n} and μ_ρ goes to zero, where the Kolmogorov–Smirnov distance between two measures is the supremum norm (the infinity norm) of the difference of their distribution functions. This Kolmogorov–Smirnov convergence implies the convergence of atoms. (We will see whom this theorem can probably be credited to, at least for the atomic part.) A corollary of that is: if ρ is unimodular and sofic, and μ_ρ({λ}) is positive, then λ is an algebraic integer. Because the atoms of the spectral measure of a finite graph are eigenvalues, which are zeros of the characteristic polynomial, a monic polynomial with integer coefficients, the eigenvalues of a finite graph have to be algebraic integers; even totally real algebraic integers, where totally real means that all the Galois conjugates of λ are real as well. So if you find a unimodular graph which has an atom at a point which is not a totally real algebraic integer, you will have proven that there are non-sofic unimodular graphs, and you will be famous. Another corollary is that if, for all n large enough, G_n has at least δn distinct eigenvalues, and G_n has Benjamini–Schramm limit ρ, then the total mass of the continuous part of μ_ρ is at least δ. Again it's an exercise, but it is based on the fact that when you converge in the Kolmogorov–Smirnov sense, the atoms converge. So let's prove this theorem.
You will find a full proof in my notes. If the statement had been that μ_{G_n} converges weakly to μ_ρ, it would be very easy. Let's assume for simplicity that the degrees are uniformly bounded by some θ. Then, by the moment definition, the k-th moment of μ_{G_n} is simply the expected number of closed walks of length k from the root to the root. This is a function of the graph at finite depth (of the subgraph spanned by the vertices at distance at most k from the root), so it is a continuous function of this local neighborhood, and it is bounded by θ^k, because I have assumed that my degrees are bounded. So you have a bounded continuous function, and Benjamini–Schramm convergence implies that its expectation, which is the k-th moment of μ_{G_n}, converges to the k-th moment of μ_ρ. Since these measures are uniformly compactly supported, the convergence of moments implies weak convergence. So what is subtle here? It is the convergence for a stronger distance, this Kolmogorov–Smirnov distance.
If you have probability measures on R and you want to check that μ_n converges to μ in this Kolmogorov–Smirnov sense, it is equivalent to checking that μ_n converges weakly and that, for any λ, the atoms converge: μ_n({λ}) → μ({λ}). The weak convergence we already did: the convergence of moments implies it, at least for uniformly bounded degrees. So we have to check the convergence of atoms. From the portmanteau theorem, limsup μ_{G_n}({λ}) ≤ μ_ρ({λ}), and liminf μ_{G_n}((λ−ε, λ+ε)) ≥ μ_ρ((λ−ε, λ+ε)), which is larger than μ_ρ({λ}). The statement that the atoms converge is then implied by this lemma, due to Lück, which says the following: for any λ and θ, there exists a continuous function δ, depending on λ and θ, with δ(0) = 0, such that for any finite graph G with degrees bounded by θ, you can bound the mass of the interval of size ε around λ by the mass of the atom at λ plus δ(ε): μ_G((λ−ε, λ+ε)) ≤ μ_G({λ}) + δ(ε). If you buy this lemma, you get precisely the convergence of atoms, because μ_{G_n}((λ−ε, λ+ε)) ≤ μ_{G_n}({λ}) + δ(ε) gives you the inequality in the other sense. So this is a beautiful lemma.
For general λ you will find the proof in the notes; here I will do the proof for λ equal to zero and find the δ, it's one line. (Question: is this about the assumption at the beginning? Answer: yes; it's just that if your degrees are not uniformly bounded and you want to prove this theorem, you first have to do some truncation of the degrees of the graph and argue that the spectral measure does not change too much when you bound the degrees, and for that you have to use some
deviation inequalities for eigenvalues.) But okay, let's forget about that; it's written in the notes. So when λ is equal to zero, what do you do? Let G be my finite graph with eigenvalues λ_1, ..., λ_n. The observation is that the product of the nonzero eigenvalues, Π_{λ_i ≠ 0} λ_i, is an integer. Why is it true? For various reasons, one of which is that this expression, up to sign, is given by a derivative of the characteristic polynomial at zero: the k-th derivative at zero is, up to a factor k! and a sign (−1)^k or (−1)^{n−k}, the sum over all subsets I of cardinality k of the product of the λ_i for i not in I. So if you take k to be the dimension of the kernel of A, the only surviving term is the product of the nonzero eigenvalues, and this is, up to sign, an integer coefficient of the characteristic polynomial, hence an integer, since A is an integer matrix. In particular, since this product cannot be zero, its absolute value is at least one. It means that the eigenvalues repel zero: if they are not zero, their product cannot be arbitrarily small. But you can also bound this product from above. Let k now be the number of nonzero eigenvalues in (−ε, ε), so that n μ_G((−ε, ε)) is k plus the number of zero eigenvalues. Either a nonzero eigenvalue is in this interval, and then its absolute value is at most ε; or it is not, and then you use the fact that the degrees are bounded and the operator norm of the graph is bounded by the maximal degree, so it is at most θ. So 1 ≤ |Π_{λ_i ≠ 0} λ_i| ≤ ε^k θ^{n−k} ≤ ε^k θ^n. Take the log of that: 0 ≤ k log ε + n log θ, that is, k/n ≤ log θ / log(1/ε). Since μ_G((−ε, ε)) ≤ μ_G({0}) + k/n, you can take δ(ε) = log θ / log(1/ε). This is for λ equal to zero; for general λ, as you can see in the notes, it's the same idea. The fact that the eigenvalues are not exactly equal to a given value gives you some repulsion, coming from the fact that the adjacency operator is integer-valued.
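A hedged numerical check of this λ = 0 argument (our own example, numpy assumed): for the star K_{1,m}, the eigenvalues are ±√m and 0 with multiplicity m − 1, so the product of the nonzero eigenvalues is −m, a nonzero integer, as the characteristic-polynomial argument predicts; and the repulsion bound holds.

```python
import numpy as np

m = 4
n = m + 1
A = np.zeros((n, n))
A[0, 1:] = A[1:, 0] = 1  # star K_{1,m}: center joined to m leaves

eig = np.linalg.eigvalsh(A)
nonzero = eig[np.abs(eig) > 1e-10]
print(np.prod(nonzero))  # -4.0: a nonzero integer, so its absolute value is >= 1

# Repulsion bound with theta = maximal degree:
# mu_G((-eps, eps)) <= mu_G({0}) + log(theta) / log(1/eps).
theta = int(A.sum(axis=1).max())
eps = 0.1
lhs = np.mean(np.abs(eig) < eps)                                   # mu_G((-eps, eps))
rhs = np.mean(np.abs(eig) < 1e-10) + np.log(theta) / np.log(1 / eps)
print(lhs, "<=", rhs)
```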
Okay, I'm already late, so I will not do the computation on the lamplighter, and yes, maybe I will stop here.