2/4 Liouville conformal field theory and the DOZZ formula
Formal metadata
Part: 2
Number of parts: 4
License: CC Attribution 3.0 Unported: You may use, modify, reproduce, distribute, and make the work or its content publicly accessible in unmodified or modified form for any legal purpose, provided you credit the author/rights holder in the manner specified by them.
Identifiers: 10.5446/46726 (DOI)
Transcript: English (automatically generated)
00:30
Okay, so welcome back. So I'm going to start, I kind of cheated a bit, I started writing
00:41
before you arrived. I'm going to recall a bit of what we did, so in the first lecture, to refresh memories. So in the first lecture, I gave a rigorous probabilistic definition
01:01
to the Liouville correlation functions. And so what did I do? So just to refresh memories, I introduced the main parameter of the theory, which is gamma, which in all these lectures belongs to the interval zero to two. I introduced q equals gamma over two plus two over gamma,
01:20
so it's a function of gamma. And I introduced the cosmological constant mu, which is a positive parameter, which is just a scale, it just appears in this scale relation here. So it's not an important parameter of the theory, but nonetheless, it's essential for the existence of the theory. But it's kind of a trivial parameter in some sense. So what I did last
01:48
time is I justified that if you have the bounds up there, so here I'm calling them the extended Seiberg bounds. So sometimes this is how we call it, and so I define
02:01
the product of, so in the path integral language. So remember, these are the, should be seen as something like this in the path integral language, but okay, I'll justify this in lecture three and show why these correlations can be interesting to study from the random
02:26
planar map perspective, for instance. So I define the product of these fields, so with weight alpha k and in the point z k in the complex plane. So I define the correlations of these fields as two, so this mu to the power minus s, where s is given by this
02:44
thing upstairs, gamma of s, the standard gamma function in the complex plane, times this product here, which is the Gaussian free field part of the theory, and times the interesting part of the theory, the interesting piece of the theory, which is an expectation
03:02
of, and so this is what I wrote above. So under these extended Seiberg bounds, I can define this thing here, okay? So it's the integral of the Gaussian multiplicative measure or random volume form integrated against a function, and what is this function?
03:25
It is a function which essentially has singularities around each point z k, and these singularities, let's say their intensity or their magnitude is given by these coefficients alpha k, okay? And so the extended Seiberg bounds up there, just to, okay, let me recall a few things.
03:46
So the extended Seiberg bounds ensure that this thing here is non-trivial. It's between zero and plus infinity strictly, okay? So if I write it in with, so usually in the
04:01
lecture notes I introduce this notation, and the extended Seiberg bounds are for all k, alpha k smaller than q, so remember gamma over two plus two over gamma, and minus s strictly less than four over gamma square infimum of two over gamma q minus alpha. Okay, so these
04:29
are the extended Seiberg bounds. So if alpha k is above or equal to q, the Gaussian multiplicative chaos measure, it explodes. So this bound comes from the fact that if
04:45
I integrate, say, around a point, so if alpha is greater or equal to q, if I integrate my exponential of free field, I get infinity, so almost surely. So say in a ball of radius
05:04
one or radius whatever, the singularity is too strong and it can't integrate it anymore. So this is where the first bound comes from, and then, so if I have this, essentially depending on if minus s is positive or negative, I get an expectation which is zero or infinity. So of course I can make sense of this, but it's not the right object,
05:24
okay? It's not the right way to construct the Liouville correlations. And so in the probabilistic approach, it's a trivial thing. And these bounds came from the fact
05:40
that I explained to you that if I take, say, if I integrate a Gaussian chaos measure in some open set, so say some ball of radius r and center z, any point, then the moments exist if and only if p is strictly less than four over gamma squared.
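In symbols, the moment criterion just mentioned is the standard one for Gaussian multiplicative chaos; a hedged transcription (writing M_gamma for the chaos measure, as in the lecture notes):

```latex
% Moments of the GMC measure of a ball, without any insertion point:
\mathbb{E}\!\left[\, M_\gamma\big(B(z,r)\big)^{p} \right] < \infty
\quad\Longleftrightarrow\quad
p < \frac{4}{\gamma^{2}}.
```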
06:06
And then this bound here comes from the fact that when I put, you know, a singularity around the point, the moments of this variable exist if and only if it is smaller than four over gamma squared, but especially infimum two over gamma q minus alpha,
06:27
where alpha is the, so it's just a condition to ensure that I can, I'm allowed to take the moment of this variable. I have to look at what happens around each singularity here, and I have to look also, say, away from a singularity that everything's okay.
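Collecting the two constraints, the extended Seiberg bounds recalled above can be written compactly; this is a hedged transcription of what is on the board, with Q = gamma/2 + 2/gamma:

```latex
\forall k:\ \alpha_k < Q,
\qquad
-s \;<\; \frac{4}{\gamma^{2}} \,\wedge\, \min_{k}\,\frac{2}{\gamma}\,(Q-\alpha_{k}),
\qquad
s \;=\; \frac{\sum_{k}\alpha_{k} - 2Q}{\gamma}.
```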
06:41
Okay, so that's the, that's what I kind of, okay, maybe it was a bit abrupt, I mean, direct, but I introduced directly these correlation functions. I set this as a definition, and I tried to show to you why, you know, where these bounds come from. Okay, so today, something that was maybe not that clear on Wednesday on, you know,
07:07
in the first lecture is, I said that n is greater or equal to three, but in fact, I didn't really explain why. It's just that, and this is the computation I'm going to do in front of you, is if I look at, okay, I wrote them down here, if I look at these
07:24
two conditions, this implies n greater or equal to three. That's the point. And this is what I'm going to develop today, explain to you why, and then, okay, how do you do to define nonetheless a two-point correlation function, and when I try to define a two-point correlation
07:42
function, you know, the material of the Mating of Trees paper by Duplantier, Miller, Sheffield will enter the game. Okay, so let me first just give you back the definition of, you know, the three-point correlation function, because I'm going to be working on it.
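Here is a sketch of the computation announced above, namely why the bounds force n greater or equal to three: for n = 2 one has -s = (2Q - alpha_1 - alpha_2)/gamma, and the second extended Seiberg bound would require both

```latex
\frac{2Q-\alpha_1-\alpha_2}{\gamma} \;<\; \frac{2}{\gamma}\,(Q-\alpha_1)
\;\Longleftrightarrow\; \alpha_1 < \alpha_2,
\qquad\text{and}\qquad
\frac{2Q-\alpha_1-\alpha_2}{\gamma} \;<\; \frac{2}{\gamma}\,(Q-\alpha_2)
\;\Longleftrightarrow\; \alpha_2 < \alpha_1,
```

two strict inequalities that contradict each other, so no pair of weights satisfies the extended Seiberg bounds; the lecture works this out on the board below.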
08:03
Okay, so in the case of, so I defined, let me, let me define the three-point correlation function. So I can also send one point to infinity, so here the z k's are in the complex
08:22
plane, but I can send one to infinity, say this one, and okay, it comes out of a scaling relation that's in, if I send the third point to infinity, I have to renormalize to get something, so delta three, and so I also, so in the lecture notes, this is denoted c of alpha
08:49
one, alpha two, alpha three, but you can also denote it quite naturally like this. So this was my, so in the lecture notes, I know, I set this equal to this,
09:03
and I had this explicit expression, so this is, I'll be working on this explicit expression, so mu minus s, where s is, you know, the sum of the alpha one, alpha two, alpha three, minus two q for gamma, gamma of s, expectation. So I set this kind of definition with,
09:41
I send one point to infinity, and what do I get? I get this. Okay, so that was the definition
10:01
of the three-point correlation function, where one point is sent to infinity, and at the end of the last lectures, I explained what was the main theorem that we proved, that is, you know, the purpose of these lectures, is to prove that this thing, this expectation, is the same, has an explicit expression, which is called the D'Auziz formula.
10:25
Okay, so let me try to explain now, so of course, okay, we defined the three-point correlation function under these conditions, and let me try to argue, it's rather easy,
10:40
why there's no two-point correlation function, so, because in fact, it never really appeared in the first lectures, so if you want to define a two-point correlation function, say, the first thing you do is you set alpha two to zero, for instance, okay, so, and you try to, and you look at the bounds, so what are the bounds? The bounds are
11:01
alpha one smaller than q, alpha two smaller than q, and if I look, two q minus alpha one, sorry, let's say I'm taking alpha two to zero, so here are the bounds, so four over gamma,
11:21
infimum two q minus alpha one, two q minus alpha two, okay, so these are the bounds that I have to satisfy to define the two-point correlation function. Now what does this imply? This implies that this guy here is smaller than this, so saying that this is smaller than this,
11:41
it's very easy to see that, so this smaller than this implies, well, let me get it right, alpha one smaller than alpha three, right? This is the same thing, and so of course this guy, smaller than this guy here,
12:03
well, it's the other bound, it's alpha three smaller than alpha one, so of course you see it's obvious there's a contradiction, you can't have both of these strict inequalities, and in fact it shows something more, it shows that, okay, that if alpha one is different than alpha three,
12:27
then for all, okay, for all epsilon, or at least for epsilon small, okay, then for epsilon small, for epsilon positive and small, alpha one epsilon alpha three
12:52
does not satisfy the extended, the extended Seiberg bounds, so okay, maybe I'm going to
13:04
put in, well, let me write in the extended Seiberg bounds, okay, so that's one first. However, and this is, you know, the somehow the key observation if you want, if I choose
13:30
alpha in the interval gamma over two q, okay, then, so this is a non-trivial interval, right,
13:40
because this is gamma over two plus two over gamma, then, okay, then four over gamma is bigger than two q minus alpha, so that's a, I let you do the algebra, and so what we have to look for, and for all epsilon strictly positive,
14:05
alpha, epsilon alpha does satisfy, okay, so does satisfy the extended Seiberg bounds, so this means that I can, so it's an easy computation, this is bigger than this, then
14:22
if I look at this, the condition is two q minus two alpha minus epsilon smaller than two q minus alpha, and this is obvious, for all epsilon strictly positive, and so this means that I can define, so conclusion of my discussion,
14:41
so, again, v alpha, if alpha is in here, v alpha zero, v, so what would be zero, okay, the two-point correlation function if you wish, this is infinity, however, for all epsilon strictly positive, v alpha zero, v epsilon one, v alpha infinity,
15:08
well, exists, so it's a three-point correlation function which exists, okay, so it means that we can define, if we take the same weights, we can define three-point, and now we, of course,
15:24
the natural thing to do if you want to define a two-point correlation function is to renormalize this guy, take the limit, and see if the limit exists, and that's the purpose of today's lecture, okay, and so the answer will be, you get something called the
15:41
reflection coefficient in Liouville theory, and which can be interpreted as the partition function of the quantum sphere measure introduced by Duplantier, Miller, Sheffield, okay, so let me start by, um, so I want to keep this, right, so I think I'm going to keep this this thing here right now,
16:10
and I'm going to erase my definitions, uh, anyways, you have the lecture notes for for this, and do I have, I have to, oh, here it is, so let me jump straight away to,
16:41
and the lecture notes what is called the, what is, goes under the name of lemma 3.4, so I'm going to prove directly lemma 3.4 by, by first, you know, assuming something that is completely non-trivial, but, so lemma 3.4, the Liouville reflection coefficient, so for all alpha
17:07
in the interval gamma over two to q, four R of alpha, so okay, this is some normalization, it's like linked to the two in the definition, it's because at the end you want to match with the DOZZ formula from physics,
17:22
but I mean it's not important, I can define the limit as epsilon goes to zero of epsilon c gamma, so the three-point correlation function, so I, okay, this is, okay, in the notations of the lecture notes, I chose to call, when I have one point at infinity,
17:42
I chose to call this the three-point structure constant, oh, by the way, I'm sorry, everywhere, of course, with respect to the lecture notes, I have to put a gamma and a mu, all these quantities depend on gamma and on mu, okay, so sorry, I let you,
18:02
in your mind, correct to put, so this thing, of course, c gamma also depends on mu, but we didn't stress the dependence on mu, maybe we should have in our papers, the dependence on mu is essentially trivial, so sometimes we don't stress its dependence,
18:23
so this thing exists, and it converges to something which has a nice probabilistic expression, okay, so it gives a, so this thing appears in the bootstrap approach to uville very often, and what we get here, what this theorem will provide is a nice probabilistic expression to the two-point correlation function, so I'm going to give
18:42
a proof of this by first, you know, taking for granted a completely non-trivial thing, so I'm going to introduce, so proof, okay, so the proof goes this way,
19:07
so rho of alpha epsilon alpha, so remember, I'm taking the expectation of this guy to some power, I'm going to take my lecture notes, wait, the statement of the lemma is the limit
19:39
exists, and our alpha is going to be defined in some way, I'm going to explain, so lemma is,
19:46
okay, I went kind of backwards compared to the notes, I wanted to, it's this limit exists, and then I'm going to spend an hour at the end of these lectures giving you a, you know, a probabilistic expression to this, so first, okay, it's going to be, so yes,
20:03
the statement of the lemma as I'm stating it here right now in front of you is this limit exists, yes, it's a non-trivial, okay, I can multiply by epsilon square, the limit exists,
20:24
okay, sorry, so let me explain how you prove this, so c gamma of alpha epsilon alpha is equal to two mu, so two over gamma q minus alpha minus epsilon over gamma,
20:44
so a constant, the gamma function times, so this is always the interesting part, so it's, you know, it's all this, the first trivial part of the, so it's
21:04
this thing here, I'm just, and the expectation of rho of alpha epsilon alpha to the power two over gamma q minus alpha minus epsilon over gamma,
21:20
and this thing, this random variable, it's this, so I haven't, so remember this is the maximum, yes, the maximum between x and one, okay, so sorry, so what I mean is that this thing,
22:06
the expectation part is, you know, I'm just copying what I did there, but in a specific case, it's given by this thing, and what I want to really register now is this formula for the moment, so of course when epsilon goes to zero, this goes trivially to, okay, mu to this
22:25
power, this goes to gamma evaluated here, so everything is about trying to understand what this random variable is doing, okay, the rest is a trivial matter, but I have to keep these terms on, because they're important, you know, to
22:45
match the physics and everything, okay, so what's going on? Let's look at this variable, so first let me explain what's roughly going on, when epsilon goes to zero,
23:05
I'm hitting the threshold two over gamma q minus alpha, and if I look at what's going on around zero, I said it doesn't have a moment of order two over gamma q minus alpha, so this is making this expectation blow up, I'm hitting exactly the moment where it explodes, okay, and so,
23:25
and this guy plays no role at all, roughly around zero, okay, and so you're going to admit, I'm going to let epsilon be zero here, it's not, this guy's playing no role, epsilon is playing a role in the fact that the moment is blowing up, so I'm going to replace this guy
23:43
here by, with epsilon equals zero, okay, it's pretty safe, and if I do that, I have to study this variable, and this variable is i alpha plus some other term, which I don't know, I called it i prime, okay, and what is i alpha? If x is less or equal,
24:04
it's what, it's this guy around zero, okay, where it's blowing up, and i alpha is, so roughly, it's not roughly, it's actually equal, i alpha is going to be the integral
24:20
for x less or equal to one of one over x to the power gamma alpha, okay, and it's i prime alpha, and in fact, you can see that if you do a change of variable x goes to
24:43
one over x, you can believe me, it's symmetric, it has the same law as this guy, it's by conformal invariance, what's happening at infinity and around zero is the same thing, so alpha and i prime alpha, they have the same distribution, okay, and here's the main tool,
25:02
the main theorem that I'm going to spend an hour on after, it's the following theorem, theorem 3.3 in the lecture notes, the probability that i alpha is bigger than t, it's equal to some r bar of alpha over t to the power two over gamma q minus alpha, plus an o, so something which is a constant over
25:32
t to the power two over gamma q minus alpha plus eta, where eta is positive, so of course, you know,
25:43
okay, this guy roughly, it is two, it's a sum of two variables, and you're hitting the threshold where the moment explodes, so what you want to do if you want to study this, you have to study the tails of this random variable, okay, and the main theorem I'm going to spend an hour on is this, so once you have this, what goes on, well, it's not very
26:07
complicated, right, I mean, roughly, okay, this guy, the tail of this is really concentrated around zero, okay, because the singularity is really adding some weight to the random variable,
26:22
and so roughly these two guys are independent, because, you know, if you're around zero or around infinity, you have a finite correlation, so the tails sum up, okay, so that's reasonable to believe that, you know, they have a finite, there's a finite correlation between the two variables here, and so roughly this implies, so let me say what this implies,
26:43
it implies that, okay, what do you do when you have two independent variables, well the tails, they just sum, so I get two r bar, the alpha over t, plus this o term, which is neglectable, okay, so we're done, this implies, okay, so I let you take home exercise,
27:13
this implies that if I multiply by epsilon, pi of alpha, so it's a simple exercise in probability that this implies that this thing
27:27
converges when epsilon goes to zero, I know everything now, I know completely the tails, so it converges too, so there's a, there's a gamma here, but two gamma, two q minus alpha
27:42
over gamma, r bar of alpha, and so I think no one will complain, so if I know, the take home message is the moment is blowing up, it's blowing up because of a singularity in zero and in infinity, the two guys are roughly independent, so provided I know the tail of one
28:07
of them, I know the tail of the sum, and so once I know the tail, when I'm going to the threshold where the moment doesn't exist, I can completely easily study what's going on, and up to some, you know, trivial terms, it's just the tail of the random variable,
28:21
okay, I think everyone trained in probability will find this, I hope, completely clear. Okay, so everything, you know, all the juice is contained in this theorem, but I first wanted to show you that, okay, it's just a question of moment blowing up, so the key is to understand the tail behavior of GMC, Gaussian multiplicative chaos,
28:44
random, okay, probably not, no, okay, I let you as an exercise, I don't think I need the, you know the eta positive here? Yeah, no, yeah, I think it should be true,
29:06
yeah, if I have a little o, but okay, the eta is very important in our work, because we need to analytically continue these moments, so we have to take out poles, so you won't see these in these lectures, there are lots of technicalities like
29:22
taking out poles of these guys, and if you want to take out poles, you need control on the second bound, these are very important bounds in our work, but here, no, it plays no role, but okay, I'm stating, actually anyways, I'm not going to prove you, you know, this term, I'm going to explain to you where this term comes from.
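The "take-home exercise" from a moment ago can be checked on a toy model where everything is explicit: replace the GMC variable by a Pareto variable with the same tail exponent p (playing the role of (2/gamma)(Q - alpha)). This is only an illustration of the general fact that a tail P(X > t) ~ R-bar/t^p forces epsilon times the (p - epsilon)-th moment to converge to p times R-bar; the variable and the exponent here are assumptions of the sketch, not the actual objects of the lecture.

```python
# Toy check (not the actual GMC variable): a standard Pareto variable
# with P(X > t) = t**(-p) for t >= 1 has E[X^q] = 1 + q/(p - q) for q < p,
# so hitting the critical moment gives eps * E[X^(p - eps)] -> p as eps -> 0.

p = 1.5  # stands in for (2/gamma) * (Q - alpha)

def pareto_moment(q, p):
    """E[X^q] for P(X > t) = t^(-p), t >= 1 (finite only for q < p)."""
    return 1.0 + q / (p - q)

for eps in [0.1, 0.01, 0.001]:
    val = eps * pareto_moment(p - eps, p)
    # algebraically val = eps * (1 + (p - eps)/eps) = p for every eps > 0,
    # so the limit is p; for a tail R/t^p it would be p * R
    print(f"eps = {eps:>6}: eps * E[X^(p-eps)] = {val:.12f}")
```

The blow-up of the moment at the critical exponent is governed entirely by the tail constant, which is exactly why the tail theorem for i alpha stated above identifies the limit of epsilon times the three-point function.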
29:42
Okay, so this, so now I'm going to spend time on proving this, I'm going to give you a probabilistic expression for this, if I have a probabilistic expression for this, then I get a probabilistic expression for the r alpha here, because r alpha at the end is going to be, so if I sum up everything, let me just
30:06
say what r alpha is, so at the end of the day, what is r alpha? This limit here, the reflection coefficient of uville, so r alpha is 4 mu to the power 2 over gamma q minus alpha
30:33
over gamma of minus 2 over gamma, so the gamma function, 2q minus alpha over gamma,
30:45
r bar of alpha, where r bar of alpha is still a bit abstract for you guys, but I'll be discussing in a moment, okay, so that's the two-point correlation function of Liouville. So now everything is about understanding this, okay, and at the end you'll get an exact formula
31:04
for this, which comes out of DOZZ as a corollary, so where's the, oh it's here, okay, so let me introduce some, so now I have to describe this tail, so I have to introduce some
31:21
material, and so all things that here people know rather well, I think, but still have to refresh memories, it's the Williams, so let me start by introducing the, by recalling the Williams decomposition theorem, okay, so what is the Williams
31:45
decomposition theorem? So I think it's stated in the lecture notes as lemma 3.1, but it's hard to read, it's better to do a drawing, and that's what I'm going to do,
32:01
so if I take, so I consider, no I can wander a bit more, so I consider
32:21
a Brownian motion with a negative drift, a negative drift, so I look at a Brownian motion minus nu s, and nu is positive, so I'm looking at a Brownian motion, so of course it goes to minus infinity at infinity, as you remember, this
32:44
is basic stochastic calculus, so, and of course it's going to hit a, it's going to hit a maximum, so if it goes to minus infinity, it's going to hit a maximum, so and the maximum m, so this is m, the maximum is the supremum of my Brownian motion, and okay, it's a very known
33:09
fact that the probability that m is bigger than x, so it's distributed like an exponential variable, so the probability that m is bigger than x is exponential minus 2 nu, so nu is the drift,
33:25
times, so this is, okay, and so the Williams decomposition is the following thing, so I'm going to shift my curve here down and recenter it, okay, and, and so
33:47
the Williams decomposition, it says that the following thing, so if I take,
34:03
if I take that curve up there, and I take the, around what's going on around the maximum, what do I get, so let me try to reproduce a, I guess, okay, you get this, and on the other side you get what, you get this, okay, and so here, what do I have, I have minus m,
34:33
minus the maximum, and the Williams decomposition says that if I take a Brownian motion and I,
34:42
I look at what's around the maximum, conditionally on the value of the maximum, I get on this side, so I think I called it b2t, it's a drifted Brownian motion, conditioned, so it's a drifted Brownian motion, but of course it's, it's below zero,
35:05
so it's conditioned to be negative, non-positive, so this is, you know, a standard diffusion which has been studied for years in probability, so I get a drifted Brownian motion conditioned to be less or equal to zero, and on the other side, what do I get if I, if I look at things
35:28
backward like this, I get the same thing, a drifted Brownian motion conditioned to be negative,
35:41
so for all s positive, I'm negative, I get the same thing, except that I'm not looking at the full trajectory, I'm looking at the trajectory between zero and l minus m, and l minus m, well, you see, it's the last time, it's the last time that
36:08
my drifted Brownian motion hits minus m, so the take-home message is when I decompose my trajectory on my Brownian motion around the maximum, I first sample the maximum according
36:23
to an exponential variable, and then if I, you know, if I recenter, what do I get on the right, I get a Brownian motion conditioned to be negative with the same drift, and on the other side, if I look at it this way, I get the same thing, but I stop it at the last time it hits minus m, so that's the Williams, so that's the Williams decomposition lemma that we're
36:45
going to use, so quite naturally, I'm going to introduce some definitions which I'm going to use now in the sequel, I'm going to introduce, of course, okay, this picture, but on r,
37:00
I'm going to extend this everywhere, and of course, you know, somehow what's going to happen is that I'm going to look at tail events, so m is going to go to infinity, and so naturally I'm going to have a two-sided drifted Brownian motion conditioned to be negative, so let me introduce some notations, so here now I'm really following, now I'm really following
37:23
3.2 tail expansion of GMC, page 12, okay, so let me introduce the main, all the main guys of this thing, so I introduced, I think, I don't know what I call this, a mathcal B, so I'm going to
37:58
introduce the two-sided drifted Brownian motion, so if s is negative, so this is a straight b and
38:08
this is a curvy b, okay, b alpha s if s is positive, and b alpha s, I put a bar on this
38:25
one here, and b bar alpha to distinguish them are independent, okay, so b m, so Brownian motions, conditioned to be negative and with drift, alpha minus q, or if in the picture that
38:48
I just wrote, I take, I take the drift to be minus nu with nu positive, and nu is given by q minus alpha, okay, so these are two independent Brownian motions, so conditioned to be non,
39:02
with drift, so with drift and conditioned to be negative, to be less or equal to zero, okay, so this is, okay, for those who know a bit of these stories, when I'm going to study
39:24
the exponential of the free field, there's going to be a radial part, the radial part is going to be described by this, by, well, by this guy, sorry, and the non-radial part is going to be defined by what, you know, Duplantier, Miller, Sheffield called the lateral noise, and I'll introduce the lateral noise right now, somehow the lateral noise in all these
39:43
businesses, what makes things kind of, you know, simpler to study when you have two points is you have lots of symmetry and somehow the lateral noise plays no role, essentially, you can all, you can state all the theorems on, on toy models where there's just a Brownian motion, and essentially the lateral noise plays no role, but we have to introduce it,
40:06
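Before moving on, here is a small toy illustration of the radial object just described — a drifted Brownian motion conditioned to stay nonpositive. This is my own sketch, not from the lecture notes: conditioning at the starting point is singular in the continuum, so the sketch works on a discretized path over a finite horizon and uses naive rejection sampling; all parameter values are illustrative.

```python
import math
import random

# Toy sketch (my own illustration, not from the lecture notes):
# sample a discretized Brownian path with drift -nu, nu = q - alpha > 0,
# conditioned to stay <= 0 on a finite horizon, by rejection sampling.
# On a short horizon the acceptance rate is high enough to be practical.
def conditioned_drifted_bm(nu, horizon=2.0, dt=0.01, rng=None):
    rng = rng or random.Random(0)
    n = int(horizon / dt)
    while True:  # reject any path whose running value goes above 0
        path, x, ok = [0.0], 0.0, True
        for _ in range(n):
            x += -nu * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0)
            if x > 0.0:
                ok = False
                break
            path.append(x)
        if ok:
            return path

nu = 0.7  # stands in for q - alpha, with q = gamma/2 + 2/gamma
path = conditioned_drifted_bm(nu)
print(len(path), max(path))  # full-length path; the maximum is 0, at time 0
```

The rejection step is exactly the conditioning: only paths that never cross zero survive, and the negative drift makes survival likely enough on a short horizon.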
so what is, the lateral noise is going to be the non-radial part of the Gaussian free field around zero, that will be the radial part which appears, so the lateral noise process, it's a Gaussian field with this covariance, so I'm going to switch to the cylinder, so
40:31
s is going to belong to R, and theta is going to belong to zero to two pi, okay, and, you know, the complex plane, by a conformal map, is the same thing as a cylinder,
40:47
R times zero to two pi. So I'm going to introduce this guy — I'll explain in a moment why this appears — but the lateral noise process is just a Gaussian field with a log covariance, with this covariance: I take the maximum of exponential minus s and exponential minus t,
41:07
and I get this. And if I have this thing defined on R times zero to
41:21
two pi, I can introduce the Gaussian chaos measure, the exponential of this guy — yesterday, in the first lecture, sometimes I forgot the two, but of course there's a two here —
41:43
okay, this is a standard Gaussian chaos measure, and I'm going to be interested in slices on a fixed s, so I'm going to introduce this thing, so integrating this guy on a slice with fixed s in my cylinder, so things will become quite clear I hope in a few,
42:12
so this is lateral noise process, so a way of constructing this thing is I take a Gaussian
42:22
free field in the full plane, I take out the radial part, what do I get if I map it to the cylinder, I get this guy here, okay, so this is a generalized notation because this thing will only exist in the space of distributions, what makes sense is
42:40
integrating Z_s against ds; it's not necessarily defined as a function on a slice, okay, but I'm still going to use this notation. A very important property — still with my abusive notation, as if it were a random function; it's a random generalized function — is
43:00
that it's stationary: if I translate this guy on the cylinder, it has the same distribution, okay. So I'm going to introduce,
43:31
did I erase the notation? Now that I have all the material, I'm going to introduce rho of alpha, okay, and this notation should be seen, kind of, as the limit of
43:41
rho of alpha, epsilon, alpha as epsilon goes to zero in my picture — oh, it's okay, it's written up there —
44:03
so it's this variable: I take my two-sided drifted Brownian motion conditioned to be negative, with drift minus (q minus alpha), and I integrate its exponential against the lateral noise process,
44:23
and now I can introduce the tail coefficient r bar of alpha: r bar of alpha is the expectation of rho of alpha to the power 2(q minus alpha) over gamma. So here's my definition,
44:54
yes it's a definition, definition, definition, okay, so let me give,
45:26
No, it's not equal in law to rho of alpha, zero, alpha, no, no — you'll see in the proof, it's not; anyway it can't be, because the expectation of rho of alpha, zero, alpha
45:46
to the power 2(q minus alpha) over gamma is infinity, it's blowing up, whereas for this guy it's not. Why? Okay, let me explain. This thing here, what you should see,
46:02
it's really, it's the same thing, so it's the same thing as the exponential of a log correlated field with no singularity, you should really see this variable as kind of the exponential of a log correlated field with no singularity, this guy, okay, it's the exponential of a log
46:25
correlated field, so remember it's against dx, but so never mind, okay, this is max of x and one, so it's one, this guy, it's the exponential of a log correlated field, you know,
46:48
when I say log correlated field, it means it explodes on the diagonal, you know, it's log one over x minus y, but it has a something blowing up around zero, and this guy here, and the blow
47:00
up is due to the maximum over there, in fact, and when you condition the thing to be negative, you're killing the singularity. So let me stress once again: the expectation of rho of alpha, zero, alpha to the power 2(q minus alpha) over gamma — maybe it's a bad notation, I don't know —
47:20
is worth infinity, that's the whole point, you can't define the two-point correlation. Now here's something I'm going to admit, which is that, roughly, this guy is just an ordinary log-correlated field with a divergence on the diagonal, okay, so
47:46
just like I said in the first lectures, when I integrate the exponential of a log-correlated field, it has moments of order p up to four over gamma square — this is, you know, set in stone, right. So if I take any interval included in R, the expectation of the integral of Z_s ds, to the power p,
48:07
which is equal to, you know, the expectation of my Gaussian chaos measure — how did I call it, M gamma — integrated on the interval, to the power p; this thing is finite,
48:26
it's just an ordinary log correlated field if and only if p is smaller than four over gamma square, okay, this is the important thing, and what we're going to admit now
48:42
is that since I'm integrating the exponential against a weight, not over an interval, but against something that is, you know, really going fast to minus infinity around plus and minus infinity, it doesn't change the moment property, okay, it doesn't change them, and so,
49:09
what I want to say is that the expectation of rho of alpha to the power p
49:33
is finite for p smaller than four over gamma square, just as with an ordinary exponential of a Gaussian free field,
49:43
and so, in particular, the definition is well posed, because 2(q minus alpha) over gamma is smaller than four over gamma
50:01
square if and only if alpha is bigger than gamma over two, so I have a well-defined random variable here, okay. So I think that I'm going to take a two to three minute break, because I've introduced everything I need now, so you can digest a few minutes
50:25
and maybe ask me questions, then we can start again in five minutes, okay, so I introduced the material to understand, it's still written, it's going to be more transparent in a minute I hope, I introduced the material to understand the tail of this variable,
50:46
and I told you, okay, that in theorem 3.3 we proved this, and I introduced the r bar alpha which works, okay, and this is what I'm going to justify in this hour, okay, but I had to introduce lots of material, and let me just say something again which is all
51:07
over the lecture notes and I haven't justified, but I don't know if I'll have time, if I take the exponential of a log-correlated field and I integrate it on some open set, it has a moment of order p for all p smaller than four over gamma square,
51:23
and since, you know, this lateral noise business is just a log-correlated field, if I take a compact interval and I integrate, then I'm integrating the exponential of my log-correlated field in — okay, this is not an open set but a compact set, sorry — in a compact set, and this is finite if and only if I have this,
51:43
and the fact that here this guy is going very quickly to minus infinity when s goes to plus infinity and minus infinity, it says that roughly it's like integrating this exponential of log-correlated field on a compact interval, okay, so I hope this is clear.
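The moment threshold invoked here is elementary algebra; writing it out (my own one-line check, using q = γ/2 + 2/γ):

```latex
\frac{2}{\gamma}(q-\alpha) < \frac{4}{\gamma^2}
\;\Longleftrightarrow\;
q-\alpha < \frac{2}{\gamma}
\;\Longleftrightarrow\;
\alpha > q - \frac{2}{\gamma} = \frac{\gamma}{2},
```

so the moment defining r bar of alpha is finite precisely when alpha is bigger than gamma over two.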
52:05
Okay, so let's go, theorem 3.3, so everything's going to be I hope clear on why this appears, so I'll say it in two languages, one which is very analytic and one maybe which is more
52:22
geometric. So I want to study I of alpha — remember I of alpha up there — so my Gaussian free field has this covariance: it's log of one over the modulus of x minus y, plus log of x-plus, plus log of y-plus, where x-plus is the maximum of the modulus of x
52:51
and one. So if I'm inside the ball of center zero and radius one, this thing disappears,
53:01
right — this term is zero and this term is zero — so inside the ball, for my GFF, I can take out these terms, okay. Now I'm going to do a change of variable, mapping my ball of radius one and center zero to a cylinder — so it's not a cylinder,
53:27
it's a half cylinder, now if I look at the covariance, it's nothing but this and it's this, the minimum of s and t plus log of this, this is a straightforward computation,
53:53
okay, this is straightforward, this plus log of the upstairs here is zero,
54:05
okay, so I see that, so if I'm writing it at the level of covariance, but if you want it in a more geometric picture, you take your Gaussian free field, you take the radial part, so this corresponds to taking, this is the covariance of the radial part
54:24
of my Gaussian free field and this, so the covariance of the radial part is this and plus the covariance of the lateral noise, okay, so I did it
54:42
as a simple computation or you can see it geometrically, I take my Gaussian free field, I project it on the circle of radius exponential minus s and what's left is an independent Gaussian field which has covariance this, the lateral noise, okay, and of course,
55:02
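The claim that the radial part is a Brownian motion can be sanity-checked numerically (my own illustration, not from the notes): averaging the kernel log 1/|x − y| over the circle |x| = r, with y fixed at radius rp ≠ r, gives log 1/max(r, rp); with r = e^(−s), rp = e^(−t) this is exactly min(s, t), the Brownian covariance.

```python
import cmath
import math

# Sanity check (my own illustration): the circle average of the log kernel
# log 1/|x - y| over |x| = r, with y fixed at radius rp < r, equals
# log 1/max(r, rp).  After r = e^{-s}, rp = e^{-t} this is min(s, t),
# i.e. the covariance of a standard Brownian motion.
r, rp = 0.5, 0.2
n = 100000
total = 0.0
for k in range(n):
    theta = 2 * math.pi * k / n
    x = r * cmath.exp(1j * theta)          # point on the circle of radius r
    total += math.log(1.0 / abs(x - rp))   # kernel against the fixed point rp
avg = total / n

print(avg, math.log(1.0 / max(r, rp)))     # both close to log 2
```

The agreement is just the mean value property of the logarithm (harmonic away from its singularity), which is what the geometric picture in the lecture expresses.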
everyone recognizes here, this is Brownian motion, so this guy, the radial part is distributed like a Brownian motion, okay, so now what am I going to do? Well, I'm going to do a change of variable in i of alpha and see Brownian motion and lateral noise appear and
55:23
then take the limit, so let's go, i of alpha is the integral for x less or equal to one,
55:52
okay, so I'm going to do the full computation at a formal level, but if you want to justify
56:01
rigorously, you just add cutoffs here and then make them go to zero. So I'm doing my change of variable, I set x equal to e to the minus s times e to the i theta, so I get what? An integral from zero to plus infinity, and I get a Jacobian e to the minus 2s, because, you know, if I do
56:25
polar coordinates it's r dr d theta, and since I'm taking an exponential of minus s, I get a two here, so it's standard; this guy becomes exponential of gamma alpha s, and I get, of course, okay,
56:56
so let me emphasize that it's very important to always write this explicitly, this guy here,
57:02
you know, this hidden normalization, if you don't, you're going to end up saying something false, so now I decompose my x over there, okay, it's the sum of two independent parts, so I'm going to call this bs by the way, so this is my bs, my Brownian, so it's a Brownian motion, right, with this covariance, so I get, so x is a Brownian motion plus an independent guy,
57:26
so I get the integral from zero to plus infinity of exponential minus 2s — so there's a typo, by the way, I wrote alpha s in the notes and there's a typo, there's a gamma in front, okay — and, still going a bit slowly, I'm going to write things explicitly:
57:51
I get exponential of gamma times my Brownian motion, minus, you know, the term that comes out of this, times the lateral noise, okay, and so, at the end of the day,
58:24
I get the integral of exponential of gamma times a Brownian motion with drift minus (q minus alpha), times — I integrate over d theta — so Z_s ds; remember, I wrote it here,
58:48
zs is integrating my lateral noise on the slice, so I get this expression, so this is the very, this is the very important expression, the key thing, where you see how this guy is going to
59:00
appear already, right, okay — because, you know, the Brownian motion is going to have a maximum, we're going to zoom around the maximum, and we're going to see these two parts, the two-sided guy, appear here, okay. So let me continue: now
59:56
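For the record, the chain of substitutions just performed can be written out as follows (my own write-up, at the formal level; cutoffs make each step rigorous, as said above). Recall q = γ/2 + 2/γ, so γα − 2 − γ²/2 = −γ(q − α):

```latex
\begin{align*}
I(\alpha)
  &= \int_{|x|\le 1} \frac{1}{|x|^{\gamma\alpha}}\,
       e^{\gamma X(x) - \frac{\gamma^2}{2}\mathbb{E}[X(x)^2]}\, dx
     && (x = e^{-s}e^{i\theta},\ dx = e^{-2s}\,ds\,d\theta)\\
  &= \int_0^{\infty} e^{\gamma\alpha s}\, e^{-2s}\,
       e^{\gamma B_s - \frac{\gamma^2}{2}s}\, Z_s\, ds
     && (X = \text{radial part } B_s + \text{lateral noise})\\
  &= \int_0^{\infty} e^{\gamma\,(B_s - (q-\alpha)s)}\, Z_s\, ds,
\end{align*}
```

where the lateral-noise normalization is absorbed into the slice measure Z_s ds.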
I used the Williams decomposition, which
01:00:00
I erased, and it says this. So now my Brownian motion, of course, it has a maximum, so it's equal in distribution, or in
01:00:21
law, okay, to what? I take out the maximum, so exponential of gamma M, where M here is the maximum of this guy, this Brownian motion, and if
01:00:41
I shift, I get the integral from minus L sub minus M to plus infinity of exponential gamma of my curly B at s, times the shifted Z, ds — so I take the supremum of my radial part, you know,
01:01:08
my drifted Brownian motion, minus the drift that appears naturally — and, as I said, the Williams decomposition says that if I look around the maximum, what do I get? On the right I get this drifted Brownian motion conditioned to be
01:01:24
negative, and on the left I get the same thing, up to the last time it hits minus M. Now here, remember, this guy is independent from this guy, okay, the lateral noise is independent from this Brownian motion, so
01:01:41
in this picture I'm shifting, but this guy is independent of this guy, and it's stationary, so I'm allowed to take out the shift here, so it's equal in law to exponential gamma M times the integral from minus L sub minus M to plus
01:02:02
infinity of exponential gamma B alpha of s, Z of s, ds, okay. So now we're almost there, right? So this is the variable we have to
01:02:23
study, so the variable is this, okay, so this is I of alpha, right, so
01:02:46
I of alpha is equal in law to this — it's in distribution. So what does it say? It says that for this I of alpha I have trivial bounds: it's less than exponential gamma M times the full two-sided integral, which is exponential gamma M times rho of alpha, so it's clearly below that, okay,
01:03:07
and it's clearly above exponential gamma M times the integral restricted to the positive side. So —
01:03:25
okay, what is nice here is that these tails are obvious, right? This guy here, the maximum: well, I know that M is an exponential variable with parameter 2(q minus alpha), so in fact
01:03:43
this is completely explicit: the tail of exponential gamma M is equal to 1 over t to the power 2(q minus alpha) over gamma, there's no other term here, it's an exponential variable, it's completely explicit. And this guy here, well, it's independent, and so
01:04:03
I have a random variable here which has a tail of this form, and an independent variable here which has a tail — okay, at least it has a tail which is smaller than 1 over t to the power p for all p less than 4 over gamma square. So it's easy: you have two
01:04:27
independent variables, and it's a scaling argument. So this guy here — this is rho of alpha — the tail of the product, well, by a simple scaling, is nothing but the expectation of rho of alpha to the power
01:04:52
2(q minus alpha) over gamma, divided by t to the power 2(q minus alpha) over gamma, okay, up to some
01:05:06
little correction, because this is only valid if t is bigger than one, okay. So these two guys are independent, so I wrote this probability conditionally on rho, and then I applied my scaling; they're independent, so I
01:05:21
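In formulas, the scaling step sketched here reads as follows (my own write-up of the computation; M is the maximum, an exponential variable of parameter 2(q − α), independent of ρ(α)):

```latex
% For u >= 1:  P(e^{gamma M} > u) = P(M > (log u)/gamma) = u^{-2(q-alpha)/gamma}.
\begin{align*}
\mathbb{P}\big(e^{\gamma M}\rho(\alpha) > t\big)
  &= \mathbb{E}\Big[\mathbb{P}\big(e^{\gamma M} > t/\rho(\alpha)\,\big|\,\rho(\alpha)\big)\Big]\\
  &= \mathbb{E}\Big[\big(\rho(\alpha)/t\big)^{\frac{2}{\gamma}(q-\alpha)}
       \mathbf{1}_{\{\rho(\alpha)\le t\}}\Big]
     + \mathbb{P}\big(\rho(\alpha) > t\big)\\
  &= \mathbb{E}\Big[\rho(\alpha)^{\frac{2}{\gamma}(q-\alpha)}\Big]\,
       t^{-\frac{2}{\gamma}(q-\alpha)}
     + o\big(t^{-\frac{2}{\gamma}(q-\alpha)}\big),
\end{align*}
```

the error terms being lower order because ρ(α) has moments of every order p < 4/γ², strictly beyond the exponent 2(q − α)/γ when α > γ/2.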
get this, right. What? No, not yet — I'm saying I get this; if I look at this other guy I get the same thing, except I don't get rho of alpha, I get — okay, what am I saying? I'm just
01:05:43
saying that the tail of this guy is clearly between two constants times 1 over t to the power 2(q minus alpha) over gamma, okay. So now it's easy, because look at this: this is I of alpha, okay, so what happens when I look at the
01:06:02
tail? If M is bounded, I'm going to end up with a tail here which is of this order, so if I look at the probability that this guy is big, necessarily M is big, so
01:06:25
if I of alpha is big, roughly, on this event, M has to be big, and if M is big, then this lower bound of the integral is going to minus infinity. And so what I tried to argue — and I didn't give
01:06:41
you all the technical details — is that, on the event that this guy is very big, it looks like exponential gamma M, where I replace the lower bound by minus infinity, times rho of alpha; and at the end, if you work a bit, you just have to control what's going on if M is not very big, and you can actually get even
01:07:02
bounds, you know, correction terms — and this explains, I erased it, but at the end of the day, if you sum up these considerations — that the probability that this guy is big is roughly the expectation of rho of
01:07:21
alpha to the power 2(q minus alpha) over gamma, divided by t to the power 2(q minus alpha) over gamma, okay; and if you work, you get the second bound, etc., the second order, okay. So I think that this explains why you see these kinds of objects, and so let me
01:07:44
explain why we call this — oh, what time is it? I think I'm going almost too quickly, but I wanted to show this in great detail — and
01:08:03
so let me recall what the quantum sphere is. Ah, I'm erasing exactly the wrong thing — let me not erase this, sorry. So I
01:08:41
have my measure here, so why do I say this is the partition function? It's because — so, definition: let me call this the alpha quantum sphere, so,
01:09:21
alpha quantum sphere — and Duplantier, Miller and Sheffield considered only the case alpha equals gamma, but okay. So the alpha quantum sphere is going to be the following random measure, defined up to
01:09:44
translations, on the cylinder. So if I take a function defined on measures, then the alpha quantum sphere, well, it's exactly the measure associated to this guy, okay — or no,
01:10:04
it's going to be the measure, F of the unit-volume one: I take my lateral noise GMC measure, and I multiply by the exponential of the
01:10:20
two-sided drifted Brownian motion conditioned to be negative, and I divide it — I want a unit-volume guy — by the total mass, okay, by rho of alpha; rho of alpha is this guy, it's the total mass, so if I integrate this on the cylinder I get one, it's unit volume. But it's
01:10:42
very important that it has an extra term here, a Radon-Nikodym derivative, which is this, the total mass, okay, and if I want this to be a probability measure, then I have to divide — remember, this is
01:11:06
r bar of alpha — and so, okay, it's very natural to interpret this as the partition function of the alpha quantum sphere, okay. So I wanted to give this definition because I realized that, for most
01:11:24
people, it was not clear that there was this term here in the definition, there's a Radon-Nikodym derivative: it's not just, you know, this radial part which I've conditioned to be negative, times the lateral noise; there's also a Radon-Nikodym derivative which comes out naturally, and
01:11:44
so what you really have to understand is that the mating-of-trees paper is really constructing random volume forms associated to Liouville conformal field theory with two marked points, one at zero, one at infinity,
01:12:01
okay. And what I did today is I showed you how you construct the partition function, so the two-point correlation function, out of the three-point correlation function; but similarly, I gave a
01:12:25
definition in lecture three, and in the same way, if you take the Liouville volume form associated to the three-point correlation of Liouville theory we've constructed, with weights alpha,
01:12:41
alpha, epsilon, you're going to converge to a random measure in the space of quantum surfaces, and it's going to be this. So what I did for the two-point correlation function, you can also very easily lift to the convergence of random measures, okay. So this is nice, because it really shows how the link between the two works,
01:13:05
and it had to be that way: they have two marked points, so it's the Liouville two-point correlation function, okay. So I realized that I still have some time, so maybe I'm going to explain one or
01:13:23
two points that I admitted on the moments of order p, and then probably — I don't know if it's a problem if I finish a bit earlier than expected — okay, well, I hope it's clear, all this that
01:13:44
I'm saying. Okay, so let me explain something in a very simple way, something that I've been admitting in lecture one and lecture two.
01:14:04
I'm not going to exactly prove this, but something I use very often is this bound, 4 over gamma square. So imagine I take some ball of center x and radius R, and I told you something which is important: if
01:14:33
I integrate the exponential of the free field on it, this is finite —
01:14:45
I take, of course, gamma in (0, 2) — if and only if p is smaller than 4 over gamma square. So let me try to explain to you where this 4 over gamma square can be seen very easily, okay, so this is
01:15:05
maybe a little digression. Let me do the following thing — forget about the other terms, they play
01:15:22
no role, let's say I have a nice Gaussian log-correlated field here, okay. Of course this will not depend on R: if you show it for one open set, it's true on all of them, you just cover them. So I want to explain how you can see, in a three-line computation, where this
01:15:48
threshold comes from. So, do it like this: take a square, okay, so I'm going to integrate on —
01:16:15
this is a square, I call it C, and I can cut it into four squares, okay, and so
01:16:24
the integral over the square of my exponential, of course, I can write as the sum over the four sub-squares, and so, taking p bigger than one, by
01:16:43
superadditivity I have that the integral on the square, to the power p, is greater or equal to the sum of the integrals on the sub-squares to the power p, and this
01:17:08
is a stationary process, okay, so it's bigger than the sum — remember, (a plus
01:17:22
b) to the power p is greater or equal to a to the p plus b to the p if p is greater than 1 — so I get this on each sub-square, and by stationarity I get four times the expectation on one sub-square, so
01:17:48
four times the expectation of exponential gamma X etcetera integrated, to the power p. So this is obvious, so —
01:18:22
these are not in the lecture notes, I can put them in, okay. Now, the integral over the square of side one half, okay, what is this? I do a change of variable, x is
01:18:48
u over 2, and I get what? The dx becomes du over 2 squared, so I get 1 over 2 squared times the integral on the
01:19:01
initial square, okay, of the field X at u over 2 — and, once again, what I'm doing here is at a formal level; you can do it rigorously, you just put a cutoff here, do all my stuff, and
01:19:24
go to the limit, okay. Now let's look at this guy: the covariance of X at u over 2 is — so I was playing around with these relations ten years ago when
01:19:42
I started GMC theory, and I really love these little manipulations, so I hope you'll like them too — it's log 2 plus log 1 over the modulus of u minus v. So what does it mean? It means that this guy has the same
01:20:03
distribution as X at u, the initial guy, plus an independent random variable, okay, a Gaussian variable, centered, with variance log 2, okay. So this guy has the same distribution as my initial guy: 1 over 2 squared times
01:20:29
exponential of gamma times a Gaussian variable minus gamma square over 2 times its variance, times the integral over the square, but this time not of X at u over 2 but of X at u, my initial guy. Now
01:21:01
I plug in this inequality: this guy integrated on the
01:21:22
square is bigger than 4 times the same guy integrated on the C1 square, but I know that, integrated on the first square C1, I can relate it: it's the same thing as the initial guy times an independent Gaussian variable. So let's plug this in. I get that the
01:21:46
expectation of the guy on the standard square C, to the power p, is greater or equal to 4 times — what is this guy? — 1 over 2 to the power
01:22:09
2p, times the exponential of gamma square p square over 2 times log 2, minus gamma square p over 2 times log 2, times the same expectation,
01:22:31
to the power p. So of course, if I want this to be compatible, I have to check that this prefactor is less or equal to 1, otherwise the moment is infinite, I mean, it's a contradiction. So let me write it — so you're
01:22:31
now actually going to see, you know, the KPZ formula here — this is the KPZ formula on the Hausdorff dimensions, if you like, it's the way
01:22:41
the measure scales when you start zooming in. So what is this? If I put this together, this prefactor is equal to 2 to the power 2 minus zeta of p, okay, so I get the expectation of my business to the power p greater or
01:23:01
equal to the same expectation times 2 to the power 2 minus zeta of p, where zeta of p is equal to (2 plus gamma square over 2) p minus gamma square over 2 p square — the quadratic KPZ relation, okay. And you can check that
01:23:26
zeta of 4 over gamma square is equal to 2 — take-home computation. So what does zeta look like? It's
01:23:44
concave — let me not get this wrong — something like this, and then it starts going back down, etc., and — wait a
01:24:04
second, yeah — at 4 over gamma square it's starting to go down, so if p is above that, this factor 2 to the power 2 minus zeta of p is going to be greater than 1, okay. So that explains this thing here, it explains the threshold that
01:24:22
I've been using all along; it's a very simple scaling argument. And actually, let me finish with a few minutes explaining, roughly by the same kind of trick, why, if gamma is bigger than two,
01:24:40
the measure is zero, okay. I hope this is useful, I think you can really explain these thresholds very easily. So I'm going to explain now why gamma has to be smaller than two; it's the same kind of ideas, I've
01:25:07
already done some of the job here. So I introduce zeta of alpha equals (2 plus gamma square over 2) alpha minus gamma square over 2 alpha square — my quadratic relation, you know, which is
01:25:24
the way the moments of the measure scale when you zoom in around a point. Now, if gamma is bigger than 2, zeta prime of 1 is negative, okay:
01:25:42
zeta prime of alpha is equal to 2 plus gamma square over 2 minus gamma square alpha, so it's easy to see that zeta prime of 1 is negative, and I have zeta of 1 equals 2, okay. So this means that, if gamma is bigger than 2,
01:26:11
then zeta is like this, so here there's 2 and it's doing this, okay. So
01:26:29
zeta is a concave function, zeta of 1 is obviously 2, that's trivial, and gamma bigger than 2 is equivalent to the derivative at 1 being negative, so it
01:26:44
means that I'm like this, okay, so this means that I can find alpha strictly smaller than 1 such that zeta of alpha is strictly bigger than 2, okay.
01:27:15
Now, if I take alpha strictly less than 1, I can
01:27:28
use subadditivity, right, this is true, and so I do the same game: I take my square, okay, and I cut it into little squares of size 1 over 2 to the power n,
01:27:51
okay, so I cut it into
01:28:03
little squares of side 2 to the power minus n, and I have 2 to the 2n of these squares, right, chopping it into little pieces, and I can do the same thing but the other way around — it's subadditive, not superadditive, because alpha is less than 1 — and what do I get? I get that if I integrate the
01:28:24
measure on my square, to the power alpha, I'm going to get less or equal to 2 to the 2n times 2 to the minus n zeta of alpha times the same expectation, because in each
01:28:45
square I scale like this, and so, since zeta of alpha is strictly bigger than 2, I let n go to infinity and I get 0. So this shows that gamma equals 2 is
01:29:04
really the threshold where the thing starts to become 0, and also these little manipulations explain to you why, when I take a log-correlated field, I get this bound p smaller than 4 over gamma square: it's just looking at the scaling properties of the measure. And for those who
01:29:24
know, this zeta of p is really behind, you know, the geometric KPZ relations, okay. So I'm going to stop here — I finished a bit early, but I hope you don't mind; I mean, I could go into lecture three, but I think it's not a good idea, right. Okay, thank you.
01:30:00
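The two scaling arguments at the end of the lecture can be checked numerically (my own illustration; the function zeta and the parameter values are mine):

```python
# Numeric check of the multifractal exponent
#   zeta(p) = (2 + gamma^2/2) p - (gamma^2/2) p^2
# used in the two scaling arguments above.
def zeta(p, gamma):
    return (2 + gamma**2 / 2) * p - (gamma**2 / 2) * p**2

gamma = 1.3                # any gamma in (0, 2)
p_c = 4 / gamma**2         # critical moment order

# zeta(1) = 2 and zeta(p_c) = 2: the superadditivity factor
# 2^(2 - zeta(p)) equals 1 exactly at these two points.
print(zeta(1, gamma), zeta(p_c, gamma))          # both approximately 2

# For p slightly above p_c, the factor 2^(2 - zeta(p)) exceeds 1,
# contradicting finiteness of the p-th moment.
print(2 ** (2 - zeta(p_c + 0.1, gamma)) > 1)     # True

# For gamma > 2, zeta'(1) = 2 - gamma^2/2 < 0, so some alpha < 1 has
# zeta(alpha) > 2, and subadditivity forces the total mass to vanish.
gamma_big = 2.5
alphas = [a / 100 for a in range(1, 100)]
print(any(zeta(a, gamma_big) > 2 for a in alphas))  # True
```

The three printed facts are exactly the three ingredients of the lecture's argument: the two roots of zeta(p) = 2, the contradiction above 4/γ², and the existence of a subadditivity exponent killing the measure for γ > 2.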
No, they did something different — I erased it — they say:
01:30:05
if I take a gamma quantum sphere, basically, when you have a two-point you can construct a three-point with the same values of the alphas. In the language of, let's
01:30:24
say, correlations: you take alpha equal gamma, so the gamma quantum sphere is this, and, by picking a point, you can construct the three-point correlation function with
01:30:40
gamma, gamma, gamma — that's essentially the message of their paper in this language. But you can only construct the three-point with the same weights, exactly the same weights, that's kind of the take-home message. What they did is they take a quantum sphere with alpha equal gamma — so it only works for alpha equal gamma — you pick a point randomly on the sphere, you map
01:31:05
it back to the complex plane, to one, say, and then you have the same distribution as the Liouville volume form that we construct by putting three weights gamma, gamma, gamma. So what I said today is
01:31:22
kind of the other way: you take three points, you want to make one disappear, you converge to the quantum sphere; and you can pick a point on the quantum sphere to construct a Liouville measure with weights gamma, gamma, gamma.
01:31:49
Yes — here, no, no, what's really important is that
01:32:04
the random variable has a moment of order p for all p less than 4 over gamma square; essentially, what is very important is that, if I take my two-sided Brownian motion — you can forget about the lateral
01:32:26
noise, it plays no role — what we do need, and you can use comparison theorems for stochastic differential equations to get this, is that this guy is really going very fast to minus infinity on both sides, and so you can use stochastic calculus for
01:32:44
comparisons, that's where you can use this, yes. But, okay, on some technical details, you use it — actually, yes, much faster, because it can't go get a maximum, okay. So — oh, sorry, of course, it's a good
01:33:19
question, I forgot to say something, so if I may take two minutes just to
01:33:26
answer — what I forgot to say is — let me just take one minute, sorry, it's related to your question —
01:33:46
what I proved today is that, with the gamma
01:34:00
function of 2(q minus alpha) over gamma in front, the unit-volume guy — so this is the partition function, if you want, of the quantum sphere — I showed that this was the limit, as epsilon goes to 0, of C gamma of (alpha, epsilon, alpha), okay. I took an hour and a
01:34:23
half and I proved this today. Now, remember we proved the DOZZ formula, so, as a corollary of the DOZZ formula — and this is kind of answering some of your question — we can take the limit in DOZZ here, and we're going to get that, since we know how to compute
01:34:44
this, we know how to compute this, and it's worth — just let me write it, because, you know, there's a mu here — it's worth minus pi mu times this, where the l function is the ratio of gamma functions,
01:35:00
so it's a much easier formula; it's called the reflection coefficient. To answer your question, this is the corollary of the DOZZ theorem: you get an explicit value for the two-point correlation function, so
01:35:25
it's much easier, it's only four gamma functions, okay, but you don't get the law, right. So this term is completely explicit, so if you just take out the gammas, this one and this one, the mu, etc., you get this moment, so
01:35:45
it's roughly three gamma functions, but it doesn't give you the law, because the alpha here is linked to the alpha in the power, so the theorems that we get on the Riemann sphere up to now don't enable us to get the full law of the random variable,
01:36:02
because the moment is linked to the definition of the law itself. However, what's interesting is that what our PhD students are doing is looking at the circle or the disk, and in these cases you can completely characterize the law in some situations, like in the Fyodorov-Bouchaud formula I discussed. But no — the moment is linked to the definition of
01:36:24
the variable