4/6 Automorphic Forms and Optimization in Euclidean Space
Formal metadata
Part: 4
Number of parts: 6
License: CC Attribution 3.0 Unported: You may use, adapt, copy, distribute and make the work or its content publicly accessible, in unchanged or changed form, for any legal purpose, provided the author/rights holder is credited in the manner they specify.
Identifier: 10.5446/46349 (DOI)
Transcript: English (automatically generated)
00:16
So it's very nice to be here again after a short break, and thank you all for coming.
00:22
So maybe I'll start by briefly recalling what happened in the first three lectures, because we have two weeks of break, so maybe somebody has forgotten some of the details in the previous three lectures.
00:52
So maybe I'll write a little bit of schematics. So our big goal of this series of lectures is to prove the universal optimality of the E8 and Leech lattices.
01:15
Our lattices we denote by lambda d, and here d is the dimension, and it's always either 8 or 24.
01:25
And so this was our big goal. And so just briefly to recall you what it means then, what is the universal optimality.
01:43
So it means that for each configuration of points C inside d-dimensional Euclidean space, such that the density of this configuration is 1, which is the same as the density of the Leech lattice and the E8 lattice,
02:10
And for each positive constant alpha, what we require is that the Gaussian energy of the configuration C
02:25
will always be bounded by the energy of our lattice,
02:40
either the E8 lattice in dimension 8 or the Leech lattice in dimension 24, and alpha is related to p, so here p is a Gaussian with exponent alpha.
03:08
And so this energy, it's an energy of mutual interaction of our points, and we think that our points, they repel each other, and the energy is given by this function where r is a distance, it depends only on distance between two points.
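In formulas, and for readers without the blackboard, the statement being recalled is roughly the following; the precise normalization of the Gaussian (I write p_alpha(r) = e^{-alpha r^2}) is an assumption, since the exact exponent is not legible from the recording:
\[
  E_{p_\alpha}(\mathcal{C}) \;\ge\; E_{p_\alpha}(\Lambda_d)
  \quad\text{for every configuration } \mathcal{C}\subset\mathbb{R}^d \text{ of density } 1
  \text{ and every } \alpha>0, \qquad p_\alpha(r)=e^{-\alpha r^2},
\]
where, for a lattice $\Lambda$ of covolume 1, the Gaussian energy is
\[
  E_{p_\alpha}(\Lambda)\;=\;\sum_{x\in\Lambda\setminus\{0\}} e^{-\alpha |x|^2}.
\]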
03:24
And so we take a normalized sum over all pairs, and this way compute the energy of the whole configuration. And so what we also explained in the previous lecture that our method for proving this universal optimality is the linear programming, so it's a linear programming method which is used quite a lot
03:48
in this kind of geometric optimization problems, and the particular linear programming bound we are using, in this case, is the adaptation found by Cohn and Kumar,
04:01
and so what we will show is that linear programming, it implies universal optimality. And so linear programming, it means that we need to show an existence of a certain function,
04:25
so we need to show that there exists, for each alpha, a special function f alpha, which would be a radial Schwartz function, such that the following conditions hold:
04:52
this f alpha should not exceed our energy profile, so to say, for all points in the Euclidean space,
05:08
and also its Fourier transform has to be non-negative. And if we are able to construct such a function like this, then for configurations of density one,
05:30
so this number, the difference of the values of this Fourier transform of the auxiliary function at zero,
05:44
and the function itself at zero, it gives us a lower bound for the energy of any possible configuration. And if we wanted our bound to be sharp, it means that this difference has to be exactly equal
06:00
to the energy of our optimal lattice in question. And so what we also observed last time that, so we suppose that such an optimal function does exist, and then what will happen is then the existence of our optimal lattices,
06:22
it will pose certain restrictions on this function. So the existence of this optimal lattice lambda d, it will then imply that if such a function exists, then it satisfies the following conditions.
06:41
So these inequalities have to become sharp at the vectors which have the same length as some non-zero vectors from our lattice.
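For reference, the linear programming criterion in the form sketched here (the Cohn–Kumar bound) can be written as follows; the normalization conventions are the standard ones and should be treated as my assumption. If a radial Schwartz function $f_\alpha$ on $\mathbb{R}^d$ satisfies
\[
  f_\alpha(x)\le e^{-\alpha|x|^2}\ \text{for all } x\in\mathbb{R}^d
  \qquad\text{and}\qquad
  \widehat{f_\alpha}(y)\ge 0\ \text{for all } y\in\mathbb{R}^d,
\]
then every configuration $\mathcal{C}$ of density 1 satisfies
\[
  E_{p_\alpha}(\mathcal{C})\;\ge\;\widehat{f_\alpha}(0)-f_\alpha(0),
\]
and the bound matches $E_{p_\alpha}(\Lambda_d)$ exactly when the first inequality is an equality at every non-zero vector of $\Lambda_d$ and $\widehat{f_\alpha}$ vanishes at every non-zero vector of the dual lattice (both lattices here are unimodular, so the dual is the lattice itself).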
07:16
And so now what also comes into play is that now we know actually...
07:21
Dual lattice. Dual, yes, but it's the same. And so now what comes into play is that the lengths of vectors in both lattices have a very nice algebraic structure.
07:43
So the possible lengths will just be square roots of even integers, starting from a non-zero one, which is the length of the shortest vector. And actually this coincidence, or this good property of both lattices,
08:05
make this problem somehow accessible for us. They give us hope to solve it. And that's because we observe that there exists a Fourier interpolation formula,
08:28
which helps us to reconstruct a radial Schwartz function exactly from these values. And so this interpolation formula is like this.
08:44
So here again, let d be our dimension and n0 be the number which depends on d; it is determined by the length of the shortest vector in our lattice. So if the dimension is 8, then n0, because it's the length of the shortest vector
09:00
squared, divided by two, is essentially the shortest length: it's either 1 in dimension 8 or 2 in dimension 24. And so then the interpolation formula says that there exists a sequence of Schwartz functions,
09:27
the radial Schwartz functions on the corresponding Euclidean space, such that for any radial Schwartz function f,
09:44
we can reconstruct this function just from its values at the square roots of even integers and the same information for its derivatives and for its Fourier transform and the derivatives of the Fourier transform.
10:01
So we would have a formula like this. And so this is also the formula we discussed on the previous lecture.
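Schematically, the interpolation formula referred to here has the following shape; the names $a_n, b_n, \tilde a_n, \tilde b_n$ for the interpolation basis are the ones used a few lines below, and $f(r)$ denotes the value of the radial function at any point of norm $r$:
\[
  f(x)\;=\;\sum_{n\ge n_0}\Bigl(a_n(x)\,f(\sqrt{2n})+b_n(x)\,f'(\sqrt{2n})\Bigr)
  \;+\;\sum_{n\ge n_0}\Bigl(\tilde a_n(x)\,\widehat f(\sqrt{2n})+\tilde b_n(x)\,\widehat f\,'(\sqrt{2n})\Bigr),
\]
for every radial Schwartz function $f$ on $\mathbb{R}^d$, $d\in\{8,24\}$.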
10:51
And so how do we approach proving such a formula? So what we are going to do,
11:01
we are going to reduce the proof of this Fourier interpolation formula to a solution of a certain functional equation.
11:25
And so the functional equation relates to this formula in a following way. So let's consider the following generating series, which will include all the functions a n and b n of the interpolating basis.
11:45
So here what we do, we do the following. Consider the following sum. It starts again from n0, goes to infinity, and we take functions a n of x and multiply them by this, by the coefficients, e to pi i n tau.
12:05
And the same we do with b n, only with b n for convenience, we will need this additional coefficient depending on n, and also tau so that we can distinguish between a n and b n.
12:26
So we consider a function like this, and also another function F tilde, which will contain the same information about the
12:40
Fourier transforms of these functions in variable x.
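A hedged transcription of the two generating series: the exponent is written as in the transcript, and the extra weight attached to the $b_n$ (written $c_n(\tau)$ below) is only a placeholder, since its exact form is not legible from the recording:
\[
  F(\tau,x)\;=\;\sum_{n\ge n_0}\bigl(a_n(x)+c_n(\tau)\,b_n(x)\bigr)\,e^{\pi i n\tau},
  \qquad
  \widetilde F(\tau,x)\;=\;\sum_{n\ge n_0}\bigl(\tilde a_n(x)+c_n(\tau)\,\tilde b_n(x)\bigr)\,e^{\pi i n\tau},
\]
with $\tau$ in the upper half-plane $\mathbb{H}$.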
13:01
And so now what we do, now we take our interpolation formula and apply this interpolation formula to a complex Gaussian. So now what we do, so interpolation formula, if it is applied to the following function f, which is
13:26
e to the pi i norm of x squared times tau, and here tau is a variable in the upper half plane, then what we get is the following functional equation.
13:51
Yes, right. So the functional equation is like this. So the functional equation itself tells us that the function,
14:01
namely this complex Gaussian e to the pi i norm of x squared tau, will be equal to F of x, tau plus the following modification of F tilde.
14:30
And so also implicitly we have two more equations for these functions. For example, we know that the function f is linearly periodic,
14:40
so if we take a second difference of these functions with step one in variable tau, then we will get zero, and the same is true for f tilde.
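Put together, the functional equations being described read as follows; the automorphy factor in the first line is the one coming from the Fourier transform of the complex Gaussian (recalled later in the lecture), and its exact branch and normalization are my assumption; the second line is the vanishing second difference in tau, with step 1 as stated:
\[
  e^{\pi i \tau\|x\|^2}\;=\;F(\tau,x)\;+\;(i/\tau)^{d/2}\,\widetilde F(-1/\tau,x),
\]
\[
  F(\tau,x)-2F(\tau+1,x)+F(\tau+2,x)=0,
  \qquad
  \widetilde F(\tau,x)-2\widetilde F(\tau+1,x)+\widetilde F(\tau+2,x)=0.
\]
The first equation is just the interpolation formula applied to the complex Gaussian $f(y)=e^{\pi i\tau\|y\|^2}$, using that the Fourier transform of this Gaussian is again a Gaussian with $\tau$ replaced by $-1/\tau$.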
15:01
And so now what we want to do is to solve this functional equation,
15:27
then it will give us explicit form for our interpolation formula. And so here one important thing that once we have explicit interpolation formula, we can also find this function f alpha,
15:41
because we know all the information about the function f alpha, which is needed in the interpolation formula. And so from the interpolation formula we will see that not only that we can find the function f alpha explicitly,
16:01
but it also will actually coincide with some values of our generating function f. So it will be the same as our generating function f, only it has to be for the second parameter, we have to take a purely imaginary number i alpha.
16:23
So this would be the number, the imaginary axis on the positive half plane. And so now with this picture, what is still remains to be done.
16:45
So here our main goal is to prove the universal optimality. So what do we still need to prove the universal optimality? So for the universal optimality, we still need to do the following.
17:10
So actually for both for universal optimality and for, maybe I skipped one more step about how, so before we write what still has to be done,
17:21
so let's write down about what is our strategy for solving the functional equations. And so what we are going to do, we are going to search for our function f
17:44
in this very special form. So we want to search for our function f in a form like this. So f of x tau, we want to define it as a following expression. And so this part comes from our knowledge about the special values of this function.
18:20
And so this part will give us the double zeros.
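As far as it can be reconstructed from the blackboard references, the ansatz has the following shape; the precise constant and the exact form of the Gaussian factor inside the integral are assumptions, but the $\sin^2$ prefactor (which produces the double zeros at $\|x\|=\sqrt{2n}$) and the kernel $K$ are exactly the objects discussed next:
\[
  F(\tau,x)\;=\;\sin^2\!\Bigl(\tfrac{\pi\|x\|^2}{2}\Bigr)\int_0^\infty K(\tau,it)\,e^{-\pi t\|x\|^2}\,dt,
\]
where $\sin^2(\pi\|x\|^2/2)$ vanishes to second order whenever $\|x\|^2$ is an even integer.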
18:35
And it turns out that the multiplier here,
18:41
it's convenient to assume that it has the following form. And so this function k, which we want to take here,
19:13
we also assume that it is a meromorphic kernel on the product of two upper half planes
19:24
and that it has the following properties. So we will introduce this kernel in the previous lectures.
19:42
So the properties of k. So first it's that we want k to be a meromorphic function, which is defined on the product of two upper half planes
20:05
and not only along our path of integration. So other properties that if you look at k of tau z as a function of z and tau fixed,
20:24
then it will have only simple poles, at the points where z is an SL2(Z) translate of tau.
20:50
And also we want our function to satisfy the homogeneous version of the functional equation.
21:07
And so the slash notation, we introduced it on our previous lecture. And so this should be true for all elements a, which belong to the following, to the ideal.
21:31
And this is an ideal inside the group algebra of PSL2(Z). And it's generated by these two elements.
21:43
So first t-1 squared and s t-1 squared. And in the previous lecture we discussed how to... the relation between this notation with a slash operator and the functional equation as it's written on this blackboard.
22:11
Here R is the group algebra of PSL2(Z).
22:28
And so now, so k has two more important properties. One of them is the... So we know that it has simple poles at tau equals z, but the residues,
22:43
they also have to be certain particular numbers.
23:04
And so this will be true for all elements of our group ring. And phi, it's a particular linear map from the group algebra modulo the ideal I.
23:26
It's a linear map. And on the previous lectures we have seen that this quotient is actually finite dimensional and it has dimension equal to six.
23:42
So to define this map it will be sufficient for us to define it on some representatives. And so we define it in the following way. So on the last previous lectures we have seen that these six elements,
24:12
they are indeed representatives for this quotient. And so we define our linear functional to be one applied to element t
24:24
and zero applied to all other representatives we have chosen. And so another important condition, which I probably will not repeat in details,
24:40
but we also had certain growth conditions on K near the boundary. So growth conditions in general at the boundary and certain particular ones at the cusps.
25:08
For example, we know that this kernel has to vanish as tau goes to zero or to infinity and that it has a pole in z as z goes to infinity, but this pole has, so to say, bounded order.
25:23
And this integral f with integral of k, is it convergent? Perhaps it's convergent, so we should kind of regularize somewhere. Yes, we will have to regularize it. So this is our big picture, but actually this integral here, it will be defined only,
25:41
as for now it will be defined only, so this will be well defined for, first it is well defined for tau in the domain D, which was this standard fundamental domain for gamma 2.
26:10
And, a priori, it's probably clear that it is defined away from the SL2(Z) images of the imaginary axis,
26:23
but because we have so many residues vanishing here, so it's actually well defined on this domain D. And also because this kernel, it has a pole, as the second variable goes to zero and to,
26:40
no, as it goes to zero, I think it's fine; the pole is as the second variable goes to infinity. So it's only well defined when the absolute value of our vector, the Euclidean vector x, is bigger than the square root of two n zero minus two.
27:00
So in dimension eight, we have a problem only at the point zero, but in dimension 24, we have to be careful around a ball of radius square root of two around zero. Do you expect that because we kind of inserted a zero where there shouldn't be one in dimension 24?
27:20
Yes, yes, it's because we know that these conditions hold for all lattices, for all vectors which are in the lattice. And for example, in the Leech lattice, the first possible length is omitted. And so we actually will not have the equality here.
27:51
And so now what is our strategy to proceed? So maybe I will say a few more words about the kernel.
28:02
So here's the proposition. So the kernel K with all these properties and with the growth conditions properly specified, as we did in the previous lecture — the kernel becomes unique.
28:21
So the kernel with this property is unique. And we can write it down explicitly.
28:42
So it will be kernel, which first will be two different kernels for two different dimensions. And we will write in the following way. So here it's a Ramanujan's delta function taking into some power, which will depend on dimension.
29:05
So here we'll have a holomorphic function of two variables tau and z, divided by the difference of j-invariants, which will give us our simple poles.
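In symbols, the kernel being described has roughly the following shape; the exponents on the delta function depend on the dimension and are not spelled out here:
\[
  K_d(\tau,z)\;=\;\Delta(\tau)^{a_d}\,\Delta(z)^{b_d}\,
  \frac{p_d(\tau,z)}{j(\tau)-j(z)},
\]
with $\Delta$ the Ramanujan delta function, $j$ the modular $j$-invariant (so the denominator produces the simple poles along $z=\gamma\tau$), and $p_d$ holomorphic on $\mathbb{H}\times\mathbb{H}$.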
29:21
And what do we know about the holomorphic function p? We know that in the tau variable it satisfies the homogeneous functional equation, only the weight will now be not d over two, but d over two plus 12 alpha,
29:45
which comes from this from this term. Sorry.
30:02
Thank you. And so also, roughly, our growth conditions actually require that after multiplication with suitable powers of the delta function, we will get a function which belongs to this class curly P.
30:23
So it is holomorphic and also has moderate growth. And so this will be with respect to the variable tau; and with respect to the variable z, we know that our function will be annihilated by a different ideal, which is actually related to the ideal I and the linear functional phi which we defined before.
30:47
So again, we have 12 beta. Here we have a different ideal, and it will also be a holomorphic function of moderate growth. And so what we can do, we can actually compute these functions p explicitly in terms of classical modular forms.
31:10
And so now what remains to be done. So now we have two different objectives. So to say one of them is our primary goal to prove the universal optimality.
31:22
And another goal for this course is to prove the Fourier interpolation formula. And so for the universal optimality, what still needs to be done now?
31:45
So first, what we have to do, of course, we have to show that the function f is which we defined above. We know that it's defined not only for x outside of this ball around origin, but actually for all vectors.
32:03
So what we need to do, we need to extend x in Rd. But this is kind of actually easily done because for our kernel K, we can write it's a Fourier expansion in the second variable.
32:28
And it will have like two first two terms in this expansion. They will have negative exponent, so they will be responsible for this pole at infinity. And so what we can do, we can just integrate this term separately.
32:47
And so here in the formula, somehow the singularity which we get after this integration will be again just a simple pole. And it will be killed by this sine squared, which we are multiplying by. So it's easy. It's an easy task.
33:09
Then we also have to work a little bit to show that this function actually is a radial Schwartz function
33:25
in the variable x, with tau as an additional parameter.
33:40
And so then also what we have to do, we have to be able to compute the Fourier transform of this function. So the Fourier transform again with respect to the Euclidean variable x. So show that the Fourier transform of this with respect to x.
34:08
And so this time it will be actually the function f tilde, which we have defined before, which is related to f by this functional equation.
34:22
And it also has a nice integral representation. So this time it will, it has an integral representation like this. So it equals to, we are going to have the sine squared integral from zero to infinity.
34:47
And here instead of integrating function k, we integrate over the following function.
35:12
This time we have an integral like this. And so this is not exactly obvious, but here it comes with some integral,
35:28
the proper integral manipulation. So here we do have to work with a contour integrals and then to apply, to exchange taking integrals with the Fourier transform. And use the fact that Fourier transform applied to the Gaussians here, it looks very nice.
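The "very nice" fact being used here is the standard formula for the Fourier transform of a Gaussian on $\mathbb{R}^d$, with the convention $\widehat f(y)=\int_{\mathbb{R}^d} f(x)\,e^{-2\pi i\langle x,y\rangle}\,dx$ and the principal branch of the power:
\[
  \widehat{e^{\pi i\tau\|\cdot\|^2}}(y)\;=\;(i/\tau)^{d/2}\,e^{\pi i(-1/\tau)\|y\|^2},
  \qquad \tau\in\mathbb{H},
\]
and in particular, for $\tau=it$ with $t>0$,
\[
  \widehat{e^{-\pi t\|\cdot\|^2}}(y)\;=\;t^{-d/2}\,e^{-\pi\|y\|^2/t}.
\]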
35:50
And so now we are almost done, because the thing which actually remains for us, to actually prove the universal optimality, is only to check the positivity.
36:13
And now, so as we discussed last time, so now to prove the positivity, what we actually need to show,
36:21
we need to show that this function is bounded by the corresponding Gaussian. And, but it actually suffices for us to prove the following, to prove that actually f tilde of,
36:45
so now it suffices to show that this function is non-negative when alpha is a positive real number.
37:15
And so here, because we have actually integral representation, is also helpful for us here,
37:22
because what turns out that, what we can show, we can show that the, this kernel function k which, so this modification of kernel function k, let me denote it by k again also with hat.
37:51
This time hat does not mean the Fourier transform, because here the transformation which actually applied to k is different, it's only for our convenience.
38:03
So what we also will have is that this kernel k, it is positive if alpha and t are both positive numbers.
38:25
So if we look at this function defined on the product of two upper half planes, and we restrict ourselves to the product of two imaginary axes, then this restriction of the meromorphic kernel is positive.
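The way this positivity is used is elementary: an integral of a pointwise non-negative integrand is non-negative. Schematically, with the integral representation described above (the exact form of the integrand is as in the lecture and is only sketched here),
\[
  \widetilde F(i\alpha,x)\;=\;\sin^2\!\Bigl(\tfrac{\pi\|x\|^2}{2}\Bigr)\int_0^\infty \widehat K(i\alpha,it)\,e^{-\pi t\|x\|^2}\,dt\;\ge\;0
\]
as soon as $\widehat K(i\alpha,it)\ge 0$ for all $t>0$ and the integral converges absolutely.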
38:44
And so this works, this helps us to prove inequality at all points where the, our integral representation for function f tilde everywhere where it converges. And we still have a problem in dimension 24 in this small ball of radius square root of two,
39:04
because there our integral representation does not converge. So knowing something about the sign of a function under the integral does not tell us anything about the sign of the result. So this is something we have to handle separately.
39:21
So also what we have to do, we have to prove separately that the function f tilde, which corresponds to dimension 24, it is actually positive or negative for all vectors of length smaller than square root of two.
39:49
And so this, both inequalities, the only way we found to prove them is by checking it by computer, by numerical computations. And so this is something I will speak about in our next lecture.
40:03
I will tell more details about how we proved these two inequalities. And so now what still remains is the interpolation formula. And as you see that for universal optimality, we actually did not need the interpolation formula itself.
40:23
The interpolation formula, it was an inspiration for us. It showed us a way how to construct this magic functions f alpha. But the interpolation formula itself is not needed for proving universal optimality.
40:43
However, we thought that maybe it's a nice result on its own. So for this reason, we also decided to prove the interpolation formula.
41:11
And so to prove interpolation formula, we need a slightly different information about the function f capital.
41:27
And so for universal optimality, what really interests us is the positivity of functions f and f tilde. Then for the functional equation, what's important for us is that these functions,
41:41
they can be extended to the whole upper half plane and also that these functions are, they have nice growth properties at this plane. So now for the interpolation formula, what we still need to do.
42:05
Yes, in principle, we could do it in any dimension, but maybe we are a bit lazy. So we did only in dimensions 8 and 24.
42:21
Yeah, but I think it's exactly the same method that would work in other dimensions as well. So there is maybe small differences that if dimension is not divisible by 8, then maybe more modifications have to be done. Probably nodes have to be shifted by one or so.
42:52
So what do we have to do for the interpolation formula? So first for the interpolation formula, we cannot, it's not enough for us to know our function only on the imaginary axis.
43:02
We have to extend it to the whole upper half plane. So we have to extend capital F of x and tau as a function of tau from the domain D to the all upper half plane.
43:29
And then, of course, also we have to show that functions f and f tilde, as we have defined them,
43:40
that they will satisfy the functional equation. And now for our formula to work not only formally, but also to be an analytically nice formula to have good convergence,
44:06
for example, not to have a very rapid growth of this basis functions a n and b n at every given point. We also need to know that our functions have moderate growth.
44:22
And so, more explicitly, what we need: we take our functions of x and tau. So now we consider them as functions of x, and tau runs as a parameter. And we take the semi-norms of the Schwartz space of these two functions.
44:47
So then what we get will be only functions of tau. And what we want, so these are semi-norms that are taken with respect to x. And so what we want, we want them to have moderate growths in tau for all,
45:22
and this should be true for all multi-indices alpha and beta. And so for what is our plan for today would be to concentrate on the interpolation formula
45:47
and to prove the extension and the functional equation. And probably we'll have no time left for the growth estimate. So maybe if time remains, I will just discuss this a little bit.
46:02
And for the next lecture, I will show you our numerical results and explain to you how we prove the positivity. And probably since I will use the projector, maybe I could present you some more numerical results on proving positivity and also some experimental results just related to this problem and maybe other dimensions.
46:32
I also can show you this one. Yes, and then in the last lecture, probably I would like to discuss some open questions which remain.
46:44
Like, for example, if you have this interpolation formula, which other interpolations or formulas do we have? Or can we theoretically have? Or which other problems can be solved with this approach? And which problems seem to be definitely impossible to solve with methods like this?
47:10
And so maybe we'll make a small break. And so what I planned to do for today is to show that function f defined by the formula above
47:26
in this particular region of tau and x, it can be actually extended in tau and in x to a function which satisfies the functional equation.
47:46
And so probably for now, we will work only with x, which is big enough.
48:00
We will not address this problem of extending our function in x and we'll concentrate on extension in tau. And so first what I'm going to do is to prove the half of the proposition which I formulated in the previous lecture,
48:26
which tells us that if we want to extend f to the whole upper half plane, so what suffices is to extend it only to a small, to a neighborhood of this domain D.
48:44
Rather, its closure. And if the extension satisfies the functional equation, then it can be extended further to the whole upper half plane. And so the proposition is the following. So let k be an even integer.
49:15
And suppose that we have two holomorphic functions, h1 and h2.
49:30
And suppose that it will be an open neighborhood of the closure of D.
49:47
And if you have a function from on O, which is holomorphic,
50:00
and it satisfies the following transformation law. And the transformation law is very similar to the law we had before,
50:22
only here at the right hand side, instead of having particular functions prescribed by our problem, we allow ourselves having any functions h1 and h2. And so we want this to be satisfied, so whenever both sides are defined.
51:09
Because our function f is not defined in the whole upper half plane, but it's defined only in this neighborhood O.
51:25
And so suppose that these conditions are satisfied, then we claim that f can be extended to the whole upper half plane holomorphically.
51:52
And this new holomorphic function, the extension, will satisfy the condition,
52:07
also the functional equation.
52:27
And so here for the proof, we make one important observation.
52:42
It is that the representatives, we can choose representatives of the quotient of a group algebra by the ideal I to be the same as representatives of a group,
53:01
of a quotient of the group PSL2(Z) by its subgroup Gamma 2. So, and how does it work? It works in the following way. So let's consider our fundamental domain. So this will be the domain D. It consists of all points of the upper half plane with real part
53:27
between one and minus one, and these two semicircles excluded. So the semicircles of radius one half around plus one half and minus one half.
53:42
And so now what we can do, we can divide this domain D into six subdomains, and each of these subdomains will be a fundamental domain for the action of PSL2(Z) on the upper half plane.
54:00
And so we will take this domain here, and then we call it f. And so f, it will consist of points such that their real part is between zero and one,
54:32
and also we exclude these two big circles, so we write that absolute value of tau should be bigger than one, and absolute value of tau minus one has to be bigger than one.
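In formulas, the two domains just described are (with $\mathbb{H}$ the upper half-plane):
\[
  D=\{\tau\in\mathbb{H}: |\mathrm{Re}\,\tau|<1,\ |\tau-\tfrac12|>\tfrac12,\ |\tau+\tfrac12|>\tfrac12\},
\]
\[
  F=\{\tau\in\mathbb{H}: 0<\mathrm{Re}\,\tau<1,\ |\tau|>1,\ |\tau-1|>1\},
\]
and the closure of $D$ is the union of the translates of the closure of $F$ by the six coset representatives listed below.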
54:49
And it is a fundamental domain for the action of PSL2(Z) on the upper half plane. And so if this here is F, then this would be the translate of F by the matrix T minus one.
55:03
Here, just to remind you again, T is the matrix (1 1; 0 1) and S is the matrix (0 -1; 1 0). And so this part here, it's the image of F under the action of S.
55:24
And here it is S T inverse, and here it's T S, and here it's S T S. And so now what we claim is that these elements — 1, T inverse, S, S T inverse, T S, S T S —
56:00
form a basis for the quotient of the group algebra by the ideal I. And so now let's consider a column vector with these entries.
56:29
Just before, could these functions h1, h2, could be arbitrary, or they... Actually, they could be arbitrary, but... There's no relation. Yeah, in principle there is no relation.
56:44
And so what we consider now, we consider this line as a column vector, and let's call this column vector m. So then it will be an element of R^6.
57:12
And so we see that the... So it's elements of this vector will also call them like m1, m2, m3, and so on, m6.
57:26
Then we see that the closure of d, it's the union of this translates of the closure of f.
57:43
And so as we discussed it in the previous lecture, so now we have a representation.
58:03
So there exists a representation. So we denote it by sigma, and this sigma which I use now, it's slightly different from the sigma we defined last time, because in the previous lecture we used a different set of representatives for this quotient.
58:24
But I don't want to introduce new letter, so I will also call it sigma. And then this will be the only representation I use for the lecture today.
58:49
And so also we have the following maps. So now we remember that last time I discussed this ideal I.
59:01
The ideal is freely generated by these two elements, T minus one squared and S T minus one squared. And so therefore we have the following two maps. So we'll write them as n_i, from PSL2(Z) into R^6.
59:27
And these maps, they are the following that... So i equals either 1 or 2.
59:44
And so we define these maps in the following way. If we take our vector m and multiply it by some element gamma of PSL2z on the...
01:00:00
Right, then what we got, we got this would be the same as sigma of gamma times m plus t minus one squared times n1 of gamma and plus s t minus one squared n2 of gamma.
01:00:32
So gamma is an element of PSL2(Z). And this happens because we know that the
01:00:48
representation sigma is defined in such a way that the difference between m times gamma and sigma of gamma times m will always belong to the
01:01:01
sixth power of the ideal I. And now each element in the ideal I, it can be uniquely represented as a sum of two summands, one of them is t minus one squared times some element of the ring and another is s t minus one squared times some element of the ring. So therefore we have a functions like
01:01:22
this. And so now these functions, n1 and n2, they will satisfy the cocycle relation. And so the cocycle relation is the following. If now we want to
01:01:47
compute this function on the product of two elements of the group, then what we get will be the following. And so this will be true for both i equals 1 and 2.
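Written out, the defining relation and the resulting cocycle property are as follows. This is my reconstruction from the verbal description; in particular I read the second generator of the ideal as $S(T-1)^2$, which is an assumption:
\[
  m\cdot\gamma\;=\;\sigma(\gamma)\,m\;+\;(T-1)^2\,n_1(\gamma)\;+\;S(T-1)^2\,n_2(\gamma),
  \qquad \gamma\in\mathrm{PSL}_2(\mathbb{Z}),
\]
and, comparing the two ways of decomposing $m\cdot(\gamma\gamma')$ and using that the ideal is freely generated,
\[
  n_i(\gamma\gamma')\;=\;\sigma(\gamma)\,n_i(\gamma')\;+\;n_i(\gamma)\,\gamma',
  \qquad i=1,2.
\]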
01:02:36
And so now, so what do we do next? So now by, we can shrink our neighborhood
01:02:55
O such that only, so if we had here our neighborhood, somewhere above it we could have our
01:03:10
neighborhood O. And so we can make a neighborhood O smaller so that
01:03:22
it intersects as little of its gamma 2 translates as possible. So, and because d is also a fundamental domain for gamma 2, so at the end we see that the only
01:03:45
translates which seem to be necessary are these, where it translates by T squared, by T to the minus two, by S T squared S, and by S T to the minus two S. What we can
01:04:15
arrive is that the only gamma 2 translates of intersecting O are the following ones.
01:04:38
So this would be T squared O, T to the minus two O, S T squared S O, and S T to the minus two S O.
01:04:58
No, no, so what we wanted to take, so O is a neighborhood of the closure of
01:05:03
d and what we want to do, we want to, somehow it's not important for us which neighborhood of d we are considering, so for example we can make it smaller as long as it is an open neighborhood. And to make it so small that if we take O and take gamma 2 translates of O, so they intersect, so to say, only as
01:05:22
long as it is necessary. So we don't want it to intersect with some translate which is somewhere far, far away. We want all of them to be only these ones where somehow this is unavoidable. And so now what we also want
01:05:44
to do, we want to take an open neighborhood of the closure of f and so also there exists an open neighborhood of the closure of f such that, so what
01:06:05
we want, we want that the union of this translates of O f by our elements m g, they are coordinates of this vector m, so that they have to, this
01:06:30
union have to live inside of neighborhood O. And so we can also do
01:06:41
it by taking this open neighborhood just small, small enough. And so we do it so that, so we want that each in particular, if we take our f and slash
01:07:03
it with this element m g, we want it to be well defined on O f, all j from 1,
01:07:25
and here also we want somehow to make this O f small enough so that if we have different PSL2Z translates of O f, we don't have unnecessary intersections.
01:07:42
So we also assume that intersection of O f with its PSL2Z translates is not zero, it's only if gamma, if gamma translate of the closure of f and f,
01:08:12
closure of f, they share a boundary point. And so namely we will have, this will be
01:08:29
the full list of elements — so this Omega denotes an element of PSL2(Z) where this is true — and so these will be the elements S, T, T inverse, S T inverse, T S
01:08:55
and t s t inverse. And so what we wanted to do, we want to make a sort of say
01:09:39
vector valued version of our function f. And so we do it in the following way,
01:09:48
we define a vector-valued F, which would be the column vector which consists of all the translates of F by the elements of the vector m, so it would be denoted like this, F with an arrow — that's exactly what this notation means.
01:10:28
And so now it would be a bit more convenient for us instead of extending f and making sure that satisfy functional equation is rather to extend, so now this vector valued version of f, it is not defined
01:10:45
on the domain D anymore, but now it's defined on a smaller domain f. So now I want to extend this function from vector valued function from f to the upper half plane, and now we have, we need to find a substitute for the functional equation, so we have to translate the functional equation
01:11:02
into this vector valued language. And so the translation would be this following, so if we have for each tau in O f and each small omega in this set capital omega, we have such that
01:11:24
the image of tau under the action of omega is still in the set O f, and so we denote it by this number by this element of the upper half plane by tau prime, we would have the following,
01:11:41
so that the J, the automorphic factor, to the power minus k, times the vector F evaluated at omega tau — or in vector-valued notation we can denote it like this, the same as F slash k — it has to be the following, it has to be
01:12:10
so the matrix sigma of omega times vector f plus the function h1, and now we slash this
01:12:22
function h1 with this vector n1 of omega, and the same with h2 and the function n2.
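In symbols, the relation just stated (the "sun" formula below) reads, for $\omega\in\Omega$ with $\tau$ and $\omega\tau$ both in $O_F$, and with $J(\omega,\tau)$ the automorphy factor of $\omega$:
\[
  J(\omega,\tau)^{-k}\,\vec F(\omega\tau)\;=\;\sigma(\omega)\,\vec F(\tau)\;+\;\bigl(h_1\big|_k\,n_1(\omega)\bigr)(\tau)\;+\;\bigl(h_2\big|_k\,n_2(\omega)\bigr)(\tau),
\]
where $h_i\big|_k n_i(\omega)$ means the weight-$k$ slash action of the vector of group-ring elements $n_i(\omega)$ applied entrywise to $h_i$.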
01:12:47
And so now let's denote this formula by a sun symbol, for example. And so actually this formula, the sun formula, is equivalent to our formula which we denoted by star on that blackboard,
01:13:05
so now this formula is equivalent to f satisfying the functional equation for
01:13:24
all tau in the union of these six translates of the domain O f.
01:13:43
And so now what we want to do now, we want to extend our vector valued function f actually not only to the union of this neighborhood but to the all upper half plane, and we will do it so to say by imitating this equation. So now what we do now, we choose, sorry, so capital
01:14:12
omega it is this set of elements, yeah six elements such that if we take our fundamental
01:14:24
domain F of PSL2(Z) and its intersection with the translation by this element is not empty. And so now what we want to do, we want to take actually any element of the upper half plane, so
01:14:44
let w be any element of the upper half plane, and so then we know that because the domain O f contains the closure of fundamental domain f, so we know that there exists
01:15:04
for sure an element gamma in the group PSL2(Z) and some element tau which is in this domain O_F such that w is a translate, a gamma translate, of tau.
01:15:31
And so now we will define the value of this vector valued function f at point w
01:15:45
in a following way. So here we just, simplicity we multiply it by this automorphic factor on this side, and so we set this to be the following values. So it is just
01:16:37
an analog of the formula we had above, but now we replace omega which was an element in this
01:16:46
subset by any element of gamma. And so now what we have to do, we have to show that the function defined in this way, at first that it's actually well defined, and then that it is also holomorphic.
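So, in formulas, the extension is defined by the following prescription (a hedged transcription: $w=\gamma\tau$ with $\gamma\in\mathrm{PSL}_2(\mathbb{Z})$ and $\tau\in O_F$ as above):
\[
  \vec F(w)\;:=\;J(\gamma,\tau)^{k}\Bigl(\sigma(\gamma)\,\vec F(\tau)+\bigl(h_1\big|_k n_1(\gamma)\bigr)(\tau)+\bigl(h_2\big|_k n_2(\gamma)\bigr)(\tau)\Bigr),
\]
that is, the same formula as before with $\omega\in\Omega$ replaced by an arbitrary $\gamma$; that this does not depend on the choice of $\gamma$ and $\tau$ is exactly what the cocycle relation is used for next.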
01:18:08
So for reflections, if you kind of can see the image of this, but for reflections and then it will be kind of easy, for example you can translate by two types integers, yeah,
01:18:20
on the left and right using the first equation. It means that you generate two, consider like the dihedral group generated by two reflections with respect to one vertical line and another vertical line, and this is another group which is completely, the torsion and the same row of zeroes, and the whole thing, it will be more like, you know,
01:18:41
like Mumford curves, a Schottky description — you get free groups sitting in a fundamental group of a surface, and here it's something similar, you get a free group with, not exactly, four involutions. Nothing about the number six will be in this. Yeah, yes, yes, I think the number six here, maybe it's not so important here, but...
01:19:03
Because it's kind of, namely, that you can immediately, from the first functional equation, extend by a shift by two, and the shift by two is like a free group with one generator; it's index two in the group generated by the two reflections, yeah. Because I think in a sense this is what we are probably doing.
01:19:23
Because you also have some reality conditions, maybe it will be just really reflection principle, it's end of the day, without any... Yes, that's the problem. Yeah, because your function's kind of real value, I suppose, on this boundary, yeah?
01:19:42
Yeah, but here also there is this like h1 and h2, which like in this setting can be anything, so maybe we have to think a little... Yeah, I mean I'm sure there might be some either easier way to do it, or maybe it already follows from some more general known results. Yeah, yeah, yeah.
01:20:07
Okay, so maybe I'll still finish the long proof, sorry for that. Okay, so now what remains for us to do is to...
01:20:25
Before we find some better way of thinking about this is that function defined by this equation, it's actually... It is first well-defined because of course this gamma which we have chosen here, it's not unique, right?
01:20:43
So we still could have this ambiguity because of, for example, because of that set omega. And also that it is holomorphic. So what we will do, suppose that this W has two presentations.
01:21:49
Presentations, suppose it has two possible presentations with two different elements in
01:22:06
OF and gamma and gamma prime in PSL2Z, then what we know is that
01:22:21
by our definition of the set omega that tau prime, it has to be omega times tau for some omega in our set capital omega. And from this we also see that gamma has to be gamma prime times omega.
01:22:47
And so now we can see that we could have, let's say, two different definitions for the value of this vector-valued function f at point W.
01:23:06
So one of them uses gamma and tau and another uses gamma prime and tau prime. And actually it will follow from the cocycle condition that these two different
01:23:21
presentations will coincide. So what we see is then, so here, how we can write it.
01:24:26
So now we can just use the, here we use the fact that sigma is a representation and here we use that n1 and n2, they satisfy the cocycle condition. And so from here we see that this would be the same as.
01:25:30
And so here we use the definition of our, so here we use this, not the definition, but
01:25:42
this condition which we know that it holds for all elements omega in a set capital omega. And so we see that this now will be equal to here, same for h2.
01:27:15
And now we will see that some of the terms will cancel here.
01:27:24
So namely this term here, it will cancel with this term here and this term
01:27:41
here, it will cancel with a term here. Okay, so maybe not all of them, only the first part. So this will stay, but this one will end here the same. So this one will cancel and this one will survive.
01:28:05
And so from here we will get exactly the representation as we hoped for. And so, but of course probably the easier solution is also possible.
01:28:24
This would be just the same as.
01:29:04
And so here we would have this part then equals to this part. So it would be the same as.
01:29:48
And so now, up to this automorphic factor, which actually is not a problem because it also satisfies the chain rule condition — so from here we see that this is exactly the presentation which we would get for
01:30:05
the vector-valued F if we started with a different representation of w as a translate of a point in this domain O_F. So this tells us, now we see, that this function is well defined.
01:30:37
Also we know that it is a holomorphic, well-defined function which satisfies this condition.
01:30:50
And so if vector valued f is well defined, it means that one of its coordinates, our initial function f is also well-defined.
01:31:01
It's also obviously well-defined and holomorphic. And so this function is also well-defined and holomorphic. It's holomorphic because it's holomorphic in all the translates of O_F.
01:31:21
And so when we glue them together, we don't get any discontinuities. And so F is also well-defined and holomorphic. And so now we know that this condition, star, holds for the extension of the vector-valued
01:31:52
version of f. And so we know that the functional equation has to be true for the function f itself.
01:32:05
Because it's a holomorphic equation, we know that it holds in this open domain O. And so it also has to be true on the whole upper half plane. And so now what I would like to do in the remaining part is to show that
01:32:45
our function f capital, which we defined by a certain integral representation, that it indeed can be extended to an open neighborhood of this domain D.
01:33:01
And that this extension will satisfy the functional equation. And then by combining these two results, we would obtain an analytic continuation and also functional equation for the function f.
01:34:22
So now what we want to show is that the function F extends to a holomorphic function
01:35:01
on an open subset H, which contains the closure of the domain D.
01:35:40
And this holomorphic extension then satisfies the transformation law...
01:36:35
So this has to be for e to the...
01:38:10
And it will satisfy these functional equations. And so probably from the two equations, probably I show only the upper one.
01:38:25
And so now this is, so here the proof is also not difficult. Here we just use the fact that we know the residues of our kernel K. And so by it, we can assure that the functional equation holds.
01:38:44
And so how we do it? We do it in a following way. So what do we do? So first we consider...
01:39:00
So here again, we somehow assume that, just for simplicity, we will assume that the absolute norm of x is bigger than our critical value.
01:39:20
And so now we take our tau in the upper half plane. And from the upper half plane, we exclude all the images of the imaginary axis by SL2(Z). And so now, when we exclude these images of the imaginary axis, what we can do, we define a function like this.
01:40:08
So we call it f sharp. So it would be a function like this.
01:40:23
So it's exactly a function which has exactly the same integral representation as our function f.
01:40:40
So this formula x changed into r, where r is the norm of our vector. So because we know that our functions are radial, so we can just look at this as a function of one real variable. And so now, what we see that this function f sharp, it will be a piecewise holomorphic function.
01:41:05
So it will be holomorphic everywhere except on these images of the imaginary axis, and on those images that will have some jumps. And so the jumps, of course, they will be controlled by first residues of k and this function here.
01:41:28
And so now, what we know is that for every alpha in SL2(Z), we know that the residue of it,
01:41:40
now we look at this with tau as a fixed number and z as a variable. And we want z to approach the alpha translate of tau. We know that here, this residue, it will look like this.
01:42:12
And so this will help us to understand the jumps of this function. And look, another property of, I forgot to say about this function, so f sharp.
01:42:24
So f sharp, even though it's not holomorphic anymore, but what we gained for this is that now this function, it is, so f sharp, it satisfies the homogeneous functional equation.
01:42:41
Because our kernel k satisfies the homogeneous functional equations.
01:43:38
And so now what we can do, we can, so now what we would like to do,
01:43:49
we would like to extend function f holomorphically outside of this domain. And so one way how to do it, suppose that we have function tau,
01:44:01
which is here just on the boundary. And we would like to extend our function f for example to this domain here. So this would be domain, for example, we can call it u.
01:44:22
And u, it will be the set of all points w such that the real part of w is bigger than one. And that the distance between this fixed tau and w is smaller than epsilon.
01:44:45
And so now to extend it, what we have to do, we have to look first what are the translates of tau, which of them lie on the images of imaginary axis.
01:45:08
So here it will be tau minus one, and here will be tau minus two. And so also here we will have some translates here and here.
01:45:22
But then if you go carefully to our formula here and recall how we defined our linear functional phi. So then the only point of these four points which will cause problems for us, now remember that we are integrating on this path from zero to infinity.
01:45:40
And so the only points which will cause problems for us, it will be this point here. So what we have to do, we have to go like this. And actually for these points, they will not contribute to the to the singularities because the residue of k there will be zero. So what we have to do, we have to change our contour of integration.
01:46:04
So this will be some new contour, for example, we can call it gamma. And so now we can define for if we have maybe let this be some not just tau,
01:46:21
but tau zero, for example. So now, for tau in this region U, we see we can define F of tau, r to be this.
01:46:57
Define it like this. Instead of integration just from zero to infinity, we take this new path.
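Schematically, and with the same caveats about normalization as for the original integral representation, the extension over the problematic ray is defined by deforming the contour:
\[
  F(\tau,r)\;:=\;\sin^2\!\Bigl(\tfrac{\pi r^2}{2}\Bigr)\int_{\gamma} K(\tau,z)\,e^{\pi i z r^2}\,dz,
\]
where $\gamma$ (the new contour, as in the lecture) is a path from $0$ to $i\infty$ that goes around the single problematic pole of $K(\tau,\cdot)$ near the point $\tau_0-1$; the difference between this and $F^{\sharp}(\tau,r)$ is then, up to sign, $2\pi i$ times the residue of the integrand at that pole (times the $\sin^2$ prefactor), which is computed next.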
01:47:08
And so now what is important is to see what would be the difference between our extension of f at this point and the function f sharp, which is also well defined on this set.
01:47:24
And so now we will see that now we see that the difference here,
01:47:45
it exactly will be the integral somehow around this point tau zero minus one. So by knowing the residue, we can compute this explicitly. And so this is the answer.
01:48:10
And also we know that actually f of tau r actually equals to the f sharp of tau r
01:48:23
for tau belongs to the fundamental domain. And so now if we take, for example, points like tau here, tau minus tau,
01:48:41
this would be tau minus one, and this would be tau minus two. So now from, also we know that this functional equation holds.
01:49:12
From here we can compute what would be this number. So I think here I did one small mistake.
01:49:36
So now this is not zero. What is true is that... Okay, so what satisfies the functional equation is not f sharp, but rather f sharp minus the exponential.
01:49:55
So this would be the same as... This is actually the same as exponential. So we applied the same functional equation.
01:50:10
So from here we can actually compute what is the...
01:50:29
And so now if we somehow don't make any more mistakes or make a... So if we make no more mistakes or make correct number of mistakes,
01:50:42
then we will obtain that here we have zero. And for similar considerations, we can also prove the second equation here. Yes, for functional equation.
01:51:10
Okay, sorry.
01:51:20
Just so probably then it's all for today. And then tomorrow we will continue with some interesting numerics.