1/4 Automorphic forms in higher rank
Formal Metadata
Title: Automorphic forms in higher rank
Number of Parts: 36
License: CC Attribution 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/16425 (DOI)
Language: English
00:00
Transcript: English(auto-generated)
00:14
OK, so actually the title is Automorphic Forms on Higher Rank Groups, at least the official title.
00:40
OK, well, there will obviously be some overlap with other lectures, but that's probably
00:46
not a bad thing. So I guess Philippe, in his lectures, has focused on the group SL(2,R). Yeah, I did that to get an extra point there. Aha, OK, well, never mind.
01:06
In any case, SL(2,R) is the underlying group for classical automorphic forms, and there are many ways to generalize this due to certain special isomorphisms.
01:23
One can view SL(2,R) as the connected component of SO(2,1), and then the natural thing to generalize this is to look at SO(n,1).
01:42
Of course, you can generalize this even more and look at SO(p,q), but let's not be too general. Another way to look at SL(2,R) is to view it as Sp(2,R), or, depending on your taste, Sp(1,R).
02:02
And then the natural generalization is Sp(2n,R). Or, what Emmanuel suggested, you can view this as PGL(2,R)^+, where plus means positive determinant,
02:25
and then the natural generalization is PGL(n,R). Now SO(n,1) has rank one, so strictly speaking it is not a higher rank group, but in some
02:43
sense it is a higher rank group. It has rank one over R, but over Q_p it may have much larger rank, so perhaps it does qualify as something of higher rank. Sp(2n,R) has rank n, and PGL(n,R) has rank n minus one.
03:05
I will mostly focus on PGL(n), but I will also mention some things about hyperbolic spaces and about the symplectic group and Siegel modular forms.
03:20
Let me start with hyperbolic spaces. We are interested in automorphic forms on SO(n,1).
03:42
To get started, we need some coordinates. We start with the Iwasawa decomposition of SO(n,1), and, well, abstractly it's a product
04:00
of three groups, N, A, and K, so I'm not quite sure how to, no, no, no, I'm just trying to find out how to organize the blackboards appropriately. Perhaps I'll just continue here.
04:24
Okay, so K is the maximal compact subgroup; that's the easiest to describe. It's a 1 in the corner and then SO(n), so it's obviously isomorphic to SO(n). And then A determines the rank, and you can easily see it's a rank one group.
04:55
It depends on one parameter, and then the rest is the identity matrix.
05:05
So that's isomorphic to SO(1,1), or perhaps the connected component of the identity. And N is a bit harder to describe. It's easier to describe the corresponding Lie algebra, and then N is just the exponential
05:27
of the Lie algebra, and so it consists of matrices of the form 1 + n + n^2/2: the identity plus a matrix n plus the matrix n squared over two.
05:40
That's the beginning of the Taylor series of the exponential, and it turns out that the rest is zero, where n is of the following form. Well, this lower-case n is a matrix, not the integer n, so take a different font, yeah?
06:01
So it's a matrix of dimension n plus one, and it has a vector of dimension n minus one in the first row, and the same vector again in the first column, and it has a vector in the last row
06:20
with a minus sign, and the same vector in the last column, and the rest is zero. So this is a vector in R^(n-1), and N is isomorphic as a group
06:42
to R^(n-1). Okay, so there are other ways to choose coordinates. Here I'm assuming that my quadratic form is something like minus x squared,
07:02
and then plus y squared, plus z squared, plus w squared. So it's minus one, then plus, plus, plus ones, and so on. Sometimes, okay, so let's write this down. Okay, is this gone forever?
07:24
So this uses the underlying quadratic form diag(-1, 1, 1, ..., 1).
07:41
Or, I don't know, maybe the signs are different. Maybe it's plus one and then minus ones, whatever. But this is obviously a form of signature (1,n). Often, another quadratic form is used.
08:08
It has one hyperbolic plane, an antidiagonal block (0 1; 1 0), and then the identity matrix. So this is also a form of signature (1,n).
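The description of the unipotent group N above can be sanity-checked numerically. The following is an editorial sketch, not part of the lecture: it assumes n = 3, the quadratic form Q = diag(-1, 1, 1, 1), and one particular placement of the vector b consistent with the blackboard description, and verifies that n^3 = 0 and that exp(n) = 1 + n + n^2/2 preserves Q.

```python
# Editorial sketch: check the unipotent matrices in SO(3,1).
# Assumed conventions: Q = diag(-1, 1, 1, 1); for a vector b in R^2 the
# nilpotent matrix n has b in the first row and first column, b in the
# last column, and -b in the last row, as described on the blackboard.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(r) for r in zip(*A)]

b = [2.0, 3.0]                      # an arbitrary vector in R^(n-1)
n = [[0,     b[0],  b[1], 0],
     [b[0],  0,     0,    b[0]],
     [b[1],  0,     0,    b[1]],
     [0,    -b[0], -b[1], 0]]
n2 = matmul(n, n)
n3 = matmul(n2, n)
assert all(x == 0 for row in n3 for x in row)   # n^3 = 0: the series truncates

I = [[float(i == j) for j in range(4)] for i in range(4)]
g = [[I[i][j] + n[i][j] + n2[i][j] / 2 for j in range(4)] for i in range(4)]
Q = [[-1 if i == j == 0 else float(i == j) for j in range(4)] for i in range(4)]
gQg = matmul(transpose(g), matmul(Q, g))
# g = exp(n) lies in SO(Q): g^T Q g = Q
assert all(abs(gQg[i][j] - Q[i][j]) < 1e-9 for i in range(4) for j in range(4))
```

The first assertion confirms that the Taylor series of the exponential really stops after the quadratic term, as claimed in the lecture.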
08:21
And then everything looks a little bit different. Okay, so hyperbolic space is SO(n,1) modulo SO(n), and that's a natural generalization
08:45
of the upper half plane. And indeed (perhaps I have to take the connected component of the identity here), if n equals two, then this is just the usual upper half plane. And if n equals three, this is what Akshay introduced yesterday.
09:00
That's hyperbolic three space. And again, there are several models of this hyperbolic space. There is the hyperboloid model.
09:21
This is the set of all tuples (x_0, x_1, ..., x_n) in R_{>0} x R^n such that x_0^2 - x_1^2 - ... - x_n^2 = 1.
09:44
And then you have a natural action of SO(n,1) on this hyperboloid. Maybe more familiar to you, if you have grown up with the usual upper half plane, is the upper half space model.
10:14
There are n coordinates in this case. In the hyperboloid model we have n plus one coordinates, but one equation.
10:21
Here we have n coordinates, and we just require the last coordinate to be positive. So in the case n equals two, this is just the usual x-y coordinates on the complex plane. And there is a third description
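As a quick sanity check on the hyperboloid model (again an editorial sketch, not part of the lecture), one can verify that a Lorentz boost, an element of SO(2,1) extended by the identity, maps points of the hyperboloid sheet to points of the same sheet:

```python
import math

# Editorial sketch: a boost in the (x0, x1)-plane preserves
# x0^2 - x1^2 - x2^2 = 1 and keeps x0 > 0, so it maps the upper
# sheet of the hyperboloid to itself.
t = 0.7                                  # arbitrary boost parameter
x1, x2 = 1.5, -0.25                      # arbitrary space coordinates
x0 = math.sqrt(1 + x1 * x1 + x2 * x2)    # solve the hyperboloid equation
y0 = math.cosh(t) * x0 + math.sinh(t) * x1
y1 = math.sinh(t) * x0 + math.cosh(t) * x1
y2 = x2
assert abs((y0 * y0 - y1 * y1 - y2 * y2) - 1) < 1e-9
assert y0 > 0
```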
10:42
in terms of Clifford algebras, which is sometimes quite useful. And let me spend a little bit of time defining the relevant objects. C_n is the Clifford algebra, which you may or may not have heard of.
11:03
So the Clifford algebra is an algebra of dimension 2^n. So as a vector space, it has dimension two to the power n.
11:22
And a vector space basis is given as follows. We take the power set of some basis elements i_1 up to i_n.
11:43
So I take n elements, and then I take the power set, and so I get, obviously, 2^n elements. And I interpret a subset, so any element in this power set is a subset of the set {i_1, ..., i_n},
12:03
and I interpret it as the product of its elements. I'll give examples soon. So this is abusing notation a bit; strictly speaking, the basis is,
12:25
so it's R + R i_1 + ... + R i_n, and then all the products: R i_1 i_2, R i_1 i_3, and so on.
12:51
So the basis elements are products and not really subsets, but it's clear how to do this. And so this defines the Clifford algebra as a vector space, which is not particularly interesting.
13:01
It's an algebra, so we need some multiplication relations. And I called them i because they play a similar role as the i of the complex numbers. So in particular, we have i_j^2 = -1 for all j, and we have i_a i_b:
13:25
it's -i_b i_a, for a different from b. This is what you know from the Hamilton quaternions. And in fact, it turns out that the Hamilton quaternions are a special case of this. And there are some more relations.
13:40
I mean, these alone are not enough to get all the relations, but the multiplication table is known. So as an example, C_0 is just the real numbers, C_1 is the complex numbers, and C_2 is the Hamilton quaternions.
14:07
So C_2 is R + R i + R j + R ij, and ij is k.
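The multiplication in C_n is completely determined by the two relations above once a basis element i_{a_1} ... i_{a_k} is written with sorted indices. Here is an editorial sketch (my own code, not from the lecture) that multiplies basis elements, encoding i_{a_1} ... i_{a_k} as the sorted tuple (a_1, ..., a_k), and recovers the quaternion table of C_2:

```python
# Editorial sketch: multiply basis elements of the Clifford algebra using
# only i_a i_b = -i_b i_a (a != b) and i_a^2 = -1. A basis element is a
# sorted tuple of generator indices; the empty tuple () encodes 1.
def mul_basis(s, t):
    seq = list(s) + list(t)
    sign = 1
    changed = True
    while changed:
        changed = False
        for k in range(len(seq) - 1):
            if seq[k] > seq[k + 1]:        # swap via i_a i_b = -i_b i_a
                seq[k], seq[k + 1] = seq[k + 1], seq[k]
                sign = -sign
                changed = True
            elif seq[k] == seq[k + 1]:     # cancel a pair via i_a^2 = -1
                del seq[k:k + 2]
                sign = -sign
                changed = True
                break
    return sign, tuple(seq)

# In C_2 this recovers the quaternions: i = i_1, j = i_2, k = i_1 i_2.
assert mul_basis((1,), (2,)) == (1, (1, 2))    # i * j = k
assert mul_basis((2,), (1,)) == (-1, (1, 2))   # j * i = -k
assert mul_basis((1,), (1,)) == (-1, ())       # i^2 = -1
assert mul_basis((1, 2), (1, 2)) == (-1, ())   # k^2 = -1
```

The sign is tracked by counting transpositions while sorting, which is exactly what the anticommutation relation dictates.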
14:27
Okay, so inside C_{n-1},
14:41
we have an important vector space, V_{n-1}. And this is the vector space generated by all basis elements of degree at most one: R + R i_1 + ... + R i_{n-1}.
15:10
So that's a vector space of dimension n inside the Clifford algebra of dimension 2^{n-1}. And I can view the upper half space, H^n,
15:27
and the H up there should be H^n, as sitting inside this vector space in a very natural way: if I take the upper half space model,
15:40
then I have n coordinates, the last of which is positive, and I simply map (x_0, x_1, ..., x_{n-1}) to x_0 + x_1 i_1 + ... + x_{n-1} i_{n-1}.
16:01
So I don't do anything in this map. Okay, so you can view the upper half space as sitting inside a Clifford algebra. So in the familiar case of n equals two,
16:22
it sits inside C_1, and C_1 is the complex numbers. That's what you know: the upper half plane is part of the complex numbers. And as Akshay mentioned yesterday, the upper half space for n equals three can be viewed as sitting inside the quaternions.
16:47
Now, the important question is, how does the group SO(n,1) act on this? I mean, this is obvious in the hyperboloid model, but it's not so obvious in the upper half space model,
17:01
and it can be described very well in this Clifford algebra model. There exists a set which I call SV. V stands for Vahlen, who introduced this theory more than 100 years ago: SV(C_{n-2}),
17:24
which is a subset of two by two matrices with entries in the Clifford algebra C_{n-2}, acting on V_{n-1}
17:44
by fractional linear transformations. If I take a matrix g, a two by two matrix (not all of these matrices are allowed; I have to take a certain subset), but these are two by two matrices with entries in C_{n-2}.
18:04
Then this is (az + b)(cz + d)^{-1}. Now, this algebra is highly non-commutative, so the order does make a difference; I cannot write (az + b)/(cz + d). I'm doing all of this in the Clifford algebra,
18:22
which is highly non-commutative. This is for g = (a b; c d) in SV(C_{n-2}) and z in V_{n-1}, and this makes sense: a, b, c and d are elements in C_{n-2},
18:45
but of course I can embed C_{n-2} into C_{n-1}, and then I know what the product with an element in C_{n-1} is. And it turns out, which is not easy to see, you have to show it, that cz + d is an invertible element in this algebra.
19:01
Not everything in this algebra is invertible, but it turns out that this element will always be invertible. Do you know the definition of SV? Yes, so the definition is a bit more complicated,
19:23
but this is certainly one of the requirements. And the definition of SV is rather complicated, but I can give you the definition in simple cases. So here are some basic examples. SV(C_0) is simply SL(2,R).
19:41
So there is no extra assumption. It's just, well, the determinant is one, but other than that, it's just everything. The same holds for SV(C_1): this is just SL(2,C).
20:00
But already SV(C_2) is a bit more complicated. This is the set of all two by two matrices g = (a b; c d) with entries in the Hamilton quaternions such that the following holds.
20:23
a d* - b c* = 1, and a b* and c d* are in V_2. Okay, I'll explain this in a second. And what is star?
20:43
Star is the involution that maps x + iy + jz + kw to x + iy + jz - kw.
21:08
So in particular, star is the identity on this vector space V_2, because V_2 is defined by the vanishing of the last coordinate, and star changes the sign of the last coordinate.
21:25
Okay, let's make a reality check. What's the dimension of this? And this turns out to be a group. What is the dimension of this group? Well, for each entry, over the reals, you have four degrees of freedom, because these are Hamilton quaternions. So this is dimension 16.
21:43
The condition a b* in V_2 means that the last coordinate vanishes, so this subtracts one dimension; c d* in V_2 subtracts another dimension; and here a d* - b c*, a quaternion, has to be one, so this subtracts four more dimensions. So in total, you have 16 minus one minus one minus four.
22:01
It's dimension 10. You have 10 degrees of freedom over the reals, and the dimension of SO(4,1) is 10. Yeah, so this is good. Okay, so reality check: the dimension of SV(C_{n-2}),
22:26
sorry, of SV(C_2), is 16 - 4 - 1 - 1 = 10, and that's the dimension of SO(4,1).
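The dimension bookkeeping can be written out as a one-line reality check. This editorial sketch uses the general formula dim SO(p,q) = m(m-1)/2 with m = p + q, which is standard but not stated in the lecture:

```python
# Editorial sketch: dimension bookkeeping for SV(C_2) versus SO(4,1).
dim_entries = 4 * 4                # four quaternionic entries, 4 real dims each
dim_sv = dim_entries - 4 - 1 - 1   # a d* - b c* = 1 cuts 4; a b*, c d* in V_2 cut 1 each
m = 4 + 1                          # SO(4,1) acts on R^5
dim_so = m * (m - 1) // 2          # dim SO(p,q) = m(m-1)/2 with m = p + q
assert dim_sv == dim_so == 10
```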
22:40
Okay, so the most interesting case, I guess, is the case n equals three. Maybe that's the case that Akshay mentioned yesterday.
23:00
Well, perhaps the most interesting case is n equals two, but I'm supposed to talk not on the case n equals two. So then H^3 is SL(2,C) modulo SU(2), and as we have seen in the upper half space model,
23:23
typically the coordinates are (x, y, r) in R^3 with r positive, and so you can view this as a Hamilton quaternion with vanishing last coordinate. So this is sitting inside the Hamilton quaternions
23:42
with vanishing last coordinate. Okay, any questions?
24:01
So if you want further reading, especially on hyperbolic three space, but also on hyperbolic n-space, this Vahlen group, Clifford algebras, and so on: basically everything by Elstrodt, Grunewald, and Mennicke.
24:30
They have several joint papers, and you find everything in great detail in their works. What? There is a famous book.
24:40
The book treats hyperbolic three space in complete detail. You find everything you want to know, hopefully, in this book. So that's certainly the most important reference, but hyperbolic n-space is treated in research papers.
25:02
Okay, so the Archimedean theory is very similar. The general n case is very similar to the case n equals two, simply because it's a rank one group,
25:22
and in particular there is one Laplacian eigenvalue, one spectral parameter, and one has, for instance, similar bounds towards Ramanujan.
25:41
There is a Kuznetsov formula, which has a very similar shape to the original Kuznetsov formula, and you can find this in many works. You can find this in work of Reznikov,
26:02
Miatello and Wallach, and there is also a long paper by Cogdell, Li, Piatetski-Shapiro, and Sarnak.
26:23
So the Archimedean theory is fairly similar to the classical case. Hecke theory is quite different. So there are Hecke operators: if you take an arithmetic subgroup, you can define Hecke operators in the usual way,
26:42
but the Hecke theory is a bit different, because over Q_p, SO(n,1) may have large rank; not necessarily, but depending on p it can have large rank, and then the theory is a little different.
27:01
I mean, as Akshay said yesterday, Hecke operators, if they exist, change the picture completely, and so Hecke theory is a very important part, and the Hecke theory is more complicated,
27:23
because SO(n,1) over Q_p may have rank (n+1)/2 with the Gauss bracket, i.e. floor((n+1)/2). So already in the case n equals three,
27:43
well, in the case n equals two, you see three over two, 1.5, but if you take the Gauss bracket, it's still one. But for n equals three, the rank can already be two, and so if you have a given rank, then at least morally, and in some sense very precisely,
28:02
the Hecke algebra is generated by as many elements as the rank says, and so you see this: if n equals three, then the rank can be as large as two, and you can see this in the classical picture
28:22
if you view this as automorphic forms over an order in an imaginary quadratic field. There are ramified primes (well, ramified primes are not interesting), but there are split primes and inert primes, and if you have split primes,
28:41
then you get two Hecke operators, one for each of the two prime factors. So, example: n equals three. If the rational prime p
29:05
decomposes as a product of conjugate prime ideals p p-bar, then one gets two Hecke operators, T_p and T_{p-bar}.
29:26
Of course, this happens for half of the primes, and if p is inert, then of course you get only one.
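For a concrete instance (an editorial illustration; the field K = Q(i) is my choice, not one named in the lecture): in Z[i] an odd prime p splits exactly when p = 1 mod 4, equivalently when p is a sum of two squares, so such primes contribute two Hecke operators, and the inert primes p = 3 mod 4 contribute one.

```python
# Editorial sketch: split versus inert odd primes in Z[i], i.e. K = Q(i).
def splits(p):
    # an odd prime splits in Z[i] iff p = 1 (mod 4)
    return p % 4 == 1

def is_sum_of_two_squares(p):
    return any(a * a + b * b == p for a in range(p + 1) for b in range(p + 1))

for p in [3, 5, 7, 11, 13, 17, 19, 29]:
    # splitting is equivalent to being a sum of two squares (Fermat)
    assert splits(p) == is_sum_of_two_squares(p)
```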
29:41
Okay, so why is this interesting? I mean, you can define whatever you want, but does this have any arithmetic significance? Well, certainly the case n equals three has a lot of arithmetic significance, because it's automorphic forms over an imaginary quadratic field, but what about higher n, so higher hyperbolic spaces? What is the arithmetic significance?
30:02
So here is an arithmetic example, and that's associated with theta series and quadratic forms. If you have a positive definite quadratic form,
30:21
you can easily write down a generating series for the representation numbers, and then you get a theta series; and because the quadratic form is positive definite, the representation numbers are finite and only non-negative numbers are represented, so you have no problem with convergence, and you get a modular form on the upper half plane.
30:40
If you have an indefinite quadratic form, then this doesn't really work: the representation numbers are infinite, and potentially you can represent negative numbers, so it's not really clear how to define a theta series for an indefinite quadratic form.
31:03
And so Siegel developed the theory. So let Q be an integral quadratic form with signature (n,1),
31:25
and somehow we want to define a theta series attached to this quadratic form, but the naive thing of just writing down the generating series doesn't work. So Siegel introduces the following. He introduces the so-called majorant space:
31:41
a majorant of Q is a positive definite quadratic form. Well, it's a positive definite, say, symmetric real matrix,
32:04
in fact an (n+1) by (n+1) matrix R, satisfying, okay, I continue over here:
32:26
R Q^{-1} R = Q. And one can show that if you have one of them, you can easily write down all of them. So if you have one such matrix R, then all of them are of the following form.
32:43
These are of the form G^T R G for G in SO(Q),
33:02
the special orthogonal group attached to the quadratic form Q. So all of the matrices satisfying this are given by one of them, and then you conjugate, well, it's not really a conjugation,
33:20
but you transform with a matrix in SO(Q). So that means G^T Q G = Q. Okay, and now we are ready to define the corresponding theta series.
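A minimal numerical sketch of the majorant relation (an editorial example; the choice Q = diag(1, -1) of signature (1,1) is mine, not the lecture's): R_0 = I is a majorant, and transporting it by an element of SO(Q) gives another one.

```python
import math

# Editorial sketch: majorants of Q = diag(1, -1). R0 = I satisfies
# R Q^{-1} R = Q, and G^T R0 G is again a majorant for G in SO(Q)
# (here G is a hyperbolic rotation).
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Q = [[1, 0], [0, -1]]
Qinv = [[1, 0], [0, -1]]          # Q is its own inverse
t = 0.3
c, s = math.cosh(t), math.sinh(t)
G = [[c, s], [s, c]]              # satisfies G^T Q G = Q, so G is in SO(Q)
GT = [[G[j][i] for j in range(2)] for i in range(2)]
assert all(abs(matmul(GT, matmul(Q, G))[i][j] - Q[i][j]) < 1e-12
           for i in range(2) for j in range(2))

R = matmul(GT, G)                 # G^T R0 G with R0 = I
RQR = matmul(R, matmul(Qinv, R))
assert all(abs(RQR[i][j] - Q[i][j]) < 1e-12 for i in range(2) for j in range(2))
# R is positive definite: positive corner entry and positive determinant
assert R[0][0] > 0 and R[0][0] * R[1][1] - R[0][1] * R[1][0] > 0
```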
33:41
And we define the theta series attached to the matrix Q as follows. It's a function of two arguments: it has an argument in the upper half plane, and it has an argument in SO(Q). And it's the sum over all vectors,
34:03
integral vectors h of dimension n plus one, of an exponential. So what you would like to do is something like h^T Q h; this is not correct, what I'm writing down, but what you would like to do is something like e(h^T Q h times z).
34:22
This is what you would do; then there is no G. This is what you would do if Q were positive definite. But since Q is not positive definite, this doesn't make sense, it doesn't converge. If you just take the x coordinate, then it's still okay. And for the y coordinate,
34:40
you do something different. So for the x coordinate you take x times h^T Q h, and for the y coordinate, plus i y times h^T G^T R G h; here you take G^T R G for a fixed majorant, so you fix your favorite majorant R.
35:03
Where z = x + iy is in the usual upper half plane, and G is in SO(Q).
35:28
Okay, this is the so-called Siegel Theta series.
35:41
And it turns out it's a modular form in both variables. It's a modular form in z, a usual modular form on the upper half plane with respect to some congruence subgroup, which depends on Q. So Q has a certain level, and then you have to mod out by a certain congruence subgroup. And it's also a modular form in G.
36:02
So in G, in the second argument, it's a modular form for something that's isomorphic to SO(n,1), because Q has signature (n,1). And in the first variable, it's a usual automorphic form on the upper half plane. But if you keep the first variable fixed, then you get a nice, interesting automorphic form on SO(n,1).
36:28
So it is modular in Z and G. So there is some arithmetic significance attached to these forms on hyperbolic spaces.
36:46
Okay, any questions? Is this an analogue of the Siegel-Weil theorem? Well, in what sense? I mean, certainly it's,
37:03
I guess it's not cuspidal. So in this sense, it has to do with Eisenstein series, and you can probably, I mean, if you keep G fixed and view it as a function in z, then you can decompose it into Eisenstein series.
37:23
Yes. I'm confused in what sense it's a modular form in G. Don't you need some discrete subgroup to say what you mean by a modular form? Oh yeah, yeah, yeah, same here. I mean, you have to mod out by a suitable discrete subgroup,
37:41
which depends on Q. So I mean, it depends on the arithmetic of Q. There is some level; you mod out by some congruence subgroup, both in z, so in both variables. There was a question up there? How are the Hecke operators indexed?
38:00
I mean, in SL(2), Hecke operators are indexed by p^n for all n, or in SL(2,C) you can index Hecke operators by all prime ideals. How are the Hecke operators indexed in general? Good question. I don't know.
38:20
I think, at least in this case, I worked it out, and it turns out they are indexed by quaternions. So yeah, they are indexed by matrices. This expression here, a d* - b c*, is called the quasi-determinant. And if the quasi-determinant equals N,
38:43
a real number N, then this corresponds to the N-th Hecke operator. Just as in the classical case, yeah. So the point is, a priori the quasi-determinant could be any Hamilton quaternion, but then it doesn't commute. So you have to take something from the center.
39:01
And the center is just the reals in this case. And so, at least in this description, and for the case of C_2, the Hecke operators are parameterized by the quasi-determinant being N. And I'm actually not sure how they are parameterized in general.
39:25
And I doubt that you find this anywhere in the literature. I mean, in the SV(C_2) case, is the Hecke algebra a polynomial algebra in T_{p,1} and T_{p,2}? So what, I mean, can we at least
39:41
find a polynomial algebra in some variables? Well, if you forget this picture with the Vahlen algebra and just go back to SO(n,1), then of course, I mean, this is a well-known group, and you can read it in Satake's original paper. And I mean, you can just write down the double cosets
40:03
with the respective representatives. But if you want to decompose the double cosets into single cosets, this is a complete nightmare if you want to do it in general. Yeah, anyway, I think very few explicit results
40:23
are in the literature other than in the cases n equals two and n equals three, which are classical. Okay, any other questions? Okay, so this was just a very, very brief introduction to hyperbolic spaces, just to give you an idea
40:41
of some definitions so that you at least know how to start. And equally briefly, I would like to discuss the symplectic group, and also give a few basic definitions. And then after that, we move on to PGL(n).
41:14
Okay, so symplectic groups.
41:23
Okay, so let me first define the symplectic group. And there's great confusion: some people call this Sp(2n), some people call this Sp(n). I call it Sp(2n), but if you don't like it,
41:41
feel free to call it Sp(n). So these are all matrices M in SL(2n,R) such that M^T J M = J, where J is the mother of all symplectic matrices:
42:04
J = (0, I; -I, 0) in block form, where both identity blocks have dimension n. And you can write this out explicitly as all matrices (A B; C D) in block notation, where A, B, C, and D are again matrices of dimension n,
42:24
such that A D^T - B C^T = I, A B^T = B A^T, and C D^T = D C^T.
42:42
In other words, A B^T and C D^T are symmetric. So this is the usual block notation that you find in most of the literature. Capital letters always denote matrices of dimension n.
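These block relations can be sanity-checked on a simple element. This is an editorial sketch (my choice of example, n = 2, the unipotent matrix (I X; 0 I) with X symmetric):

```python
# Editorial sketch: check M^T J M = J and the block relations for
# M = (I X; 0 I) with X symmetric, inside Sp(4, R).
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(r) for r in zip(*A)]

X = [[1, 2], [2, 5]]                     # symmetric 2x2 block
I2 = [[1, 0], [0, 1]]
M = [[1, 0, 1, 2],
     [0, 1, 2, 5],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]                       # block form (I X; 0 I)
J = [[0, 0, 1, 0],
     [0, 0, 0, 1],
     [-1, 0, 0, 0],
     [0, -1, 0, 0]]                      # block form (0 I; -I 0)
assert matmul(transpose(M), matmul(J, M)) == J

A, B, C, D = I2, X, [[0, 0], [0, 0]], I2   # the four blocks of M
ADt = matmul(A, transpose(D))
BCt = matmul(B, transpose(C))
assert [[ADt[i][j] - BCt[i][j] for j in range(2)] for i in range(2)] == I2
ABt = matmul(A, transpose(B))
assert ABt == transpose(ABt)             # A B^T symmetric
CDt = matmul(C, transpose(D))
assert CDt == transpose(CDt)             # C D^T symmetric
```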
43:00
Okay, and there is an upper half space that I also call H, but it's not the H that we had for the hyperbolic spaces. H_n is the set of all matrices Z = X + iY, n by n matrices,
43:21
but now over C, such that Z is symmetric and Y is positive definite. And you can embed this into the symplectic group
43:44
as products of matrices (I, X; 0, I), which is the same thing that you know from the upper half plane, and (V, 0; 0, V^{-1}), where V is the square root of Y. So Y is positive definite, so you can take a square root,
44:00
and this is the usual way to embed, analogous to embedding complex numbers into SL(2,R), where V is the unique symmetric matrix such that V^T V = Y.
44:26
And okay, so if I call this group G, then the upper half space is just the quotient G modulo a maximal compact subgroup. Okay, and G acts on this upper half space in the usual way.
44:47
Group action: a matrix (A B; C D) acts on a point Z, which is in fact a matrix, as (AZ + B)(CZ + D)^{-1}.
45:05
And again, you have to be careful with the order because matrices are not commutative anymore. Okay, and as usual, we take a discrete subgroup.
45:25
For instance, we can take SP2N over the integers, but we don't have to. And this comes with an inner product. There is an inner product.
45:43
The inner product is just what you would guess: you integrate over the upper half space modulo gamma, and you take f_1(Z) times the complex conjugate of f_2(Z) against an invariant measure. And the invariant measure is dX dY
46:03
over det(Y) to the power n plus one. So the classical case is the case n equals one, and then you just recover the usual thing. So these are the analogue of Maass forms.
46:21
If you have holomorphic Siegel modular forms of a certain weight, then you have to include determinant Y to some suitable power K, as usual. Okay, so this looks all very similar to what you probably know from the classical case,
46:41
except that all numbers are now matrices. But formally, it's very similar in many respects. So why is this interesting? Again, I mean, you can generalize as much as you wish, but why is this interesting?
47:01
Here's the motivation, and the motivation comes again from quadratic forms. Motivation: why do we want to study Siegel modular forms? So assume you're given a matrix A, an n by n integral matrix,
47:25
symmetric positive definite, and even. By even, I mean that the diagonal elements are even. An even integral matrix is an integral matrix
47:41
with even diagonal elements. And pick a positive integer m, less than or equal to n. For a matrix T of dimension m
48:09
with half integral entries and integral diagonal,
48:24
symmetric and positive definite, study the representations of T by A. So what does this mean? Well, if T happens to be a number, so if M equals one, then this is what you usually do.
48:43
You want to know how many ways are there to write a given number as a sum of four squares. But you can just as well ask how many ways are there to write a given quadratic form, say a binary quadratic form, as a sum of four squares.
49:00
So this is not representation of numbers by forms, but representation of forms by forms, but lower dimensional forms. And in the special case M equals one, this is just numbers, but you can a priori pick any M between one and N. So we can define the representation number.
49:20
R_A(T) is the number of integral matrices G of dimension m times n (so in the classical case m equals one, this is a usual vector) such that (1/2) G A G^T = T.
49:44
Okay, and because A is positive definite, this is a finite number. And you can encode these representation numbers
50:02
into a theta series. Theta_A(Z) is the sum over all T of R_A(T) e(tr(T Z)).
50:25
Yeah, you need the trace to go back to numbers. Here Z lives in H_m.
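As a concrete check (my own editorial illustration): taking A = 2 I_4, which is even, symmetric and positive definite, and m = 1, the representation number R_A(t) counts representations of t as a sum of four squares, and small values match the classical counts.

```python
from itertools import product

# Editorial sketch: representation numbers for A = 2*I_4 and m = 1.
# Then (1/2) g A g^T = g g^T, a sum of four squares, so R_A(t) = r_4(t).
def R_A(t):
    b = int(t ** 0.5) + 1
    return sum(1 for g in product(range(-b, b + 1), repeat=4)
               if sum(x * x for x in g) == t)

assert R_A(1) == 8     # (+-1, 0, 0, 0) and permutations
assert R_A(2) == 24    # two entries +-1: C(4,2) * 4 sign choices
assert R_A(4) == 24    # (+-2,0,0,0): 8, plus (+-1,+-1,+-1,+-1): 16
```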
51:04
Okay, and it turns out that this theta function has nice properties. It transforms with det(CZ + D) to the power n over two,
51:24
times theta_A(Z), for gamma a matrix with lower block entries C, D in some congruence subgroup Gamma of Sp(2m, Z). Yes, otherwise it makes no sense, yes.
51:55
And so it turns out that theta_A is a Siegel modular form of weight n
52:16
over two, and degree (or genus) m.
52:30
Okay, so there is a natural motivation why we want to study such objects, because they are connected to representations of quadratic forms by quadratic forms.
52:44
We have seen that many of the formulas look exactly the same, but there are other formulas that don't look the same. So many things become more complicated.
53:02
For instance, there is typically a formula for the imaginary part of gamma Z, which you can relate to the imaginary part of Z, but here the formula looks much more complicated. It's (CZ + D)^{-T},
53:21
times the imaginary part of Z, times the inverse of the conjugate of (CZ + D). Now you recognize, of course, if everything is numbers, then you get the usual formula: the imaginary part over the absolute value of cz + d squared. But here you can't really do this, because it's non-commutative.
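In the scalar case n = 1 the complicated formula collapses to the familiar one, which is easy to verify (an editorial sketch with an arbitrary SL(2,R) matrix of my choosing):

```python
# Editorial sketch (scalar case n = 1): the formula
#   Im(gZ) = (CZ+D)^{-T} Im(Z) (conj(CZ+D))^{-1}
# collapses to the classical Im(gz) = Im(z) / |cz + d|^2.
a, b, c, d = 2, 1, 1, 1                  # ad - bc = 1, so g is in SL(2, R)
z = 0.5 + 2.0j
w = (a * z + b) / (c * z + d)
siegel = ((c * z + d) ** -1 * z.imag * (c * z + d).conjugate() ** -1).real
classical = z.imag / abs(c * z + d) ** 2
assert abs(w.imag - classical) < 1e-12
assert abs(siegel - classical) < 1e-12
```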
53:41
And you can imagine that explicit formulas become very ugly in this way. Okay, there is a usual fundamental domain, a fundamental domain for Sp(2n, Z) acting on H_n,
54:03
which looks very similar, in some sense, to the well-known case n = 1. One needs Minkowski reduction theory. Okay, so this was a motivating example.
54:22
And now we continue with the general theory, and we call the genus n. It just so happened that the genus here was m, so this is perhaps pedagogically not optimal. Okay, Minkowski reduction theory
54:43
tells you that there is a fundamental domain such that the coordinates x_ij of the matrix X are bounded by one half, and Y is Minkowski reduced, which means that the off-diagonal
55:00
is bounded by one half times the diagonal, and the diagonal is non-negative (in fact, strictly positive), and we have sqrt(3)/2 ≤ y_1 ≤ y_2 ≤ ... ≤ y_n.
55:22
And the determinant of Y is roughly the product y_1 times ... times y_n. In other words, in terms of the determinant, the off-diagonal is rather negligible.
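In the 2 by 2 case this determinant claim can be made completely explicit: the reduction conditions |y_12| <= y_1 / 2 and y_1 <= y_2 force (3/4) y_1 y_2 <= det(Y) <= y_1 y_2. A tiny numerical illustration, with arbitrary sample values satisfying the conditions:

```python
# det(Y) = y1*y2 - y12^2 with |y12| <= y1/2 and y1 <= y2 gives
#   y1*y2 - (y1/2)^2 >= y1*y2 - (1/4)*y1*y2 = (3/4)*y1*y2,
# so the off-diagonal entry is negligible for the determinant.

y1, y2, y12 = 1.0, 3.0, 0.4       # sample Minkowski-reduced data
det = y1 * y2 - y12 ** 2
print(0.75 * y1 * y2 <= det <= y1 * y2)   # True
```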
55:43
But that just describes conditions satisfied in the fundamental domain. That's right; not everything satisfying these conditions is in the fundamental domain. This is a Siegel set containing the fundamental domain. The exact fundamental domain has not been worked out, except for the case n = 2. So Gottschling, in his thesis 50 or 60 years ago,
56:05
one of the last of Siegel's students, wrote down, I don't know, 13 exact inequalities or whatever. 19? Okay, maybe it's 19, whatever. It's still a manageable number, but pretty large.
56:20
So there is a finite number of conditions for Sp(4), but for higher genus no one has ever worked out exact conditions for the fundamental domain. Is it a finite set of conditions? Probably it is, yeah. But I think, just as in the classical case, nobody needs that.
56:45
I mean, if you have a nice Siegel set and you know that everything is inside the Siegel set, then everything is okay. Okay, so how can I get back this blackboard?
57:00
Oh, oh, okay. Okay, I'm supposed to stop anyway. But let me just quickly say something about the Fourier expansion, because that's something that's very important in the SL2R case.
57:22
And it turns out that the Fourier expansion for Siegel modular forms is much less useful. There exists a Fourier expansion, of course, but it's much less useful. A Siegel modular form
57:43
has a Fourier expansion of the following type: one sums over symmetric matrices T that are positive definite (or perhaps positive semi-definite) and half-integral, some coefficient a(T) times e(trace(T Z)).
58:04
And it turns out that this coefficient a(T) satisfies certain symmetries. For instance (this is for a modular form of weight k), a(T) equals the determinant of U to the power k times
58:23
a(U transpose T U) for all U in GL_n(Z). So this is invariance by units, if you want. But the expansion is less useful
58:41
for n greater than or equal to two. In particular, a(T), the Fourier coefficient, has no direct connection to Hecke eigenvalues.
59:00
No direct connection with Hecke eigenvalues. I mean, that's something that we are very much used to, that Fourier coefficients are just Hecke eigenvalues. This is not the case as soon as the degree is two or more: then there is no direct connection between Fourier coefficients and Hecke eigenvalues.
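The units invariance, at least, can be checked concretely for theta-series coefficients, where a(T) is the representation number r_A(T) and H maps to U^T H gives a bijection between representations of T and of U^T T U. A brute-force sketch; the matrices A, T, U below are illustrative choices, with det U = 1 so the det(U)^k factor is trivial.

```python
import itertools

def r_A(A, T, box=3):
    """r_A(T) = #{ G in Z^{m x n} : (1/2) G A G^T = T }, brute force."""
    m, n = len(T), len(A)
    count = 0
    for flat in itertools.product(range(-box, box + 1), repeat=m * n):
        G = [flat[i * n:(i + 1) * n] for i in range(m)]
        # check G A G^T = 2 T entrywise
        ok = all(
            sum(G[i][p] * A[p][q] * G[j][q]
                for p in range(n) for q in range(n)) == 2 * T[i][j]
            for i in range(m) for j in range(m))
        if ok:
            count += 1
    return count

A = [[2, 0], [0, 2]]        # Gram matrix of x^2 + y^2
T = [[1, 0], [0, 1]]
U = [[1, 1], [0, 1]]        # U in GL_2(Z), det U = 1
UtTU = [[1, 1], [1, 2]]     # U^T T U, computed by hand
print(r_A(A, T), r_A(A, UtTU))   # both counts agree: 8 8
```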
59:22
So the Fourier expansion is really much less useful. One has the Hecke bound: a(T) is bounded by the determinant of T to the power k over two, where k is the weight.
59:50
And the conjecture is that this is bounded by the
01:00:00
determinant of T to the power k over 2, minus (n plus 1) over 4, plus epsilon. So if n equals 1, you can subtract a half. This should hold if f is not a lift; I will explain tomorrow what I mean by a lift.
01:00:21
But this is not known, except in the case n equals 1 for holomorphic forms. What is known in general is the Hecke bound minus a delta,
01:00:44
but delta is tiny: delta is O of 1 over n. And we are actually expecting a saving linear in n. So yeah, there are lots of things to do if you want to work on this.
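To keep the exponents straight, here is a tiny bookkeeping sketch comparing the trivial (Hecke) exponent k/2 with the conjectured k/2 - (n+1)/4; the weight k = 10 is an arbitrary example value.

```python
# Exponent e in the bound a(T) << det(T)^(e + epsilon).

def hecke_exponent(k):
    """Trivial (Hecke) bound: det(T)^(k/2)."""
    return k / 2

def conjectured_exponent(k, n):
    """Conjectured bound for non-lifts: det(T)^(k/2 - (n+1)/4)."""
    return k / 2 - (n + 1) / 4

k = 10
for n in (1, 2, 3):
    print(n, hecke_exponent(k), conjectured_exponent(k, n))
# For n = 1 the saving over the trivial bound is exactly 1/2, the
# classical Ramanujan-quality exponent; the conjectured saving grows
# linearly in n, while known unconditional savings are only O(1/n).
```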
01:01:00
There are lots of open questions. Okay, I guess I have to stop now. So that's all for today. Okay, are there any questions or comments? Yes? For hyperbolic space, do we know exactly what the connection is between eigenvalues and
01:01:20
the Fourier coefficients? Do we know what? The connection between eigenvalues and the Fourier coefficients for each n? I don't know. Certainly, in the case n equals 3, yeah, I don't know.
01:01:46
There hasn't been much analytic number theory on these spaces, and I think it is now time to introduce the methods of analytic number theory for these types of automorphic forms. Questions?
01:02:07
Yes? Would there be any consequence of analytic interest if this conjecture were true? Well, I mean, if n equals 2, then this is the Ramanujan conjecture, right?
01:02:22
And so certainly there is some interest in the Ramanujan conjecture. Yeah, I mean, whenever you work with the Fourier expansion and you want to have bounds, then it's certainly good to know what the best bounds are for the coefficients. And this is true on average. So in some mean square sense, this is true.
01:02:41
But individually, it's not known. And I think there is some fundamental interest in knowing what the best possible bounds are. And one of the problems is that this is, in fact, wrong for certain forms that come from lower dimensional symplectic groups.
01:03:00
But I'll discuss this tomorrow. So the way to study representations is from... Yes, yes, right, they go into the error term. Right, then you have Siegel's mass formula, and, I mean,
01:03:23
this is for cuspidal forms, where the theta series are modular forms, right. So these coefficients go into the error term, and if you can bound the error term, then it's certainly a good thing.
01:04:04
So I may have a comment: there's a very nice conjecture of Böcherer for Sp(4, Z), which relates these coefficients to twisted central L-values of spinor L-functions. Yes. That would suggest that they are extremely arithmetic, in some sense, in an even more
01:04:20
difficult way than Hecke eigenvalues in that case. Yes, so they certainly have some intrinsic meaning. But they are not so much related to Hecke eigenvalues, but perhaps to other arithmetic objects. Okay. Is it possible to say whether we do have the spinor L-function? So I will briefly mention L-functions tomorrow.
01:04:43
But my plan is not to go into most details, but just to give you an overview of the objects that we are dealing with, so that you at least get an idea of how to start if you're interested in working on these things.