L-functions (1/4)
Formal Metadata
Title: L-functions
Number of Parts: 36
License: CC Attribution 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/17012 (DOI)
Transcript: English(auto-generated)
00:00
So, the title was vague, but I'm not going to talk about everything to do with L-functions. I'm going to talk about some topics related to the value distribution and moments of
00:33
L-functions.
00:43
So, maybe to start with, you can imagine: we have some notion of a family of L-functions, and this might mean things like looking at the values of, say, the Riemann zeta function
01:04
zeta of sigma plus i t, where sigma is some fixed number and t varies and is large, say, let's say t lies between capital T and twice capital T. And then you can ask for the
01:25
distribution of values of this object. Or to give some other examples, you could take a character chi mod q, let's say primitive,
01:44
and ask for the distribution of values of L sigma chi, again for a fixed value of sigma, as chi varies mod q and then q goes to infinity. Or you could look at a special class of characters, which
02:06
are namely quadratic characters, so these are parameterized by fundamental discriminants d, and you could look at objects like the quadratic Dirichlet L-function, again
02:23
at some point sigma, where d varies over all fundamental discriminants up to some point x. So, these are some kind of sample problems that you could consider, and you could look
02:41
at other variations of this. You could take some automorphic form, maybe, and twist that by characters. So let me give maybe a few more examples. You could look at L sigma f, where f varies, let's say, over all holomorphic Hecke eigenforms
03:13
of weight k. Well, you could vary the level if you like, but let's say we vary the weight and the weight k is supposed to get large.
03:24
And in, well, in everything that I'll say, we'll assume that these L-functions are normalized so that there's a functional equation which connects s to 1 minus s. So half will be the central point always. Or something related to this would be to fix your favorite modular form and then to
03:44
twist it by quadratic characters. OK, so these are some sample objects that we would like to study.
04:04
And in almost all of these cases, the only thing that we completely understand is what happens when sigma is bigger than 1, when we are in the range of absolute convergence.
04:22
So there, you can say quite a bit, like let's say we take L sigma chi d, which is just given by this Euler product, and then it's easy enough to say something like, well,
04:43
this is at most as large as what happens when all the chi d of p are plus 1, and at least as large as what happens when all the chi d of p are minus 1. So it varies between two constants, and it can get arbitrarily close to this constant
05:04
or arbitrarily close to that constant by choosing the first few primes to point in a certain given direction. So this is, you could say, easy enough to understand.
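In symbols — my reconstruction of what is presumably on the board at this point — for sigma > 1:

$$\prod_p \Bigl(1+\frac{1}{p^{\sigma}}\Bigr)^{-1} \;\le\; L(\sigma,\chi_d) \;=\; \prod_p \Bigl(1-\frac{\chi_d(p)}{p^{\sigma}}\Bigr)^{-1} \;\le\; \prod_p \Bigl(1-\frac{1}{p^{\sigma}}\Bigr)^{-1} \;=\; \zeta(\sigma),$$

with the two extremes approached by discriminants for which chi_d(p) equals +1 (respectively -1) for all the small primes.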
05:26
But even this problem of looking at values of L-functions to the right of 1 can be non-trivial if you're interested in automorphic forms where you don't know the Ramanujan conjectures. So even here, some problems are not easy. There are some subtleties which we don't know in general for the coefficients of
05:59
these L-functions.
06:02
So maybe I'll just say here that we can say things here, but I'm not really going to focus on this. But let me just say there's work of Molteni and of Xiannan Li, which deals with problems of this type, of bounding L-functions at the edge of the critical strip or just a little
06:20
bit to the right, in situations where you don't have the Ramanujan conjectures. Think, for example, of a Maass form. Okay. So the first problem, which is non-trivial, would be to ask for the value distribution at the edge of the critical strip, namely when sigma is 1.
06:59
So maybe let me just focus on one problem here, which is of special interest, which
07:05
is the case of twists by quadratic characters. So that's especially interesting if, say, the discriminant d is negative and the size of d is less than x, let's say.
07:22
Then we would be interested in L1 chi d, since we know that, multiplied by a suitable factor, it is equal to the class number.
07:42
And let's see, there has to be a constant here, like 2 pi divided by w, where w is the number of roots of unity in the field. Well, w is usually 2, so up to some factor of pi, this gives you information about the class number of the field Q(square root of d).
08:02
So asking for the distribution of values of L1 chi d is the same problem as asking for the distribution of class numbers in this example. So that's a problem that we don't understand very well.
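For reference, the class number formula being invoked here — a standard fact, written out since the board is not visible in the transcript — is

$$L(1,\chi_d) \;=\; \frac{2\pi\, h(d)}{w\sqrt{|d|}} \qquad (d < 0),$$

where h(d) is the class number of Q(sqrt(d)) and w is the number of roots of unity in that field (w = 2 except for d = -3 and d = -4).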
08:28
So unconditionally, the only kind of bounds that we know are that L1 chi d is bounded by a constant times log d. This is fully explicit; you can put in a fairly small constant here as well.
08:44
This is easy. And then we have a lower bound, that it is at least of size d to the minus epsilon, which is Siegel's theorem, which is ineffective and remains an important open problem.
09:10
But this is not the truth of what happens for L1 chi d. If you know something like the generalized Riemann hypothesis, then we have much better bounds: we know that it's bounded above by some constant times log log d, and
09:35
bounded below by some other constant divided by log log d. So I write c everywhere, but the c might be a different constant in each occurrence.
09:43
So how should one think about this? And also, you know, how do these large and small values come about, and what is the typical size of L1 chi d?
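Summarizing the bounds just stated in symbols (the conditional ones are Littlewood's):

$$|d|^{-\varepsilon} \ll_{\varepsilon} L(1,\chi_d) \ll \log |d| \quad \text{(unconditionally)}, \qquad \frac{c_1}{\log\log |d|} \;\le\; L(1,\chi_d) \;\le\; c_2 \log\log |d| \quad \text{(on GRH)}.$$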
10:03
That's the first question. Okay. So to think of what the size of L1 chi d should be, we should go back to the trivial example that I did. When sigma is bigger than 1, I can just write it as an Euler product, which is convergent,
10:20
and then get upper and lower bounds that way. So you could ask, is there something that's going to say, how far do I have to go before I can approximate L1 chi d by its Euler product?
10:58
So z will be something depending upon x.
11:00
The d you should think of as a discriminant of size about x, and z, of course, will go to infinity as x goes to infinity. You'll certainly need something of this type, and you're interested in making z as small as possible with the result still being true.
11:25
And unconditionally, we would have to take z very, very large, like maybe larger than x at any rate, maybe even larger than that, maybe like e to the log x squared or something like that. But if the generalized Riemann hypothesis is true, then one can take z to be fairly small
11:43
like log x squared. And once you know that, then these bounds follow, because this product at most
12:00
can be as large as the product of (1 minus 1 over p) inverse over primes p up to log x squared, which gives you this upper bound, and it's at least as large as the product of (1 plus 1 over p) inverse, which gives you this lower bound. And this principle is related to some other problems that we have, like: what is the least quadratic residue or non-residue?
12:34
So find the smallest value of p for which chi d of p is 1, or chi d of p is minus
12:43
1. And these are problems for which, unconditionally, we don't have very good results. We can say that the least such prime is less than some power of d: unconditionally, for the least non-residue we would get p less than d
13:06
to the 1 over 4 root e, plus epsilon, and for the least residue maybe only p less than d to the one-fourth, plus epsilon. But the Riemann hypothesis would tell you that in both these cases, you can get
13:20
characteristic of d, plus epsilon. And – but the Riemann hypothesis would tell you that in both these cases, you can get
13:41
positive and negative values once you're at size log x squared. Now, maybe the first thing to get us started on thinking about these value distributions would be to understand in problems of this type, like this least quadratic residue or non-residue, what should be the truth.
14:02
GRH tells you something like log x squared, but that's not really the truth. The truth in these cases should be that there exists such a prime of size at most
14:26
anything which is a little bit larger than log x. So let me put log x to the 1 plus epsilon, but you'll see that I can make it even a little bit more precise, like log x times log log x. And the reason is simply to think of randomness.
14:49
So each prime p could be plus one or minus one, and it's roughly plus one or minus one with probability one-half. So I can think of this chi d of p as being like a random variable x p, and this random
15:08
variable takes values roughly one and minus one with equal probability. But you might want to be a little bit careful because there is a third possibility that the value could be zero when p divides d.
15:22
So let's allow for that possibility, and then you can work out that the right probabilities here are: the value plus one with probability p over 2(p+1), the value zero with probability 1 over (p+1), and the value minus one with probability p over 2(p+1).
15:42
So this zero probability is approximately one over p; it's one over p plus one because d is conditioned to be square-free. Well, just pretend that it's plus or minus one with equal probability. Then, to get the primes up to z pointing in given directions, the probability that chi d of p takes
16:06
some fixed value, a sign epsilon p, for all p up to z is about one over two to the pi of z, and so you can imagine that if this probability
16:23
is much smaller than one over x, then maybe you don't get any discriminants which have that property at all. So stop when this probability is about one over x.
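To make that cutoff explicit, here is the back-of-the-envelope step, using pi(z) ~ z / log z from the prime number theorem:

$$2^{-\pi(z)} \approx \frac{1}{x} \;\Longleftrightarrow\; \pi(z) \approx \frac{\log x}{\log 2} \;\Longleftrightarrow\; z \approx \frac{\log x \cdot \log\log x}{\log 2},$$

which is where the prediction of log x times log log x — just a little larger than log x — comes from.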
16:52
That's a plausible conjecture for what we should expect for this quadratic residue or non-residue problem, and you might also think that it's a plausible conjecture for what the values of L1 chi d should be: that we should be able to take
17:02
this Euler product up to essentially log x, or log x to the one plus epsilon, and that's a good approximation to the value of L1 chi d. So by the way, this conjecture is stronger than RH, in the sense that it will predict upper bounds for L1 chi d and lower bounds for L1 chi d,
17:26
which are about half the size of this upper bound. So that's completely open, but it gives you a good first model for how to think about
17:50
the value distribution of L1 chi d. We could say, well, this L1 chi d should be modeled by looking at a random Euler product
18:13
where the X p's are taken independently for different primes and satisfy this being
18:20
plus or minus one with equal probability. So I have an infinite product here, but you can check that this infinite product
18:42
will converge almost surely. The reason why it converges is, well, the convergence of this product is related to the convergence of sums of the form X p over p, if I take logarithms of this side. And then the convergence of the sum is okay because, if you think of the X p's, they are plus and minus one equally often, so summing
19:03
them up to x will give you square-root cancellation in the sum of the X p's, and they're weighted down by 1 over p, which gives a convergent sum by partial summation. So this converges almost surely.
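As an illustration, here is a minimal numerical sketch of this random model (my script, not from the lecture; it truncates the almost surely convergent product at a finite point z):

```python
import random

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, flag in enumerate(sieve) if flag]

def random_euler_product(primes, rng):
    """One sample of prod_p (1 - X_p/p)^{-1}, with X_p = 0 with
    probability 1/(p+1), and +1 or -1 with probability p/(2(p+1)) each."""
    value = 1.0
    for p in primes:
        u = rng.random()
        if u < 1 / (p + 1):
            x_p = 0.0
        elif u < 1 / (p + 1) + p / (2 * (p + 1)):
            x_p = 1.0
        else:
            x_p = -1.0
        value /= 1.0 - x_p / p
    return value

rng = random.Random(0)
primes = primes_up_to(10 ** 5)
samples = sorted(random_euler_product(primes, rng) for _ in range(2000))
# Almost all samples land in a narrow range around 1, consistent with the
# claim that typical values of L(1, chi_d) are absolutely bounded.
print("1st pctile:", samples[19], "median:", samples[999], "99th pctile:", samples[1979])
```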
19:26
So this model has been studied in the context of understanding L1 chi d for a long time. Maybe it goes back to work of Erdős and Chowla and also Elliott in the 70s, and they
19:46
proved that L1 chi d has a nice distribution function, a smooth distribution function, so you can compute the probability that L1 chi d is bigger than 10, let's say, or the probability that it's less than 1 over 100, and sometime back, maybe 10 years back now, Granville and
20:06
I studied this carefully, trying to determine with what uniformity we can match the distribution of values of L1 chi d with the distribution of these random Euler products.
20:22
So we could prove something like, well, so the way you do it is by computing the number of fundamental discriminants up to size x for which L1 chi d is bigger than some number which, let me normalize as e to the gamma times tau. The e to the gamma is for this Mertens type constant that comes in this Euler product,
20:45
and you would like to figure out when this is approximately given by the probability of this random Euler product, which let's call L(1, X), being bigger than e to the gamma times tau — and let's divide here by the number of discriminants up to size x — and we proved
21:20
that these two objects more or less match in some very uniform range.
21:26
It's true for tau up to something like log log x plus a term of size log 4 of x — this is unconditional —
21:46
where log 4 means 4 iterated logs, so log log log log. And then if you assume GRH, you can replace this log 4 by log 3.
22:03
Okay, so what does this mean? Well, you can ask: what exactly does the probability of L1 chi d being large look like as a function of tau? It's some crazy function; it's not something nice like a Gaussian in tau or anything like that. It actually behaves very strangely: it decays doubly exponentially, so this behaves
22:26
like e to the minus some constant times e to the tau over tau. Is that still legible, what I write there? Yep, okay.
22:40
And so this is an asymptotic, not an exact equality. And this constant c is some funny constant. It's about — I wrote it down because I can't remember it — e to the minus 0.8187, something like that, and this constant that appears
23:05
in the exponent is some weird thing.
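One can see directly where this distribution runs out of room, by setting the double-exponential tail equal to 1/x:

$$\exp\Bigl(-C\,\frac{e^{\tau}}{\tau}\Bigr) = \frac{1}{x} \;\Longleftrightarrow\; \frac{e^{\tau}}{\tau} = \frac{\log x}{C} \;\Longleftrightarrow\; \tau = \log\log x + \log\log\log x + O(1),$$

which is exactly the scale discussed next.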
23:21
So I write this down just to illustrate that this is not some universal distribution. It's something that you can compute; in this case you get some answer, and if you compute it for some other function you get another. It doesn't have anything to do with just the family of L-functions; it has to do with the actual coefficients and what d you're ranging over
23:42
and so on. Okay, so what does this mean? If you look at this probability, because it's doubly exponential, it means that once tau is of size log log x, this probability becomes something like 1 over x. And because of the division by this tau, tau really has to be on the scale
24:03
of log log x plus log log log x before this proportion becomes less than 1 over x, and past that you expect nothing. So you don't expect this equality to hold once tau is bigger than this plus this plus 20, let's say; then there should be nothing which satisfies
24:20
that inequality. So, essentially there's a very wide range of values of tau in which this inequality can possibly hold, and we have as a theorem, at least on GRH, that it almost holds in the entire range in which it can, okay? So the fact that the two distributions match for such a wide range might lead you to believe
24:46
that the conjecture is true — the conjecture which, again, as I said,
25:15
you know, is in some sense beyond GRH.
25:21
So if this is true, then, you could say, we completely understand the distribution of values of L1 chi d: it behaves like a random Euler product, and it seems to behave like a random Euler product in essentially the whole range that it can. And whenever you see a value of L1 chi d, well, of course, the chances are very good that it just lies between 0.1 and 10.
25:43
You're never going to see a value which is not in one of these ranges. Okay. But this is not the only question that you can ask about
26:16
values even at the edge of the critical strip, so let me ask you one more question
26:23
along these lines, about which we know extremely little. So let me just think about negative discriminants, and let's say, okay, just
26:46
recall that the values of L1 chi d here are basically the same as the class numbers, at least when d is not minus four or minus three.
27:07
Now this of course is an integer, which means that these things are not arbitrary real numbers, they have to lie in certain buckets near integers divided by square roots of integers.
27:21
And if I want to understand this integer, then I'm not really interested in just understanding how the values of L1 chi d are distributed. I'm really interested in understanding them in very small intervals. Like if I take an interval of length one over root x, I would like to understand the distribution of L1 chi d in such a short interval, okay? Now this is of course impossible because the random model is not seeing anything about
27:45
the arithmetic of these class numbers, okay? So let me give you one conjecture here. We would certainly expect that every number is the class number of an imaginary quadratic field.
28:15
I think this is an obvious conjecture. And the reason why I say it's obvious is that you have about x discriminants up
28:21
to size x, you take the class numbers, they are all of size about square root of x. You're taking the square root here. So you have x numbers mapping down to square root of x numbers, so each number should get its fair share of fields for which it's a class number; there should be roughly root x fields per number. So if I give you a number h, there should be about h fields with class number h, okay?
29:03
The discriminants go up to h squared, and then the chance of you landing exactly on h might be like one over h. So but this is forgetting some things like, well, for example, if h is odd, then genus theory tells you that the discriminants for which you can have a field with class number h, they have to be primes.
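In symbols, the heuristic just described reads

$$F(h) \;\approx\; \#\{d : |d| \le h^2\} \cdot \mathbb{P}\bigl[h(d) = h\bigr] \;\approx\; h^2 \cdot \frac{1}{h} \;=\; h,$$

where F(h) denotes the number of imaginary quadratic fields with class number h (this notation is introduced just below).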
29:21
So maybe it's not actually h. Maybe it's like h over log h in that case. And if h is divisible by a large power of two, then the discriminants will be divisible by a large number of primes, and maybe there are more discriminants in certain cases. So certainly I would think that if I let this number be denoted by f of h, the number
29:44
of fields with class number h — so f of one is nine, is a famous theorem — then I would expect that maybe this is always bounded above by something like h times log h, and maybe bounded below by something like h over log h.
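For concreteness, here is a short script (mine, not from the lecture) that computes these class numbers by counting reduced binary quadratic forms, and tallies f(h) over fundamental discriminants up to a cutoff. It reproduces f(1) = 9; for larger h the counts are complete only insofar as the cutoff exceeds the largest relevant discriminant.

```python
from collections import Counter

def is_squarefree(n):
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

def fundamental_discriminants(limit):
    """Yield negative fundamental discriminants d with |d| <= limit."""
    for m in range(3, limit + 1):
        if m % 4 == 3 and is_squarefree(m):
            yield -m                      # -m = 1 (mod 4), squarefree
        elif m % 4 == 0:
            q = m // 4
            if q % 4 in (1, 2) and is_squarefree(q):
                yield -m                  # -m/4 = 2 or 3 (mod 4), squarefree

def class_number(d):
    """h(d) for a negative fundamental discriminant d, by counting reduced
    forms (a, b, c): b^2 - 4ac = d, |b| <= a <= c, and b >= 0 whenever
    |b| = a or a = c."""
    h = 0
    b = d % 2                  # b has the same parity as d; start at 0 or 1
    while b * b <= -d // 3:    # reduction forces b^2 <= |d|/3
        n = (b * b - d) // 4   # = a * c
        a = max(b, 1)
        while a * a <= n:
            if n % a == 0:
                c = n // a
                h += 1 if (b == 0 or a == b or a == c) else 2  # count +/- b
            a += 1
        b += 2
    return h

X = 4000   # cutoff on |d|; complete for small h only (h = 1 needs |d| <= 163)
counts = Counter(class_number(d) for d in fundamental_discriminants(X))
for h in range(1, 7):
    print(f"f({h}) = {counts[h]}  (among fields with |d| <= {X})")
```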
30:03
Maybe one can make more precise conjectures of this, although so far as I can tell, nobody has a good guess on what the asymptotics here should be. But we know very little about this.
30:32
All we know is that, and it follows from this work on the distribution of L1 chi d that I've just been talking about, we can compute the average of this function f of h,
30:45
and it turns out to be a fairly nice constant, but the error term is remarkably weak.
31:05
You know, I can save a square root of log h, and I have no idea how to save anything more than that; something like h squared over log h to the 10 would be very nice, but I have no idea how to prove that. And because you can get some asymptotic formula with some error term,
31:23
you can also prove something like the number of fields with class number h is at most h squared times some power of log log h over log h.
31:41
This is very weak. All it says is that if you look at all the fields with class number up to h, they can't all basically accumulate on one value, or maybe on log h values, if you like. So it rules out some things.
32:00
It says that — well, I don't quite know how to put my quantifiers here. Let me say this, although it sounds idiotic: it is not the case that almost all fields have class number equal to a power of two times a bounded odd number.
32:39
I hope it's clear what this means:
32:48
it's that there is actually a zero density of fields whose class numbers are a power of two times a bounded odd number. That you can prove. But I don't know how to prove the same statement for the question of
33:02
whether the class numbers can be a power of two times a power of three times a bounded number. And certainly if you ask it with, you know, two, three, five, and seven, then it's wide open to figure out how to say anything about this.
33:20
So that's a very, very — sorry? I don't know how to guess it. So there are, you know, fluctuations: let's say if three divides the class number, three divides h,
33:41
then that seems to bump up f of h a bit. So there are deviations that you see by looking at the tables, but I don't know exactly how I would formulate that. Yeah, it kind of lets you predict how often each prime power is involved. Right, so you then have to assume that they're all independent of each other and then make some formulation, and then you have to put in the genus theory thing for powers of two, right?
34:01
So I wrote down somewhere a version of this with h over log h and a power of log log, and that it should be on that order. That I feel kind of confident about, but I don't feel confident about the constants that go in front of that.
34:21
So this is a very strange kind of conjecture, because it's telling you that these values of L1 chi d — you know, we know they converge to some nice, smooth distribution function, but it could still be a very granular kind of distribution, where they accumulate in very short intervals around a small number of points,
34:41
and we don't know how to rule things like that out. Okay, so I don't know if there's a good way to erase the middle board.
35:10
Okay, now, so that's what happens for the edge of the critical strip.
35:28
You can ask about any other value of sigma which is bigger than half and less than one; so fix a sigma bigger than half.
35:43
Then, you know, there has been some work on this recently due to Lamzouri, who also discusses things like the analog of what I was talking about in the zeta function case. If you look at zeta of one plus i t,
36:01
you can think of how this is distributed in the complex plane — its real and imaginary parts, or its modulus and its argument, if you like — and he also has extensions of this to values of sigma that lie strictly bigger than half,
36:24
and the story is more complicated, but kind of similar in spirit, in that you can still understand things pretty well using the random model.
36:44
So in other words, let's say for L sigma chi d: you would analyze it by looking at this product of (1 minus X p over p to the sigma) inverse,
37:01
and what makes the random model work is that, again, if you think of the convergence of this and you look at X p over p to the sigma — again, these exhibit square-root cancellation because they're plus and minus one with equal frequency — you see that the sum still converges if sigma is bigger than half,
37:21
so this still converges almost surely.
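The criterion behind this convergence — a standard fact, stated here for completeness — is Kolmogorov's theorem that a sum of independent, mean-zero, bounded random variables converges almost surely exactly when its variance series converges; here

$$\sum_p \mathrm{Var}\Bigl(\frac{X_p}{p^{\sigma}}\Bigr) \;\asymp\; \sum_p \frac{1}{p^{2\sigma}} \;<\; \infty \quad\Longleftrightarrow\quad \sigma > \tfrac{1}{2}.$$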
37:44
But here's one kind of corollary of this work. If you fix one half less than sigma less than one and you look at the values of L sigma chi d,
38:05
one way to say it is that this is close to any given
38:24
positive real number for infinitely many values of the discriminant d. So there's a distribution function for this, and around any given positive real number there's some positive density
38:41
with which you take values in that distribution. So the reason why I hesitated when I was going to write something down is that we would of course love to be able to prove that the image consists only of positive real numbers, but we don't know how to prove that, because that would be some part of the Riemann hypothesis, to say that there are no real zeros of these functions.
39:01
But on the other hand, in the density sense you can say something, because even for the characters for which you might have a zero at sigma, there are zero-density results which tell you that that happens very infrequently. So certainly these values can take
39:20
any given positive number as a limit point: they are dense in the set of positive real numbers.
39:45
Okay, so that tells you what happens to the right of the critical line. And now for the rest of the time, I'm going to talk only about what happens on the critical line. So now we're going to discuss sigma equals half,
40:06
which is in some ways the hardest case. So you can see that one way in which things fail
40:22
is that the random model is no longer meaningful. If I write down a random Euler product of this type, then this series does not have to converge anymore. In fact, it diverges almost surely. So the sum of X p over square root of p diverges almost surely.
40:56
It doesn't quite work.
41:04
And you can see it in other ways too because there are zeros of L functions on the half line. And whenever you have such a zero, it doesn't make sense to approximate it by any kind of Euler product that you have. Okay, and it's also reflected in the fact that some very basic questions
41:22
about the values of, say, zeta of half plus i t are unanswered. So here's a conjecture, which goes back to Ramachandra, that the values of zeta of half plus i t, as t varies over R,
41:47
these values are dense in the complex plane.
42:04
This is open, but it is maybe not so hopeless; maybe somebody will solve it. So there is work on this by Emmanuel Kowalski and Ashkan Nikeghbali
42:26
connecting this to moment conjectures for zeta of S.
42:53
Okay, so this is in some ways an aside. I'm not going to talk much about this problem.
43:01
So one difference is that while these values to the right of the critical line actually have a value distribution, the values on the critical line don't have a value distribution. What you have is a different kind of result, which is due to Selberg, and we would like to find analogs of it for L-functions, which we don't have.
43:25
Selberg has a theorem that says that as T varies, and let's say T to 2T,
43:44
so if you take the log of the zeta function and you take its real part or its imaginary part, so if it's a zero, then the real part of the log would be, say, minus infinity,
44:02
but it happens on a set of measure zero, so it doesn't affect any of the calculations. So the values of this logarithm are distributed like a Gaussian, so like a normal random variable; they're approximately normal
44:23
with mean zero and variance about half log log T. Okay, so I'll explain this theorem in a little bit more detail tomorrow
44:46
and also what it means for moments. For the moment, let me just say what this means: okay, this variance is growing, right? So what it means is that if you take a value of zeta of half plus i t and you just look at it, there are two cases.
45:02
Either it's some large number in size or it's some small number in size, okay? And you're never going to see — the probability that you see a value which is of size 10, say, is going to be zero. So therefore, you can't make progress on this conjecture that the values are dense in C this way, because those values come from a set which has zero measure.
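As a sanity check on Selberg's theorem, here is a small simulation (my sketch, assuming the third-party mpmath library is available; the log log T convergence is extremely slow, so only rough agreement can be expected at accessible heights):

```python
import math
import random
import mpmath  # arbitrary-precision special functions, including zeta

T = 1.0e4
rng = random.Random(1)
samples = []
for _ in range(400):
    t = rng.uniform(T, 2 * T)
    z = mpmath.zeta(mpmath.mpc(0.5, t))
    samples.append(float(mpmath.log(abs(z))))  # Re log zeta(1/2 + it)

# Selberg: approximately Gaussian, mean 0, variance (1/2) log log T.
predicted_var = 0.5 * math.log(math.log(T))
mean = sum(samples) / len(samples)
var = sum((v - mean) ** 2 for v in samples) / len(samples)
print(f"sample mean {mean:.3f} (predicted 0)")
print(f"sample variance {var:.3f} (predicted {predicted_var:.3f})")
```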
45:25
So now we can get to the main topic that I'll discuss in the next lectures,
46:14
which is a classical theme that goes back to Hardy and Littlewood,
46:27
which is to try to understand — so their idea was to understand — the value distribution of the zeta function by studying the moments of zeta.
46:41
So here k is, let's say, some natural number, or you could also just take k to be some real number, which is positive maybe; but it also makes sense to consider complex moments as well. So in fact, for this application to Ramachandra's conjecture, you need to understand something about complex moments of zeta.
47:04
Now, one reason why they were interested in it is that it's very easy to see that this is just the L 2k-th norm of the zeta function. So of course, if you can understand this for large values of k,
47:20
you can say something about the L-infinity norm of the zeta function, which is the Lindelöf hypothesis: knowing that these moments are all of size T to the 1 plus epsilon is equivalent to the Lindelöf hypothesis.
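In symbols, this classical equivalence is:

$$\zeta(\tfrac12 + it) \ll_{\varepsilon} (1+|t|)^{\varepsilon} \quad\Longleftrightarrow\quad \int_0^T \bigl|\zeta(\tfrac12+it)\bigr|^{2k}\, dt \ll_{k,\varepsilon} T^{1+\varepsilon} \ \text{ for every fixed } k \in \mathbb{N}.$$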
47:57
So there's been a lot of work on this question from the 1920s due to Hardy and Littlewood,
48:05
but so far in this case, we still have only asymptotic formulas in two cases for k equals 1 and k equals 2. So for k equals 1, this is asymptotic to T log T, which is due to Hardy and Littlewood,
48:28
and for k equals 2. So maybe I'll write this in a way anticipating some later conventions.
48:44
Let me write this as 2 times 1 over 4 pi squared, times T log T to the fourth; this is due to Ingham. Actually, Ingham's paper is quite interesting, because he considers more general objects, shifted versions of zeta.
49:19
So let's say this may not quite be his notation, but instead of just considering the fourth moment,
49:42
he also puts in four shift variables. If you set them all equal to 0, then you get the fourth moment of zeta, but he works out the asymptotic formula in this generality, saying that it's more transparent to see what the shape of the asymptotic formula is when you disentangle the variables like so.
50:08
Now, so for a long time, these were all that was known, and it was not even clear what the right conjectures for these moments are.
50:36
People guessed the right exponent, right? So there was a folklore conjecture.
50:42
I don't quite know who to attribute it to; maybe Titchmarsh would be a reasonable person. There was a folklore conjecture that the 2k-th moment — let me give this a name, M k of T, just to stop writing it all the time — is asymptotic to some constant times T log T to the k squared.
51:10
And then maybe in the early 80s, there was a suggestion by Conrey and Ghosh that this constant C k factorizes nicely as the product of two constants, a k and g k.
51:29
And well, okay, at the moment, of course, this is not so profound to write this down, but the point was that there's a natural thing that you could associate with the kth moment of the, with the kth power of the zeta function.
51:43
The k-th power has a Dirichlet series, whose coefficients are the k-th divisor function: the sum of d k of n over n to the s, at least in the range of absolute convergence. Now, you could think of expanding this moment as being like the mean square of the k-th power
52:01
and think of using a Parseval-type argument, okay? So a Parseval-type argument would suggest that maybe the asymptotics should depend on some object which looks like this: d k of n squared over n. You're on the half-line, so there's an n to the half plus i t, and if you take the absolute squares of that, you get d k of n squared over n.
52:25
And what is bogus about this is where I have truncated it, choosing to truncate it as n up to T, which is not motivated by anything at all, except that that's the natural scale maybe on which this behaves. Then you can show that this object, it's very easy to get an asymptotic formula for this,
52:43
and it turns out to be asymptotic to this a k times T log T to the k squared. So this is the natural order at which the moments of the zeta function should behave, and then this g k is measuring some kind of deviation from what you would expect very naively.
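A quick numerical illustration of this Dirichlet-polynomial heuristic (my script; the constant emerges only very slowly, so the point is just the power of log):

```python
import math

def dk_table(k, n_max):
    """d_k(n) for 1 <= n <= n_max, via k-1 Dirichlet convolutions with 1."""
    d = [0] + [1] * n_max  # d_1(n) = 1 for all n
    for _ in range(k - 1):
        new = [0] * (n_max + 1)
        for a in range(1, n_max + 1):
            for m in range(a, n_max + 1, a):
                new[m] += d[a]  # (d * 1)(m) = sum of d(a) over divisors a of m
        d = new
    return d

k, n_max = 2, 10 ** 5
d = dk_table(k, n_max)
for T in (10 ** 3, 10 ** 4, 10 ** 5):
    s = sum(d[n] ** 2 / n for n in range(1, T + 1))
    # The ratio should settle, very slowly, toward a constant of a_k type.
    print(f"T = {T:6d}: sum d_k(n)^2/n = {s:9.2f}, "
          f"ratio to (log T)^(k^2) = {s / math.log(T) ** (k * k):.4f}")
```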
53:05
And in this notation, the Hardy-Littlewood and Ingham results say that g 1 is 1 and g 2 is 2, and then the other values of g k were not obvious for a long time. And then in the 90s, Conrey and Ghosh conjectured that g 3 is 42,
53:31
and Conrey and Gonek conjectured that g 4 is 24,024.
53:45
And roughly around, maybe exactly, the same time that they conjectured this, Keating and Snaith found a general conjecture which predicts what g k should be, and the conjecture is quite nice here; at least for integers, you could write it as so.
54:13
I think if I remember this correctly, then this should work out to these numbers for the small values of k.
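The formula on the board is presumably the Keating-Snaith expression

$$g_k = (k^2)! \prod_{j=0}^{k-1} \frac{j!}{(j+k)!},$$

and a short check (my script) confirms that it reproduces the values above:

```python
from math import factorial
from fractions import Fraction

def g(k):
    """Keating-Snaith prediction: g_k = (k^2)! * prod_{j=0}^{k-1} j!/(j+k)!."""
    value = Fraction(factorial(k * k))
    for j in range(k):
        value *= Fraction(factorial(j), factorial(j + k))
    return value

print([int(g(k)) for k in (1, 2, 3, 4)])  # [1, 2, 42, 24024]
```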
54:20
So this is the conjecture of Keating and Snaith. And actually, I said this for integers, but you could actually make sense of it for non-integer values as well. You simply have to replace factorials by gamma functions, and you basically get something which is related to a Barnes double gamma function.
54:48
So that's why you need this Barnes double gamma function, some ratio of double gamma functions.
55:01
Okay, so moreover, so this is the case for the zeta function,
55:23
but there are other families of L-functions that you could consider as well, and you could consider moments in these families. So let me give you two other examples which are kind of typical. You could look at all fundamental discriminants up to size x and look at the average of L half chi d to the k.
55:44
So maybe here k is some natural number. These values are all expected to be non-negative, but we don't know how to prove that. That will be an important result, giving you also lower bounds on L1 chi d.
56:00
But anyway, they should be non-negative if you're willing to assume the Riemann hypothesis — there are no zeros between half and one. And here, the Keating-Snaith conjecture is a little bit different. This is asymptotic to some constant — so the conjecture would be an analog of Keating-Snaith, some constant c k, which can also be specified nicely in terms of factorials,
56:23
but which I don't remember offhand — times x times log x to the power k times (k plus one) over two. But what's interesting is that you get a different power of log x here, k(k+1)/2. And then, to give one more example,
56:48
let's say you fix your favorite modular form f and just take quadratic twists of it; then here you get some constant c k, maybe a different constant from here, times x times log x to the power k times (k minus one) over two.
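Collecting the three conjectured shapes side by side, as stated in the lecture (the constants differ from family to family):

$$\int_0^T |\zeta(\tfrac12+it)|^{2k}\,dt \sim c_k\, T (\log T)^{k^2}, \qquad \sum_{|d|\le x} L(\tfrac12,\chi_d)^{k} \sim c_k'\, x (\log x)^{k(k+1)/2}, \qquad \sum_{|d|\le x} L(\tfrac12, f\otimes\chi_d)^{k} \sim c_k''\, x (\log x)^{k(k-1)/2}.$$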
57:05
So maybe in the last five minutes, let me just tell you what I plan to do in the rest of the lectures. [A question from the audience about what Selberg's theorem would naively predict for these moments.]
57:28
Naively applying Selberg's theorem to try to guess what the answer would be — so you would get, well, it's not clear how exactly you would formulate it.
57:42
You would get the right power of log, and that's something that I'll talk about. So Selberg's theorem should predict that you should get some constant times T log T to the k squared. And that will also be the key to trying to think about what the distribution of these should be; we would like analogs of Selberg's theorem in these contexts,
58:01
which we don't know how to prove. So the first point is what Terry just said: I'll explain the link between moments and this value distribution, things like Selberg's theorem and expected analogs for L-functions.
58:34
But maybe this won't come first. So the things that I want to explain are this,
58:40
to also figure out something about where the conjectures for moments come from. So there's a particularly nice way to formulate these conjectures, which is due to Conrey, Farmer, Keating, Rubinstein, and Snaith.
59:09
And the heuristic is very simple, and it gives a very elegant answer for what all the moments in all the families you can think of should be.
59:21
And this has been verified in many small cases; I'll mention some of these small cases tomorrow. But what's maybe unsatisfying is that there is a very nice conjecture,
59:41
but all the proofs in which we can check the conjecture in the small cases are very unsatisfactory in that they involve, you have a huge mess and then you check somehow that the mess that you get matches up with this nice conjecture. It's not very illuminating, the proofs.
01:00:01
And then we have general techniques which allow us to give lower bounds for all higher moments. So in other words, if you know some moment and you have a little bit to spare, then you can get lower bounds
01:00:21
of the right order of magnitude for all higher moments than that. And the fourth point would be the complementary principle to this: that if you have upper bounds for some moment, then that implies automatically upper bounds for all smaller moments.
01:00:48
So of course, this is obvious, if you don't insist on the right power of log, by Hölder's inequality. So the point is you can get the right power of log in both these cases. And the last one is related to this kind of argument.
01:01:02
Now we know that, on GRH, you can very generally prove upper bounds of the right order of magnitude, in essentially any case that you can think of.
01:01:25
So I'll stop. Thank you. Questions?
01:01:41
How do we guess what the coefficients of the power of the log x are? Yeah, so I'll explain that tomorrow. Can we guess for a bigger class of L-functions, like for any automorphic L-function, or the Selberg class, in fact? So it depends on what you're varying. In the t aspect, everything will have the same log t
01:02:02
to the k squared type phenomenon. And if you vary the family, then you can still guess what the answer is going to be. In the t aspect, yes. Primitive? Primitive, OK, primitive, yes. Right.