1/4 Trace functions over finite fields
Formal Metadata
Number of Parts: 36
License: CC Attribution 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
DOI: 10.5446/17022
Transcript: English (auto-generated)
00:14
OK, so I'll start my series of lectures. So this is about so-called trace functions
00:22
over finite fields. And so this will cover a lot of ground, but most of it is joint work with, especially, Étienne Fouvry and Philippe
00:47
Michel. And there are pieces which are joint work also with Satadal Ganguly, Guillaume Ricotta, and ongoing joint work with Paul Nelson and Florent Jouve
01:06
and Will Sawin. And some of it is in the Polymath8 project. So a whole bunch of people. So what I'm going to try and describe is, to some extent,
01:21
a little bit a piece of applied mathematics. So the mathematics we apply is the theory developed essentially around here, between 50 and 30 years ago
01:41
by Grothendieck, and especially Deligne, Katz, Laumon, and a few others, culminating with the Riemann hypothesis over finite fields due to Deligne, and the applications. So what we are interested in are applications
02:02
to analytic number theory, or to number theory more generally, but most applications that I'm going to discuss are really analytic number theory. And what I'm trying to do during these lectures
02:21
will be to present this part, hopefully, for the largest possible audience, so that you can also apply it to your own problems in an efficient way. So the idea is that we're going to look, especially, at the Riemann hypothesis over finite fields,
02:43
and at first begin by seeing it as a black box. And then very naturally, we'll see that this black box, to apply it, to use it efficiently, one needs to know a little bit of what's inside. And over the course of the four lectures, the goal will be to introduce more and more of what's
03:03
inside the black box so that the applications become more and more refined. And in the end, what turns out to be the case is that one reaches, quickly or not so quickly, points where you cannot just take,
03:22
let's say, especially the works of Katz and apply them immediately. We reach open questions, very interesting open questions, even from the point of view of algebraic geometry, where hopefully insights from analytic number theory are useful. So this will come towards the end. But today, I'm going to begin by trying
03:41
to present a version of the Riemann hypothesis over finite fields in an extremely general, but hopefully very applicable, form, as a black box. And we'll learn a bit of what's inside the black box to be able to apply it from this point of view. So as a first kind of reference for what I'm describing,
04:07
so there's a trace function survey from Pisa. So the Pisa trace function survey that Philippe Michel, Étienne Fouvry, and I wrote, which is available on my web page, for instance.
04:29
So the main objects will be these trace functions. So we'll be interested in a prime number p. And so I say prime, rather than general finite fields. Really, what I'm going to describe
04:41
certainly applies to any finite field. But because our applications are mostly over Q, the finite fields will be, for the most part, just prime fields: F_p = Z/pZ, the prime fields. And we are looking at certain functions.
05:01
So trace functions will be certain functions, complex-valued functions defined on F_p, which have a very algebraic flavor. So trace functions, so algebraic structure.
05:32
And the actual precise definition is quite involved. So I think that by the end, I should be able to give a full definition. But I want to try and show that it's
05:40
possible to begin a little bit like when you study automorphic forms. At the beginning, the first time, I saw, at least myself, the Petersson formula. Well, there was a Bessel function. I certainly didn't know anything about Bessel functions. But I just started: I knew that in such and such a place I could find some bounds, and I found some identities in Gradshteyn and Ryzhik. And you can build a lot just by knowing that.
06:02
At some point, you want to know a bit better what a Bessel function is. But certainly, you don't need to start by studying differential equations abstractly to study automorphic forms using the Petersson formula. So I'm going to take a little bit the special-functions approach at the beginning. I'm going to give examples.
06:21
And at some point, we'll see that we need to know a bit more of where these functions come from. Otherwise, we're kind of stuck when applying the Riemann hypothesis. And then I'll open the box further and further towards the end. Hopefully, there'll be a full, strict, rigorous definition of trace functions. So what are the examples? The basic examples, so the first examples
06:43
we'll see are additive and multiplicative characters. So first, just additive characters: e_p(n), where e_p(n) will be exp(2iπn/p).
07:01
And χ is a multiplicative character of F_p. So these are the more elementary ones that come in. But then in analytic number theory, very quickly, you reach things where you replace the argument
07:21
by a polynomial or rational function. So in both of these, I take f to be f1 over f2, where f1 and f2 are, let's say, polynomials
07:43
with integer coefficients, f2 is non-zero, and they are coprime. And f2 is non-zero mod p. So we can divide, at least for most n: we can compute f2(n), compute f(n) modulo p,
08:04
and compute the inverse and so on. So these are the simplest examples. And then one very quickly learns that it's useful to be able to mix them, taking multiplicative characters times an exponential.
08:23
And these objects in analytic number theory, they don't arise usually just like that. We define this function, and we want to say something about it. They come up in a natural way. And what one is interested in, very often, maybe most often, is not so much one value, but average values.
08:41
So sums of these things, meaning exponential sums. So these come usually in applications in the form of sums.
09:12
Sums, the first one, maybe, that one encounters is, following Gauss, something like the sum of e_p(n²),
09:22
where n ranges over F_p. This is a quadratic Gauss sum. Quite quickly, one also encounters the so-called Gauss sums attached to characters. So as long as one is interested just in upper bounds for this, so it
09:46
might be that you want something more refined. So for quadratic Gauss sums, the most important application doesn't just want the size; it wants the sign, the argument, of the Gauss sum. But in many applications, it's really the upper bounds. And as long as you just have these two
10:01
and you only want upper bounds, you're in very good shape. These are among the few cases where the modulus of the sum is known. So here, everything is nice, meaning if I call the first one S1 and the second one S2,
10:20
then S1 and S2 have modulus equal to √p. And for S2, this is when χ is non-trivial. And then you might want to tweak these things. Sometimes you see that extra parameters come in, a in F_p.
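Since everything here is elementary, one can check this statement numerically. The following Python sketch (a numerical illustration, not from the lecture itself) computes both sums for a small prime and verifies that each has modulus √p:

```python
import cmath
import math

def e_p(n, p):
    # additive character: e_p(n) = exp(2*pi*i*n/p)
    return cmath.exp(2j * cmath.pi * (n % p) / p)

p = 13  # any odd prime works here

# S1: quadratic Gauss sum, sum over n in F_p of e_p(n^2)
S1 = sum(e_p(n * n, p) for n in range(p))

# S2: Gauss sum attached to the Legendre symbol (a non-trivial character),
# computed via Euler's criterion: chi(n) = n^((p-1)/2) mod p
def chi(n):
    if n % p == 0:
        return 0
    return 1 if pow(n, (p - 1) // 2, p) == 1 else -1

S2 = sum(chi(n) * e_p(n, p) for n in range(p))

# both sums have modulus exactly sqrt(p)
assert abs(abs(S1) - math.sqrt(p)) < 1e-9
assert abs(abs(S2) - math.sqrt(p)) < 1e-9
```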
10:44
And this would then still be true for a non-zero. But basically, one also quickly realizes that these are essentially the only ones where you can actually compute the modulus exactly. So I guess maybe an even more trivial case is just the full complete sum of a character
11:02
over a group, in which case, when the character is non-trivial, it's zero. So almost any other sum does not have a simple formula, either for the sum or for the modulus.
11:27
When you take a sum over n of one of these special functions I already wrote down. So I guess maybe the other nice exception is Jacobi sums. Salié sums even do have some formula,
11:40
but it's not actually such a nice simple formula. And here the kind of basic case is the Kloosterman sum. So the classical Kloosterman sum in one variable, which I will denote by Kl2. So I think this was the notation implicitly used
12:00
by Tao also, which for us it is convenient to normalize by square root of p immediately. And then this is something which sums over invertible elements x mod p: Kl2(n; p) = (1/√p) times the sum of e_p(nx + x̄), where x̄ is the inverse of x computed modulo p.
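As a quick sanity check (my own sketch, not part of the lecture), here is this normalized Kloosterman sum in Python; the Weil bound, quoted below, says the values for n non-zero are real numbers in [-2, 2]:

```python
import cmath
import math

def kl2(n, p):
    # Kl2(n; p) = (1/sqrt(p)) * sum over x in F_p^* of e_p(n*x + xbar)
    s = 0
    for x in range(1, p):
        xbar = pow(x, p - 2, p)  # inverse of x mod p, by Fermat's little theorem
        s += cmath.exp(2j * cmath.pi * ((n * x + xbar) % p) / p)
    return s / math.sqrt(p)

p = 101
vals = [kl2(n, p) for n in range(1, p)]
# the sums are real (x and -x contribute conjugate terms), and Weil's bound
# gives |Kl2(n; p)| <= 2 for n non-zero
assert all(abs(v.imag) < 1e-9 for v in vals)
assert all(abs(v) <= 2 + 1e-9 for v in vals)
# note: the same formula at n = 0 gives -1/sqrt(p), matching the convention
# adopted below for the value at 0
assert abs(kl2(0, p).real + 1 / math.sqrt(p)) < 1e-9
```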
12:31
So Kloosterman sums were introduced somewhat implicitly by Poincaré, when computing the Fourier expansion of Poincaré series, and explicitly, and studied
12:42
already quite deeply, by Kloosterman, to study representations of integers by quaternary integral quadratic forms. And so they have many, many, many applications in analytic number theory, and there is no explicit simple formula to tell you what this is
13:04
or what the modulus is. You need just estimates. So here one can only hope for estimates, and this is the generic situation.
13:29
Now one of the important features of this example already throws you way beyond the theory of André Weil. So the best estimates for Kloosterman sums,
13:42
individual Kloosterman sums, are due to André Weil. But now, if you start thinking of this as a function of n, an important feature is that these sums themselves, as functions of n, are kind of nice.
14:05
And so it's important to define it also at 0, which I will do by sending it to minus 1 over square root of p. That's the right definition.
14:21
Actually, that's the definition that comes up naturally; I don't want to change anything. So Kl2 is also such a special function, modulo a prime.
14:44
And more generally, many families of exponential sums parametrized by some element of F_p will themselves be nice functions in this sense, which immediately will give us a lot more applications of the Riemann hypothesis than just playing around
15:01
with the first examples. So from the point of view of more classical things, you should think of it like this: the Kl2's are analogs of Bessel functions, in a relatively precise sense. And so saying that we want to see these as nice functions is like saying that Bessel functions are just
15:21
as nice as the exponential function, essentially, or like power functions. So in classical analysis, the first interesting functions are exponential functions and power functions. And when you start going to Bessel functions, these are just as nice once you start learning about them, and so on. And this step of going from characters
15:40
to Kloosterman sums or such exponential sums is also of this kind. So the crucial feature that the Riemann hypothesis tells you
16:02
is that if you look at these trace functions, so whatever they are, we already have a few examples, and I will give some more in a second. Provided you restrict to a certain subset, which is defined by some kind of irreducibility or normalization condition, then they behave like quasi-orthogonal vectors in a finite-dimensional Hilbert space.
16:22
And this quasi-orthogonality gives you square-root cancellation in a generality which is way, way beyond what the theory of André Weil can give you for just one-variable character sums. So the RH will be a statement.
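One can illustrate this quasi-orthogonality numerically (my own sketch, with the normalized Kloosterman sum Kl2 as the trace function): its self-inner-product on F_p is close to 1, while its inner product against additive characters has size 1/√p, which is exactly square-root cancellation.

```python
import cmath
import math

p = 101

def e_p(n):
    return cmath.exp(2j * cmath.pi * (n % p) / p)

def kl2(n):
    # normalized Kloosterman sum; for n = 0 this comes out as -1/sqrt(p)
    return sum(e_p(n * x + pow(x, p - 2, p)) for x in range(1, p)) / math.sqrt(p)

K = [kl2(n) for n in range(p)]

# "unit vector": (1/p) * sum |K(n)|^2 equals 1 - 1/p, close to 1
norm_sq = sum(abs(v) ** 2 for v in K) / p
assert abs(norm_sq - 1) < 0.05

# quasi-orthogonality to additive characters: the normalized inner product
# (1/p) * sum K(n) * conj(e_p(a n)) has modulus 1/sqrt(p), far below 1
for a in (1, 2, 5):
    inner = sum(K[n] * e_p(a * n).conjugate() for n in range(p)) / p
    assert abs(inner) < 2 / math.sqrt(p)
```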
17:06
So now, I have to say a little bit about where the functions come from. But then this almost-a-definition, but not quite a definition, will be replaced by, again, a series of examples, and then the statement of the Riemann hypothesis.
17:21
So roughly speaking, what happens is that you have certain algebraic-geometric data, which can be defined in different ways. So it could be Galois representations of the rational function field in one variable, which is not quite the same, but you could also think of this as automorphic forms for this field.
17:53
But automorphic forms of arbitrary rank, not just classical GL2 things. Or you can think of this, so the geometric point of view
18:01
is something called ℓ-adic sheaves of a certain type. And it doesn't really matter whether you understand any of these words for most of these lectures. So we'll see that there is a question at some point that you can only answer if you really have an idea of what one of these means. The most convenient, I find, is the ℓ-adic sheaf point
18:22
of view. But the easiest to define would be the Galois representations. So that's probably what will emerge as a definition as time passes. Now these are algebraic-geometric data, and you can think of them really as analogs of automorphic forms. So you know, an automorphic form over a number field is a relatively complicated object
18:41
but with many nice properties. Or a Galois representation is a nice object, which is highly algebraic, highly structured, with many invariants. And we will send these to the space of functions, which we call C(F_p): the space of functions K defined on F_p with values in C.
19:04
So we have some highly complicated objects. With proper normalization, these typically define countably infinite sets. And we send these to this finite-dimensional vector space of dimension p over the complex numbers.
19:22
So the trace function is an invariant which, again, if you think of analogies with automorphic forms, or even with Dirichlet characters of large modulus q, you should think of maybe as being the data of the first values
19:41
of the character, or the first few coefficients of a modular form. So it's not the full information about the modular form. If you think of the classical case of GL(2) forms, you just take, let's say, the first q to the 1/10 plus ε values or something like this,
20:02
or q to the 1 plus ε values. But this is enough to get a good handle on the algebraic object as well.
20:26
So there is one very important fact that is analytically completely crucial. So from the point of view that we have some complicated algebraic object, and then we just take this invariant, which is the trace function.
20:40
The trace function only a priori contains some of the information. And the map is not injective. So in particular, there are literally infinitely many objects which have the trace function identically zero modulo p. So if you just think of the trace function by itself as a function defined modulo p, it doesn't tell you that
21:01
much, because it comes from infinitely many sheaves or representations. And some of these are highly complicated, and some are a little bit less complicated. So we need a way of measuring the complexity of a trace function so that we can work with objects with small complexity as a way of saying
21:20
that we have a control on the function we get and that the function gives information. So it's like saying that if you just look at the value at two of the Dirichlet character, it's not telling you that much. But if you know that the conductor of the character is extremely small, then it's giving you a little piece of information. So there is an invariant, which
21:46
is a quite close analog of the analytic conductor for L-functions, for either of these objects: sheaf, representation, automorphic form.
22:08
And we will usually work, as p varies, with the following kind of data.
22:22
So for every prime p, you would have a trace function K_p, as p varies. So they vary really with p, but they come from sheaves, or whatever, which have complexity bounded uniformly in terms of p: from objects with uniformly bounded complexity.
22:54
And so by abuse of notation, I will write c of K for maybe the smallest
23:00
complexity of an object which gives rise to the trace function K. So let me fix some notation. These objects, you don't necessarily need to know what they are at the moment; let me call them representations, and let me call them ρ.
23:22
And t(ρ) will be the trace function, and assume the trace function is K. And I will use c(ρ) for the complexity of the representation.
23:46
And as you will see, just like you can say a lot about Bessel functions, or you can use Bessel functions quite efficiently just by knowing that they occur in certain formulas and satisfy certain bounds at infinity and certain integral identities, one
24:01
can say quite a lot about trace functions. One can apply them quite successfully, just knowing many examples, bounds on their complexity, and the Riemann hypothesis, which will come in a second. So first, more examples, together with bounds on their complexities.
24:26
And so whatever I'm going to say later, if it seems abstract, you can always go back to these examples. And usually, even with just these examples and a few basic operations, you already have lots of the interesting features, phenomena,
24:42
and what you need for many applications. So the abstract theory is useful, but it's not necessary to already say quite a lot. So first, there are the examples I already gave. I have to tell you something about the complexity of these objects. So we take a rational function as before.
25:05
So now I'm going to take the polynomials f1, f2 in F_p[x], coprime, f2 non-zero. And so in the theory, because we are looking at objects which are a little bit like primitive Dirichlet characters,
25:23
in contrast with just all characters, the primitivity means we have to be very careful with the values at the bad points, which would be the primes dividing the modulus in the classical case. So we cannot just say that, let's say, at the points of indeterminacy, we don't say what happens. So where f2 has a zero, so that f has a pole,
25:41
we have to say precisely what is the right value of the function. This is important for some statements to actually work. So precisely, we define K(n) to be e_p(f(n)) when f2(n) is non-zero mod p, so that this makes sense,
26:01
and zero otherwise. And for multiplicative characters, we can take K(n) = χ(f(n)) if f2(n) is non-zero
26:21
and f1(n) is non-zero. So for multiplicative characters, I clearly also have to say what happens when we evaluate a multiplicative character at zero. And there are two further cases. So if χ is non-zero, sorry, if χ is not the trivial character, and f1(n) is zero or f2(n) is zero,
26:43
we take zero. So the classical extension of a Dirichlet character at zero is χ(0) = 0 if χ is non-trivial. But if χ is trivial, it should be one. So K(n) is one if χ is trivial and f1(n) or f2(n) is zero.
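Here is a small Python sketch of these conventions (the rational function f(n) = (n² + 1)/n is my own illustrative choice, and χ is taken to be the Legendre symbol, a non-trivial character); the Weil bound for such complete character sums then shows up as square-root cancellation in the sum of K(n):

```python
import math

p = 97

def legendre(a):
    # Legendre symbol (a/p) in {-1, 0, 1}, via Euler's criterion
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def K(n, f1, f2):
    # K(n) = chi(f1(n)/f2(n)) with chi non-trivial (the Legendre symbol),
    # and K(n) = 0 whenever f1(n) or f2(n) vanishes mod p, as in the text
    a, b = f1(n) % p, f2(n) % p
    if a == 0 or b == 0:
        return 0
    return legendre(a * pow(b, p - 2, p))

# hypothetical example: f(n) = (n^2 + 1) / n
vals = [K(n, lambda m: m * m + 1, lambda m: m) for n in range(p)]

# K vanishes exactly at n = 0 and at the two roots of n^2 + 1 (97 is 1 mod 4)
assert vals.count(0) == 3
# square-root cancellation in the complete sum (Weil's bound)
assert abs(sum(vals)) <= 4 * math.sqrt(p)
```

Wait, the lambdas take m but are called on n; to be safe the helper receives n explicitly, so the code above evaluates f1 and f2 at each n in turn.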
27:04
So to a certain extent, this is technical. So in analytic number theory, if you happen to have a different convention for zero, you would just switch the value at this point and do trivial estimates after the switch. But for the algebraic statements, it's crucial to have the right conventions and normalizations.
27:25
So that's one. So we have to say what is the complexity. In both cases, the complexity is bounded. I mean, I didn't define it, but I can tell you this is going to be bounded by the maximum or the sum, let's say, of the degree of f1 and the degree of f2
27:44
with an absolute implied constant. So one can actually be more precise when one has the correct definition of the complexity. But for basic applications, this is all you need to know.
28:01
So how is this useful? It's going to be useful by saying you take f to be a ratio of integral coefficient polynomials, which are fixed. And then you take p to be a variable prime going to infinity. And if f1 and f2 are fixed, the conductor of whatever you have as a trace function modulo p will be bounded independently of p.
28:21
That's the crucial aspect. So that's the first example. The second example is hyper-Kloosterman sums. So let's pick an integer r at least 2, and Kl_r(n; p) is:
28:42
OK, so here I have to be careful. So 1 over p to the r over 2, r minus 1 over 2, sorry, times the sum over x1 up to xr in F_p, such
29:05
that the product x1 · · · xr is equal to n, of e_p of the sum x1 + · · · + xr. And that's when n is non-zero.
29:20
And when n is equal to 0, again, one has to be careful. And if I remember things right, it's minus 1 to the r, divided by p to the... Philippe, do you remember?
29:48
I think it's that. Anyway, so it's important that it's there when n is equal to 0. So there's a way of getting these values right. So here, it doesn't have to be r equals 2.
30:02
What? It doesn't have to be r equals 2. So for r equals 2, I had written 1 over square root of p. Yeah, so is it r minus 1 over 2? I always forget this one. OK, I'll write this for the moment so that it coincides with the case r equals 2.
30:23
And I'll correct that in the next lecture, because this is something I always forget. OK, so for r equal to 3, these are the three-variable Kloosterman sums, or two-variable Kloosterman sums, that Tao spoke about when
30:43
explaining how one handles the so-called Type III sums in Zhang's work, or the ternary divisor function in arithmetic progressions, following Friedlander and Iwaniec.
31:07
Now, you can write it this way, so if n is 0, I mean, if n is a multiple of p, that means n equals 0, then you have to use the second formula anyway. And if n is non-zero, then the xi have to be invertible.
31:24
Yeah, yeah, or everything is defined over F_p. All my functions will be defined just over F_p. OK, so you could actually write F_p^* and then I'm done. Yeah. But I mean, I always forget how you...
31:41
so I mean, Deligne defined these first. And if you write it correctly, then the n equals 0 case is given by the same formula. I'm not sure if the way I wrote it down will actually give the right formula for n equals 0. I think it should be r minus 1. The exponent? Minus 1 to the power r minus 1.
32:03
Well, yes and no, because, I mean, for r equal to 2, I switched the sign. I mean, the right definition of Kloosterman sums and so on is minus 1 over square root of p times the sum for r equal to 2, which I didn't do, so I just... anyway, it's not an important thing.
32:21
I'll clarify this the next time. But for r equal to 2, you should have, yeah, you're right. It's probably r minus 1. Anyway, OK. So it is a deep fact that these are trace functions,
32:48
and this is due to Deligne. And I can show you that this is indeed a deep fact by the following. First, again, another fact
33:01
about the complexity. So I didn't tell you about the complexity too much, but there are two properties of the complexity which are... maybe I should say this after the next example, but I might forget, so let me say it now. This is kind of a semi-digression, but it comes at a good point, because the strength of these remarks will be clear.
33:22
So two facts about the complexity are important. So the first: it's a little bit like a height, in the sense that when you look at the set of trace functions, with a suitable normalization which I will discuss later, with complexity bounded by a constant, this is a finite set.
33:40
So the whole set of trace functions is infinite, just like the set of Dirichlet characters is infinite. But when you bound the conductor of Dirichlet character, you get a finite set, and this is a similar feature. So this is a rough way of saying it because we need a little bit more information before we can state this precisely, but roughly speaking, the set of k with complexity less than or equal to x
34:02
is finite up to trivial modifications of the trace functions. So this is a relatively rough statement, but it gives an idea already of what is the content. And so, as you maybe know, is if you take the analytic conductor
34:21
as defined by even yet and sound like, the same statement is true for automorphic forms over a number field. Okay, and the second remark is explanation of why this is really deep. A basic fact about the complexity, so one has already a first way of showing
34:42
that the complexity is interesting. The way it's defined, it has the property that the L infinity norm of k is bounded by the complexity. So whatever complexity you have, it should measure how complicated is the function. One way that a function can be complicated is by taking large or very oscillating values.
35:03
And here we're saying that the L-infinity norm is bounded by the complexity. And here, what I didn't say, which I should have said, sorry: the complexity of Kl_r mod p is bounded by a constant depending only on r.
35:25
So let's apply this fact. So, Kloosterman sums are trace functions with bounded complexity, and the L-infinity norm is bounded by the complexity, whatever it is. So an immediate corollary is that there exist non-negative constants a_r,
35:43
such that the hyper-Kloosterman sum Kl_r(n; p) is bounded by a_r, for all n mod p and all primes p.
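As a numerical illustration of this corollary (my own sketch; for hyper-Kloosterman sums Deligne's bound in fact gives a_r = r, which is what is checked here), one can compute Kl_3 directly from the definition:

```python
import cmath
import math

def kl3(n, p):
    # Kl_3(n; p) = (1/p) * sum over x1, x2, x3 in F_p^* with x1*x2*x3 = n
    # of e_p(x1 + x2 + x3); the normalization p^{(r-1)/2} equals p for r = 3
    s = 0
    for x1 in range(1, p):
        for x2 in range(1, p):
            x3 = n * pow(x1 * x2, p - 2, p) % p  # solve x1*x2*x3 = n
            s += cmath.exp(2j * cmath.pi * ((x1 + x2 + x3) % p) / p)
    return s / p

p = 53
vals = [kl3(n, p) for n in range(1, p)]
# Deligne's bound: |Kl_r(n; p)| <= r, here r = 3, for all n non-zero
assert all(abs(v) <= 3 + 1e-6 for v in vals)
```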
36:00
Now because I divided by, let's say, for r equal to 2, I divided by square root of p, this is a weak form of the Weil bound for Kloosterman sums. So if a_2 were equal to 2, then this would be the Weil bound for Kloosterman sums, which comes directly from knowing that it's a trace function with this type of property. And so it's not surprising then to see that,
36:22
to prove that these are trace functions in the sense I'm mentioning, Deligne already used the Riemann hypothesis over finite fields in a non-trivial way. How complicated is the zero function? Bounded complexity. I mean, it depends on the way you define it, but I think with our precise definition, it would be one.
36:41
Oh no, I thought you said it's possible to have a trace function which is identically zero. Yeah, with large complexity. But the smallest complexity would be one. And actually it's an interesting question to find the smallest complexity of a non-zero function, non-zero object with trace function zero. And this I actually don't know.
37:01
So for the lower bound, I mean, the upper bound we know is p. So we know trace functions with complexity roughly p, which are, sorry, we know objects, representations, with complexity roughly p, with trace function zero. And what the actual threshold is, I don't know.
37:25
So already, you see that just knowing that an object like the hyper-Kloosterman sum is a trace function, without knowing anything else, gives you something. So this is the type of information that maybe in 10 years there'll be an analog of Gradshteyn and Ryzhik for trace functions, and this is the type of thing you would find in the first two pages of this hypothetical
37:43
motivic Gradshteyn–Ryzhik. And you wouldn't need to know anything about the Weil bound or the hyper-Kloosterman bound. You would just say: this is a trace function, so it's bounded by the complexity; the complexity is bounded; and then you apply that, okay? So this is somewhat similar to the fact that Bessel functions decay
38:01
like one over square root of x at infinity, in a very rough analogy. It's a much deeper fact, but... Okay, so now let me go back, after this small digression, to more examples, to show you the variety of things that can arise. So another fairly general class of trace functions are those which you obtain by counting solutions
38:21
of certain polynomial equations. So first example would be you fix a non-constant polynomial f mod p. It could be a rational function also, but I do it for polynomial. And you count for a given n how many x there are in f p,
38:41
with f(x) equal to n, and you just count them. So this is integer-valued. It's bounded by the degree of f, because f is not constant. It could be zero, and it will be zero fairly frequently, but it is something. So this is a trace function, and its complexity
39:03
is bounded in terms only of the degree of f.
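In Python, for a hypothetical fixed polynomial like f(x) = x³ + x, this counting function looks as follows; the counts are bounded by deg f and total p:

```python
p = 97

def counts(f):
    # N[n] = #{x in F_p : f(x) = n}
    N = [0] * p
    for x in range(p):
        N[f(x) % p] += 1
    return N

N = counts(lambda x: x**3 + x)  # hypothetical example, deg f = 3
assert max(N) <= 3  # each fiber has at most deg f points
assert sum(N) == p  # each x contributes to exactly one fiber
```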
39:37
So then, so k is a trace function
39:47
with c of k, again, up to an absolute constant bounded by the degree of f. And therefore, if your polynomial is, let's say,
40:02
x squared, which is a polynomial with integer coefficients, then you get something which has complexity bounded independently of p. And in this case, of course, it's not hard to compute the number of solutions to x squared equal to n:
40:20
it's 1 + (n/p), the Legendre symbol. And the last example, which is, again, just a very special case of something more general, is if you try and count not the solutions to this type of one-variable polynomial equation, but the number of points
40:42
on a family of curves. And the simplest, really interesting example is probably the Legendre family of elliptic curves. So you take K(n) to be p minus, in this case one has to normalize in this way, the number of solutions in F_p squared
41:00
of y² = x(x − 1)(x − n). So this is done in such a way that the formula is correct also for n equal to one or zero. So that works for every n. So this is a trace function.
41:21
Sorry, I have to divide by the square root of p with my definition. So it's the a_p of this elliptic curve divided by the square root of p. So it's a trace function with complexity bounded as p varies.
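The two point counts above are easy to verify by brute force for a small prime. A minimal sketch (my own, not from the lecture; p = 101 is an arbitrary choice): it checks that the number of x with x² = n is 1 + (n/p), and that the normalized Legendre-family values k(n) stay bounded by 2, consistent with Hasse's bound |a_p| ≤ 2√p.

```python
# Brute-force check of the two trace functions above, for a hypothetical
# small prime p = 101 (any odd prime works).
p = 101

def legendre_symbol(n, p):
    """Legendre symbol (n|p) via Euler's criterion."""
    if n % p == 0:
        return 0
    return -1 if pow(n, (p - 1) // 2, p) == p - 1 else 1

# Example: N(n) = #{x in F_p : x^2 = n} equals 1 + (n|p).
counts = [0] * p
for x in range(p):
    counts[x * x % p] += 1
assert all(counts[n] == 1 + legendre_symbol(n, p) for n in range(p))

# Legendre family: k(n) = (p - #{(x, y) : y^2 = x(x-1)(x-n)}) / sqrt(p).
num_y = [0] * p  # num_y[v] = #{y in F_p : y^2 = v}
for y in range(p):
    num_y[y * y % p] += 1

def k(n):
    pts = sum(num_y[x * (x - 1) * (x - n) % p] for x in range(p))
    return (p - pts) / p ** 0.5

values = [k(n) for n in range(p)]
# Hasse's bound gives |k(n)| <= 2 for n != 0, 1; the two bad fibers
# contribute only O(1/sqrt(p)), so the max over all n is still at most 2.
assert max(abs(v) for v in values) <= 2.0
```

The same loop works for any one-parameter family of curves, which is the point of the "complexity bounded as p varies" remark.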
41:42
And you can basically replace this family of elliptic curves by almost any family of algebraic varieties depending on one parameter that you can think of, up to technical issues. But by technical issues, you mean what exactly? In any dimension? Well, I mean, you might have to extract the right thing
42:00
at the bad points. First there are the bad points, and also, say, if you take a surface which has a nontrivial H³ in addition, this would only be the right thing if there is just middle-dimensional cohomology, or whatever. But for families of curves, apart from doing the right thing at the bad points,
42:21
this would almost always be a trace function. Okay. So this already gives you some insight. And one of the things you can see is that, just with these examples, any statement which is valid for every trace function (and there will be such statements) will immediately give you many, many
42:41
different types of statements, which in number theory would otherwise often be treated one by one, maybe with different ad hoc methods. And we'll see examples of some of the things we did with Fouvry and Michel, where we have statements which are valid for every trace function. So that looks a bit abstract at first. But once you have a list of examples,
43:01
then you see we actually have lots and lots of theorems. Okay, now I can state the Riemann hypothesis. Right, I guess I need a bit more room for such a big theorem.
43:20
So I think I'll keep the examples visible, at least the definition, if I do it this way. Because, you will see, when I state the Riemann hypothesis I cannot quite state it using only the words I've already introduced.
43:42
I will need to introduce at least one extra property of trace functions, which I will then interpret in a concrete way, to see that the statement almost immediately implies a
44:00
qualitative version of Weil's estimates for exponential sums in one variable. So I keep this definition because I want to be able to refer to it. The extra condition I'm going to introduce on trace functions is always satisfied in example one,
44:20
which is the classical case of exponential sums in one variable. Okay, so let's see. So this theorem is essentially due to Deligne, but it also uses work of Katz and of other people; these are the two main ones, and the really essential work is that of Deligne.
44:43
Okay, so p is a prime, and we take k1 and k2 trace functions modulo p with complexities at most c1 and c2 respectively.
45:06
I could say complexity equal to c1 and c2; I want to emphasize that usually you don't really know what the complexity is, because it's not such a meaningful number. What is meaningful is it being big or small. You should think of, say, c1 = 10,000 and c2 = 10,000,
45:22
and there are already lots and lots of such things, as the examples above show. Okay, now we need an assumption on k1 and k2. The statement will be a quasi-orthogonality statement, and for that we need some assumption. So we assume k1 and k2 are what is called
45:41
geometrically irreducible. I will have to explain what that is, and I will do it afterwards. You can already take as information that in case one this is always true. So in case one, if you have k1 and k2 both of this type, one additive and one multiplicative,
46:02
or both multiplicative, or both additive, whichever way, then it is always automatic. Then there are two cases. The trace functions are associated to representations, whatever these underlying objects are,
46:22
which I call rho 1 and rho 2. So: if rho 1 and rho 2 are not geometrically isomorphic, which is another piece of information I will have to explain afterwards; in case one, the case of one-variable character sums,
46:43
it will again be quite clear what this means. Then we have square-root cancellation when we correlate k1 with k2: we sum over F_p the product k1 times k2-bar. So it's an inner product.
47:01
And we get square-root cancellation with an implied constant depending only on the complexities; one can compute that the implied constant can be taken to be 3 c1² c2², with the definition of complexity we've taken. Okay, so this is uniform square-root cancellation
47:21
as long as you have the assumption and as long as c1 and c2 are bounded. In fact it's uniform in such a strong way that you don't even need c1 and c2 to be bounded: they could grow a little with p and you would still get an extremely good bound. So that's the first statement. Let me put the second part here, so I still have the examples on the blackboard.
47:42
Is this condition independent of the choice of rho 1 and rho 2, which are sort of not unique? Right, but it is a condition about rho 1 and rho 2. What this will reveal is that,
48:02
once we have the second version, there can only be one rho 1 with small complexity and with a given trace function; so there's a gap between the complexities. That's a little bit what I was asking, actually. Okay, so the second statement is what happens in the other case: if rho 1 is geometrically isomorphic to rho 2.
48:26
Okay, so first, we don't know yet what that means, but here is a consequence which we might be able to falsify, meaning that if we falsify this condition then of course we are in case one. So the first piece of information is that k1
48:41
is proportional to k2: k1 = alpha times k2 for some alpha of modulus one. It's really proportional everywhere, including possible bad points; that has to do with the fact that we are implicitly using the correct definition of the values at every point.
49:02
And then it's not orthogonality but quasi-normality, meaning that the sum of k1(x) times k2(x)-conjugate is, of course, alpha times the sum
49:20
of |k1(x)|². So we cannot expect cancellation, and in fact what we have is more like an asymptotic formula: this is alpha times p (in particular, if k1 = k2, this is essentially p), up to an error which exhibits again square-root cancellation, with the same bound as in the first case.
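Here is a toy numerical illustration of the dichotomy (my own sketch, not from the lecture): k1(x) = e_p(x²) and k2(x) = e_p(2x²) are trace functions of bounded complexity from example one. Their correlation is a quadratic Gauss sum, of modulus exactly √p, which is square-root cancellation, while the correlation of k1 with itself is exactly p, with no cancellation.

```python
import cmath

p = 97  # hypothetical small prime chosen for illustration

def e_p(a):
    """Additive character e_p(a) = exp(2*pi*i*a/p)."""
    return cmath.exp(2j * cmath.pi * (a % p) / p)

k1 = [e_p(x * x) for x in range(p)]       # trace function e_p(x^2)
k2 = [e_p(2 * x * x) for x in range(p)]   # trace function e_p(2 x^2)

# Non-isomorphic case: the correlation sum is a quadratic Gauss sum,
# of modulus exactly sqrt(p), i.e. square-root cancellation.
cross = sum(k1[x] * k2[x].conjugate() for x in range(p))
assert abs(abs(cross) - p ** 0.5) < 1e-8

# Isomorphic case (take k2 = k1): the sum is exactly p, no cancellation.
self_sum = sum(k1[x] * k1[x].conjugate() for x in range(p))
assert abs(self_sum - p) < 1e-8
```

The exact value √p here is special to Gauss sums; the theorem only promises a bound of the shape (constant depending on the complexities) times √p.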
49:54
So I have still about 10 minutes.
50:03
Okay, so once we understand a bit better what these conditions mean, and we have tools to handle them, to check them and so on, then this is really an extremely general black-box version of the most general form of the Riemann hypothesis over finite fields. It's not quite equivalent, because we're still working
50:23
with just prime fields and sums in one variable, but it's basically the whole thing, at least from the analytic number theory point of view. There are other consequences of Deligne's work which are more algebro-geometric, and which this is not really trying to capture.
50:41
Okay, so now, yeah. If you take the product of two trace functions, I guess it's maybe not quite a trace function, because of that? Right, yeah; exactly like the product of two primitive Dirichlet characters might not be primitive: you might have to change the values at the bad points to get something primitive. But there's a unique underlying primitive object
51:04
that arises from this. Yeah. Is twice a character also a trace function? Yes, but then it's not geometrically irreducible: it would be chi plus chi, two components.
51:21
If you multiply chi by a complex number of modulus one, you can say it's still a geometrically irreducible trace function. But if you multiply by two (there are different ways of doing it, but one allowed way is to take chi direct sum with chi and then the trace), this is not irreducible anymore.
51:40
Yeah, because otherwise it wouldn't work. What about k1 equal to k2 here? You say that if rho 1 is geometrically isomorphic to rho 2, then k1 is proportional to k2; but can k1 be equal to k2 while rho 1 is not geometrically isomorphic to rho 2?
52:04
If they are geometrically irreducible, then this is true. Otherwise, this happens only in situations where the complexity is very large. Yes, it can certainly happen that they are not geometrically isomorphic and yet k1 is equal to k2;
52:22
but then c1 or c2 will be extremely large, so that this bound will be trivial. So you see, this bound is very powerful when c1 and c2 are small enough that essentially their product is less than the square root of p. Because what is the trivial bound? Maybe I should have said that.
52:40
So the trivial bound for these sums of k1(x) times k2(x)-conjugate is p times the L-infinity norm of each, and we know these norms are at most c1 and c2, so the trivial bound is at most p times c1 c2. So you see that if you allow the complexities to be large, then this is meaningless and gives no information, so there's no contradiction.
53:05
Okay, so what does it mean to be geometrically irreducible? So: geometric irreducibility and isomorphism. Yeah.
53:20
In the second statement, does it follow that c1 is equal to c2? Yes, right; the complexity is an invariant of these objects rho 1 and rho 2, which are really meaningful up to isomorphism, and in particular c1 equals c2 in that case.
53:48
Okay, so what does this condition mean? Here we have to enter a little bit into where these functions come from. So if k is a trace function of a certain object rho,
54:02
this means there exists a homomorphism rho from a certain group pi 1 to GL(V) for a certain finite-dimensional vector space V. So it's a finite-dimensional representation. You can think of V as a complex vector space.
54:22
Algebraically this is not quite the right thing to do: it should be an ℓ-adic vector space, but analytically it is fine to think of it as a complex vector space of some finite dimension, such that k(n) is the trace of rho of certain elements, Frobenius elements,
54:44
depending on n, acting on a certain subspace V_n. So V_n is the subspace of invariants under inertia, which I don't want to go into right now, but it's a subspace, okay?
55:02
At first you can even ignore this second piece of information. So there is a certain object: rho is really a map from a certain fixed group, which depends only on p, into GL(V), V a finite-dimensional vector space.
55:21
If you have a representation, you can ask that the representation be irreducible, and this is almost what we want. Precisely: there exists a normal subgroup pi 1^g of pi 1,
55:42
and to say that rho is geometrically irreducible will mean the following: first, V is non-zero; and there does not exist a subspace W inside V, with W not equal to V and W non-zero,
56:03
stable under the action of the subgroup. Okay, I'm still trying to keep all my examples, but it's becoming difficult.
56:22
What? Is this for every normal subgroup? Or does one exist? No, there is one specific such normal subgroup. Okay, more precisely, I can tell you what pi 1 is, so that there's no ambiguity: pi 1 is just the Galois group of a separable closure of F_p(T).
56:41
So it's a group that depends only on p, up to isomorphism, and the normal subgroup is the Galois group of the same big field, the separable closure of this function field, over the smaller field of rational functions with coefficients in the algebraic closure of F_p. So in some sense these are not such complicated objects.
57:01
They are quite mysterious from the group-theoretic point of view, and often one doesn't really need to know what they are, but that's what it is. Okay, so we have this condition. In particular, suppose that the dimension of V is one. This happens in case one, in the first example,
57:27
meaning character values at a rational function, additive or multiplicative. One has to know that this is the case, but it is. Then, in this definition, you don't need to know anything:
57:40
V is of dimension one; you are looking for a proper non-zero subspace, and there is no such thing. So it is automatically geometrically irreducible. So that's an example, okay?
58:04
Next, geometric isomorphism. So rho 1 into GL(V1) and rho 2 into GL(V2) are
58:21
geometrically isomorphic if and only if there exists a linear isomorphism from V1 to V2 (so they have the same dimension) such that, okay, I have to write this diagram correctly.
58:41
So for every element of the group, whatever this group is, this diagram commutes. It just intertwines the actions, for every element gamma, I should call it, of the subgroup pi 1^g.
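Written out (a formula version of the diagram, in the notation above, with the linear isomorphism called Lambda), the commutativity condition says that Lambda intertwines the two representations restricted to the geometric subgroup:

```latex
\[
  \Lambda \circ \rho_1(\gamma) \;=\; \rho_2(\gamma) \circ \Lambda
  \qquad \text{for every } \gamma \in \pi_1^{g}.
\]
```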
59:03
As a first instance, what you should think of at the moment is: geometrically isomorphic means the trace functions are the same. That's what you should think of, and we'll see more refined versions of this in a second. Okay, now, it's becoming harder and harder to,
59:24
okay, I'm starting to get a bit over time, but since I'm an organizer, I'm allowed. Ah, that's not the right one.
59:40
So I want to show how this immediately gives you a qualitative form of Weil's estimates. Just by knowing this, you can already apply it well enough to recover Weil's estimates for one-variable character sums.
01:00:01
OK, I guess I have to erase that. So, example. I'll just do the Kloosterman sum case; for simplicity, that's the most interesting one
01:00:22
for many applications. So r = 2. In that case, you take k1(x) to be the Kloosterman sum... sorry, you don't take the Kloosterman sum; you take e_p(nx + x inverse), with the usual convention that it's 0 when... no, I'm going too fast.
01:00:50
You have a non-zero element which is fixed; let's call it a, to avoid confusion. And k1(x) is e_p(ax + x inverse) for x non-zero,
01:01:02
and it's 0 for x = 0, modulo p. For k2, you take the constant function 1, which you can see as e_p(0) if you want; so it is a trace function. And by example one and by what I just said, the complexity of k1 is bounded independently of p,
01:01:24
and the complexity of k2 also. And they are certainly geometrically irreducible, because they have rank 1, as I said. And therefore... now I have to erase that.
01:01:40
I'll start with these examples again tomorrow. So we deduce, using RH in the form I stated it, that the Kloosterman sum with parameter a modulo p
01:02:03
is of modulus bounded by the square root of p, with an absolute implied constant, provided these two things are not geometrically isomorphic.
01:02:22
So: provided whatever rho 1 and rho 2 are, they are not geometrically isomorphic. Can it be that they are geometrically isomorphic? Suppose they are; then what happens? By case 2,
01:02:41
if they were geometrically isomorphic, k1 would be proportional to k2. k2 is a constant, so k1 would be a constant. But as soon as you have one non-zero element, this function cannot be constant: here it has modulus 1, there it is 0, so the constant would have to be 0, and then it doesn't work.
01:03:00
OK, so as long as you have a field with at least two elements, you're fine: k1 is not constant because p is at least 2, so we cannot be in case 2. So we get a qualitative form
01:03:20
of the estimate without having to know anything at all beyond the black box. OK, so I'll stop here. Tomorrow I'll start to try and do more complicated examples: I'll try to start with the sum that comes up in Zhang's work, and see where we get stuck and why we need a bit more information.
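Weil's bound for Kloosterman sums is in fact fully quantitative: the sum over non-zero x of e_p(ax + x⁻¹) has modulus at most 2√p for every a ≠ 0. A brute-force check of this (my own sketch, not from the lecture; p = 101 is an arbitrary small prime):

```python
import cmath

p = 101  # hypothetical small prime chosen for illustration

def kloosterman(a):
    """Kl(a; p) = sum over x in F_p^* of e_p(a*x + x^{-1})."""
    total = 0j
    for x in range(1, p):
        inv = pow(x, p - 2, p)  # x^{-1} mod p, by Fermat's little theorem
        total += cmath.exp(2j * cmath.pi * ((a * x + inv) % p) / p)
    return total

# Weil's bound: |Kl(a; p)| <= 2*sqrt(p) for every non-zero a.
worst = max(abs(kloosterman(a)) for a in range(1, p))
assert worst <= 2 * p ** 0.5 + 1e-9
```

The black-box argument above gives the same shape of bound, C times √p, without ever writing down the exponential sum explicitly.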
01:03:56
With your version of RH, could we take this inner product of k1 and k2,
01:04:01
but scale the x in k2 by some n, and then normalize by one over the square root of p? Would that also be a trace function? What do you mean? So, I'm trying to generalize the construction of Kloosterman sums here: they're sums of some k1 times k2, where one of them has a scaled argument.
01:04:22
So by scaled argument, you mean? I want the sum of k1(x) times k2(nx). Right, yeah. So this will be no problem.
01:04:40
So we'll see: trace functions not only give these first examples, they also have a rich formalism. In particular, if you replace x by nx for any non-zero element n of the field, then you get another trace function with the same complexity; or you could replace x by 1/x, which works exactly the same. But now this sum is a function of n... no, now you vary n.
01:05:00
And then you want this as a function of n. This will often be a trace function; there have to be conditions, but for standard, generic examples it will be a trace function, and the complexity will be bounded solely in terms of the complexities of k1 and k2. This is highly non-trivial; it requires very recent work of Katz
01:05:22
on multiplicative convolutions. But this exists. In that example where f is x squared and the k is 1 plus the Legendre symbol, this doesn't cancel on average. Yeah, because it's not geometrically irreducible.
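For the question about correlating with a scaled argument, the simplest instance can be checked directly (my own sketch, not the general theorem of Katz): take k1 = k2 = the Legendre symbol chi. Then the correlation c(n), summing chi(x) times chi(nx) over x, collapses to chi(n) times (p − 1) for non-zero n, so as a function of n it is again a multiple of a trace function, as claimed.

```python
p = 103  # hypothetical small prime chosen for illustration

def chi(n):
    """Legendre symbol (n|p) via Euler's criterion."""
    if n % p == 0:
        return 0
    return -1 if pow(n, (p - 1) // 2, p) == p - 1 else 1

# c(n) = sum over x of chi(x) * chi(n*x): correlation with a scaled argument.
c = [sum(chi(x) * chi(n * x % p) for x in range(p)) for n in range(p)]

# For n != 0 and x != 0, chi(x)*chi(n*x) = chi(n)*chi(x)^2 = chi(n), so the
# correlation is chi(n) * (p - 1): a multiple of the trace function chi.
assert all(c[n] == chi(n) * (p - 1) for n in range(1, p))
```

Here the collapse is elementary because chi is multiplicative; for general k1, k2 the analogous statement is exactly where the work on multiplicative convolutions comes in.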
01:05:41
As you can see, one of the consequences of what I said already is that, for small complexity, the two estimates I wrote are incompatible. So the fact that there's cancellation: if you take k2 equal to k1, then the statement that the sum of k1,
01:06:01
modulus squared, is essentially p, is equivalent to geometric irreducibility, for small complexity. So one quick way (I'll say it tomorrow), if you are given a function, you know it's a trace function, and you want to know whether it's geometrically irreducible or not: compute the mean square. If the mean square is close to p, then it is geometrically irreducible.
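This mean-square test is easy to run numerically (my own sketch; p = 101 is an arbitrary small prime). With the normalization k(n) = Kl(n; p)/√p, opening the square and using orthogonality of the additive characters gives that the sum over n of |k(n)|² is exactly p − 1, which is "essentially p", so Kloosterman sums pass the irreducibility test.

```python
import cmath

p = 101  # hypothetical small prime chosen for illustration

def kloosterman(a):
    """Kl(a; p) = sum over x in F_p^* of e_p(a*x + x^{-1})."""
    total = 0j
    for x in range(1, p):
        inv = pow(x, p - 2, p)  # x^{-1} mod p
        total += cmath.exp(2j * cmath.pi * ((a * x + inv) % p) / p)
    return total

# Mean square of the normalized trace function k(n) = Kl(n; p)/sqrt(p):
# sum over n of |k(n)|^2 = (1/p) * sum over n of |Kl(n; p)|^2 = p - 1 exactly,
# since opening the square leaves only the diagonal terms x = y.
mean_square = sum(abs(kloosterman(n)) ** 2 for n in range(p)) / p
assert abs(mean_square - (p - 1)) < 1e-6
```

Note that this particular identity is elementary (no RH needed); RH enters when one wants the same conclusion for trace functions whose mean square cannot be opened up by hand.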
01:06:24
And we'll see examples. So finally, you can check that Kloosterman sums are geometrically irreducible this way. The mean square of Kloosterman sums is easy to compute: it's essentially p, just by RH, actually. So I'll say this tomorrow in more detail. Yeah? So if you are given something that is not irreducible,
01:06:41
is there some natural way to break it up into pieces? So, we know there exists a decomposition into a sum of geometrically irreducible pieces; there's a little technical tweak, but essentially it's true. It might not be easy to find them. In principle, you find them by taking inner products. It's a little bit like an orthonormal basis of a Hilbert space, but
01:07:01
there are infinitely many basis vectors: we have a finite-dimensional space, but somehow with infinitely many basis vectors. So it's hard to actually compute all of them to determine the decomposition. And if you cannot compute it, unless you have extra information, it can be very hard. So a number of the open problems we are still fighting with in various applications
01:07:21
come from this: we have something, we don't know that it's geometrically irreducible, and we don't know exactly what the components are. So it's one of the fundamental problems that can arise in applications.