
Simple characters and ramification

Formal Metadata

Title: Simple characters and ramification
Number of Parts: 17
License: CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Abstract: Let F be a non-Archimedean local field of residual characteristic p. For any integer n more than 1, one has the detailed classification of the irreducible cus...
Transcript: English (auto-generated)
Anyway, it's a particular pleasure and honour to be here for this occasion.
It's, I think, fairly obvious that the book on zeta functions and simple algebras, which was discussed yesterday, set me on the road that I'm still treading. And I'm particularly happy that this meeting's at IHES because, I hate to say this, sorry, but 25 years ago,
Guy Henniart and I spent a semester together here, in which he persuaded me that it really was a good idea to do tame lifting. And that's underlying what I want to talk about today. Right, so, this you may think of as a report on the latest phase of a long project with,
joint project with Guy, to really elucidate the local Langlands correspondence. Please don't think I'm searching for a local proof of the existence or anything like that. My attitude is, it's been done, get over it. So let's try using it.
Okay, so let me start with a very broad brush. Does this thing actually do this? It seems very reluctant. Ah, okay. Account of the general background. I'd like to leave plenty of time to do something concrete, which is the object of the whole thing.
So, usual sorts of picture. A non-Archimedean local field. All that matters about it is that its residual characteristic is p.
p is always that. And I'm interested in, as usual, WF will be the Weil group of some chosen separable algebraic closure. And I'm only interested in the usual set of equivalence classes of irreducible, complex, smooth representations.
Right. I also want, on the other side, so let's start getting some sort of orientation here. For an integer n at least 1, I'll temporarily write A_n(F) for
the set of equivalence classes of irreducible, complex, cuspidal, smooth representations of GL_n(F).
All adjectives are assumed to be held constant in this and will not be used again. I really want to compare that thing with GL(F) hat, which is to be the union of these spaces.
And if I have to refer to n, I'll use the notation: it lies in A_n(F). I'll use a notation that's pronounced degree equals n. I'm trying to avoid that as much as possible. This is meant to be a fairly dimension-free analysis.
I can't see the eraser. Where is it? Ah, right, fine. Good heavens. The big one's a good one, is it? Right, fine.
I have many blackboards, yes. Okay. Now, over here now, what was I going to say here? So, the Langlands correspondence, let's stick this in the middle, of course is canonical bijection,
for which I'll use the notation in this direction, pi goes to L pi, and that I want to analyse. In the following sense, the old work with Kutzko gives you a uniform, complete and systematic classification of the elements of this.
It has some unpleasant features of its own, it can sometimes be difficult to work with, but it is systematic and highly structured, whereas this is a jungle. So, I'm trying to actually see how this structured nature of this thing transmits to here.
And the way that this thought goes is the following. I'll let, I'll take the wild inertia group in WF.
I've then got a restriction map, which I'll denote sometimes RF plus. If I restrict a representation to wild inertia, I get a semi-simple representation, all of the irreducible components being conjugate.
So, I get a well-defined map to conjugacy classes under WF of irreducible representations of that. Now, on the other side, this is the bit of general theory I want to keep completely broad brush.
If I take, let's be honest and put it in, say where it comes from, it's coming from GLN, the cuspidal representation, this contains a simple character.
These are the important part of the classification theory. I mean, it's a carefully constructed family of very special characters, of very special compact open subgroups of GLNF.
At least one of these occurs in any given cuspidal representation. In fact, there's only one up to conjugation. I want to put something in here to make a nice diagram.
The thing is, to do that, I have to consider simple characters in all dimensions simultaneously. There is a machinery for doing that, a machinery of transfer between GLNs, but the class of all simple characters in all GLNs admits a canonical equivalence relation.
It's an equivalence relation which reduces to conjugation in the right circumstances in the same group.
It's a very tight equivalence. And the set of equivalence classes here, it's called the set of endo classes of simple characters.
I don't want to go into this, it takes forever. You let this stuff out of a cage, it just takes forever to get it back in. And when I come to the thing I want to do, all of this stuff sort of disappears in triviality. But anyway, this fits in here. One takes a representation down to the endo class of the simple characters contained in there.
Right, old result. The Langlands correspondence induces a bijection there. This is what I want to study. No, this is in a paper with Guy.
I think it's called something helpful like local tame lifting, part 4. I've got a lecture note to sell, so I can sell you that. Now, the point here that's in my mind is this: PF is a humongous pro-p group,
and ask any group theorist, forget it, you're not going to say anything about its representations that's useful. That's encouraging. But, if I start with one of these things, it's not terribly difficult to parametrize quite exactly, not completely,
but quite exactly the irreducibles of the Weil group which contain it. And the "quite exactly" comes from the fact that you've got a rather boring unramified indeterminacy which has to be eliminated using a local constant calculation.
But in structural terms it's insignificant. And guess what? On the other side you can do exactly the same. If you start with a simple character, or an endo class, and take an appropriate representative of it in a GL_n, you can work out the cuspidals of GL_n which contain that.
They're very parallel, the descriptions, and the parallel reflects reality. Guy and I proved some time ago that you can set this up very tightly. All of this is a roundabout way of saying that all the trouble is with this map and this map, and with that object and that object, and the map between them.
So, how does one deal with that? Starting from here, the first thing you can use, well, the key point is that from,
and in another old paper with Guy and with Phil Kutzko, we can use this stuff to give an explicit formula for the conductor of a pair of cuspidal representations. And that provides a tool for getting across there.
Provided, on this side, you take a deep breath and use ramification groups. And this is the one thing that you've got working for you on that other side. So, that can stay up there, right. So, if I take X greater equals zero, RF of X is what one habitually calls, whoops, X.
That's the ramification subgroup, or as you say, at X.
All right, so this is a filtration of the Weil group, it's all sitting inside. RF of zero is the inertia group. Otherwise, beyond zero, they're all contained in the wild inertia group,
they're normal in the whole Weil group, and they're closed subgroups. There's a sort of dual version, I don't know, this doesn't seem to have ever been given a name. RF plus of x is the closure of the union
over all y strictly further down, y greater than x. Right, likewise closed subgroups, normal in the whole Weil group, and in fact, RF of x is RF plus of x, if and only if x is rational.
[From the audience: not rational, not rational.] Oh, sorry: if and only if x is not rational. But one has to use both of these. Okay.
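For orientation, here is the filtration as I understand the spoken description, written out; the notation R_F(x), R_F^+(x) follows the board.

```latex
% Ramification filtration of the Weil group W_F, in upper numbering:
%   R_F(0) = I_F (inertia);  R_F(x) is contained in P_F for x > 0;
%   each R_F(x) is a closed normal subgroup of W_F.
R_F^+(x) \;=\; \overline{\bigcup_{y > x} R_F(y)},
\qquad
R_F(x) = R_F^+(x) \quad\text{whenever } x \notin \mathbb{Q}.
```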
Now, using this, there's a trick I learned from a paper of Volker Heiermann, though part of it I'm sure goes back further. I can put, let me call it briefly, a distance function, delta, on WF hat:
delta of sigma, tau is the infimum of x such that Hom over RF of x, of sigma and tau, is not zero.
When you restrict to this compact subgroup, they've got a component in common, and you take the infimum. You put a min there if you want to put RF plus there, it's up to you. Right, that looks like an ultrametric, but it doesn't separate points.
You can have two distinct representations, distance zero apart, but delta induces, in fact, an ultrametric on that set: a metric satisfying the ultrametric inequality.
Of course, it's not complete with regard to that metric, and completing it seems a fairly strange thing to do, so we don't bother.
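The distance function just defined can be written compactly as follows (a sketch of my reading; Hom is taken over the ramification subgroup):

```latex
\delta(\sigma,\tau)
  \;=\; \inf\bigl\{\, x \ge 0 \;:\; \operatorname{Hom}_{R_F(x)}(\sigma,\tau) \ne 0 \,\bigr\},
  \qquad \sigma,\tau \in \widehat{W}_F.
% delta need not separate points of the set of representations, but it
% satisfies the ultrametric inequality:
\delta(\sigma,\tau) \;\le\; \max\bigl\{\, \delta(\sigma,\upsilon),\ \delta(\upsilon,\tau) \,\bigr\}.
```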
Right, to go across here now, a fairly awful definition I shall not write down. There is a similar function, which I call A on GLF hat.
It takes positive rational values, just as that one does, and that induces an ultrametric
on the endo classes. The distance in this sense between two cuspidal representations depends only on the simple characters they contain.
And then you go, oops, how did I do that? It depends only on the simple characters they contain, and that's reflected by the fact you've got an ultrametric there.
These things are horrible to define, but very easy to use. Those are easy to define, but correspondingly rather unpleasant to use. Let me continue this parallel development.
So back on the Galois side, what makes this tick is, so let's look at this space. There's a unique continuous function.
Alright, let's take sigma in. Let's do it in terms of representations of the Weil group to start with. A unique continuous function that I'll call Sigma sigma of x, x greater than or equal to zero, with the following property.
If I take another irreducible representation tau of the Weil group, then if I evaluate this function at the distance between sigma and tau, I get the Swan conductor of sigma-check tensor tau over dimension of sigma, dimension of tau.
And this is a rather interesting function. It's obviously, it's positive, strictly increasing, convex, which really matters, and it's piecewise linear.
And I have to say, I think, if I take the derivative, Sigma sigma prime has only finitely many discontinuities.
And the discontinuities, which I will call jumps, reflect the internal structure of the representation. x is a jump of Sigma sigma if and only if (I don't want Hom, I want End) you take the RF of x endomorphisms of sigma, and take its dimension,
that is not the same as the dimension of the endomorphisms over the slightly smaller ramification group in that dual sequence. Something happens to sigma as you restrict down the ramification sequence, this is where something happens.
Ultimately, perhaps I should have said that: if x is greater than the Swan conductor of sigma over the dimension (the slope, as one says), sigma on RF of x is trivial.
Can't say it equals one, because its dimension is not one, it's a trivial representation of the right dimension.
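Putting the last few statements together, the defining property of Sigma sigma and its jumps, as I reconstruct them from the spoken description:

```latex
% Defining property: for every irreducible tau,
\Sigma_\sigma\bigl(\delta(\sigma,\tau)\bigr)
  \;=\; \frac{\operatorname{sw}(\check\sigma \otimes \tau)}{\dim\sigma \cdot \dim\tau}.
% x is a jump of Sigma_sigma exactly when
\dim \operatorname{End}_{R_F(x)}(\sigma)
  \;\ne\; \dim \operatorname{End}_{R_F^+(x)}(\sigma),
% and sigma restricted to R_F(x) is trivial once x exceeds the slope:
x > \operatorname{sw}(\sigma)/\dim\sigma
  \;\Longrightarrow\; \sigma|_{R_F(x)} \ \text{trivial}.
```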
So, that's a reasonably easy function to define, you can write down various explicit formulas, but the fact of the matter is you're not going to be able to say anything about it unless you are on really close friendly terms with the representation sigma. Now, the analogue on the other side is the complete opposite.
It's a nightmare to define, but actually very easy to use. So, having said that, on this side, if I get, and let me do it again just in terms of representations rather than endo classes in the notation,
let's take pi in GL(F) hat, there exists a unique continuous function, Phi pi, such that if I evaluate this function at the distance between pi and rho,
I get the Swan conductor of pi-check cross rho over the degree of pi, degree of rho. Right, and it has the same properties.
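In symbols, with A(pi, rho) denoting the GL-side distance (my notation for what is on the board):

```latex
\Phi_\pi\bigl(\mathrm{A}(\pi,\rho)\bigr)
  \;=\; \frac{\operatorname{sw}(\check\pi \times \rho)}{\deg\pi \cdot \deg\rho},
% with Phi_pi, like Sigma_sigma, positive, strictly increasing,
% convex and piecewise linear.
```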
Now, the parallel is awfully nice, and I remark that the Langlands correspondence gives you a bijection at the level of endo classes to conjugacy classes
of representations of wild inertia. And we've got this relation here on the Swan conductors preserved by the Langlands correspondence. The Langlands correspondence is not an isometry for these metrics; we could all go home if it was.
It's modulated, it has to work through these two functions, the Sigma and the Phi. Right, but because the correspondence preserves the conductor of pairs and certainly preserves degrees, I can get something out of this.
Let's see. If I take a cuspidal representation of something, let me put sigma as the corresponding representation of the Weil group.
I'll define c pi; I might equally call it c sigma, or anything else that I might choose to label it by. That is the inverse function of this one, composed with the corresponding function
on the Weil group side. Now, this thing is again continuous, strictly increasing, piecewise linear, its value at zero is zero,
it's equal to x for x at least the Swan conductor over the degree.
So it's only interesting in a particular interval. And the preservation of the conductor of pairs gives you straight away the following result.
Let's take pi in GL(F) hat, let's choose delta greater than zero and define an epsilon from it. And then, if I take another cuspidal representation rho of some other general linear group, I get: the distance between pi and rho is less than delta
if and only if the Galois distance between the corresponding representations of the Weil group is less than epsilon.
And that is the same as saying, if I look at these things on the epsilon ramification group, they've got a component in common.
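A sketch of how I read the definition and the resulting equivalence; the exact order of composition, and which of the two thresholds is computed from the other, are my reconstruction from the spoken description:

```latex
c_\pi \;=\; \Phi_\pi^{-1} \circ \Sigma_\sigma,
  \qquad \sigma = {}^{L}\pi,
% continuous, strictly increasing, piecewise linear, c_pi(0) = 0,
% and c_pi(x) = x for x >= sw(sigma)/dim(sigma).
% Preservation of conductors of pairs then gives, with the thresholds
% delta_0 and epsilon matched through c_pi:
\mathrm{A}(\pi,\rho) < \delta_0
  \;\Longleftrightarrow\;
  \delta\bigl({}^{L}\pi,\ {}^{L}\rho\bigr) < \varepsilon .
```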
And this has a similar interpretation in terms of the simple characters inside pi and rho. But the point of this is, you can actually spot, maybe if you're dead lucky and you are up to a point,
you can spot the way the Galois representations decompose on the ramification sequence purely in terms of what's going on on the gl side.
This is all very well, but can you actually calculate anything? Now what are the things I really must say about this function?
First, this, as we call it through lack of imagination, the Herbrand function c pi: first, it behaves well with respect to one nice operation.
That's tame base field extension. Realising that is a happy moment, because the two factors of it,
the Phi pi and the Sigma sigma, are not well behaved with regard to tamely ramified base field extension. But their nastiness cancels out in this composite. So that's nice, so you can reduce to what I'll call the totally wild case.
From the Galois side, sigma is totally wild means that sigma restricted to the wild inertia group is irreducible.
And it's easy to translate that to the gl side, because basically it says that the dimension is a power of p and if you twist with a non-trivial unramified character, you change the representation.
So that goes across to the gl side perfectly well. So just to simplify matters, let's take pi totally wild and e on f, a finite tame extension.
Then if I lift pi to a representation, so if pi is in A_{p^r}(F), I can lift it.
I'm cheating here, I can't lift pi, but I can lift the simple character that's in it, so forgive me that.
I can define it, but it's not necessarily very good. But this thing, c of pi-over-E evaluated at x, is e(E/F) times c pi of x over e(E/F).
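As a formula, this is my transcription of the spoken base-change statement; e(E/F) is the ramification index of the tame extension:

```latex
c_{\pi_E}(x) \;=\; e(E/F)\; c_\pi\!\Bigl( \frac{x}{e(E/F)} \Bigr),
\qquad E/F \ \text{finite tame}.
```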
And you can't complain about that formula. So you can always change the base field through a tame extension if you really want to, and you usually do. To carry on with this train of thought, or train of celebrations really,
this thing has another property which is extremely useful, though perhaps not as useful as you would ideally like. Right, and that is, let me just write it down and then excuse myself slightly.
If I twist pi with a character chi of F-cross, of, say the Swan conductor of chi is x,
I'll get this right, for some reason I have, yup. Right, then c pi of x is the distance between chi pi and pi. Now, warnings, as usual.
First you have to avoid the jumps of this function, and you will observe that this only works if this thing x is an integer.
You've got to find a character with conductor x. But it does tell me, with a bit of luck, the value of this at a finite number of integral points. But, of course, because of this, I get the same property over E, where E on F is finite tame.
So this property applies across all these extensions, and, making sure that the singularities don't get in the way, this defines this function on an everywhere dense set of rational numbers,
which since it's continuous is good enough. So that is in principle, this gives an algorithm for getting c pi.
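The twisting property and the algorithm it yields, schematically (my reconstruction; chi runs over characters of F-cross):

```latex
% If sw(chi) = x, with x an integer avoiding the jumps of c_pi, then
c_\pi(x) \;=\; \mathrm{A}(\chi\pi,\ \pi).
% Repeating this over all finite tame extensions E/F, via the lifted
% function c_{pi_E}, determines c_pi at a dense set of rational points;
% continuity then determines c_pi everywhere.
```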
It's very powerful, but it's not as powerful as you'd quite like. It's very good at actually working this thing out over crucial subsets of its range. The other point, the more philosophical point is, I don't actually have to invoke the Langlands correspondence in all of this.
It's got nothing to do with Galois representations whatsoever. It's purely a matter of calculations going on inside GL_n. The fact that the calculations can be quite hard is, well, a bit of a problem. But, that's the general stuff.
Now, where am I? Just where I want to be. I want to talk about further a rather special case. There's always a good reason for a special case, because it's something you can do,
but this one occupies a rather particular position in the structure of the subject. On the Galois side, so let's take, and I'm going to stick with my notation for something wild.
I'll say, for sentimental and historical reasons, that sigma is of Carayol type if p doesn't divide its Swan conductor.
At which point you shrug your shoulders and say, well, what does that mean? On the Galois side, it doesn't seem to mean anything very much. But, the corresponding thing for a, on the GL side, wildly ramified, totally wild,
conductor not divisible by p. This case plays a crucial role in the general structure theory. The classification of these, of the elements of this set GL(F) hat,
that's a rigidly hierarchical or inductive process. Virtually any proof involving this starts with representations of Carayol type.
They're special, the methods you have to use for them are eccentric, and you generally have to work out rather more about them than you do in general just to keep things going. So they're an absolutely critical case to do first. Well, you wouldn't be able to do anything without doing this first. Whether you can do anything having done this is a bit of an open question at the moment.
Right, now, what I want to do with these is first, so for sigma totally wild of Carayol type, I want a general property,
rather surprising general property, of the associated Herbrand function. I want then to show you the results of a calculation of what happens when you compute it
from the GL side: you go back and you look at a particular sort of simple character and work out the Herbrand function. You can calculate it directly from the Galois side and then compare.
Let me pause a moment because any reasonable person now will be saying,
well, what's this all about, how excited can you get about a piecewise linear function? The thing is, once you calculated it from here, from the GL side, the Galois side calculation doesn't just give you this,
it actually gives you the structure of the Galois representation in extremely fine detail. But then when you make the comparison to make these two calculations compatible, you get some rather nice properties of the Langlands correspondence.
Right, so how does this go? First, so let's take sigma is L pi, sigma is one of these totally wild things,
and let's, because I shall forget myself, M always means the Swan conductor.
Here, Phi pi, oh, let's have its dimension; the awkward function that I didn't define is actually rather dull. It starts here at a point I know, it goes up with gradient p to the minus r until it hits M over p to the r,
and then it goes, bing, with gradient one. So beyond there is boring, and within there it is usefully and constructively boring,
because it says that, because the Herbrand function starts with a convex function, and you're then composing it with the inverse of that which doesn't have any jumps,
so it stays nice. All right, and so it looks something like this.
I'm going to draw the graph of y equals c sigma of x. So we've got this diagonal here, that goes on up there.
So I know it's, I can't draw piecewise linear functions with a convincing number of jumps, so let's just do this, whoops, can't even get them in the right place, there we are. It's a convex piecewise linear function which goes from there to there.
Right, theorem. The graph is symmetric relative to the line x plus y is M over p to the r.
This point here of course is M over p to the r. So that says, if we take the other diagonal, we've got a sort of functional equation, which you can write down, but it's much better to think of it as a symmetry.
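The symmetry can be written as a functional equation; this is my rendering of "symmetric relative to the line x plus y is M over p to the r", with m the Swan conductor:

```latex
% The graph of y = c_sigma(x) is carried to itself by reflection in
% the line x + y = m/p^r, i.e. (x, y) -> (m/p^r - y, m/p^r - x); so
c_\sigma\!\Bigl( \frac{m}{p^{r}} - c_\sigma(x) \Bigr)
  \;=\; \frac{m}{p^{r}} - x ,
\qquad 0 \le x \le \frac{m}{p^{r}} .
```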
The interesting thing which I can draw very well, you can't if you're going to do the general case, that point where the graph crosses that skew diagonal, there are two possibilities.
You might get a nice clean crossing with the two lines perpendicular, that's the way it has to be by symmetry, or you might get something like this. There's a jump in the graph where it crosses. This is actually quite a significant structural feature.
Think of it as a very large and unpleasant spider sitting there, which is governing everything in a rather indirect sort of way. But, you think about it for a moment, you see we're really interested in, this is the piecewise linear thing, we're interested in the jumps. So the jumps, you might or might not have one there, but otherwise they come in pairs.
So, the other point about this diagram is that if I take a general totally wild representation, so if I take tau in WF hat, totally wild, then C tau has the symmetry if and only if tau is chi tensor sigma,
where sigma is of Carayol type and chi is a character. Okay, so nothing awkward can happen anyway. It eliminates various odd possibilities and it's also nice to know that this is characterised,
this key case is characterised by something which is intrinsic in this setup. Right, so that's the first thing. The proof is largely an exercise in very, very fussy Galois theory and finite group representation theory,
until at some point you have to invoke a conductor formula. You have to know what this is from the point of view of the Galois side. That's the thing that comes originally, that comes out of the conductor formula from Bushnell-Henniart-Kutzko.
I believe there is a direct Galois proof now, is there? No, not quite. Whether there is or not, it's rather interesting that this is a very standard sort of thing which comes out of the GL side, but on the Galois side it is at least fairly hard.
But that's a key point in that, making that proof work. Right. Okay, so now I have to take a deep breath and talk about this. Let's get that up there, and I actually have to talk about the GL classification.
Right, so to do this I'm going to work in the group G, which is GL_{p^r}(F).
r is an integer, and to be honest I always want r at least 1; I'm not interested in dimension one. And A is the associated matrix ring. Right, I want the following data.
I want a field extension e on f, totally ramified of degree p to the r. I want to think of it as sitting inside the matrix algebra a
and normalizing, I can either say, a minimal hereditary order or, if you want to go multiplicative, an Iwahori subgroup, I don't care.
Whichever makes you feel happier. Right, that's the start of it. I want an element alpha in e with the property that it generates e and not just that it generates it, I want its valuation to be minus m
where m is positive and not divisible by p. Right, okay, the list continues.
I want to fix a character, which all my life I've called psi F, but I've run out of psis, so this has to be eta F, a character of F of level 0,
meaning it's trivial on the maximal ideal of the discrete valuation ring in F but not on the discrete valuation ring itself. Or, if you want to use the other thing, the version beloved of local constants people.
It's there, it's whatever it is to keep the book straight. Right, I'm now going to define a group H1 alpha, which is not a cohomology group the next one in the series is J1 alpha.
That will be the group of 1-units of E and, writing I for the hereditary order normalized by this field, the appropriate part of the unit filtration of I.
U_I to the k means 1 plus the k-th power of the radical of I. If you want to think in terms of your Iwahori subgroups, these are the standard filtration subgroups. Right, so that's a compact open subgroup of G
and C(alpha), that's the set of characters theta of H^1(alpha)
with the following condition: theta of 1 plus x, for 1 plus x in this awkward bit of the unit group, is equal to my character eta_F evaluated at the trace, in the algebra A, of alpha x.
Right, and it can be anything on the 1-units of E, provided it's compatible with that. Right, these things are simple characters, and the point of doing this: these are the simplest sort that you can actually write down.
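Written out, the defining condition on a simple character is (in notation I'm supplying; the precise domain of the elements 1 + x is as described just above):

```latex
\theta \in C(\alpha) \quad\Longleftrightarrow\quad
\theta(1+x) \;=\; \eta_F\!\bigl(\mathrm{tr}_A(\alpha x)\bigr)
\quad\text{for the relevant } 1+x \in H^1(\alpha),
```

with theta otherwise unconstrained on the 1-units of E, subject to compatibility.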
If I take a random cuspidal representation: pi is totally wild of Carayol type
if and only if it contains some theta in C(alpha) for some alpha.
Right, these are exactly the ones which pick out the totally wild things of Carayol type. And I think you'll agree they're not too terrifying.
Okay, sorry, I have to change that. Alright, so let theta range over C(alpha) and consider how the associated Herbrand function varies;
that means the Herbrand function of a cuspidal containing this one; all these things, the labels, are equivalent. Two comments here. If I just fix the alpha, this set C(alpha) is finite, but it can be quite large, and this thing is not constant on this set.
Which is a bit of a nuisance when you're at the discovery stage. The other thing we've known forever is that the thing that's interesting here is not this element alpha;
it's the bunch of characters and the group that you get. And you can, in this wildly ramified situation, often change this alpha quite violently without changing either the group or the characters that you get. This has always been a minor administrative problem, but it actually saves the day here.
And you can get around this now. This is where I am with this second result.
Let's take theta in C(alpha). I'm going to want a subset C*(alpha), as follows.
First I need to invoke this thing, the wild exponent w_{E/F} of E over F,
meaning the exponent of the different, plus one, minus the ramification index: the wild part. Right, and then I want to put l_alpha equal to m minus w_{E/F}, or zero
if m is less than w_{E/F}. Right, now I say that theta is in C*(alpha)
if theta of 1 plus x is eta_F of the trace, E to F, of alpha x.
Familiar formula. That's to hold for x in E of valuation greater than the integral part of l_alpha over 2.
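Putting the last three definitions together (again in symbols of my own choosing, with d_{E/F} the exponent of the different and e_{E/F} the ramification index):

```latex
\begin{gathered}
w_{E/F} \;=\; d_{E/F} - e_{E/F} + 1, \qquad
l_\alpha \;=\; \max\bigl(m - w_{E/F},\, 0\bigr),\\[2pt]
\theta \in C^{*}(\alpha) \;\Longleftrightarrow\;
\theta(1+x) \;=\; \eta_F\!\bigl(\mathrm{tr}_{E/F}(\alpha x)\bigr)
\ \text{ for all } x \in E \ \text{with}\ v_E(x) > \bigl\lfloor l_\alpha/2 \bigr\rfloor.
\end{gathered}
```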
Right, so that cuts down the choice of a simple character that I'm going to look at for a fixed alpha. It doesn't really do that, actually, because if theta prime is in C(alpha),
there exists an element alpha prime, generating a field of the right sort and in the right place, such that C(alpha prime), that set of characters, is the same as C(alpha), and theta prime is in C*(alpha prime).
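Since everything from here on runs through the Herbrand function, here is a toy numerical model of one. This is entirely my own illustration, not the talk's data: it assumes the familiar piecewise-linear shape of the classical Herbrand function psi_{E/F} of a totally wildly ramified extension (slope 1 up to the first jump, slope multiplied by p at each subsequent jump), and the jumps, p, r, and m below are made-up values. The bisection locates the point where the normalized graph meets the skew diagonal x + y = m / p^r, which is the crossing point used later.

```python
from fractions import Fraction

def herbrand_psi(jumps, p):
    """psi_{E/F} for a totally wildly ramified extension, modelled as the
    piecewise-linear function with psi(0) = 0, slope 1 up to the first
    jump, and the slope multiplied by p at each successive jump in
    `jumps` (an increasing list of upper-numbering jumps).  Illustrative."""
    def psi(x):
        x = Fraction(x)
        val, slope, last = Fraction(0), Fraction(1), Fraction(0)
        for t in jumps:
            t = Fraction(t)
            if x <= t:
                return val + slope * (x - last)
            val += slope * (t - last)
            slope *= p
            last = t
        return val + slope * (x - last)
    return psi

def crossing_point(Psi, bound, tol=Fraction(1, 10**6)):
    """Bisection for the c >= 0 with c + Psi(c) = bound, assuming Psi is
    continuous and increasing with Psi(0) = 0, so x + Psi(x) increases
    strictly from 0."""
    lo, hi = Fraction(0), Fraction(bound)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid + Psi(mid) < bound:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Toy data: p = 2, r = 1, one ramification jump at t = 1, and m = 3
# (so p does not divide m).  Then psi(x) = x for x <= 1 and
# psi(x) = 1 + 2(x - 1) beyond the jump.
p, r, m = 2, 1, 3
psi = herbrand_psi([1], p)
Psi = lambda x: psi(x) / p**r          # the p^{-r}-normalized function

# Where the normalized graph meets the skew diagonal x + y = m / p^r:
c = crossing_point(Psi, Fraction(m, p**r))
```

In this toy example the crossing happens exactly at the jump, x = 1, since x + x/2 = 3/2 there; moving m changes which side of the jump the crossing point falls on, which is the phenomenon the talk cares about.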
So I've taken a finer dissection of this set of simple characters here. Right, and once I've done that, I can tell you: if I've got theta in C*(alpha),
then Psi_theta(x) equals p^{-r} psi_{E/F}(x), where psi_{E/F} is the classical Herbrand function of the extension E over F,
provided that x plus p^{-r} psi_{E/F}(x) is at most m over p^r. Right, remember we've got this magic diagram, and we only care what happens in here:
this is the region x + y at most m over p^r. So that tells me that it looks like that, and then I've got a theorem which tells me what it looks like on the other side. Right, okay. And these things, the fields vary and the elements alpha vary,
and these Herbrand functions can vary quite a lot within things which are quite close. Right, now where am I? Ah yes, on the Galois side, let's take sigma
and define c_sigma by c_sigma plus Psi_sigma(c_sigma) equals m over p^r.
Now that's the place where the Herbrand function crosses this skew diagonal. What you get is this: if I restrict to the ramification group there,
as a sum of characters, call them xi, they're all conjugate.
So define an extension L_xi whose Weil group is the stabilizer of xi; then you can take rho_xi, the natural representation of this Weil group
on the xi-isotypic space in sigma. And then what you get is that sigma is in fact the representation induced by this;
in particular, this representation is irreducible.
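The construction just described can be summarized like this (my paraphrase of the talk, in supplied notation):

```latex
\begin{gathered}
\sigma\big|_{\text{ramification group at } c_\sigma} \;=\; \bigoplus_{\xi}\,\xi
\quad (\text{the } \xi \text{ all conjugate}),\\[2pt]
W_{L_\xi} \;=\; \mathrm{Stab}(\xi), \qquad
\rho_\xi \;=\; \text{action of } W_{L_\xi} \text{ on the } \xi\text{-isotypic space of } \sigma,\\[2pt]
\sigma \;\cong\; \operatorname{Ind}_{W_{L_\xi}}^{W_F} \rho_\xi .
\end{gathered}
```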
If c_sigma is not a jump (the picture's gone; the Herbrand function crosses cleanly here), then rho_xi is a character.
If c_sigma is a jump, then this representation rho_xi,
as I put it, is of Heisenberg type, as one sloppily says; what it really means is that the finite p-group, rho_xi evaluated on wild inertia, is extraspecial of class two.
Right, and that means that your inducing field here: when you're inducing a character, your field has degree p^r;
here it's of smaller degree. Now, you get quite a lot, well, you get an awful lot of information out of this. This is a canonical presentation of sigma as an induced representation; the field here has intrinsic meaning,
these various subfields, and the various subfields attached to the jumps. I'm running out of time now, but when you've got through this and worked out
various other details, you can see that all theta in C*(alpha) give the same field L_xi. In the diagram, in the graph, all the thetas, you can see how close they
can get together, the maximum distance apart; the Galois representations
that they define are then all the same out there. So it's a question of where this point lies relative to the symmetry point, and you get everything under control beautifully, including this wretched extraspecial class-2 representation, by slightly modifying this and doing everything you can, except in
the following case, where you get a real problem, of historical interest at least: if p is 2, if l_alpha is even, and the last jump of E over F, the field on the
GL side, is equal to c_sigma, you can't do anything. You get several
different rho_xi's on the group; the first place it can go wrong is the group, the ramification group attached to this central jump, and it does. I mention this for people in the audience who are as old as I am, if not more: this particular case
turned up, where p^r equals 2, in Kutzko's proof, the original first proof of the local Langlands correspondence, which was GL(2), for a dyadic local field. This one is an absolute nightmare; what it actually comes down to in that case
is where you have m equal to 3 times w_{E/F}, so just that one messes up the pattern entirely. Otherwise you can fix things so that, while you can't say the fields on the two sides are the same, one field gives you one field, and that's it, which is as
good as you're going to get in this game, I think, unless you're prepared to spend a very long time on multiple automorphic induction arguments, which I wouldn't guarantee will actually come out in finite time; perhaps somebody might like to try it anyway. Okay, thanks.
I remember something, I never really read it, but is there a paper from the 1970s, maybe by Koch, where he classified some of these bad, maybe...
Koch's paper is, yeah, it's 1977; it's on primitive representations. People have always said it's a classification; it's not, it's a very powerful structure theorem. Yeah, okay, there's a lot more to them than that.
I mean, the primitivity depends very much on where you are in a base field; this doesn't care. If you see a primitive representation, you do a tame base field extension, get rid of it, and you've still got problems. These are not induced from any proper subgroup, so when you're trying to do
things by induction, that's a rather important point. I don't think that's something we'd like to get tangled up with these days. This operates first at a much more detailed level: when
you actually do this, the primitive representations would, some of them, turn up here as things with just one jump; these Carayol things, that's where they would intervene. But primitive or not, those are quite difficult to actually pin down in enough detail; we don't really
know what we're doing with those yet, though we've spent quite a lot of time on them. Would it be fair to say that whatever is done there would not be easy to connect to the automorphic side? Well, the trouble is, the automorphic side has got a heck of a lot of information in it, which goes
across somehow, and yes, when you try to take it across, you can't; the problem's the Galois side. You don't even have a language in which to think about it. If you start using finite group representations, forget it; you just get into an appalling mess. That's apparent here, because if you're a
finite-groups person, first you try to induce off a character through some normal subgroup; this, the natural thing, is not a normal subgroup. It might be a character, it might be extraspecial of class two. Finite group theorists, in fact, when they're dealing with p-groups, like to have their extraspecial class-two stuff at the top, not in the middle. And a more
reflective finite group theorist would say, well, you're trying to do the representations of this humongous pro-p group; this is impossible. And what you've got working for you is the automorphic side, and you've got the ramification sequence, within which everything we use is pretty much
expressed. So whatever we can't find, actually, we didn't know about anyway and don't know to ask questions about. But it is, you know, the lack of insight into this Galois jungle that really holds you back here. I think that's fair, isn't it, Guy? You've been in that jungle longer than I have. Yeah. Is that
last example, is that unique to p equals 2, or is there a whole class? No, no. Now, what happens when things are not Carayol, I just don't know, but here, when
p is odd, there's a trick of Moeglin's, which enables you to squeeze my C* a little bit smaller, but it takes a bit to write down, so I didn't do it. So you can get around this, this junction; you can then move it past the critical point. But when p is 2, that trick doesn't work. Structurally you could say
that's no big deal, you know, because you've got several things here which have to be parametrized by several things up there; you just don't have the imagination to write them down. But the p equals 2 thing, yes, the fact that it happens for p^r equals 2 indicates, quite correctly, that it propagates through all powers of two; you can't get rid of it. The
bound changes, but it's worse at p equals 2, because the real bound here, the one that causes trouble, is this 3, which is (p^r + 1) over (p^r - 1);
and of course that's a rather feeble sort of bound in general, but when p is 2, it's rather large. When p^r is 3 you get 2 there, which is actually a different sort of bound, which, when you get beyond that, makes things a lot easier. There are a lot of these mysterious bounds popping up in
this, which don't quite do the job. Very much. Could you say how far or near you are from the explicit Langlands correspondence? I think we're talking about Mars rovers here; it could be a very long
way, in a very long time, I don't know. This is, I emphasize, exploratory. We can get results in this case, and we've got some examples of cases beyond this that we don't really understand, but that's the way it goes.
Which case, this Carayol case? The Carayol case, we've said more or less all we can do. When it's not Carayol, the first thing that comes up is dimension p squared, where we do have some quite systematic examples which are sort of comprehensible, but we don't quite see how to make them mesh together.