Character formulas in the modular representation theory of algebraic groups
Formal Metadata
Title: Character formulas in the modular representation theory of algebraic groups
Number of Parts: 23
License: CC Attribution 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
DOI: 10.5446/38623
Transcript: English(auto-generated)
00:00
Thank you very much. I have enormous respect for Ofer's work, and it's a great honor to speak here.
00:23
Also what I will be talking about at the—so basically I want to kind of give an overview about what we know and don't know about the irreducible representations of reductive algebraic groups in characteristic p, so as algebraic representations. And if I haven't completely poorly planned this talk, then at the end I should get
00:43
back to a connection with the decomposition theorem, which is probably the one theorem that I've thought about the most, and this of course Ofer had a lot to do with. So the setting is the following. We take G to be a reductive algebraic group over an algebraically closed field
01:14
k and the characteristic of k is p, and I'll assume from the outset that it's positive.
01:26
So we're looking at RepG, which is the category of algebraic representations of g. So if you want, these are those representations of this abstract group in which the matrix
01:43
coefficients are regular functions on G. And so given lambda a dominant weight, we can associate to it the simple highest weight module L(lambda).
02:15
So this is the socle of an induced module, and all—so all irreducible representations
02:28
are of this form, so this I guess is the theorem of Chevalley, and then the question that we want to study is the character of this L(lambda).
02:44
So just to give you some idea of what is known about this: if G is SL2, then it acts on nabla(m), which is polynomials in two variables of degree m, and in this
03:18
range these are simple, so this you can do as an exercise, it's not a difficult exercise; sorry, these are
03:36
all simple, and any L(m) is L(m_0) tensor L(m_1) with one Frobenius twist, tensor and so on, up to L(m_r) with the r-th Frobenius twist.
03:52
It's actually a general theorem in this context, though. Yes, so I'm just trying to give this as an example of a general theorem. So we have these building blocks, finitely many building blocks, where finite depends
04:04
on p, and we can write—so this is a Frobenius twist, so pulling back this representation under Frobenius. So this SL2 you can do as an exercise, and then—sorry, ah, sorry, this is p-adic
04:26
expansion of m, so m is the sum of the m_i p^i with 0 ≤ m_i < p.
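As a small aside (a sketch I am adding for the reader, not something from the talk): since each restricted factor L(m_i) here coincides with nabla(m_i) and so has dimension m_i + 1, and Frobenius twists preserve dimension, the factorization gives dim L(m) as the product of (m_i + 1) over the p-adic digits of m. A minimal Python sketch, assuming only this standard consequence of the tensor product theorem for SL2:

def p_adic_digits(m, p):
    """Digits m_i with m = sum(m_i * p**i) and 0 <= m_i < p."""
    digits = []
    while m > 0:
        m, d = divmod(m, p)
        digits.append(d)
    return digits or [0]

def dim_simple_sl2(m, p):
    """dim L(m) for SL2 in characteristic p: product of (m_i + 1) over the
    p-adic digits of m, by the tensor product factorization above."""
    result = 1
    for d in p_adic_digits(m, p):
        result *= d + 1
    return result

# Example: for p = 5, m = 12 = 2 + 2*5, so dim L(12) = 3 * 3 = 9,
# while the module nabla(12) of degree-12 polynomials has dimension 13.
print(dim_simple_sl2(12, 5))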
04:47
And so—and then for SL3, so George Lusztig explained to me that this was solved by Braden
05:02
in 1967, and then SL4, Sp4, G2, these were handled by Jantzen via the sum formula
05:24
in the 1970s. So roughly speaking, what Jantzen does is he puts a filtration on these nablas and has some formula that gives you some incomplete information about this filtration, but this formula is strong enough to handle these cases.
05:43
And then SL5, SP6, SP7, here there's a handful of missing cases, but not very many.
06:02
So somehow with algebraic methods, you get somewhere to this range and then beyond that you need something new. And what is—so I just want to give this kind of standard picture, this alcove picture. So we consider—so here's our space of dominant weights, and then inside this
06:26
we have some arrangement of hyperplanes, so this affine arrangement. So here's minus rho. So people have explained to me that you're an expert in the field if you can draw this
06:43
picture correctly, so I will try. And so here we have—this area here is the dominant weights, and then for each of
07:08
these alcoves we make a choice, so we assume that p is bigger than or equal to h, the Coxeter number, so that says that our weight zero here is not on any hyperplanes.
07:28
And then we just make an arbitrary choice inside every alcove of a weight, so these are the—a is the alcoves, so the connected components of the complement of this hyperplane
07:41
arrangement. I should say this is precisely—these walls are precisely the case where the Weyl character formula—the Weyl dimension formula is divisible by p. So it's somehow quite a natural picture to draw. So we consider these alcoves, and then for any alcove a, I choose a weight inside that. So if I have an alcove a, I choose a weight lambda of a, such that whenever two alcoves
08:04
are related by a reflection, the corresponding weights are related by a reflection. So the reason this works is that you have some affine Weyl group here and those are fundamental domains— Yeah, exactly. It's kind of lambda plus rho because the zero is not in the interior, so it's
08:24
somewhat okay. So I take the affine arrangement, I dilate it by p, and I shift it back by minus rho, so that—and so the distance between these hyperplanes is p. But then the chi plus the dominant—
08:41
Yeah, so I mean— The dominant, this is chi plus minus rho, no. So, yeah, I should, if I—thank you. Yeah, so this picture is slightly inaccurate, namely one off this wall is the dominant weight.
09:09
So concerning this, I believe that there is a general fact called the linkage principle that tells you that whatever you do, you only have to look at the orbits under the— Exactly.
09:30
So basically, any—so the linkage principle and the translation principle tells us that if we want to answer questions in the representation theory of g, it's enough to answer it for
09:40
one of these orbits, and I'm making some random choice of these orbits. So for example, if we want to understand how some indecomposable module that's corresponding to this weight decomposes, then the only possibilities in its composition series are weights in this orbit. That's what— In the translation cases, but also the general cases where you translate to a wall?
10:01
Yes, but somehow these are always simpler than the regular cases. So the regular case is the hardest, and there's precise ways in which you can say it's simpler when you go to a wall.
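To fix notation for what follows (this is the standard convention, added here for the reader; I am assuming the board picture matches it): the hyperplanes in the picture are those of the p-dilated affine Weyl group acting through the rho-shifted dot action,

\[
W_p = W \ltimes p\mathbb{Z}\Phi, \qquad w \cdot \lambda = w(\lambda + \rho) - \rho, \qquad
H_{\alpha, np} = \{ \lambda : \langle \lambda + \rho, \alpha^\vee \rangle = np \},
\]

and the alcoves are the connected components of the complement of the union of these hyperplanes.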
10:37
And so now we have this Lusztig character formula, 1979, so I'll call this LCF, which
10:55
is the statement that the—so I'll say the class in the Grothendieck group, you could also say the character.
11:03
So I want to rename—so LA is by definition the simple module with this highest weight. Yes, so now I'm indexing everything via alcoves instead of via highest weight, and
11:22
it's the statement that this is the sum and there's a sign that I won't go into. And so this is a so-called Weyl module with character given by Weyl character formula.
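In one standard formulation (supplied here for orientation; the indexing by alcoves used in the talk is an equivalent bookkeeping, and the precise normalization of the polynomials is an assumption on my part), the Lusztig character formula reads

\[
[L(w \cdot \lambda_0)] \;=\; \sum_{y \le w} (-1)^{\ell(w)-\ell(y)}\, P_{y,w}(1)\, [\nabla(y \cdot \lambda_0)],
\]

where y and w run over suitable elements of the affine Weyl group, nabla denotes the Weyl (induced) module with character given by the Weyl character formula, and the P_{y,w} are the affine (spherical) Kazhdan-Lusztig polynomials discussed next.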
11:56
This is what we would like to understand, if we're interested in this question.
12:02
And then this is a so-called spherical Kazhdan-Lusztig polynomial. So this is some polynomial that's defined entirely in a combinatorial way starting from
12:25
this picture, and we take its value at one, and this is the conjectured expression. And there's some—so the first thing that one should say is we—so assume—so if alpha
12:44
check a plus rho is less than—so this is a Jantzen condition. So we only expect this character formula to hold for simple modules that aren't too
13:02
high up, but if p is bigger than—yeah, so basically this is enough to know this formula. And so in this example of SL2 you see that there's these p minus one building blocks from which you can get all representations. This is the case here also in general, by Steinberg's tensor product theorem, and so if we
13:24
know this formula then we're okay. So m_b is evaluated at one? Yes, the value of a Kazhdan-Lusztig polynomial at one. Any positive root? Sorry, so you just said—so the generalization of this kind of decomposition with Frobenius
13:43
twists, have you said that like how some of the deltas or some analogues— So there's a region here which is the restricted weights, which I'll actually need in a second. So you're summing over which Bs? Hang on a second, one question at a time, please.
14:01
So this is the restricted weights. So these are those—so I write these in the fundamental weights. All of my digits should be at most p minus one. And it turns out that I can—sorry, for any highest weight I can do a p-adic expansion
14:21
in terms of the restricted weights, and then if I form the corresponding tensor product that will be simple. That's Steinberg's theorem. So it's enough to know the characters of these guys, and then it turns out that for p not too small, this region actually lies inside this Jantzen condition. And so then I'm okay.
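For reference (a standard statement, added for the reader), Steinberg's tensor product theorem says that if we write lambda = lambda^0 + p lambda^1 + ... + p^r lambda^r with each lambda^i restricted, then

\[
L(\lambda) \;\cong\; L(\lambda^{0}) \otimes L(\lambda^{1})^{[1]} \otimes \cdots \otimes L(\lambda^{r})^{[r]},
\]

where [i] denotes pulling back through the i-th power of the Frobenius; so knowing the characters of the L(mu) for mu restricted determines all of them.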
14:41
Yeah, so what's the other question? Yeah, summing over which Bs in this formula? Summing over some Bs, or all Bs? Yeah, so these will just be non-zero for finitely many Bs. So finitely many Bs which are in some sense less than A in some order?
15:01
Exactly. So over all Bs, because the rest give you zero? Plus or minus, what do you get? Plus, and how do you get minus? I mean, it's the number of hyperplanes separating B and A; it's like minus one to that power. But somehow this is—sorry, for the corresponding quantum group, this is a perfect conjecture.
15:27
There's none of this Jantzen condition, and it holds always. But this is somewhat problematic for algebraic groups. I mean, even in its formulation, it's somewhat difficult to get your head around. And there's a different version which is called the Lusztig periodic formula, which is better,
15:44
which I'll explain now. Will I ever be able to get that blackboard back down?
16:28
So a basic fact is that if lambda is restricted—so the restricted weights in this example of SL2 are precisely this zero up to p minus one.
16:42
Then this L of lambda is simple as a g = Lie G module. And somehow, looking at modules for the Lie algebra is much simpler than looking at modules for the algebraic group.
17:03
And so it's natural to consider g-modules with a compatible T-action.
17:22
So T inside G is a maximal torus, so the meaning of this should be clear. So I consider a module with a T-action such that when I differentiate the T-action I get the same as the action of Lie T inside here. And these are called— That's the compatibility, like for (g, K)-modules, right?
17:43
Exactly. So you have the two modules. It's like (g, K)-modules for— So you have (g, K)-modules in characteristic p. Yes. And so there's a real—I mean, if you're used to thinking about characteristic zero, which I guess is less normal in this audience, then yeah, you should be aware that there's
18:04
a big difference between—yeah, but I don't need to tell you this—between the big G modules and little G modules, between algebraic group modules and Lie algebra modules, basically. But here there's a big difference.
18:22
But for simple modules, it's okay. There's essentially no difference. And then so this is the world of G1T modules. And so here we can—so this is a—I'll write hat for G1T modules, so this is a simple highest-weight module.
18:45
And so in the world of representations of G, it's not—so for example, in the representation theory of G, the projective modules are kind of pro-objects, and so they're very large and you don't usually work with them. Whereas in the world of G1T modules, you have these nice finite-dimensional projective
19:05
covers, so this is the projective cover as a G1T module. And then the Lusztig periodic formula— And this is explicit, with a known character?
19:23
This guy? Yes. No. So knowing this, the character of this, is basically the same question as knowing the character of this. Okay. So very—yeah. So then we have the Lusztig periodic formula. This is something like the reciprocity, like the general model, the P over—
19:42
Exactly. Okay. That's exactly what it is. Perfect. So this is the Lusztig periodic formula, which tells us that the class of P-hat of A is the
20:01
sum over B of d_{B,A}(1) times delta-hat B. So before this was a—in the original formulation, this is a Weyl module, so it's got character given by the Weyl character formula. This guy is something called a baby Verma module, and so this guy, up to shifts in the weight
20:27
lattice, always has the same character. It's finite-dimensional. So delta-hat B is induced from the restricted Lie algebra of B to the restricted Lie algebra of G.
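Schematically (written out here for the reader; the symbol d_{B,A} for the periodic polynomial is my rendering of the notation on the board, and the precise form of the induction is the standard one), the baby Verma module and the periodic formula are

\[
\widehat{\Delta}(\mu) \;=\; u(\mathfrak{g}) \otimes_{u(\mathfrak{b})} k_{\mu}
\quad (\text{as a } G_1T\text{-module}), \qquad
[\widehat{P}_A] \;=\; \sum_{B} d_{B,A}(1)\, [\widehat{\Delta}_B],
\]

where u(-) denotes the restricted enveloping algebra and the sum runs over alcoves B, with only finitely many nonzero terms.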
20:50
So this is a finite-dimensional kind of standard module, and this is a periodic Kazhdan-Lusztig polynomial,
21:16
again evaluated at 1. And so the beauty of the theory—so in some sense, why do we
21:23
work with G1T modules rather than G modules? The reason is that for modules for little g, the grading by weight spaces is by something like the character lattice tensored with Z mod pZ, and so it's kind of—and that's very annoying,
21:43
and so you kind of unwrap this grading so that it becomes a genuine T-grading, but now you can tensor by p times an element of the character lattice, and so the representation theory is periodic. So the representation theory of G1T modules is periodic, and this is the corresponding theory
22:04
of Kazhdan-Lusztig polynomials for a periodic situation. So is this a finite or infinite sum? This is also a finite sum. So I'll give you some examples, and one should keep in mind Ofer's remark from before, that this Lusztig periodic formula implies a character
22:30
formula. Those were originally conjectures? Yes, so I'm just stating—so I'm stating them as formulas at the moment that may or
22:40
not hold, and I'll discuss their validity in a second. You already gave previous talks about the torsion and— Yes. Okay, so I know this. Okay. So just some examples. So for SL2, here the picture looks like this, and then the periodic pattern just
23:09
always—like I'm just giving you the value at one of these periodic polynomials, and so this is just—this is telling you that no matter where you are, your projective has two baby Verma module multiplicities in there, like this, and so in SL3, there's
23:24
two cases. So let's say for SL2, when it's low enough so that the baby Verma will be reducible— Yes. —what is the P and the baby Verma and how they are—what do you—I didn't understand
23:44
the drawing part, so you have— So this will be some alcove. Okay, some alcove. Yeah, and let's say that zero's inside here. Okay, good. Okay, good. Take the basic one—by the way, this is just the same for any alcove.
24:03
Yes. Okay, good. But it's all periodic. It's completely periodic, the situation. And then there is only one alcove left when you're in SL2? So here, the projective—so this would be 2P minus 2, and then the projective would
24:22
be—look like this. And the relation of delta to L is the same as the relation of P to delta?
24:44
Yes. Okay. So essentially, in order to write the deltas in terms of the L's, you transpose this matrix, and then to write the L's in terms of the deltas, you take its inverse. So in principle, it's—I mean, so this looks like the cleanest answer.
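For orientation (these are standard facts of the G1T theory, recalled here rather than transcribed from the board): for SL2 and a restricted weight lambda with 0 ≤ lambda ≤ p − 2, the picture being drawn corresponds to

\[
[\widehat{P}(\lambda)] \;=\; [\widehat{\Delta}(\lambda)] + [\widehat{\Delta}(2p-2-\lambda)],
\]

and the transpose/inverse statement just made is an instance of the reciprocity

\[
(\widehat{P}(\lambda) : \widehat{\Delta}(\mu)) \;=\; [\widehat{\Delta}(\mu) : \widehat{L}(\lambda)],
\]

while for the Steinberg weight lambda = p − 1 the projective is the baby Verma module itself, which is the "only one alcove left" remark above.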
25:01
It gets very messy when you do this in practice, but I mean, you know, you can do it. So now you see that—I'm not an expert. Okay, and so this is representing the structure of certain projective G1T modules, and this
25:23
is—you can calculate these pictures in some simple algorithmic way, but they're not usually—this is somehow deceptive because they look reasonably simple. So you don't need to go very far before these pictures become very, very complicated. In the translation, there are several—under the translation that you said, there are
25:44
several equivalent—several—so how many classes modulo translation do you have in the category, for SL3? Two. It's always what we call—what do you call it—P mod R, the index of connection.
26:00
No, sorry, sorry, sorry, that's wrong. So the number of cases you have to do is the Weyl group modulo the index of connection. So it's n minus one factorial for SLn.
26:30
Okay, so now we'll talk about the validity of these statements.
27:04
So one remark is that these two are equivalent. And then the status.
27:21
So one thing that should somehow motivate this is some old conjectures of—basically it was Verma that I think first said that this kind of alcove picture should be behind the representation theory, and that it starts looking independent of P in some sense. So we fix a root system, and then we can consider primes, and the combinatorics
28:00
is much more complicated below the Coxeter number.
28:02
So here's the Coxeter number, e.g. n for SLN. So this is what we call small primes.
28:20
And for many—since the—for a long time we've known that there's behavior here that's somewhat mysterious. And what the hope was, was that these formulas would hold above some reasonable bound in terms of h. So for example, maybe for all primes larger than h or for all primes larger than 2h or something like that.
28:43
And then in the 90s—so there's too many names for me to—so I'll just write initials. So this is Kazhdan-Lusztig, Kashiwara-Tanisaki, Lusztig, Andersen, Jantzen, Soergel, who proved
29:00
that there is an N non-effective such that this holds for all P bigger than N.
29:33
But this N was not known in any case. So even, for example, SL5, there's something like one number that Jantzen would like to know.
29:42
So there's no way you can even check on computer whether this one number is one or two or something. And then in 2008, Fiebig gave an explicit enormous bound.
30:06
So e.g. LCF is true for p bigger than n to the n squared.
30:21
So for example, 10 to the 100 for SL10. So a very large number. And then—so based on work with a number of different people, so Elias—for SLN, sorry.
30:48
So based on work with Elias, Xuhua He, and also some number theory with Kontorovich and McNamara,
31:09
there exists—so LCF does not hold for many P up to an exponential in N for SLN.
31:42
So basically what we do is construct examples up to some P of the order of C to the N for some C, for some C bigger than 1. And so somehow there's this place here that's exponentially far off.
32:05
Well, at least exponentially far off from where LCF is valid. But there's this whole world here of medium primes, medium primes where LCF doesn't
32:27
necessarily hold. What do you mean by for many P?
32:41
So basically—yeah, I think that we show that it's basically for all P up to some exponential bound. And how C and N relate? So C is just—so C is some number bigger than 1 and—
33:06
Independent of N? Independent of N. So an example is that if—so just an example of this phenomenon is that if P
33:21
divides the Nth Fibonacci number, then LCF fails for some LA for SL, something like
33:42
4N plus 5 in characteristic P. So it seems extremely interesting that there's certain—so this is a very arithmetic question, and that this is somehow happening in the representation theory of SLN is maybe surprising.
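To make the arithmetic flavour concrete (a small illustrative sketch I am adding, not part of the talk; the rank 4n + 5 is the figure just quoted, and the number-theoretic point is simply that Fibonacci numbers grow exponentially, so their prime divisors can be exponentially large in n):

def fibonacci(n):
    """Return the n-th Fibonacci number (F_1 = F_2 = 1)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def prime_factors(m):
    """Prime factors of m by trial division (fine for this illustration)."""
    factors, d = [], 2
    while d * d <= m:
        while m % d == 0:
            factors.append(d)
            m //= d
        d += 1
    if m > 1:
        factors.append(m)
    return factors

# Per the statement in the talk, any prime p dividing F_n gives a failure of
# the Lusztig character formula for SL of roughly 4n + 5 in characteristic p.
for n in range(5, 30, 5):
    F = fibonacci(n)
    print(n, 4 * n + 5, F, prime_factors(F))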
34:08
Well, at least for me. Okay, so I was thinking what on earth can this be for, but then I realized.
34:48
Sorry?
35:03
Oh, I'm sorry.
35:22
When do I finish, at quarter to now? Is that right? Okay, so the theorem that I want to state today—so the theorem—so this is a recent
35:48
work with Simon Riche from Clermont-Ferrand, but it's based on a long project with Achar. Somehow where all the work is done is in this long project with Achar, Makisumi,
36:05
and Riche. So the statement is that basically this formula holds, so for all p bigger than 2h minus 2—I'll comment on this in a second—this formula holds, so it bears a very—it looks
36:34
almost the same. These are so-called—these are periodic p-Kazhdan-Lusztig polynomials.
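Schematically (my rendering of what is on the board, using the same placeholder notation d_{B,A} as above with a left superscript p for the p-canonical version), the theorem has the same shape as the periodic formula, namely

\[
[\widehat{P}_A] \;=\; \sum_{B} {}^{p} d_{B,A}(1)\, [\widehat{\Delta}_B]
\qquad \text{for all } p > 2h - 2,
\]

with the periodic Kazhdan-Lusztig polynomials replaced by their p-analogues.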
36:54
So I'll explain in more detail what these are in a second, but roughly speaking, Kazhdan-Lusztig polynomials are measuring the stalks of intersection cohomology complexes
37:02
on the flag variety. These are measuring the stalks of some objects called parity sheaves on the—and so the remarks are—so firstly, there's essentially finitely many of these polynomials, so for
37:22
a fixed root system, these p-d_{B,A} are d_{B,A} for all p bigger than some non-explicit bound.
37:43
So it's something like, if you have a—imagine you have finitely many algebraic varieties, then their integral intersection cohomology will have no torsion above some bound, but you won't necessarily be able to say what that bound is. Yeah, so I mean, we can't re-derive Fiebig's bound.
38:04
We can't re-derive Fiebig's bound from this. So this is worse than—so this implies LCF for large p. There's now about three different proofs of the Lusztig character formula for large p,
38:21
but this gives another one. There's no sign in this formula? There's no sign, no. So signs are whenever you're expressing simples in terms of something, and no signs are whenever you're expressing projectives in terms of something.
38:40
So we conjecture the formula to hold for all P, so it should be completely uniform, and it seems to check out in small examples that we can calculate.
39:01
But there was a problem, you said, for sufficiently small P, the zero is not in the right place in the diagram, so— But there's some simple modification that seems to work, so with appropriate—with small modification.
39:21
So all p, you mean without these restrictions on p? Exactly, yeah. So this is—that's a theorem, and then we conjecture that actually you can just cross this out. With appropriate modifications for the fact that there won't be regular weights for p a bit smaller than the Coxeter number. And so just to give you some—so these p-d_{B,A} are much harder to calculate than d_{B,A},
39:59
so you can ask is this any—you know, does it improve the situation at all?
40:06
Have we just given something—are we just expressing something uncomputable in terms of something else uncomputable? I mean, not uncomputable in a formal sense, all these things are computable in a formal sense, but just very difficult. But this should—so, experiments suggest—
40:24
So you just have these formulas here, the polynomials are given by some inductive procedure you can calculate in every given case. Yes. And these things, the parity sheaves, which are defined by geometry, is it still given
40:41
by some— It's given—it's not entirely—it's not given entirely in terms of the combinatorics of—Coxeter group combinatorics, so you need to know something about the root system, but it is computable in the sense that I can type it into my computer and press enter and it gives me an answer. So I can compute these polynomials in many cases, but I can compute them nowhere near
41:05
as efficiently as Kazhdan-Lusztig polynomials, and it's unlikely that we ever can compute them as—is this an answer? No, the question is whether there is a formula like that for the p-Kazhdan-Lusztig polynomial, or it involves something defined via algebraic geometry.
41:23
So there's a formula that involves only linear algebra and combinatorics of Coxeter groups. It doesn't involve some calculation in algebraic geometry that I may or may not be able to do. A quantum analog?
41:41
Yes, so somehow for the quantum group just—universally you just ignore the p and it seems to be fine. I mean, those are also theorems. No, with one. The experiments suggest the formula should provide a complete answer in ranks less than or equal
42:18
to six. So before we knew—at the moment we know A1, A2, A3, B2, and G2, and it's more effective
42:31
in the sense that we can go from these cases up to rank six, but probably not beyond that.
43:00
Using the fact that you know this in order to calculate the characters of the L's,
43:06
this is—so I don't understand which sense it is not—why it is not a— I mean, computer—so, I mean, I should say, experiments suggest we should be able to do the calculation of these polynomials in ranks less than or equal to six.
43:23
Ah, okay, this is the—so you don't regard it here? No, at the end of the day I actually want to know what the character is, and that's what I'm asking about. Is it not clear?
43:40
That all these experiments suggest that the p-d_{B,A} should be computable in ranks less than
44:10
or equal to six. So, I mean, another example is imagine that I ask you for simple highest weight modules over a Lie algebra for SL400 or something. We have a formula in terms of Kazhdan-Lusztig polynomials, but we can never carry out this
44:23
calculation. The calculation is just too large, and so there was this Atlas project that carried this out for E8, and this was some enormous calculation. And what I'm saying is that we should be able to carry out this calculation in ranks less than or equal to six. But that's still not done.
44:44
So I want to explain what these P polynomials are, at least roughly.
45:20
So if we have f from x tilde to x, a projective morphism of complex varieties, then a special
45:56
case of the decomposition theorem is that f lower star—so this is R f lower star—of
46:24
the constant sheaf on x tilde is a semi-simple complex in the sense of perverse sheaves.
46:40
So it splits as a direct sum of its perverse cohomology groups, and each of these perverse cohomology groups is a semi-simple perverse sheaf. And the fact—so a remark—is that the fact that we use Q coefficients is essential.
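Written out (a standard statement of this special case, added for the reader), the decomposition theorem says

\[
Rf_{*}\,\underline{\mathbb{Q}}_{\widetilde{X}} \;\cong\; \bigoplus_{i}\, {}^{p}\!H^{i}\!\big(Rf_{*}\,\underline{\mathbb{Q}}_{\widetilde{X}}\big)[-i],
\]

with each perverse cohomology sheaf a direct sum of shifted intersection cohomology complexes of simple local systems on strata.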
47:06
So of course x tilde is not singular. Ah, I'm sorry. Of course. Yes, this is smooth. Yes. So just a simple example, which I love, is if we take x to be some quadratic cone inside
47:34
A3, and then we have X tilde, the blowup of X at 0, and then f lower star—so X tilde is isomorphic
47:56
to the total space of O(minus 2) on P1.
48:03
And the kind of absolutely essential point in this example is that this 0 section here, which is contracted by this map, has self-intersection minus 2. So is f lower star of the constant sheaf on X tilde semi-simple?
48:24
So in this case, it's always a perverse sheaf, because this is a semi-small map. But it's semi-simple if and only if the characteristic of k is not equal to 2. So in characteristic 2, you get some interesting, indecomposable—so I'm always talking about
48:43
the characteristic of the coefficients. So as I said before, the key point is that this 0 section, the one that gets contracted, has self-intersection
49:11
minus 2. So in general, for any p dividing this, you'll have problems.
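One way to phrase the obstruction (a standard reformulation for semismall maps, added here; it is not verbatim from the talk): semisimplicity of Rf_* k is governed by intersection forms on the relevant fibers, and here the fiber over 0 is the contracted P1, whose intersection form is the 1-by-1 matrix

\[
(-2),
\]

which is nondegenerate over k exactly when the characteristic of k is not 2; contracting a curve of self-intersection minus m in the same way singles out the primes p dividing m.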
49:47
And so now we apply this to the flag variety. So let X be the complex, so now I change and consider the complex flag variety.
50:08
And we consider X_x inside X to be a Schubert variety. So that's a Schubert variety.
50:29
And then what are ordinary Kazhdan-Lusztig polynomials? So this is defined to be the intersection cohomology complex of this guy with Q coefficients.
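With the normalization in which the restriction of IC(X_x, Q) to the open cell sits in cohomological degree 0 (this normalization is my assumption; conventions differ by shifts), the standard relation is

\[
P_{y,x}(q) \;=\; \sum_{i} \dim \mathcal{H}^{2i}\big(\mathrm{IC}(X_x,\mathbb{Q})\big)_{y}\, q^{\,i},
\]

where the subscript y denotes the stalk at a point of the Schubert cell indexed by y, and the odd-degree stalk cohomology vanishes.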
50:44
So the stalks are given by Kazhdan-Lusztig polynomials. So now, for any reduced expression, we can consider—so x in the Weyl group—we can
51:32
consider this Bott-Samelson resolution.
51:43
And this has a natural multiplication map to G mod B. So at some point I stopped writing subscript C's, but I hope it's clear. So this is a Bott-Samelson resolution. So this is just any element of the Weyl group.
52:01
This is Bott-Samuelson resolution, which are some very useful kind of combinatorial resolutions of—sorry, this should go to the Schubert variety inside here.
52:25
And then the decomposition theorem says that IC of X_x with Q coefficients appears as a summand inside the direct image of the constant sheaf Q on this.
52:42
And this leads to a—so if you imagine what's happening here, the decomposition theorem tells you that you have this intersection cohomology complex supported on the open orbit, and
53:02
then you have stuff supported on smaller orbits, but there you know all the stalks by induction. And so this gives the combinatorial expression for Kazhdan-Lusztig polynomials.
53:22
Derived, yeah. I think from the—yeah. Always derived. And there's another kind of way of looking at this, which is just consider this whole world of kind of Schubert varieties and their partial resolutions, and imagine that you're allowed to start with constant sheaves—constant sheaves and their shifts—
53:40
on things that are smooth, and then you're just allowed to push forward and take summands. Then all you'll ever get is intersection cohomology complexes, which is kind of remarkable. And so now you can ask the same question with coefficients of characteristic p. So if you don't want to mention perverse sheaves, you can still characterize this object
54:01
as—well, you can still attempt to give a definition of it as being the unique summand inside this that's indecomposable and has support on the open locus. So by the way, can you remind us of the Kazhdan-Lusztig polynomials in 1979, slightly before the decomposition theorem? So what was the original intuition for—
54:21
So I can use this question to advertise some wonderful notes on George's website, where he gives some notes to his papers, and there's a very, very nice explanation of various things that led to Kazhdan-Lusztig polynomials. Okay, three minutes. Very good. Five minutes.
54:55
Ah, okay. And then a kind of somewhat surprising fact, which was first noticed
55:23
by Soergel in certain cases, and then generalized by Juteau, Mautner, and myself, is that there exists—so the summand inside such a direct image, in any characteristic,
56:01
is well-defined up to isomorphism. The indecomposable summand is well-defined. So this means that if I take two different resolutions,
56:22
then this direct image sheaf will be very different in principle, but if I just look at the unique indecomposable summand that has open support on the Schubert variety, then this will give me a well-defined object up to isomorphism. So in general, both semi-simplicity aspects fail.
56:41
It is not a direct sum of its perverse cohomology sheaves, and each of those is not semi-simple. And so here you are just using the Krull-Schmidt— I mean, to know that there is some decomposition into indecomposable sheaves, well-defined up to isomorphism, and this is what you are looking for. So I'm very heavily using the Krull-Schmidt theorem here,
57:04
in this derived category of constructible sheaves. It's well-defined up to isomorphism. And this is an example of what we call a parity sheaf. So this depends on the field k. And then we have: E X with
57:22
k coefficients is isomorphic to E X with Q coefficients, in large characteristic. Sorry. Isomorphic to— Yeah, you're right. I mean, what I want to say is that
57:42
E X with k coefficients is IC of X with k coefficients, for p large depending on X.
58:00
So once we fix X, there is a p above which there is no torsion in the stalks—or costalks—of this thing. And once we reduce that mod p, we get this parity sheaf. But this example up here is kind of illustrative of what happens in general. So there will be some small primes for which this agreement does not occur. And then we get this new and genuinely very interesting object.
58:25
And I'll just say as a remark that I find this a very interesting question. So moreover, you can show that if you consider all kind of partial resolutions of Schubert varieties of Bott-Samelson type, and you allow smooth things on—
58:41
so you allow constant sheaves on smooth things, and then you allow yourself to push forward and take summands, then you only get a finite list of objects that are parameterized in the same way as IC sheaves. And I've often thought about what could happen in general, but I seem to get stuck with curves. In general, you mean for—
59:01
I mean, is there some class of maps and some class of proper maps that I can fix? Another case where this is true is, for example, all toric maps between toric varieties. So here, this is self-dual by construction.
59:21
Yes. And the IC is not quite self-dual because of the torsion. Exactly. So this is— Okay, so this is closely related to the question of torsion in IC, but a priori it could be that although IC is not self-dual, it may be the reduction, but it could still be indecomposable.
59:44
Yeah, but it'll never—I guess in this setting, it'll never be this guy if it's not self-dual. Well, obviously, because— Well, the self-duality is given by a certain map. Okay, but since on the open stratum, you know it.
01:00:03
Let me write one more sentence that finishes my talk: p-Kazhdan-Lusztig polynomials are defined to be given by the stalks of
01:00:23
these E X over k, and there's a completely different way of understanding these guys via some diagrammatic algebra, which allows you to compute things with them, but I won't go into that. So thank you very much.
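For reference, with the same normalization as for ordinary Kazhdan-Lusztig polynomials above (again my assumption about conventions), this definition can be written as

\[
{}^{p}P_{y,x}(q) \;=\; \sum_{i} \dim_{k} \mathcal{H}^{2i}\big(\mathcal{E}_{X_x,k}\big)_{y}\, q^{\,i},
\]

where E_{X_x, k} is the parity sheaf attached to the Schubert variety X_x with coefficients in k; for p large (depending on x) this recovers the ordinary Kazhdan-Lusztig polynomial.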
01:00:50
The indecomposable one, so the endomorphism ring of the indecomposable one, the finite-dimensional algebra, is not really a logical thing, but this
01:01:01
So my question is, because you can have some phenomenon when k is not algebraically closed. So the question is whether the indecomposables are the same, or it comes from a—and whether the endomorphism ring is actually, let us say, just small k. In all cases, yeah.