Moduli Problems in Symplectic Geometry - Polyfolds discussion with Nathaniel Bottman
Formal Metadata
Number of Parts: 36
License: CC Attribution 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
DOI: 10.5446/16316
Transcript: English (auto-generated)
00:00
Yeah, so the plan had been for Helmut to do a discussion, but that's been shifted to Wednesday,
00:22
because the organizers felt like we should make sure that everyone is on the same page from last week so that people aren't just not getting anything out of this week of polyfolds. So the main point of this is to take questions from you guys, but I thought that I'd start off by recalling for you
00:42
the definition of polyfold Fredholmness, mostly as a way of putting a dictionary on the board between some polyfold concepts and classical concepts. Okay, so here's the definition of polyfold Fredholmness,
01:01
and I'll underline the polyfold-specific terminology. So a scale-smooth (sc-infinity) section F of a strong bundle Y over X,
01:35
this is over X, a tame M-polyfold, is scale Fredholm
01:53
if the following conditions hold. So the first condition is F is regularizing,
02:06
and the second condition is that it has a filled version, which up to a scale-plus (sc+) perturbation is of the following form.
02:23
Actually, more precisely, at each point it has a filled version, so the filled versions at different points might be quite different. Yes, thank you, Helmut. So let's say: at each smooth point.
02:42
Gosh, okay, yes, thank you. So at each smooth point, or nearby each smooth point. Actually, it's a germ. I'm trying to give a slightly imprecise version
03:02
of this theorem. I'm trying to recover it, so. I'm sorry. This is where you turn the weight when you get to it. Yes, you can show a bigger thing. I have a picture on the next slide. Okay, so which up to a scale plus perturbation is of the following form.
03:32
Okay, so our bundle is going to locally look like the following. So W is some scale Banach space.
03:48
The bundle is locally trivial. So the base is R^n plus W, and the fiber is R^N plus W. And the section is supposed to be, well, it's supposed to have principal part,
04:02
something called g. And on the right board I'll tell you what property g has to satisfy. So this is like the meat of scale Fredholmness. So that property is that if we look at little g,
04:23
and we subtract off g at the point we're centering at, and then we post-compose that with the projection down to W, okay, so that's the W in the fiber.
04:42
And then we apply it to a point (v, u). So little v lives in R^n and u lives in W. Then it's of the form u minus B(v, u), where B is — you can think of it as — a family of contraction mappings parametrized by v.
05:03
Let me, sorry, let me switch the order of u and v. So v is living in this finite-dimensional space. A family of contraction mappings parametrized by v. And specifically what I mean by that is that if I make any choice of m and epsilon,
05:23
the following inequality holds for V, U, and U prime close to zero,
05:45
where the notion of close to zero can depend on the choice of m and epsilon. So it's this ginormous definition, but the point of me writing it on the board is not for you to completely understand it right this second.
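For reference, here is the inequality being described, in the standard Hofer-Wysocki-Zehnder formulation of a basic germ (a reconstruction — the board itself is not in the transcript; P denotes the projection of R^N ⊕ W onto W):
\[
P\bigl(g(v,u) - g(0,0)\bigr) = u - B(v,u),
\]
and for every level m and every \(\varepsilon > 0\),
\[
\|B(v,u) - B(v,u')\|_m \le \varepsilon\,\|u - u'\|_m
\]
for all \((v,u)\), \((v,u')\) sufficiently close to zero, where "sufficiently close" is allowed to depend on m and \(\varepsilon\).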
06:02
The point is to remind you of these words, which were introduced last week, and let's see if I can get all of them: tame M-polyfold, regularizing, filled, scale-plus,
06:20
I think that's everything. And now I'll recall for you the dictionary between those terms and concepts you're used to. Yes? You don't have corners appearing here because this is stated in the boundaryless context. I should have said, let's...
06:40
I think it's a version. Yes, there's a version of this with boundary; so that we don't have too many concepts at the same time, let's assume that there's no boundary. And Nick, you can ask me about that inequality on B. The element B(v, u),
07:02
so it's inside W, right? Yes. It's inside R^n plus W. R^n plus W. So then the norm, should it have an R^n term? Yes, yes, it should, it should. And the quantifier attached to the epsilon — you really mean for every single epsilon, as epsilon goes to zero? Yes, but this will be true
07:21
on smaller and smaller neighborhoods of zero. It will be true on smaller and smaller neighborhoods of zero. So when you have this scale structure, then when you go higher up, it just takes smaller and smaller neighborhoods. So it's really some kind of germ condition. I see. So the closeness is allowed to depend on epsilon and m.
07:44
Yes, yeah, otherwise it's definitely wrong. Yeah, well, that's what I was gonna say. Otherwise there's no application for this. I was trying to understand what it said. Yeah, and if there's time at the end, I can motivate this, because there's a similar property satisfied by classical Fredholm maps.
08:01
And in that context, you can think of B as something whose differential vanishes. And that's why in that case this inequality is satisfied, though on small balls, where the smallness depends on epsilon. I think that's what he's going to explain: if you have classical Fredholm theory,
08:21
you could give an alternative definition. And if you take this, you'll see that this has to be the definition. Okay, before I move on to the dictionary, any other questions about this statement, or complaints? Okay, great. Dictionary.
08:47
All right, so here's the scale setting and here's the classical version. So the first word is SC-infinity. When you see SC-infinity, or more generally when you see SC^k,
09:01
you should think of C-infinity and C^k. When you see the word strong bundle, you should think of a bundle
09:22
where the notion of compact perturbation makes sense.
09:45
Okay, let me not put tame into this dictionary. When you see M-polyfold, you should think of a Banach manifold,
10:02
not an orbifold in any way, okay? When you see SC-plus, you should think of a compact perturbation.
10:32
And the only thing I left off is filled, because it's not exactly a classical notion. Let me recall for you what filled means.
10:40
Well, a priori, the section F isn't defined on an open subset of a scale Banach space. It's defined on a retract sitting inside such a thing, which is not such a nice space as far as we're used to thinking of spaces. The dimension can vary locally and so forth. And therefore, in order to have a meaningful Fredholm theory, you have to beef it up to a map
11:00
that actually goes between open subsets of scale Banach spaces. And that's what this filled section is. It really, yes, question? Oh, no. Oh, yes, I should put that in here. Regularizing means you should think of elliptic regularity.
11:21
So if our section comes from the del-bar operator, the regularizing property comes from the elliptic regularity satisfied by that del-bar operator.
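Collecting the dictionary in one place (a reconstruction of what went on the board):

    scale setting        |  classical setting
    ---------------------+-----------------------------------------------
    SC^k, SC-infinity    |  C^k, C-infinity
    strong bundle        |  bundle where compact perturbations make sense
    M-polyfold           |  Banach manifold
    sc-plus section      |  compact perturbation
    regularizing         |  elliptic regularity
    filled               |  (no exact classical analogue; see above)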
11:44
Can you say something about tame? If you don't want to put it in the dictionary. Tame, yeah. Yeah, so I don't know if I remember the second condition of tameness, but the first condition of tameness says
12:00
that if you look at your retraction, let's say that R which goes from X to X is scale infinity and it's a retraction.
12:20
Then the first condition of tameness is that the degeneracy index of R(x) is equal to the degeneracy index of x, for all x in X.
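In symbols (a reconstruction): if \(d_X\) denotes the degeneracy index and \(r\colon X \to X\) is the sc-smooth retraction, the first tameness condition is
\[
d_X(r(x)) = d_X(x) \quad \text{for all } x \in X.
\]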
12:43
So I don't want to really get into this, because I think it's a little bit more technical than the rest of the stuff on the board. Have I stated that first condition correctly? And what's the second condition? That at a point in the retract, the tangent space has a complement
13:01
in the reduced tangent space. Which actually — in this case there's no boundary, so the tameness doesn't do anything. Right, yes, yeah. So at the last minute I changed this theorem to the boundaryless setting, and therefore I could have erased the tameness hypothesis. Yeah, it's sort of,
13:20
when you have a retract and you have a boundary of the ambient space, you could have a lot of retracts which don't reflect the typical boundary structure of the ambient space. So you want to have them such that they reflect that there was actually a boundary, these corners and so on. So you have to force it.
13:42
So for example, when you have a quadrant and you take the diagonal, it's not a tame retract. Because when you retract onto it, near zero you have some problems. You have to retract at some point that is not viable. Okay, so the picture you're thinking of is that
14:03
this is your C. Just text the argument. So if you retract there, so near zero I think, I mean near zero you have to leave. I mean, if you take an open neighborhood around zero, then it contains points that you,
14:21
it's there, then you have to write it. Right, so the point is that if you look at this point right here, then the degeneracy index of the retract is one, but the degeneracy index in the ambient space C is two. I guess so, yeah. So but then nevertheless this line has an induced structure
14:41
which is tame, but it doesn't come from the ambient structure. Okay, so I suggest that we leave it there. Great, so you're welcome. So we have this dictionary, just to bring everyone back up to speed. And the last thing that I wanna say, before I move on to whatever questions you have, is that
15:05
the most basic reason, I mean the real reason for using this wonky definition of Fredholm-ness is that in the classical setting, the reason that all of the theorems you're used to from finite dimensions are stated in the Banach setting for Fredholm maps
15:21
is that they satisfy this contraction normal form, and therefore things like the inverse function theorem hold. You wanna use that in a polyfold setting, but there's some problems with the levels so that if you just assume that your linearized operators are all Fredholm,
15:40
those theorems that you wanna be true, like the implicit function theorem would not be true, and therefore we build this directly into the definition. Like I said, I'll come back to this at the end if we have time, but I wanna stop talking about this Fredholm-ness property now. Can I ask another question? I think you just asked it already, and you can get, when you say close to zero,
16:03
you mean that there is a fixed open set, or does it depend on the level? It does depend on the level. So for every m and epsilon, there's an open set, so that this inequality holds on that open set. Because if it didn't depend on epsilon, then you could put a zero in there. I mean, if it was the same set, and it was fixed,
16:23
and it was true for all epsilon, then you could just put zero in there — so clearly it has to depend on epsilon. Sure, thanks. Okay, so let me open up the floor for questions, which I will either answer or deflect.
16:44
Yes — okay, it's not a question, it's a comment. So this means that, besides being regularizing, your Fredholm sections cannot make too-rapid moves locally. It's a regularity property
17:02
which constrains how it moves. Whereas if you have a general SC-smooth section — first of all, for a general SC-smooth section, the linearization usually does not depend continuously, as an operator, on the point where you take the linearization. So that's one of the features.
17:20
Like, when you are near nodes or near broken orbits, there's something rapidly changing which makes the linearized operators usually not continuous as operators. So the linearizations can vary very rapidly depending on which direction you go, and that kills the implicit function theorem in general,
17:42
unless you have a taming device saying it has a little bit more regularity, which is this one. So you have, this is a germ condition in this SC world. And so if you have this germ condition, then it turns out you have implicit function theorem in the usual way. If you linearize and it's onto,
18:01
then nearby, you have a solution manifold. So that means you have, nearby, the solution space, which actually, from the ambient space, gets a structure which turns it into an honest smooth manifold. Can we actually have that written down? Yes — what the implicit function theorem says, that would be helpful. Yeah, sure.
18:25
Presumably, Fredholm operators have indices, and the index enters, right? Presumably, yes, that's right. Whoops, I should, gosh.
18:54
All right. So if you wanna make some money, challenge me to shuffleboard this week.
19:02
Right, so here's the implicit function theorem in the polyfold setting. Okay, so let's say that Y over X
19:22
is a tame, strong bundle. And let me just point out, before Helmut does, that this is the boundaryless setting — but for some reason, you put the word tame there. That does no harm in that case, though. All right, so it's a tame, strong bundle. It's a strong bundle over X, an M-polyfold
19:44
with no boundary, okay? And F is a scale Fredholm section
20:07
with the property that all of the linearizations are onto — so, such that for every x in the zero set... No, no, you only need it at one point. Then you, ha, okay. So the diff—, oh, okay.
20:21
So you want to give the — oh, it's a — but you want to give the local version. So what about you give the version at a solution of F(x) equals zero. Okay, I can attempt to; you can correct me. So let's see, so let's say that, such that at this particular point x naught,
20:43
the linearization is surjective. Just a second.
21:01
So this is supposed to go from the tangent space of X at x naught to the fiber over — or excuse me, the fiber over x naught. Okay, so then the theorem is that
21:20
there's gonna exist an open set — here's where I might make a mistake with the topology. So there exists U, an open set in, let's say, the zeroth level of X, containing x naught, with the property that if we look at the zero set
21:41
of F intersected with this open set, we get a finite-dimensional sub-M-polyfold. And then it's a theorem in HWZ's papers
22:01
that it automatically inherits the structure of a finite-dimensional C-infinity manifold. So it's a little bit stronger than sub-M-polyfold, so. So it's a sub-M-polyfold which is so good that it carries a C-infinity structure.
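Here is the statement being assembled on the board, cleaned up (a reconstruction, including the refinements that come up in the discussion just below). Let \(p\colon Y \to X\) be a strong bundle over an M-polyfold \(X\) without boundary, and let \(F\) be a scale Fredholm section with \(F(x_0) = 0\) and
\[
F'(x_0)\colon T_{x_0}X \to Y_{x_0} \quad \text{surjective.}
\]
Then there exists an open neighborhood \(U\) of \(x_0\) in \(X\) such that \(F^{-1}(0) \cap U\) is a sub-M-polyfold carrying, in a natural way, the structure of a finite-dimensional \(C^\infty\) manifold, and \(F'(x)\) is surjective for every \(x \in F^{-1}(0) \cap U\).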
22:21
Is x naught a smooth point? Yeah, I would imagine. Yeah, by the regularizing property it's a smooth point, because it maps to zero. Oh, I'm sorry, so you're saying that. Yeah, so it's actually — so the sub-M-polyfold is actually a rather strong retract.
22:40
But it's correct, so if you just say it together: it's a sub-M-polyfold which in addition has an equivalent structure as a smooth manifold. Could you add that the linearization at any other solution is also onto in the neighborhood? Yes, which is how you're gonna prove that, yes. So it's an open condition that it's... Yeah, so there is a small open neighborhood
23:02
such that the full solution set carries the structure of a smooth manifold in the classical sense, and the linearization at every solution in U is surjective as well. Yeah, so let me add on that point, so. So it's basically what you would expect from the classical context.
23:23
And for all other X in U intersect the solution set, F prime at X is onto. Oh yeah, here's a good exercise which you can write down for everybody here.
23:42
So if you have a retraction from U to U, which is SC smooth from U into the index lifted by one, then the image has a natural smooth manifold structure. So just to give you some idea,
24:01
so if you find a good proof for this, I would be interested. So, I've only seen one attempt: Urs Frauenfelder wanted to prove it. He at some point started to throw stuff around, and Peter Albers had to stop, so. But it's just the implicit function — some implicit, some honest implicit function theorem.
24:23
So tell me if this is what you just said. So it's, yeah, so R is a retraction, but as a map from U into U upper one, it's SC-infinity. But U is sitting inside some sc-Banach space here? Yeah, so. Yeah, so U is in E.
24:42
Yeah, so just, okay, so let's say we have a retraction here as usual. And I told you to not let him get to the chalk. And R from U into U one is SC infinity as well.
25:03
Isn't that the original statement? No, it's lifted. I think, yes, I understood what you said, but I think it's clear, because I made a claim to choose something. So this is a retract, everybody understands this, right? But if you lift the index by one, it means you go into the more regular space.
25:21
It's still SC-infinity, so that's the strong condition. Then R(U) is a manifold, in a natural way, with structure induced from the ambient space. That's a nice exercise. Okay, great.
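For the record, the exercise as I understand Helmut's statement (a reconstruction): let \(U\) be an open subset of an sc-Banach space \(E\) and let \(R\colon U \to U\) be sc-smooth with \(R \circ R = R\), such that \(R\), viewed as a map \(R\colon U \to U^1\) into the space with all levels shifted up by one, is still sc-smooth. Show that \(R(U)\) carries a natural structure of a smooth manifold, induced from the ambient space.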
25:40
So, Dusa, does that answer your question, or? Yeah, that is very helpful. And the proof is actually to construct such an R such that F composed with R is zero, and the tangent map of R maps, at each point, onto the kernel of F prime. So you have to construct an F — you said construct an R. No. R is given, construct an F.
26:01
No, no, no, given F. Now we go back to the implicit function. Oh, the proof of the implicit function. Yeah, so the image of, what we actually construct is that the image of this R is precisely the solution set. So you're saying that the idea of the proof of the implicit function theorem here is use the exercise.
26:21
So construct such an R, so that F composed with R is zero, and such that the image of the tangent map of R at each point is the kernel of F prime.
26:45
So the image of the tangent of R is equal to the kernel of F prime. Okay. I have a question about the implicit function. Can I think of this as implying various
27:02
ordinary gluing theorems that we know? Like, if I have a configuration of holomorphic spheres where I can verify some transversality conditions, and I don't want to read that chapter of Dusa and Dietmar's book. Yes. Can I just apply this theorem? Yes. So when you set this up,
27:22
so in the previous week we had this discussion of how to glue at nodes and so on. So if you set this up, you would get, say, the retract X. So we have the nodes here, we look at what's nearby. We get the retract X. We construct this bundle. And then we look at this Cauchy-Riemann section.
27:41
And then in the nice case where the linearization is surjective, you have this implicit function theorem, and the nearby solutions include the glued solutions. So what's the precise transversality condition that needs to be verified? So in this case our broken thing would be, actually, for each piece, the classical surjectivity.
28:03
So it has two spheres and a node. I think what I remember in their book says a little bit more — I think there's also a condition of transversality of evaluation maps. Yeah, okay. So, well, you need the right index.
28:21
I'm sure you need the same conditions. It's just that you have to check them in a polyfold. But you have to check them as you try to operate. It's precisely the condition of what they say. That is what guarantees the surjectivity. I can just translate that. No, sure, but whatever classically is true,
28:40
is true here — whatever good thing you can say classically, you'll find here as well. It has the same ramifications. So Helmut, I didn't understand what you said about how the transversality of the evaluation map in that case would get built in. Well, when you set this up, the two operators don't move independently,
29:06
because they are defined on spheres which have common points. So that gives you some algebraic obstruction. So you have to verify that that is surjective. So if you have transversality of things, the cross terms are precisely so. So that's gonna change the scale Banach spaces
29:22
that you're working with, right? The condition that your nodal point has to be in common between the—? No, that is in the setup, so just. Can I try to explain? Precisely this, cut it. So precisely — which I think you explained, or Joel — that is why you have the anti-gluing
29:41
in this picture. Yes, right. So in this case, this is precisely the maps from two spheres, with two distinguished points, where the nodal values coincide. There it's already built in that they actually coincide there. So because of this, you cannot look at the two operators
30:01
completely separately, because you have this constraint: when you linearize, you only look at things which coincide at this point. And that transversality condition precisely then means that it doesn't matter. Okay. Okay.
30:21
Chris understands the answer; I don't need to. I'd be happy to hear whatever you are about to say. Okay, so I wrote down sort of the putative filled Fredholm section in the Floer case. And you might remember that it was just sort of d-bar, d-bar in both components.
30:43
Right, so in that case, you really just — you know, it's a Cartesian product of two classical Fredholm operators, and if those both are transverse, you're done. So in the Floer setting, you don't have this extra condition. Yes, so that's a direct product. In this Floer setting, however, I already assumed — and I didn't tell you,
31:02
but I was implicitly assuming — that the Hamiltonian trajectories are cut out transversally. Right, right. So if you now did the same shenanigans in a more complicated situation, for example for Gromov-Witten, then the pre-gluing map, right,
31:20
even the chart map for the ambient polyfold is not just defined on the product of maps from spheres, but on the fiber product, where you have to match the evaluation maps at the nodes. And so this fiber product is gonna sort of go all the way through to what your d-bar,
31:41
what your filled section then is. So you can either put that fiber product into your operator or into your domain — you can pick — but essentially exactly what Helmut says comes out. And it's easy to see already
32:00
from the pre-gluing construction, and you have to require equality at the node, and so if you went through my whole setup, you would get the two operators, D bar and D bar, but not on the Cartesian product, only on the pairs in the Cartesian product, which have the same value at the node.
32:21
So that's the thing that will need to be transverse. Or alternatively, you would say, okay, I take the — what is it? — this. Or you could also say: this is equivalent to the big moduli space, or the pair, being cut out transversely,
32:42
and then, from the pair of moduli spaces, the evaluation map to the nodal thing needs to be — the evaluation map needs to be transverse to the diagonal, which is exactly the classical condition.
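In symbols (a reconstruction of the condition being described, matching the classical setup): the pre-glued configurations live in the fiber product
\[
\{(u_1, u_2) : u_1(z_1) = u_2(z_2)\},
\]
and surjectivity of the linearization on this fiber product is equivalent to each linearized operator \(D_{u_i}\bar\partial\) being surjective together with the evaluation map
\[
(\mathrm{ev}_{z_1}, \mathrm{ev}_{z_2})\colon \mathcal{M}_1 \times \mathcal{M}_2 \to M \times M
\]
being transverse to the diagonal \(\Delta\) — the familiar condition from the classical gluing theorems.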
33:01
Okay, so I strongly suspect that someone besides Helmut, Chris, and Katrin has some basic confusion from last week, so I wanna encourage questions like that. Anyone? So what about the dimension — in that setting, the dimension of the solution space? Yes, good question. Right, so that's correct.
33:23
And this is gonna sound trivial, but nonetheless, I think it's useful to note that if you, let's say that we start out with a,
33:41
whoops, hang on, wrong chalk. We start out with a scale Fredholm section F, which we don't assume to be surjective anywhere. Then it's a theorem that the
34:01
filled section. So the filled section means put together F with the isomorphism that you assume you have between the sort of complement of the retract, cutting out X and that of Y. So then we get this filled section.
34:21
Let me call it capital F. So then it's a fact that this filled section, which is now going between honest scale Banach spaces or open subsets thereof, has classically Fredholm linearizations.
34:42
Yeah, isn't, no? I'm sorry, yes, thank you. But, right, yeah. Scale Fredholm linearizations at every X in X infinity,
35:01
and then Helmut correct me if I'm wrong, but the Fredholm index of, well, but it has a Fredholm index, okay. And let's say that the index of this linearization
35:24
at some point x naught is equal to, you know, i. Then you can first apply a theorem saying that you can always perturb using scale-plus sections to get this transversality satisfied at this point.
35:56
So such that — I think I might be mangling this,
36:15
so hold on for a second. I don't like to say, so usually when we start with a filled section,
36:21
it comes from having chosen a point x. So there's usually one, because when you look at this condition there, when you go higher and higher up — the points where it can be defined — the filled section basically only exists if you go to higher and higher levels near the original chosen point. You're complaining about the fact
36:40
that I should have been clear about the localness of the filling. For every X, because usually it makes only sense at one point. Yeah, so the thing that I wanted to get across, and clearly I'm not saying this correctly, is that there's a notion, thank you Helmut, there's a notion of Fredholm index, and you know, even if you don't assume surjectivity,
37:00
and you can perturb to get surjectivity, and then the dimension of the finite-dimensional C-infinity manifold that that will cut out is equal to the Fredholm index of the original thing. So rather than try to make that precise, let me just erase this. I think what Helmut is saying is that for this filled section, you fix the x in X-infinity first, and then you look at the filled section near that point.
37:20
Because, by this theory, every point might have a completely different — up here it could have a completely different filling. Right, so it's just the order of the quantifiers. Yes. Think of a really wild set: at each point, when you have to fill, you have to put something else. So the filling is actually only of auxiliary nature. It doesn't have any intrinsic structure.
37:41
It just exists, and that's it. If it exists, then it's Fredholm — and that's it. And of course it looks like a complicated definition, but the nice thing is, in applications, once you see one example, I think all the others follow. It's almost, almost canonical in applications what the filler actually is.
38:03
Usually, it's the Hessian at a node — of the linearized operator at the node, or at a periodic orbit or so. So it's all just standard stuff. Right, so the idea being that, like I mentioned on Friday, if you're studying Floer cylinders or something: the asymptotic operator — if you look at your operator
38:24
and you take the limit as you go off to plus or minus infinity, that asymptotic thing is gonna be an isomorphism. And that's what you use to define the filler. But the filler is defined on the whole of this infinite cylinder that you sort of lost — it's defined on the cylinder coming from the node.
38:43
It's actually not, in general, the standard cylinder. If you look very precisely, these cylinders that you take depend on the gluing parameter — namely, they vary, because the identification depends on the gluing parameter. You slide them over, further and further; it always looks like a cylinder,
39:01
but it's not canonical. I mean, you're just saying that there are sort of canonical choices of coordinates, but those coordinates depend on what your gluing parameter is. Yeah, there are actually two choices. For each cylinder, two choices of canonical coordinates, which depend on the gluing parameter. Okay, so any more questions?
39:22
Yes. So in applications, is it clear that this property of g — that it has this form — is something you can check? Well, my understanding is that the easiest or most natural way to prove it in applications is to use this alternate definition that Katrin came up with. So she came up with this definition, which sort of looks more complicated,
39:42
but in fact is easier to use. But I'm not the expert on this stuff — do you two care to comment? Is that correct? Well, I saw it all in Katrin's estimates again, but maybe hers is a little bit easier. Yeah, so the thing is — I would say the easiest way to prove these things is to use the fact that they're smooth in all directions
40:01
other than the gluing parameters. And then, yeah — and some uniformity of the derivatives in the good directions with respect to the bad directions. So that is what it means. So it's actually not that difficult. I think to put it in this framework is sort of on the level of proving
40:21
some gluing theory in a simple situation. Like two caps or two spheres. What Katrin is saying when she says that differentiation is- I think that the key of what Katrin said is that when we're looking at this reparameterization action
40:42
it was not differentiable, but it was not differentiable exactly because of what was going on with the gluing parameters. So you could differentiate in the function direction as much as you wanted, but the bad stuff was happening in the gluing parameter direction. Yeah, but the reparameterization is there in all directions.
41:00
I mean, if you slide the — but I mean, if you fix the gluing parameter, then it's — yeah, but the thing is, when you look — of course it has something to do with the domain, but when you look at the change of coordinates, it's usually by a diffeomorphism depending on the gluing parameters. So the gluing parameter enters through
41:21
a family of diffeomorphisms of the domains. And that's it. Okay, so. My impression was that, philosophically, that was built directly into Katrin's equivalent definition of Fredholmness — this thing I just said about the reparameterization action. But I'm not quite sure.
41:41
And there's a really great write-up of the proof of polyfold Fredholmness of the del-bar operator in the Hamiltonian Floer case in a paper of Katrin's on the arXiv. I don't remember the title, but it was 2012. Is Katrin's alternate definition for splicings only, or for general retractions?
42:00
I think it's for splicings only. The definition is — here is your equivalent definition. The equivalent definition is for maps between open subsets of sc-Banach spaces — once they're filled, so. And so I then just — I don't explain the filling, and just write down the filled version in that paper.
42:23
Do you expect any applications which will need retractions, and not just splicings? I think that the answer is no. Why? Well, what's an example? How do you predict the future? Yeah, I'd say it's splicings. In all current applications, my understanding is that splicings suffice.
42:40
I don't think any are known at the moment for which retractions are going to be necessary. In mathematics, predicting the future — I'm fine with that. Usually you won't. I stopped doing this. But anyway — What is the argument? What's the difference between a splicing and a retraction? Okay, good question. Great.
43:01
Right, so first, retraction. So a scale retraction is a scale infinity
43:22
map R which goes between open subsets of scale Banach spaces, with R composed with R equal to R.
43:47
So, scale retraction — that's it. And it has this really simple definition. The definition of splicing takes slightly longer to write down, but it's a special case of scale retraction.
44:00
So a scale splicing is a map of the following form. So let's call it P, going from — and let me write down the case
44:21
without boundary — R^d plus E to itself, of the following form. So it's gonna send a point (v, e)
44:41
to (v, pi sub v of e), with the following properties. So the first thing is that these pi sub v's form a family of linear retractions.
45:03
For all V, pi sub V, which goes from E to itself, is a, let's see,
45:20
is a — what do you wanna call it? — a scale projection. Yeah, so let me just say: it's a linear, scale-zero map from E to itself which squares to itself.
45:47
And then the second property is that P itself is sc-smooth. Did I say that correctly? Yeah, okay.
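Assembled in one place (a reconstruction of the board): an sc-splicing is an sc-smooth map
\[
P\colon \mathbb{R}^d \oplus E \to \mathbb{R}^d \oplus E, \qquad P(v, e) = (v, \pi_v(e)),
\]
where each \(\pi_v\colon E \to E\) is a linear sc\(^0\) map with \(\pi_v \circ \pi_v = \pi_v\). Note that then \(P \circ P = P\), so a splicing is in particular a scale retraction.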
46:01
So, right, so. That's not the usual kind of smoothness in v, is it? That's right, so. No, no — it's not even continuous. That's good. That's the whole problem. Not standard smoothness — it's SC smoothness. But it's smooth in the sense that
46:20
when you put them all together, capital P is SC-smooth. But I think your question was: if you look at these operators — yeah, it's not even gonna be continuous as a map from R^d to L(E, E). By the way, Joel reminded me that there are retractions which are not splicings
46:40
coming up in good applications. Which applications? Like, construct the manifold of maps from one manifold into the other. So, you can construct it basically in 60 seconds.
47:01
Maybe that's good for Wednesday. All right, yeah, that's your hour, Helmut. Right, so — and let me just remind you that there's this example that we went through on Wednesday. Sorry, just before you erase it: is the word linear the crucial thing that distinguishes the splicings from the retractions?
47:21
I would say that v being the first component is the crucial thing. I mean, both of them. So, it's a family of linear projections, whereas this R, a priori — you have no idea what kind of form it has. Yeah, there's not this R^d that's gonna split off of U, that parametrizes some kind of family of maps,
47:40
even nonlinear ones. Is a splicing automatically tame, or? Yes. Yes? Yeah, and I think — I haven't double-checked this, but I think the crucial element is the fact that the first component is v. It's sort of an identity. Because v here, in this case, is kind of acting like a boundary-defining function,
48:02
and it's not mixed in with the rest of what's going on. And so, when you can kind of separate it out — I think that is the essential feature which guarantees tameness and a lot of these other properties, which follow, for splicings, relatively easily, and not necessarily so easily for retractions. Because the boundary is all seen in R^d.
48:21
You take over the properties of R^d. So that's the only place where you get a bit of corner structure. Exactly, that's what I would think. So let me recall for you this example that I talked about last Wednesday, where the retract was homeomorphic to this set
48:44
inside of R2, is an example of a splicing. So in this case, V is running in the horizontal direction.
49:03
And for any positive v, the projection pi sub v is projecting onto — projecting onto the one-dimensional subspace spanned by a bump function centered at e to the one over v. For v less than or equal to zero, pi sub v is zero.
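Concretely (a reconstruction, in a Hilbert-space setting with a fixed bump function \(\beta\), normalized so the projection formula below makes sense):
\[
\pi_v(f) = \begin{cases} \langle f, \beta_v\rangle\,\beta_v, & v > 0,\\[2pt] 0, & v \le 0,\end{cases}
\qquad \beta_v(s) = \beta\bigl(s - e^{1/v}\bigr),
\]
so for \(v > 0\) we project onto the line spanned by the bump function centered at \(e^{1/v}\), and the rank of \(\pi_v\) jumps from 1 to 0 as \(v\) crosses zero.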
49:20
And as was alluded to, every single retraction that has come up when constructing moduli spaces of holomorphic curves has been a scale splicing. So the big one is when you're projecting onto the kernel of the anti-gluing. That's gonna give you a scale splicing.
49:40
So whoever asked that, are you happy now? Okay. When you said this, I remembered, but you know. But you might need a retraction that's not a splicing. In theory, yeah. I guess that Joel actually has an example that he just recalled when you're constructing a manifold of maps.
50:07
That there might be a scenario — apparently they have a scenario where you need to consider a retraction that's not a splicing. Need is a strong word, but very useful. Will be shown on Wednesday.
50:22
Think about the following, how it seems. Or now? No, no, no. So if you look at differential geometry, you have a little bit of a hard time already finding some book which actually talks about manifolds with boundary. But if you then want to look at somebody talking about manifolds with boundary and corners, I think it's basically impossible to find such.
50:42
And why is that? Because already, without boundary, sub-manifolds are causing problems; but if you come now to boundary and corners and you want to talk about sub-manifolds, it gets a little bit gruesome, what you can say. Now, here, if you just define a sub-manifold
51:01
with classical differentiability, as a set which is locally a retract, then in the interior it will be a real manifold. And near the boundary, you have a tangent space to the set. And you can say more about the boundary behavior if you know how the tangent space lies with respect to the corner structure.
51:20
So my proposal is: in differential geometry books, you should actually build everything on the retracts. Because it's just an absolutely easy formalism — much faster; the construction of the manifold of maps goes like a breeze, everything. So that's my proposal; take it from there. Very good. OK. Great. You still have to.
51:41
Right, so I think one more question, then I want to say something. What's that? Can you say — you wanted to say something about the classical analogue of the Fredholm condition? How it looks in the classical setting? Yeah, so I think I have something more important to say. If you want to read about that: it's like half a page long, it's super easy, and it's nice, and it's in a paper titled A General Fredholm Theory
52:01
II, by Hofer-Wysocki-Zehnder, in the introduction. Any other last question? OK, so what I want to do in the last 10 minutes — I hope I can fit it in — is prove the easiest possible
52:20
version of the regularization theorem. And the reason that this is relevant is that the polyfold version of this has exactly the same proof, basically with "scale" stuck in front of some words. Right, and I should say that this is lifted from Katrin's course a couple of years ago.
52:41
Right, so here's the idea. So let's take a finite dimensional vector bundle, E living over B, and S is a section of it. So here, B is a finite dimensional manifold, E is a finite rank vector bundle, S
53:03
is a C-infinity section, and the zero set is compact — which turns out to be crucial to the proof of this theorem. So what does the theorem say? It says that you have perturbations.
53:20
So conclusion is there exists a set called P sitting inside of the compactly supported C infinity sections with the following properties. So OK, the first one is saying that there are elements of P, and they, in fact, you
53:43
can find arbitrarily small elements of it. So there exists a sequence p sub i such that p_i goes to 0 in C-infinity-loc. The next one is transversality — so that's pretty important.
54:00
So for every p in this curly P, if we perturb S by p, then that thing intersects the zero section transversely, which is to say that for every b in the zero set,
54:23
the linearization is onto. So D_b(S plus p) is onto, as a map from the tangent space of B at little b to the fiber. Yes, thank you.
54:46
OK, and then the last one is that compactness is preserved. So the proof is short, and it's pleasant.
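Collected in one place (a light reconstruction of the board): \(S\) is a \(C^\infty\) section of a finite-rank vector bundle \(E \to B\) with \(S^{-1}(0)\) compact, and the conclusion is that there exists \(\mathcal{P} \subset \Gamma^\infty_c(E)\) such that:
\[
\text{(1)}\ \exists\, p_i \in \mathcal{P},\ p_i \to 0 \text{ in } C^\infty_{loc};\qquad
\text{(2)}\ \forall\, p \in \mathcal{P},\ D_b(S+p)\colon T_bB \to E_b \text{ onto for all } b \in (S+p)^{-1}(0);
\]
\[
\text{(3)}\ (S+p)^{-1}(0) \text{ compact.}
\]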
55:02
And like I said, if you know the proof of this theorem, then you also know how to prove it for polyfolds, essentially. OK, so let's fix B0 in the solution set. So while our solution isn't necessarily
55:24
transverse at that point, we know that the cokernel of the linearization is finite-dimensional, since E is finite rank. So then let's choose a basis, e_1 through e_m,
55:45
for the cokernel of the linearization at b_0 of S. OK, then let's extend these guys to compactly supported sections, t_1 through t_m.
56:11
OK, and so then using these finitely many sections, let's soup up our original vector bundle. So let's look at the following thing sitting over B times Rm.
56:31
So the projection is the obvious projection. And now we get this new section called S tilde. And it's defined by setting S tilde applied to B,
56:44
x_1 through x_m, is defined to be S at b, plus x_1 t_1 at b, through x_m t_m at b. OK, and then we're just, of course,
57:03
doing the trivial thing in the R^m direction. Now, the point of this is that now we've killed off that cokernel at b_0. So we know that S tilde is transverse to 0 at the point (b_0, 0).
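That is (normalizing the indices, which got tangled on the board):
\[
\tilde S(b, x_1, \dots, x_m) = S(b) + \sum_{i=1}^{m} x_i\, t_i(b),
\]
a section of \(E\) pulled back over \(B \times \mathbb{R}^m\); the \(t_i\) were chosen to span the cokernel of \(D_{b_0}S\), which is exactly why \(\tilde S\) is transverse to zero at \((b_0, 0)\).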
57:26
OK, and it follows from that that there exists delta
57:41
greater than 0, and U, which sits inside of B and is supposed to be a neighborhood of b_0, with the property that S tilde is actually transverse to 0 on all of U times the ball of radius delta centered at 0.
58:07
OK, and then the point of this is we're now going to exploit the compactness of the original solution set to say that we can more or less cover the original solution set by finitely many of these sets, u.
58:32
Any questions about this so far? So let's cover B, which is compact,
58:46
by finitely many of these open sets, u, since we can do this original process at any B0 in the solution set. OK, so then what that allows us to do is we can
59:01
— B was not necessarily compact; S inverse of 0 is compact. That is not a thing. You said B is compact; S inverse of 0 is compact. I think you mean S inverse of 0 instead of B as being compact. Great. So then what this allows us to do,
59:22
if you write down what this implies, is: construct a fattening-up, E times R^k, living over B times R^k, and a section S tilde with the property that — let me say this correctly — S tilde is transverse to 0
59:53
on S inverse of 0 times the ball of radius delta. So it's now crucial that.
01:00:01
Yeah, there were only finitely many of these sets, so I could choose the uniform delta. OK, so now we're almost done. So let's set sigma to be, right,
01:00:27
B times B_delta(0), intersected with the solution set of S tilde. Oh, and I'm sorry, this should have been a neighborhood
01:00:42
U of the zero set. So U is supposed to contain the zero set. So now it makes sense. OK, so sigma — because the zero set of S tilde is cut out transversely on this guy here —
01:01:00
sigma is going to be a finite dimensional manifold. Thank you. OK, and now we're essentially done, because we can consider what happens when we include sigma
01:01:25
into the base, and then we project down to R^k. So let's call this map q. So then we can apply Sard's theorem to q.
01:01:41
Note that we're in the totally finite-dimensional setting — no problem with Sard's theorem. So Sard's theorem tells us that there exists a point, let's call it y in R^k, which we can take as small as we like, though I won't spell that out.
01:02:01
So that's what's going to allow us to prove the first part of the theorem. So this is a regular value of q. And so it follows that if we look at s plus y1 t1,
01:02:24
all the way up through y_k t_k — this guy, which is a section now of our original bundle, E over B — is transverse to 0, which is all we wanted in the first place.
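To spell out the last step (a reconstruction of the reasoning): \(y \in \mathbb{R}^k\) is a regular value of the projection \(q\colon \Sigma \to \mathbb{R}^k\) exactly when the slice \(\tilde S(\cdot, y)\) is transverse to zero, so Sard's theorem, applied to \(q\) between finite-dimensional manifolds, produces arbitrarily small \(y \in B_\delta(0)\) for which
\[
b \mapsto S(b) + \sum_{i=1}^{k} y_i\, t_i(b)
\]
is transverse to the zero section — and this perturbation is by a compactly supported smooth section, as required.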
01:02:40
So that proves the theorem. And let me tell you what you need to do to put this into the polyfold setting. So the first thing is that you need this contraction part of the definition of scale Fredholm in order to be able to say that solution sets of things transverse to 0 are finite dimensional smooth manifolds.
01:03:03
And then the other thing that you need is: you need your scale Banach spaces to actually be scale Hilbert spaces, in order for bump functions to be defined. Yeah, you just need — yeah, you need SC-smooth bump functions, but they exist also in some Banach spaces. Not all Banach spaces, but they exist. OK, so anyway, you need bump functions.
01:03:22
And besides that, the rest of the theorem carries through. Any questions about the proof? And then your t_i's would be sc-plus? Yes, right — the t_i's are now going to be sc-plus sections.
01:03:42
Yeah, so you're going to conclude that you can get transversality by perturbing only with scale-plus sections. Yeah, Felix? You call this a regularization theorem, but it's nothing to do with the regularizing property of F — it's just that you can make a compact perturbation. It's just—
01:04:00
Yeah, I guess it's a different kind of regularizing. It's a regularizing in the sense that you end up with a smooth manifold. Yeah, it's very confusing, the language. Because there are two kinds of regularizing. OK, I will stop now.
01:04:22
I suppose we've had a full hour of questions, but is there any chance there are some last-minute ones? Yeah, could you say one word about how I get cobordisms for different perturbations in this finite-dimensional case?
01:04:43
Let's see, can I say anything sensible? So I haven't worked it out, but I think that you're going to start off with these things that are transversely cut out, and you're going to basically need to extend whatever perturbations you made to get transversality over this whole cobordism. I mean, you just need to prove a version of this theorem
01:05:03
where in the neighborhood of some closed set, you already have a regular perturbation, and then you want to extend it as a regular one. But you see immediately that that works as well. Because then, in the other case, you have two boundary components which are already regular, and then you just extend that in a regular fashion.
01:05:23
It's the same idea. You just add one parameter, but you use the same reasoning. So the one thing I wonder about is the initial step: when you take your perturbations and you want to extend to something which is not necessarily transverse — but you certainly at least need Fredholm. So yeah, so you need, of course—
01:05:41
if you had that problem — this is still Fredholm, but that is not an issue. And then you're already regular near the boundary, because there are two regular perturbations. And then you just have to do this thing inside, to fill up the cobordism. And then you just need to perturb it in the interior parts.
01:06:02
So it's basically the same. Well, the one thing that's confusing me: certainly, if you've gotten this homotopy through Fredholm operators, you'll be okay. But why is that immediate, that you can do that? Well, the generalization of this... Well, if you look at your principal part, you just add — this v has one additional parameter, t,
01:06:24
which doesn't affect... Oh, I see, I see. Okay, right. So the key is that we use this contraction normal form for Fredholmness, right. I mean, it goes immediately into this.