2/3 The rise of sc-retracts
Formal Metadata

Title: 2/3 The rise of sc-retracts
Number of Parts: 36
License: CC Attribution 3.0 Unported. You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/16304 (DOI)
Transcript: English (auto-generated)
00:15
Very good. Thank you. So, yeah. So, first, right off the bat, I want to apologize for
00:20
this juggling of the schedule. I have some personal obligations, which are just making it difficult to schedule. So, hopefully, this will be the last of these switches, but who knows, I suppose. Okay. Right. I want to, so a couple of people pointed out a couple of things to me after my talk yesterday that I wanted to bring up.
00:44
And it was sort of funny, because these people sort of raised exactly the sort of point that I've raised with Helmut on a number of occasions, which is there's oftentimes you start talking about this stuff, and there's just ambiguity in the language. And once you're sort of
01:01
brainwashed enough to sort of understand what's going on, then that ambiguity becomes sort of natural, and then I committed those same sins yesterday. And so I kind of wanted to point out something about that. So there's this ambiguity in terms of how I write, and in terms of how HWZ write to some extent, in terms of E, which is meant to be a scale
01:23
Banach space, and E naught, which is just a level, in particular, the base level, the zero level of that SC Banach space. I mean, in particular, this looks suddenly confusing, right? What does this really mean, right? If E is a scale Banach space, then it's a whole
01:47
sequence of Banach spaces. So what do I mean when I say there's sort of an open set, right, sitting inside it? And so the ambiguity: when I write E, it's meant to mean both the scale, the whole sequence of Banach spaces, but anytime you see a set-wise type statement like this, I'm talking about the zero level, right? So open sets, in particular, in a scale Banach space always means you take an open set on the base level, and then it has the scale structure induced by taking the intersection of that open set with all the other Banach spaces in the scale. Does that make sense? So hopefully that clears things up. And, I mean, you'd be surprised: the more you go with this sort of stuff, the more you discover that you just need more and more notation, and then at some point it just gets too difficult to keep track of all the notation, and you need to allow for some amount of ambiguity to actually make anything understandable. So that at least seems to be my preference. So if at any point you get stuck, even with these sort of basic questions (because believe me, the first 10 times I read this stuff, this is exactly the sort of thing that would drive me nuts), please ask, okay? Yes? So, being open in the base level E_0, is that not equivalent, I mean, since the higher levels are compact embeddings, right? Okay, so shouldn't this be equivalent to U ∩ E_k being open for all k? That should also be, I mean, that will be the case.
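(For reference, the convention just described, written out; this is a minimal restatement, with the scale denoted E = (E_m)_{m ≥ 0}:)

\[
U \subseteq E_0 \ \text{open} \quad\leadsto\quad U_m := U \cap E_m, \quad m \ge 0,
\]

and since each inclusion E_m ↪ E_0 is continuous, every U_m is indeed open in E_m; that is the easy direction of the equivalence raised in the question.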
03:22
Okay. Is this kind of equivalent to it? I'd have to think if it's equivalent or not. I'd need to see a precise definition. Any other questions? Okay. Okay, so then the first thing I really want to do during today's talk is just to sort of recap what we did yesterday, because we're going to sort of build on this. So the first, I would say, sort of main thing
03:44
that I wanted to point out was that the action of the reparameterization group is not classically smooth. So we had this sort of toy problem; we showed that it showed up in transition maps, and moreover in trying to set up Banach manifolds for Morse homology. And then I said that the same sort of action shows up in Gromov-Witten and essentially any moduli-type problem you want to consider.
04:03
Action of reparameterization is not classically smooth; and then it was pointed out that this is on the total space, on the ambient space of functions that you want to work with. It is a smooth action if your moduli space is cut out transversely and the action is just restricted to that space. So then we said, okay, well, we'd really like to have this action on this
04:20
total ambient space to be smooth in some sense. So we introduced scale Banach spaces and scale differentiability, and as a consequence of that we had two key facts. One is that now the reparameterization action is SC smooth, and it's also the case that the chain rule holds. The chain rule holding basically means that we now actually have a new notion of differential calculus in some sense, right? And then reparameterization acting
04:44
smoothly is nice because that meant that you could build transition charts now with some notion of smoothness between them. So you could build something like a scale Banach manifold in some toy cases. And then the last thing that I wanted to point out (I didn't mention this last time): for me, if someone tells me that a function is smooth, that always means C^∞, but I know there's a number of people who think a smooth function should just mean C^1. And so for all of my talks, smooth is meant to mean C^∞, right? And in particular, SC smooth is meant to be SC^∞, not SC^1, okay?
05:21
And then you can sort of say, well, did we actually show that the reparameterization action is SC smooth? The answer there is, strictly speaking, no, but this proof carries over, you just sort of iterate it, and then you can actually prove that the reparameterization action, at least in the toy case that I presented, is in fact SC smooth. Any questions about that? So, new material. Today, we want to parameterize a, maybe I'll say it
06:09
this way, we want charts near nodal, let's say maps, and we're thinking in some sort
06:30
of a Gromov-Witten setting. So we're not going to worry about quotienting out by automorphism groups, we just want to sort of understand what charts near nodal maps should be. So the image of your map here we're thinking of is looking something
06:41
like this, right? Two spheres, maps from a nodal sphere into some manifold or R2N, say for instance. And so even to sort of state this, we're kind of implicitly assuming something in the background here that we know what it makes sense to be near a nodal
07:04
map in a reasonable sense. So I'm already sort of assuming that we have some sense of what it means to put a topology on the space of nodal maps in this larger space. In particular, you would like it to be the case that near this nodal map is, say for
07:21
instance, this map, which is not nodal. You've glued it a little bit. We'd like it to be the case that this is sort of close to this. That's what we mean by near. And that in particular is what we want to try and find a chart for, assuming our chart, say for instance, is centered at this sort of map. So that's our main goal for today, and how this appears particularly in the polyfold framework.
07:46
And so the first step, I guess, is to take this picture here. And for simplicity, we really want to cut away as much of the topology of the problem as we can. And if we do that, then that sort of turns it into some problem
08:04
that looks, well, my drawing on the fly is a little bit poor, so my apologies for that, but I hope that you can at least see it. Okay. Very good. So this is what I want to do. I kind of want to chop away sort of the interesting topology. I want to turn it
08:42
into this sort of problem here. And we're going to see, with pictures and then more precise statements, how this is going to end up being useful for us. But first I would like to give a little warm-up problem, which goes like this. Definition: for fixed δ > 0 and k a natural number, define H^k_δ(ℝ × S¹); I'm using subscripts to denote coordinates on these domains, as it gets convenient to do this if I want to be precise. Note H^k = W^{k,2}. And I want to define H^k_δ to be the set of all f in H^k_loc such that (D^α f) e^{δ|s|} is in L², for |α| between 0 and k. And then the norm in this case is the sum, over |α| between 0 and k, of the integral over ℝ × S¹ of e^{2δ|s|} |D^α f|² ds dt. And what I want to do is assign homework: for 0 < δ₀ < δ₁ < δ₂ < ⋯, and then I'm going to say all of this is less than 2π, show (let's see, I'll write it this way) that G_k, defined to be equal to H^k_{δ_k}, is an sc-Banach space. So there's some hints in the lecture notes that I have online. And of course, you can ask Nate; he's worked through this, so that's an option as well. That is the key step; so that's the key step.
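(For reference, the warm-up definition as I reconstruct it from the spoken description; the multi-index notation D^α is my reading of "d alpha":)

\[
H^k_\delta(\mathbb{R}\times S^1) = \bigl\{ f \in H^k_{\mathrm{loc}} : e^{\delta|s|} D^\alpha f \in L^2 \ \text{for } 0 \le |\alpha| \le k \bigr\},
\qquad
\|f\|_{k,\delta}^2 = \sum_{0 \le |\alpha| \le k} \int_{\mathbb{R}\times S^1} e^{2\delta|s|} |D^\alpha f|^2 \, ds \, dt,
\]

and the homework scale is G_k := H^k_{\delta_k} with 0 < δ₀ < δ₁ < ⋯ < 2π; the strictly increasing weights are what make the inclusions G_{k+1} ↪ G_k compact.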
12:27
And I guess, let's see: the hint that I provide in the notes is that this sort of exponential decay, in particular the fact that you have increasing weights in your exponential decay, is crucial to guaranteeing that compactness result. However, I would add, as sort of an
12:44
addendum, as an open-ended question: explore what other possibilities one might have. In particular, it is, for instance, important that the deltas strictly increase (sorry, that's what I mean: they strictly increase; they don't need to go to infinity, although that would also work), but do you need exponential decay? So then that raises also a natural question: why do I have 2π here? This seems completely strange. And you can choose a different cap there if you like; it's because I'm talking about problems in Gromov-Witten that this 2π ends up showing up as being relevant. And for those
13:27
who've done more analysis in the subject: this number here has to do with the spectral gap of an appropriate asymptotic operator, and 2π is what shows up in the Gromov-Witten case. And if you're working in SFT or Floer homology or
13:41
something, then these caps sort of have to be different. And so there will be sort of corresponding changes there. But in any case, this is sort of the prototypical scale Banach space that occurs in a lot of polyfold, yeah, a lot of the polyfold literature, I think. Everything is sort of modifications on this, I think. Good. Okay. So now what
14:03
are we going to do? Is there some intuition as to why that's the norm you write down? What else could it be? Starting from what? Starting from where? Starting from here, I would say this
14:20
is the obvious norm. Okay. So why is this the choice? I would say the point is that you have a non-compact domain, and if you want to have a scale Banach space, then you need compact inclusions of higher levels into lower levels. And so you can't just increase the regularity; that alone won't do it. You need some additional information. So the exponential weights that we put on heavily weight things on the outside, and, well, you can tinker around with it and then see that that guarantees compact embeddings. Another way to think of it is that, in some sense, morally, those exponential weights in many ways allow you to treat the problem as if you're dealing with a bounded domain instead of an infinite one. Well, you wouldn't, I don't think you would want to cut off. Yeah. So again, my addendum to my homework
15:20
problem was explore the necessity of this sort of exponential, right? And so, I mean, so the solution is basically it doesn't have to be exponential. It could be a lot weaker. It could be a lot stronger. But there's an important key in there. But you choose exponential. I think one of the reasons you choose exponential is that this plays very, very nicely with the corresponding asymptotic analysis.
15:43
I mean, it's a Fredholm answer: you want later to do Fredholm analysis. Sure, and then it's sort of nice for that sort of setup. If you had, like, a double exponential, you'd be killing off some of the things that you want to count as solutions. And that's the reason for exponential, but not necessarily double exponential. But if you just want to construct a space, you have a lot of freedom. If you want to construct a space that's useful for applications, then you have to be a little bit more careful, and they all tend to follow this form. But these are all good questions. So it's certainly reasonable, though, to be assuming that ends of pseudo-holomorphic curves are what occur here. But where does this definition come from?
16:23
For this definition, the most natural thing to think of is, well, let me draw the domains here right now, and then hopefully that'll start to clear things up. So let me redraw what I erased, just to buy time. That's better, Joe. That's what I expect
17:00
from you. It is supposed to be a half cylinder. Okay. So here's what we have. So I said with a picture that I erased previously, we had a nodal curve that you might sort
17:21
of see in Gromov-Witten. And so then I said, okay, I want to forget about the topology as much as possible, and reduce it to this nodal disk pair, basically. So this is what I have. And so then what you do is you say: well, look, this is a disk, but now I have this nodal point; I want to treat it like a puncture. And if I treat it like a puncture, then I have holomorphic coordinates which take me to a positive half cylinder. So this ends up being ℝ₊ × S¹.
17:43
And then over here, you do the same thing but in the opposite direction: ℝ₋ × S¹. And so now, if this is your domain and you have a map defined on it, well, you can pull that back to maps on these two cylinders. And then, in particular, you'll have an asymptotic matching
18:03
condition in this case. So then, what you want to do, because our goal for the day (although I erased it) is essentially to try to find charts for neighborhoods of nodal curves; well, let's see, where is this in my notes? I mean, the right thing to do is going to involve these sort of pre-gluing maps.
18:23
So I have to define what those are for you. And I like to start with the picture, so here's this picture. The idea is, right: if you have the nodal map, the nodal disk map, and then you pre-glue,
19:00
you know you end up with a cylinder of finite modulus. And so that's the picture that I'm drawing here. And in fact, I'm going to name it: it's going to be called Z_a. And I'm going to define it very precisely somewhere on the board here. Z_a is equal to [0, R] × S¹ (with an s coordinate on the interval and a t coordinate on the circle), disjoint union [-R, 0] × S¹ (this one has an s′ coordinate and a t′ coordinate). I want to take that disjoint union, and then I want to quotient out by this equivalence relation, which identifies the points (s, t) with (s′ + R, t′ + θ), where R = e^{1/|a|} - e and a = |a| e^{-2πiθ}.
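(For reference, the glued finite cylinder as I reconstruct it from the board:)

\[
Z_a = \bigl( [0,R]_s \times S^1_t \ \sqcup\ [-R,0]_{s'} \times S^1_{t'} \bigr) \big/ \sim,
\qquad (s,t) \sim (s' + R,\ t' + \theta),
\]
\[
R = e^{1/|a|} - e, \qquad a = |a|\, e^{-2\pi i \theta} \neq 0 .
\]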
20:21
So what am I doing here, right? So, I mean, I don't think this is significantly different from what's done in the McDuff-Salamon book, right? The idea being that, you know, if you have a nodal pseudo-holomorphic curve, and you want to find
20:42
nearby pseudo-holomorphic curves, well, there's this sort of argument that you make. The argument says: you find this pre-gluing map, which allows you to construct nearby maps from nodal ones, provided you've given me this complex gluing parameter, which I'm calling a. Which thing is called the pre-gluing map? The pre-gluing map hasn't been written down yet; I'm about to do that. Okay. Why is it called pre-gluing? So I'm using
21:04
Katrin's terminology; this is where I've sort of acquired this from. The idea being (my understanding of Katrin's idea; she can yell at me if I'm wrong) that in something like Gromov-Witten or, you know, the various Floer homologies or whatnot, there's a
21:21
gluing map, and that the gluing map should be understood as: you take a broken solution to your problem, and the gluing map takes you from that broken solution, plus a gluing parameter, to another solution. Pre-gluing says: give me two a priori non-solutions, but put them in a function space that's close to where the solutions should lie.
21:45
So it's called pre-gluing because usually, my understanding is, you take solutions to this nodal problem or broken problem, you pre-glue those solutions together, and then you run sort of a Picard iteration or Banach fixed-point argument to basically say that there has to then be a nearby solution. Okay. Is that roughly correct,
22:06
Katrin, or are we busy with something else? No, that's correct, and that's not my terminology. Well, that's where I learned it, so that's a good reference, I think. Okay, so it's sort of standard language that I'm just not aware of. Okay, very good.
22:27
Okay, so right. And so what's going on here is that we have to define that pre-gluing map; but in order to define that pre-gluing map, you need a domain for that map; and in order to define a domain for that map, well, you have to do a bit of this type of
22:41
pre-gluing here on your domains, and then we're going to find maps on this new domain. So let me do that now. Okay, so I'm going to say, well, I can be precise here: given a ∈ ℂ (and we're really thinking of |a| being near zero), and given u^± as maps from ℝ_± × S¹ into... for convenience, again, I'm just going to work in ℝ^{2n} and we
23:27
can change things later if necessary, although with retracts you can do some interesting tricks. We have this pre-gluing map ⊕_a(u⁺, u⁻). Well, let's see: it's defined as a map from Z_a into ℝ^{2n}. Sorry: you give me a gluing parameter and then two of these maps here, and then I'm going to construct a map from this sort of finite cylinder into ℝ^{2n}, and it's going to be given by ⊕_a(u⁺, u⁻)(s, t) = β(s - R/2) u⁺(s, t) + (1 - β(s - R/2)) u⁻(s - R, t - θ), where β essentially has the following form. It's a cutoff function (this is β(s) here) so that: (1) β′, that's the derivative, has compact support; (2) β′ ≤ 0; and (3) β(s) + β(-s) = 1, identically.
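(For reference, the pre-gluing formula as I reconstruct it from the board, with the cutoff properties just listed:)

\[
\oplus_a(u^+, u^-)(s,t) = \beta\bigl(s - \tfrac{R}{2}\bigr)\, u^+(s,t) + \Bigl(1 - \beta\bigl(s - \tfrac{R}{2}\bigr)\Bigr)\, u^-(s - R,\ t - \theta),
\]

where β′ has compact support, β′ ≤ 0, and β(s) + β(-s) ≡ 1.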
25:56
With this β defined, then, this pre-gluing map is well-defined. And so then, of course,
26:03
if you haven't done any pre-gluing analysis before, then something like this looks really unpleasant. If you have done gluing analysis, you can at least sort of see what's going on here, I think, hopefully fairly clearly. Basically, on this side you have a cutoff function, and over here you have one minus a cutoff function. And so what
26:23
that means then, if you tinker with it, if you haven't done it before, is that you're essentially interpolating between U plus and U minus, modulo these sort of shifts in the corresponding domains. And then in the picture, that's essentially what happened. Katrin actually drew a similar picture during her lecture. The idea basically being you take
26:45
your two domains and then you shift them and you add in this relative twist, and then you want to identify sort of this truncated region which we defined to be ZA, and then in this ZA, say on the far left-hand side, it's U plus, which is defined on the top part, and then on the right-hand side of ZA, it's going to be U minus, which
27:03
is defined on the bottom-most part. Does that make sense? And this is just the pre-gluing part. This is rather smooth? Yeah, so this is just pre-gluing, and everything done here is, yes, this is just
27:23
smooth for A away from zero, I think. No, I'm not going to say that. I have to think about what smoothness means in this case. Smoothness is a little bit strange at least because your domains of your maps are sort of changing, so you would already need to build a space where it makes sense to even compare them. So I won't make any claims
27:42
about smoothness. Good, and so... You mean the variation in a might not be smooth? Well, no. I mean, how do you compare? Suppose you even fix u⁺ and u⁻, and then I compare, like, a = 1 and a = 1/2. The point is that if you look at the definition of Z_a, your domain is changing. And in fact, with the way we've carefully defined Z_a, even if we consider a = i (or let me write it as e^{iπ/2}) and compare it to, say, e^{i·1}, for instance, right? Even in this case, with this careful definition, your domains have changed, right? They're all diffeomorphic, of course, and in this case they even have the same modulus, but your domains have changed. And so all I meant
28:40
to say was what does it mean to sort of say smooth at this point already, because these things are, you know, the domains of your function are changing, and if you want to compare two functions, you'd want them to have the same domain, I think. Any other questions? Okay, so now, so now I want to say, I'm going to try and keep this, so...
29:12
Sorry, can you repeat what the role of this gluing parameter A is? Pardon? Repeat what the role of this gluing parameter... So, I mean, so as A goes to, so when A equals zero, it's as if your maps haven't been glued
29:25
at all, so it's the nodal map. And so then what a controls is, well, it essentially controls how you construct the Z_a, which tells you how to construct, say, the modulus of this neck in between; more than that, I think, the modulus of the neck at this nodal point. Why do you have to twist? Well, for one reason, if you didn't twist, then you'd have boundary, and then your Deligne-Mumford space of Riemann surfaces would have boundary, and you'd know you'd done something wrong, right? I mean, I think if you want to get all nearby maps, you have to include that twist parameter.
30:10
Okay, so here's some ideas. So, the first idea is the right idea, and, yeah, I guess I should say (Katrin's told me this about a million times, and it was only yesterday that the light bulb went off in my head that this was really the right idea; it takes a while for stuff to get in, let me tell you): the idea was to use this pre-gluing map to define a neighborhood of a nodal map, right?
30:56
And I bring this up specifically because during Dusa's talk, you know, someone said: well, wait a second, what's this topology on this space of pseudo-holomorphic spheres, right? Once you compactify, what's this topology? And the immediate answer was sort of: oh, I don't want to talk about that. And that's perfectly valid, because the way you would define the topology is essentially via Gromov compactness, and you read the definition of Gromov compactness anywhere and it's complicated, and so you kind of don't want to do that. However, if you look at the pre-gluing map on this ambient space that you're trying to build, then essentially, when you restrict it to your moduli space, it gives you the Gromov topology, and it's not so difficult to see how this pre-gluing map ought to give you sort of neighboring curves, right? You sort of say: okay, I take a nodal map and I give a gluing parameter, and then I find these sort of approximate non-nodal curves nearby. And that gives you a bunch of sets, and that bunch of sets defines a topology for you, as long as everything's open in a suitable sense. And it's, I think, a very clean way to define the
32:01
topology, or what you expect the topology to be in this sort of ambient space. So this is a good idea. So here's sort of, I have to be a little bit careful, it's right but wrong, this following idea, which is to use
32:21
this pre-gluing map to build a chart for these neighborhoods, right? And so, let's
32:40
see, what I want to do, so I can, let's see, I can even make more precise what I mean with the second statement, as this will be useful to keep in mind and sort of explore. So what do I mean? Well, let's say I have a map from ℂ × E into capital Z. It takes this input a, which is a gluing parameter (and really we're thinking of just an open neighborhood of zero, but just for convenience, let me say ℂ), and then E consists of these pairs u⁺ and u⁻, and we want to take these to their image under the pre-gluing map. And then I want to define these function spaces for you. So E equals the set of all pairs (u⁺, u⁻), defined on ℝ_± × S¹ (sorry, capital Z is another function space, which I haven't defined yet, as I'm busy defining the first one), such that there exists a constant c ∈ ℝ^{2n} such that, let me try to write this clearly, e^{δ₀|s|} D^α(u^± - c) is in the corresponding L², for |α| from 0 up to 3. And Z is the union over a ∈ ℂ (where again I'm being sloppy with my notation) of the H³ maps from Z_a into ℝ^{2n}. Do I want to regard... what? Oh right, what kind of object is it? It's
35:08
a set, by definition. I mean, each one of these is a set, and I can take a union over a set and I get another set, so it's a set at the moment, right? But you can
35:21
sort of see what Z should be. I mean, in this sort of context, if we actually capped off our domains by these sorts of disks, well, then we would think of Z as being this sort of function space on a bunch of different pre-glued Riemann surfaces, basically, right? I mean, in particular, if you throw on some additional marked points, your modulus can change, right? And so now you have this very large function space of H³ maps from each one of these pre-glued Riemann surfaces, but they each have different gluing parameters; they're each thought of as completely different.
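(For reference, the two function spaces as I reconstruct them; δ₀ is the smallest weight from the earlier homework scale:)

\[
E = \Bigl\{ (u^+, u^-) : \exists\, c \in \mathbb{R}^{2n} \ \text{with}\ e^{\delta_0 |s|} D^\alpha (u^\pm - c) \in L^2(\mathbb{R}_\pm \times S^1),\ 0 \le |\alpha| \le 3 \Bigr\},
\]
\[
Z = \bigcup_{a} H^3(Z_a, \mathbb{R}^{2n}),
\qquad
\mathbb{C} \times E \to Z, \quad (a, u^+, u^-) \mapsto \oplus_a(u^+, u^-).
\]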
36:01
Does that make sense? Just a disjoint union, then? Well, none of them are contained in each other, so yeah, I mean, I don't think it matters that much, but maybe I'm wrong; I can put a disjoint union in there. You can think of this as fibered over the gluing parameter a, right? That's true, yeah, that's true also.
36:23
Any further questions? Yeah. Why are you taking α to be bounded by three? Right: because I want to try and follow the Gromov-Witten paper as closely as possible, and they do that because these functions here then end up being in H³, and that guarantees that they're C¹. And C¹ is important once you want to put on transverse constraints, these sort of transverse hypersurfaces, right? Further questions? Why not just take it to be much bigger, and then not worry? How much bigger? I don't know, like arbitrarily large in a way. You can, you can.
37:01
Arbitrarily large in what sense? H-one-billion. I see, I see: this number, yeah, that's a minimum. Yeah, this works, although I would imagine that that might change things in terms of... It carries a scale anyway, so H-one-billion exists there also now.
37:23
Yeah, yeah, I mean I wouldn't be surprised though if somewhere along the lines making that choice forces you to do some extra work. Say for instance proving convergence in terms of Gromov compactness. Now you have to make sure that things converge in sort of, it's not enough to even sort of converge in sort of low regularity,
37:40
they have to converge in higher regularity. I mean, I don't know how the argument goes exactly in the polyfold framework. It's not clear to me, I think, but in general though the point is, you want this to be as low regularity as possible, just to make sure you're capturing all your curves. Cuz otherwise you could run into the same mistake that you already suggested, which is put in a double exponential and now a priori you're sort of excluding something.
38:01
So, keep things as low as you can: by making this large, you're restricting your ambient space, and just in general, being cavalier about this, you might lose some information. In this particular case, yeah, you're right, probably it's not a problem. Yeah, you have to make a choice; this way it's easier to write than to do it again.
38:21
Could be. Wisdom, yes. So just to recap, these cylinders, right? So what was it again? You had two nodal discs included in the interior and these represent polar coordinates near the nodes. So what happened was, right, so we started with a pair of nodal discs and
38:45
then from that you have holomorphic coordinates around each one of them that you fix and that gives rise to these two half cylinders. And then those two half cylinders plus a gluing parameter A gives you ZA, which is the sort of cylinder of finite modulus. And then from that, once we had this sort of domain, we also wanted to find if we
39:03
had maps defined on that pair of nodal discs, we want to know what the corresponding map defined on this Z_a should be. And we arrived at that with this formula right here, right? And now, of course, we said that there are some ideas running around here. And so one is to use this pre-gluing map to define a neighborhood
39:20
of a nodal map. And so what I'm saying is, in some sense, this is sort of part of our neighborhood. This ends up being part of your neighborhood of a nodal map. And I wanted to be able to sort of make this precise here, right? And we had the second idea, which is sort of right but wrong, which for the most part we should think of as wrong and I'll show you why.
39:43
Is to use this corresponding pre-gluing map to sort of say that, I mean, if this pre-gluing map defines a topology, that's essentially telling you that you're finding all nearby, you're finding all nearby non-nodal curves to a given nodal one using the pre-gluing map. So if this defines topology for us, right, which it does,
40:03
then you're finding all nearby maps. So why not try to make use of this pre-gluing then to actually just parametrize all your nearby problems, or sorry, all your nearby non-nodal maps. I thought those, the pre-gluing maps are not solutions, they're not. Correct, correct. So which of my statements is now confusing?
40:25
So why are those useful curves to consider? I think it's that when you say a neighborhood of a nodal map, you really mean still in the ambient space. Still in the ambient space, absolutely. Not of a nodal pseudo-holomorphic map. Right. Yeah, I mean... Also, to get the neighborhood, you have to complete.
40:41
See, yeah, it's just a cylinder. You have to sort of add in the disks that you forgot about, right? That's true, although I would say that the most of today's talk is to sort of try and forget as much topology as possible and sort of, it's sort of- I have a neighborhood of- A nodal map, a nodal map. From what, right?
41:01
I mean, I could have changed my definition from a pair of half-cylinders to a node and two disks. Okay. Is it completely obvious that if I then want to discuss neighborhoods of things with many nodes, I'm not going to run into additional trouble, with different errors from the different nodes multiplying? No; I mean, you can see from this definition of pre-gluing here that it's purely a local phenomenon. And your nodes are always sort of a bounded distance away from each other; a thick-thin type decomposition sort of tells you this. And when you make it holomorphic, it will not be so local?
41:41
That's a different question. So at this point, all we're, yeah, I mean, this issue sort of, this issue right- There are no holomorphic maps here. Yeah, nothing's holomorphic. I'm building ambient spaces, right? I mean, in some sense, that's all we're doing for this entire first week is building ambient spaces, right? And with appropriate bundles and structure and so forth.
42:02
And so a benefit is that, yeah, we don't run into these sort of, we don't run into these problems of, what, associativity in terms of gluing, for instance. Good. So I have one more question. And the C is there to make, like, to account for the shift? Because it's only in the alpha equal to zero case, right?
42:20
All right, so what's going on here? So that's actually a good point. If I don't put this constant c here, then this function space is essentially the same one as the previous one, modulo truncating your domain to half cylinders. So why is it the case that we have this c in here, and what does that do for us? Well, if you think, okay, I've got a nodal map (sorry, a pair of nodal disks) as my domain, and I think about maps from there: for simplicity, once you're writing down problems, you're going to assume, say, that you're basing your problem at a base map where the image of that node goes to zero. But then you're going to want that node to move around. And that's what this c allows for, basically, right?
43:03
And so yes, I mean, if I were trying to construct purely toy problems for this lecture, I could have said, okay, let's kill this. But I really wanna keep things as close as possible to Gromov-Witten, cuz that's sort of, I mean, I think that should enable Nate to recycle a lot of the stuff that I've done and even write down charts in that case. So now I wanna address this point up here, which is the second one.
43:24
It's right but wrong, and I want to emphasize at the moment that this idea is wrong for sure. And so why is this idea wrong for sure? Injectivity: so that's exactly right. So Katrin even mentioned this yesterday as well. When you look at this problem here, when you would try to do this,
43:43
well, if you'd wanna use this pre-gluing map to build a chart or parametrization or whatnot, at the very least, you'd want it to be injective, and we have a problem that because of the way that this pre-gluing map is defined, it sort of truncates your domain. You'd sort of lose a ton of information.
44:01
And so consequently, this map ends up being infinity to one in general. General being A non-zero, right? And so an infinity to one map is a terrible idea then to use as a parametrizing map. All right, so where's the, in this case too, you're saying-
44:20
Yeah, yeah. We lose data? Yeah, absolutely. Where's my hook? I guess I get to be a pirate for my talk, too. All right, so let me move this down so that we can still see this definition up here. You can see that what's going on here, right? So this bit here is U plus, so
44:42
u⁺ is defined, say, certainly on this region here, and β in this region here (or this region here, I should really say) is one. So u⁺ is getting weight one, but 1 - β is zero in this region. So your map defined on this region right here is just u⁺. And the same argument sort of tells you that on this region here,
45:00
it's just U minus. But when it's just U minus, then U plus is defined on all of this region here. So all this information beyond here is just being killed, it's gone. Right, it's being killed because this is a cutoff function. I erased it, but it looks like this, right? It cuts off U plus after some finite amount of time, and
45:20
then we just, whatever the map is in this region, right? That make sense? Good, okay. And it's obvious to see that there is no way to not lose information when you do this? From this setup, what we've done so far, I mean, you're absolutely forced to lose information. You always have to truncate here.
45:40
I mean, how can you, yes, it's always gonna be the case that you lose information from this setup. Actually, I can- Crap. No, no, no, no, no, this is great. No, cuz I got really confused about this, cuz there's another way of making pre-gluing maps by which you don't lose information. Which is you rescale everything, like that whole entire half-infinite cylinder.
46:05
You can rescale that to like half of the cylinder, right? And then just attach the two to each other. So, puzzle, what's wrong with that as a chart map? That's a good question. Homework problem. Ah, that's excellent.
46:26
I love this. Okay, so I said right but wrong, and now we've seen why this idea here is wrong.
46:41
So then the question here is, why is it kind of right, though? So the key thing that fails here is that you sort of, in this setup anyway, is that you lose a bunch of information. And so then there's a fix that one would like to try to do. And that idea is to find a way
47:03
to keep track of lost information. And so the way we're gonna do that, I don't wanna write this. Okay, so I'm gonna define, okay, this should be good.
47:23
I'm gonna define another cylinder, and this is a doubly infinite cylinder. And I'm trying to draw it sort of suitably parallel to the other ones, because it's related. I'm gonna call this CA.
47:44
Actually, let me fix that. If I were only considering this finite portion of the cylinder, this is then essentially gonna be ZA. But if I wanna consider sort of the whole thing, the whole thing is called CA. Right, so ZA is still my finite cylinder. CA is my doubly infinite cylinder.
48:00
I can write a definition for C_a, and you should be able to guess it from what we've seen so far: ℝ_s × S¹_t, disjoint union ℝ_{s′} × S¹_{t′}, modulo the equivalence relation, which is the same as the one that we had before: (s, t) is related to (s′ + R, t′ + θ). And then I can define the minus gluing ⊖_a. Sorry, the minus gluing, or rather minus pre-gluing. What is it? Anti-pre-gluing, that's what we're calling it. It is the following.
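(For reference, the doubly infinite glued cylinder as I reconstruct it; Z_a sits inside it as the finite piece:)

\[
C_a = \bigl( \mathbb{R}_s \times S^1_t \ \sqcup\ \mathbb{R}_{s'} \times S^1_{t'} \bigr) \big/ \sim,
\qquad (s,t) \sim (s' + R,\ t' + \theta),
\]

with the same R and θ as before, and Z_a ⊂ C_a the subcylinder where s ∈ [0, R].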
49:26
Sure? Since you're not talking about holomorphic curves at all (it should be a minus, a minus sign), since you're not talking about holomorphic curves at all, there just is a map which looks like your broken map, namely the one that takes that central circle to a point. So why are we mucking around remaking the broken map? It's already there. What? I mean, we don't have... Right, we don't have holomorphic maps, so you are allowed to send a whole circle to a point.
50:02
Right, what I'm trying to do is motivate. I mean, here's what's, okay, so the goal of my talk, which I'm sort of running short on time here, is to sort of say, look, here's sort of a sequence of things. I mean, I wanted to introduce the pre-gluing map, which is sort of standard in this sort of analysis, right? So you're not gonna get away from doing some sort of pre-gluing map.
50:22
Okay, and then what I wanna do is say, okay, well, what we'd like to do is to use that map to make a parametrization sort of nearby something nodal, and then the reason that you can't do that is that there's information loss. So now what I wanna do is keep track of that information, and then the punchline of this is that if I keep track of that information in kind of a clever way, and the clever way is sort of the way that HWZ define it,
50:43
what ends up happening is you land in this weird subset of a scale Banach space, as it turns out. It's an incredibly strange subset. I mean, it's fairly difficult: jumping dimensions and so forth. But nevertheless, it has a straightforward smooth structure on it.
51:01
And this provides, this then, with this, you can then build essentially something like a manifold, right, which is a smooth manifold, SC smooth manifold in some sense, which has enough structure to, I mean, it has a differentiable structure. You can build a Fredholm theory on it, et cetera, et cetera. And so- Is that any way to answer my question?
51:20
Yes, because what you're suggesting is... I'm going to say: what next? How do you build a Fredholm theory on your problem? How do you develop a perturbation theory on this? How do you prove regularization with any other option that you're going to try? And the point is that I'm following HWZ, so I know all that stuff is going to appear. If you want to make a change at some point, then I'm going to say: okay, you've got another 1,000 pages to write. And that's how I would think of it. But Dusa probably has a more polite answer. If you have just a juxtaposition, you're just saying: on one part, please use u⁺; on the other half, use u⁻; and don't worry about those converging to the same thing, assuming they both go to c. Then you lose control of the analysis, I think.
52:02
I think that's the problem. And the whole idea is you've got to stay in control of the analysis, and these kinds of functions allow you to do that: they give you some kind of smooth transition where you can estimate what you want. I mean, in our book we use the plus gluing; the minus gluing you can use in the polyfold theory. But with the plus gluing, you can show that if you start off with holomorphic things, then the plus-glued things are sufficiently close to being holomorphic that a Newton process makes them holomorphic, and uniquely so. That requires ellipticity. So you need to remain, I mean, what he's saying is you need to remain in some analytic framework.
52:41
And if you don't like the one they've done, well, you're welcome to make your own. I think that's also a minus. Do you have a theta in it? It starts breaking; I think you subtract the mean. No, so this minus should factor through, so this becomes a plus. And this should be a plus. This is a minus? No, because you've got a minus times a minus, so it's one minus beta times the u⁺. No, you have to correct the u⁺: you have to subtract the mean value at the end. I have a minus from the last time you gave this talk. Oh, it's the beta; there's a beta here. Yeah, you're right. You're right. You want them to add. Yeah, thank you. What is what?
53:47
Like, what in just words do each of these pieces mean? Well, it's not obvious. I'm getting there, right. So yeah, even if you've seen pre-gluing before,
54:01
this should look terrible. And so what I want to do is try and make this a little bit more understandable. So this looks awful. And there's sort of no way around the fact that it looks awful.
54:20
But once you kind of open it up and sort of see where it comes from, it makes a decent amount of sense. Yeah? In the definition of ⊖_a, the first one is a u⁺, right? Sorry, what have I made a mistake on? You've dropped the plus superscript on the u there. Thank you. And it should be R over 2. It should not be R over 2? Yeah, yeah, that's right too. There's like a hidden matrix. Yeah, yeah, there is; I'm showing that in a second. It makes life a lot easier, right? So what do I want to say? All right, so I guess the first claim is that this keeps
55:01
track of the lost information. I'll make that precise in a second. If you kind of want to forget about the fact, one big simplification, and we'll see this in just a second, is that if you're just seeing this for the first time, just kill these terms, these average value terms. Just pretend they're not there and write the same thing down, and that at least gives you a toy problem
55:20
to tinker around with, right? I'll tell you why those terms are needed in a second. Was that any different from the thing that was there before? I mean, the one that's right above it, it's clearly different. I mean, I would say by some sort of obvious symmetry. Yeah, there is a lot of symmetry. The idea is to keep track of precisely the lost information. So whereas on one side you see a beta, you'd want to see a 1 minus beta here,
55:40
and where you see a 1 minus beta, you'd want to see a beta. So I mean, yeah, there is symmetry. It's designed precisely to keep track of lost information. I can show you how it does that. So this is giving you the other side? It's giving you the, it's keeping track of the portions that were killed by your cutoff functions previously. So you could have done like one interpolation or the other and this is the other? The point is that you, the point is that you take, the point is that you interpolate between,
56:01
you sort of, you have a domain which you might think, you have two domains sort of split into two pieces. And your pre-gluing map sort of keeps track of this information and this information and then interpolates nearby. So the minus gluing map keeps track of this information and this information and then interpolates in between. And my claim is that you can actually reconstruct the first two maps from the latter two.
56:23
May I make a suggestion? We need to end promptly at 3.15, so why don't we let Joel finish what he had in mind and then we have our wonderful TA who can answer questions. And of course I can be assaulted with questions as well. The voice upstairs.
56:42
So, I don't know, is that a Hofer reference? Oh, the voice upstairs, right, of course. Helmut's so tall, I just thought... okay.
57:00
So, here's a trick. And this trick is really, I think, sort of where this comes from. So, I'm going to write this as a matrix: I want to keep track of both pieces simultaneously. And then if I do that, well, what do I have? I have the matrix with rows (β_a, 1 - β_a) and (β_a - 1, β_a), composed with the diagonal matrix whose entries are the identity map and the shift map corresponding to a, applied to (u⁺, u⁻), plus the correction term (0, (1 - 2β_a) av(u⁺, u⁻)). So, now I have to tell you, though you can kind of guess: β_a(s) = β(s - R/2), and the shift map of a applied to some map u, evaluated at (s, t), is u(s - R, t - θ). Assuming I have... are those pluses or minuses? The β_a line. Indeed I did, thank you. Those are minuses, good.
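(For reference, the matrix form as I reconstruct it from the board; the placement of the average-value correction av(u⁺, u⁻), whose precise definition isn't spelled out in the talk, is my reading of the discussion:)

\[
\begin{pmatrix} \oplus_a(u^+,u^-) \\ \ominus_a(u^+,u^-) \end{pmatrix}
=
\begin{pmatrix} \beta_a & 1-\beta_a \\ \beta_a - 1 & \beta_a \end{pmatrix}
\begin{pmatrix} \mathrm{id} & 0 \\ 0 & \Phi_a \end{pmatrix}
\begin{pmatrix} u^+ \\ u^- \end{pmatrix}
+
\begin{pmatrix} 0 \\ (1 - 2\beta_a)\,\mathrm{av}(u^+,u^-) \end{pmatrix},
\]

with β_a(s) = β(s - R/2) and (Φ_a u)(s, t) = u(s - R, t - θ).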
58:20
Okay, writing it this way: okay, so, one, the first step is to just verify that this is true, that you have this equivalence this way. Second (this... I should have an av in here, right), second, I said: assume the av terms are zero, for a toy problem. And then if you do that, you just have this matrix. And the first thing that I want to point out is that this thing is just sort of obviously invertible, right? Because you compute the determinant, and it's β_a² + (1 - β_a)², which is bounded away from zero by the properties that these cutoff functions have. So, you invert this, and then this operator here (I've got an identity and I've got the shift operator) is also clearly invertible. So, writing it this way, it's clearly invertible, I think, for any fixed a.
59:00
And then okay, it's a touch more work and in the lecture notes, there's a homework problem and I provide the hints which allow you to walk through that this is still a bijection even with these terms added in. In fact, it's a linear bijection. So, I think what this sort of does in some sense anyways, at least cleans up this mess.
59:20
I prefer it to be written this way. And then, so then I can say, sorry.
59:52
It's in my lecture notes by the way. So, I'll run over for just a couple. Thank you.
01:00:01
Just like this, you're forgetting: you promised to say what the point of the averaging terms was. Right. So I thought I said it at least once, but I can say it again. The purpose of the averaging terms, then: right, we had a question up front about why we had this constant c in the function spaces I just erased. Oh good, right, I put them up here. We wanted to allow this constant c in here because you want it to be the case that your node can move around, right. And so consequently, when you write down this map, you need these averaging terms to still allow for the case that that node can move around. So you'll see why, sort of, hopefully. So if you do the same for Floer theory, it will not be there. Right, because essentially your orbit there in the end is fixed. Yeah. So what I want to do is say... so here's basically what happens.
01:01:19
Why is sort of any of this relevant for anything?
01:01:24
So we had this idea, this idea that we would like to use this plus gluing to build a chart for our neighborhoods. And the problem that we had was that it was infinity to one. So then I said: let's keep track of this lost information. So now you keep track of both the plus gluing and the minus gluing, and we said that that was a bijection. And that's good, because what that means then is that if I set O to be equal to the set of all triples (a, u⁺, u⁻) such that ⊖_a(u⁺, u⁻) = 0, then the plus gluing maps O into Z bijectively. This is for a not equal to zero. And so now at least we have a bijective correspondence. And once you have a bijective correspondence, then the next natural question is to say: well, is it possible to give this space any sort of differentiable structure?
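(For reference, the set just defined, as I reconstruct it:)

\[
O = \bigl\{ (a, u^+, u^-) \in \mathbb{C} \times E : \ominus_a(u^+, u^-) = 0 \bigr\},
\]

and the claim is that the plus gluing (a, u⁺, u⁻) ↦ ⊕_a(u⁺, u⁻) maps O into Z bijectively, for a ≠ 0.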
01:02:40
And you look at this and you say: okay, the zeros of this minus gluing map, that's going to be some set. Who knows what that looks like, right? And a priori it might be complicated. And so why should that have any sort of differentiable structure? And the magic is: it has one; or rather, let's say, it supports the SC calculus.
01:03:15
And in just a brief second, I can sort of, I can sort of tell you why.
01:03:21
This map here, we can call this map, say, the total box gluing ⊞_a(u⁺, u⁻). And then what we said was: well, this thing is invertible. And so now what you can do is you can define this map R(a, u⁺, u⁻): basically, you take the box gluing, you compose that with sort of the projection to the first factor, composed with the inverse of the box gluing, right? And this is for a not equal to zero; otherwise, sorry, this is (a, u⁺, u⁻) if a is equal to zero. So you define this map, and it ends up being the case (just to go over this very quickly here) that this map has a very nice property, which is that R composed with R is equal to R, and it's SC smooth.
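(For reference, the retraction as I reconstruct it; ⊞_a denotes the total gluing (⊕_a, ⊖_a), the box gluing of the talk, and pr₁(x, y) = (x, 0), as clarified in the questions below:)

\[
R(a, u^+, u^-) =
\begin{cases}
\bigl(a,\ \boxplus_a^{-1}\bigl(\mathrm{pr}_1(\boxplus_a(u^+, u^-))\bigr)\bigr), & a \neq 0, \\
(a, u^+, u^-), & a = 0,
\end{cases}
\]

so that R ∘ R = R, R is sc-smooth, and the image of R is exactly the set O above.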
01:04:45
And a consequence of this, and it's a very fast consequence, which I would go over if I wasn't already five minutes over my time, is that as a consequence of these two simple facts, you can then define a notion of an SC smooth function defined on, say, the image of R.
01:05:02
And the image of R is precisely the set of points where the minus pre-gluing is zero. And you do that simply by saying: F, defined from, say, O into some other space (well, I don't want to say any other space; say O′), is SC smooth, or say SC^k, precisely if F composed with R, on a slightly larger space, is SC^k. And so the point is, this R acts as a retraction, and images of retractions are called retracts. And if they also happen to be SC smooth,
01:05:41
then that guarantees that they sort of support an SC calculus. And I'm going to talk about this more at the beginning of my lecture tomorrow. But the whole point of this, right, the whole point of this is essentially, right, so I can summarize very briefly. We said that what we really wanted to do was have a parameterization or a chart
01:06:00
nearby our nodal maps. And what we did is we said, well, the pre-gluing map is infinity to one, so we made use of this minus gluing map in order to cut out a bunch of garbage and make that map one to one. But now what I said is if you're clever with this rewriting, you can see that that set can be written as the image of this SC smooth retraction,
01:06:20
which I'll talk about more next time. And as a consequence, supports the SC calculus, meaning we have a notion of a differentiable map from one set of this form to another. And these then provide the models, the local models for M polyfolds. But we'll talk more about this next time.
01:06:52
I can answer a couple questions, yeah. OK, we have a few minutes for questions. But again, remember that we have many voices upstairs.
01:07:03
Any questions? The fact that in a usual manifold the image of a smooth map which squares to itself is a manifold... Is a manifold, yes; that's a classical theorem. It's a submanifold. But in the SC calculus, it's something more general. Yeah, and for Banach manifolds it's also true. In the SC calculus, they can have multiple, varying dimensions, but they still have tangent spaces, because you've got a chain rule. TR controls this: you also take TR of TU, and that's the tangent space.
01:07:40
And it doesn't depend on the choice of the retraction. The image is the same, so it's a definition. So you can even have one-dimensional stuff, or finite-dimensional stuff in infinite-dimensional space, which has a varying dimension.
01:08:02
Any other questions? Sorry, just a question: pr₁ is what? Oh, a projection onto the first factor; though you also zero out the second term. I should have said that: pr₁(x, y) = (x, 0).
01:08:22
What's the image of R? So R, right, so R... Yeah, it retracts to this map? Well, it retracts to precisely O. Oh, yeah? It's a subset of ℂ × E. I'll make this more clear next time. I went really fast in the last few minutes; I'm sorry about that. But the claim is that it's O; the claim is that the image of R is O. Tinker with it. But O is sitting inside this funny space of maps on the domains Z_a? No, no. O is in ℂ × E; O should be a subset of ℂ × E. Have I written something that says otherwise? So there's a subset of ℂ × E which is a smooth retract, and if you do the pre-gluing, it's a bijection onto this union of maps on the arbitrarily long cylinders. Right, right. But is it, in fact, sitting inside ℂ × E as the kernel of some map? ℂ × E is not the kernel; it's the domain of a map. It's the domain of R, and O is the image of the retraction.
01:09:41
So actually, if you fiber it over ℂ, then fiber-wise it's a linear retraction. Yeah, for fixed a, yeah. But if a is 0, it's the identity, this is now saying, and if a is non-zero, it actually projects onto a proper subspace. Yes. Yeah, yeah. Yeah, I mean, that's what I was confused about; I was confused about a = 0. Otherwise, it's just that you're changing coordinates and doing a projection. OK. If a is 0, nothing happens. Yeah, it's just (u⁺, u⁻). Then, so the minus gluing when a is 0 is defined to be 0? The minus gluing when a is equal to 0... well, it can be defined as 0. Yeah, I guess it has some sort of definition in that case. C_a is empty in that case. Pardon? C_a is empty in that case, actually. Right, but there's exactly one map from the empty set, and since it takes values in the vector space, it's 0.
01:10:41
OK, fantastic. OK, with that, I do actually have to go. So thank you for your questions.