Symplectic non-squeezing for the cubic NLS on the plane
Formal metadata
Part: 9 (of 21)
License: CC Attribution 3.0 Unported: You may use, adapt, copy, distribute and transmit the work or content in changed or unchanged form for any legal purpose, provided the author/rights holder is credited in the manner specified.
Identifiers: 10.5446/20772 (DOI)
00:00
Transcript: English (automatically generated)
00:15
I'd like to start by thanking the organizers for inviting me to this program. This is my first time here and I've really enjoyed myself.
00:24
So let me start by giving you the rudiments of symplectic geometry needed to state our result. And of course I have to tell you about the classical symplectic non-squeezing result of Gromov. So starting slowly, what is a symplectic manifold?
00:48
A symplectic manifold is a manifold that is even dimensional, endowed with a symplectic form. So what does this form need in order to qualify as a symplectic form? It needs to be a closed, non-degenerate, alternating two-form.
01:15
Now the canonical example of a symplectic manifold is the following. The canonical example is nothing else but R to the power 2n.
01:31
So a point z in here, we'll write it as x comma p, where both x and p are elements of Rn,
01:42
and we view x as encoding the positions of n particles in R, and p, the momenta of those particles. Or if you want, the position and momentum of a single particle in Rn. For example, if n is 4, we could be dealing with one particle in R4, two particles in R2, or four particles in R.
02:02
What is the canonical symplectic form? It's dp1 wedge dx1 plus ... plus dpn wedge dxn. And it's clearly a 2-form, which is closed, non-degenerate, and anti-symmetric.
02:21
Now this symplectic form is the one that is responsible for Hamilton's equations in their traditional form. So if H is a Hamiltonian, let's say, from R2n to R,
02:43
then the flow generated by this Hamiltonian is defined as follows. Omega evaluated at z dot and a free slot — the dot above the z stands for the time derivative, so z is a function of time, taking values in phase space.
03:04
This is the differential of H at z: whatever test vector you put into the free slot over here, you put over here. Now if you decode what this means using the definition of omega, you recover exactly Hamilton's equations.
03:20
xj dot, the xj entry of z, is del H del pj, and pj dot is minus del H del xj. Now one can rephrase this canonical example as follows.
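Written out, the decoding is a short computation (sign conventions vary between sources as to which slot of omega receives the velocity; here we use the one for which the classical equations come out, namely dH(z)v = ω(v, ż)):

```latex
\omega(v,\dot z)
  = \sum_{j=1}^{n}\bigl(dp_j(v)\,dx_j(\dot z)-dx_j(v)\,dp_j(\dot z)\bigr)
  = \sum_{j=1}^{n}\bigl(\pi_j\,\dot x_j-\xi_j\,\dot p_j\bigr),
\qquad v=(\xi,\pi),
\]
while
\[
dH(z)\,v = \sum_{j=1}^{n}\Bigl(\tfrac{\partial H}{\partial x_j}\,\xi_j
                             +\tfrac{\partial H}{\partial p_j}\,\pi_j\Bigr).
\]
Matching the coefficients of $\xi_j$ and $\pi_j$ gives Hamilton's equations:
\[
\dot x_j = \frac{\partial H}{\partial p_j},
\qquad
\dot p_j = -\frac{\partial H}{\partial x_j}.
```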
03:48
We can regard r to the power 2n as cn, and we write the point z in here as x plus ip.
04:04
And we rewrite the canonical form as follows. Omega of z and zeta is minus the imaginary part of the inner product between z and zeta, inner product in cn. And our inner products are c linear in the second entry as per Dirac's convention.
04:23
So let me write this down. This is minus the imaginary part of the sum of zj bar zeta j. Now the advantage of rephrasing the canonical example in this way is that it easily generalises to a symplectic Hilbert space.
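For n = 1 one can check directly that this reproduces the canonical form, identifying a tangent vector z = x + ip with its components and using the convention (dp ∧ dx)(u, v) = dp(u) dx(v) − dx(u) dp(v):

```latex
z = x + ip,\quad \zeta = \xi + i\pi
\;\Longrightarrow\;
\langle z,\zeta\rangle = \bar z\,\zeta = (x\xi + p\pi) + i(x\pi - p\xi),
\]
so that
\[
-\operatorname{Im}\langle z,\zeta\rangle
 = p\xi - x\pi
 = dp(z)\,dx(\zeta) - dx(z)\,dp(\zeta)
 = (dp\wedge dx)(z,\zeta).
```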
04:42
So what is a symplectic Hilbert space? This is a Hilbert space H, which is a complex Hilbert space,
05:04
endowed with a symplectic form, defined as minus the imaginary part of the inner product between z and zeta, where now the inner product is in H. This is the setting in which the nonlinear Schrodinger equation can be seen to be Hamiltonian,
05:25
with the underlying symplectic Hilbert space being L2. I will tell you more about that soon. For now, let me just retreat to the finite dimensional setting, and explain to you what the symplectic non-squeezing result of Gromov is.
05:51
Well, Hamilton's equations have structure. So let me write here: Hamiltonian flows preserve the symplectic form omega.
06:09
And to introduce one piece of vocabulary, that is, they are symplectomorphisms.
06:27
So what is a symplectomorphism? It's a diffeomorphism that preserves the symplectic form.
06:42
How do we write that down mathematically? What does that mean? It means that if you look at the Hamiltonian flow at some time t, then the pullback of the symplectic form via the Hamiltonian flow at time t is just the symplectic form. And that is true for all the times t.
07:03
It is convenient, perhaps more enlightening, to write this down in integral form. OK, so you have a question? OK, so in integral form, we can rephrase this relation like this.
07:20
Here are two copies of Cn. And the Hamiltonian flow at time t takes one to another. Now, if you have a two-dimensional surface S over here, we take every single point on the two-dimensional surface, and we flow it according to the Hamiltonian flow at time t.
07:43
So what we recover is another surface like this. And this relation over here is nothing other than the following. The integral over S of omega is equal to the integral of omega over the image of the surface S under the Hamiltonian flow.
08:05
Now, what does that mean for us? Well, if the complex dimension n is equal to 1, then omega is a volume form.
08:23
It's, after all, a non-degenerate 2-form. So what we see in this case is that Hamiltonian flows preserve area.
08:47
So the area of S equals the area of the image of S under the Hamiltonian flow at time t. Now, if the complex dimension n is strictly bigger than 1,
09:01
then omega wedged with itself n times is a volume form, as it is a non-degenerate 2n-form. Now, because Hamiltonian flows preserve the symplectic form omega, they are going to preserve this volume form.
09:22
So what we recover is the following theorem of Liouville, which is that Hamiltonian flows preserve phase space volume.
09:49
So this leads us to the following natural question. Is preservation of volume the only obstruction for the existence of a symplectomorphism? OK, so let's write it down.
10:01
Question: is preservation of volume the only obstruction for the existence of a symplectomorphism?
10:33
OK, so what do I mean by that with the picture? If I have two blobs, OK, so here is blob number 1, and here is blob number 2, right here,
10:43
and I know that the volume of blob number 1 is equal to the volume of blob number 2, does there exist a symplectomorphism between the two blobs? Now, Moser proved that if I have two blobs of the same volume,
11:11
with the property that there is a diffeomorphism between them, that is, they have the same topology, then there is a diffeomorphism that preserves volume.
11:20
Now, if the complex dimension n is equal to 1, and I have two blobs of the same volume and a diffeomorphism between them, then Moser tells me that I have a diffeomorphism that preserves the volume, that is, the symplectic form.
11:43
By definition, that is a symplectomorphism. So, I mean, the volume already is preserved. Yes, that means, you know, mapping every single patch in a volume-preserving way. So you say that there exists, then... If there is a diffeomorphism, then there is one that preserves the volume.
12:06
Well, on surfaces, right? So, the answer in this case is yes, and it is due to Moser. If, however, the complex dimension is strictly bigger than 1,
12:22
then the answer is no, right? And this is Gromov's symplectic non-squeezing theorem. So let me tell you what that is. Okay, so, I can't do it.
12:43
That is too far for me to write on. And then I will never reach it again. So, Gromov's theorem. What does it say? Well, imagine that we have a ball centered at z star of radius capital R.
13:06
So this is a ball in Cn. And imagine that we have a cylinder, which I'm going to write like this: C little r of alpha and l.
13:21
So the parameter little r, the radius of the cylinder, is positive. Alpha, the center over here, is a complex number. And l is an element of Cn, which is normalized to have length 1. So what is this cylinder? Well, it's the points z in Cn with the property that if I take the inner product of z with l, minus alpha, this is smaller or equal to little r.
13:50
So what does that mean? Well, l determines a subspace of one complex dimension. Or if you want two real dimensions.
14:01
So what do I do over here? I take z, I project it on that subspace and I say, well, it needs to live within little r of this center over here. Now imagine that I have a symplectomorphism, phi.
14:24
Now if phi of the ball, if phi takes the ball inside the cylinder, then necessarily little r is bigger or equal than capital R.
14:42
So in other words, the symplectomorphism cannot squeeze a ball inside a thinner cylinder, despite the fact that this has finite volume and this has infinite volume. So this is the point where people like to draw the analogy with the symplectic camel, which cannot squeeze through the eye of the needle.
15:04
Now this is not, if you have seen Gromov's theorem before, this is not exactly its traditional formulation. To obtain the traditional formulation, you simply take l to be e1, the first vector in the standard basis for Cn.
15:29
Because in that case, the cylinder becomes, let me write it like this, the points z in Cn with the property that x1 minus the real part of alpha, squared, plus p1 minus the imaginary part of alpha, squared, is smaller or equal than r squared.
15:49
This is the way Gromov's theorem is typically written. However, the two formulations are entirely equivalent. So how do we see that? Well, because l is normalized to have length 1, there is a unitary map that takes e1 to l.
16:18
And unitary maps preserve the inner product, which means that they preserve the symplectic form.
16:23
So if there is a symplectomorphism taking the ball inside this cylinder, then composing it with the unitary map, you get a symplectomorphism that maps the ball inside this cylinder. And the other way around.
16:42
Now, before I move on to the NLS setting, I would like to give you a few examples of squeezing. They are not going to be counterexamples to Gromov's theorem. They just serve to better appreciate the hypothesis of Gromov's theorem. So examples of squeezing.
17:12
So there are really two remarks that I want to make over here. The first remark is that one might imagine that the reason why one cannot squeeze the ball inside
17:23
the thinner cylinder is because for some reason the ball is very fat in the x1, p1 direction. However, that is not the case. And here is an example. That is not the right way to think about it. Let's consider the Hamiltonian H to be minus p1 x2 plus p2 x1.
17:48
Then the flow generated by this Hamiltonian — you can write down Hamilton's equations if you want — takes the following form. If z is the flow, then z dot becomes 0, minus 1, 1, 0, applied to z.
18:07
So this is nothing else but a rotation. So z of t can be written explicitly as cosine of t, minus sine of t, sine of t, cosine of t, applied to the initial data at time 0.
18:25
So why is that good news or bad news depending on how you want to think about it? Well, if you take as initial data lying inside an ellipsoid, which is very fat in the x1,
18:40
p1 direction but thin in the x2, p2 direction, and you flow it for time pi over 2, then what you recover is the same ellipsoid, basically just rotated. It's an ellipsoid which is very fat in the x2, p2 direction but very thin in the x1, p1 direction. So in particular, you have flown this fat ellipsoid, if you want, inside a thinner cylinder.
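This rotation is easy to sanity-check numerically. A minimal sketch, not from the talk — the specific values 10 and 0.1 are illustrative choices for a point that is fat in the (x1, p1)-plane and thin in (x2, p2):

```python
import math

def flow(z1, z2, t):
    # Exact flow of H = -p1*x2 + p2*x1 in complex coordinates
    # z_j = x_j + i*p_j: Hamilton's equations reduce to
    # z1' = -z2, z2' = z1, i.e. a rigid rotation in the (z1, z2)-plane.
    c, s = math.cos(t), math.sin(t)
    return c * z1 - s * z2, s * z1 + c * z2

# Start fat in the (x1, p1)-direction, thin in (x2, p2):
z1, z2 = 10 + 0j, 0.1 + 0j
w1, w2 = flow(z1, z2, math.pi / 2)

# A quarter turn exchanges the two planes (up to sign): the image is now
# thin in (x1, p1), so it fits inside a thin cylinder over that plane,
# while |z1|^2 + |z2|^2 is exactly preserved.
```

The point of the example is visible in the output: after time pi/2 the first complex coordinate is small and the second is large, with the sum of squared moduli unchanged.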
19:08
So the reason why one cannot squeeze the ball inside the thinner cylinder has nothing to do with the fatness of the ball in the x1, p1 direction; it is rather a global property. In fact, the ball is fat in all directions.
19:24
Now the second remark I want to make is the following.
19:53
Remark number 2 is that it's essential to use conjugate coordinates when defining the cylinder, an example of which is the pair x1, p1.
20:34
How do we see that? Well, you could ask, could it be possible to prove Gromov's theorem if we choose as coordinates x1 and x2, the positions of two different particles?
20:51
Well, the answer is no, and here is an example. So we cannot choose x1, x2.
21:00
How do you see that? Well, simply consider the Hamiltonian H, which is minus x1, p1, minus x2, p2. Then what is the flow for this? Well, it's sort of independent in x1, p1 from x2 and p2, so let me just draw a picture.
21:20
What happens over here? Well, the flow looks like this, right? You take an initial data, this is how it flows. So in particular, you see that it squeezes the x1 direction. If you start with the initial data in a ball over here, then as time goes by, it's going to move into sets like this. And similarly, in p2, x2, it's exactly the same picture.
21:44
So it squeezes the x1 and the x2 direction, so you cannot prove Gromov's theorem with these coordinates for the cylinder. But then you can ask, well, what if I use the position coordinate of one particle and the momentum coordinate of another particle? Could I have squeezing again?
22:01
I mean, could I prove non-squeezing then? And the answer is no. So I cannot choose x1, p2, let's say. Simply take the Hamiltonian to be, well, leave it the same in x1, p1, but reverse time in x2, p2.
22:26
Then it squeezes x1 for exactly the same reason as before, but it does the reverse in the x2, p2 plane. So if you start with a ball of initial data, then as time goes by, it's going to squeeze not the x2 direction but the p2 direction.
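The second squeezing example can also be checked explicitly. A sketch, not from the talk, using the closed-form flow of H = -x1 p1 + x2 p2 (the time t = log 10 is an illustrative choice):

```python
import math

def flow(x1, p1, x2, p2, t):
    # Exact flow of H = -x1*p1 + x2*p2: Hamilton's equations give
    # x1' = -x1, p1' = p1, x2' = x2, p2' = -p2,
    # so x1 and p2 contract while p1 and x2 expand.
    return (x1 * math.exp(-t), p1 * math.exp(t),
            x2 * math.exp(t), p2 * math.exp(-t))

x1, p1, x2, p2 = flow(1.0, 1.0, 1.0, 1.0, math.log(10))

# Both x1 and p2 have been squeezed by a factor of 10, even though the
# area in each (x_j, p_j)-plane, and hence phase-space volume, is preserved.
```

So a "cylinder" defined using the non-conjugate pair (x1, p2) can be squeezed into arbitrarily: no Gromov-type theorem can hold for such coordinates.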
22:50
So that's what I wanted to say about squeezing. If there are no questions, I would like to move on to the NLS setting.
23:14
So the nonlinear Schrodinger equation that we will consider is the following.
23:21
i times the partial derivative with respect to time, plus the Laplacian, of u is equal to the absolute value of u to the p, times u. The power p is assumed to be positive. And a solution to this equation is a complex valued function of time and space. Time is going to be real, but we can pose this equation either on Euclidean space or on the d-dimensional torus.
23:47
This is Rd mod Zd. And the dimension d is assumed to be bigger or equal to 1. Of course, you can pose NLS on other manifolds, but for us, this is enough for today. We're only going to consider these two cases. Now, this equation is Hamiltonian.
24:04
The Hamiltonian is H of u, the integral of half of the gradient of u squared, plus 1 over p plus 2 times the absolute value of u to the p plus 2, dx.
24:20
And the symplectic form with respect to which NLS is seen to be Hamiltonian with this Hamiltonian is the following. Omega defined on L2, let's stick to rd for now, cross L2 of rd with real values, omega of u and v.
24:40
So remember what it was? It was minus the imaginary part of the inner product between u and v in the underlying symplectic Hilbert space, which for us is L2. Our inner products were C linear in the second entry. I'm going to move the bar from u to v, and I'll get rid of this minus over here. So I'm going to write this as the imaginary part of uv bar dx.
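As a quick sanity check that this Hamiltonian and this symplectic form really do produce NLS, one can compute the derivative of H and integrate by parts (using the same slot convention as before, dH(u)v = ω(v, u̇), which is the one that makes the stated equation come out):

```latex
dH(u)\,v
 = \operatorname{Re}\!\int_{\mathbb R^d}\bigl(\nabla u\cdot\nabla\bar v
    + |u|^{p}u\,\bar v\bigr)\,dx
 = \operatorname{Re}\!\int_{\mathbb R^d}\bigl(-\Delta u + |u|^{p}u\bigr)\bar v\,dx .
\]
Writing $\dot u = -i\bigl(-\Delta u + |u|^{p}u\bigr)$, we check
\[
\omega(v,\dot u)
 = \operatorname{Im}\!\int v\,\overline{\dot u}\,dx
 = \operatorname{Re}\!\int \bigl(-\Delta u + |u|^{p}u\bigr)\bar v\,dx
 = dH(u)\,v,
\]
which is exactly
\[
i\,\partial_t u + \Delta u = |u|^{p}u .
```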
25:04
So what do I want? Well, I want a symplectic non-squeezing result for NLS.
25:21
Let's state it. So we want the following fact.
25:40
If we have a ball B centered at z star of radius capital R, living inside the symplectic Hilbert space L2, and we have a cylinder C little r of alpha and l — little r is positive, alpha is a complex number,
26:02
l is an element of the Hilbert space which is normalized to have length 1. And what are these? These are the functions z in L2 with the property that the inner product between z and l — if you project z on l — lies within a disk of radius little r around alpha.
26:21
We want to prove that if we're looking at the NLS flow at time capital T — let's say T is some positive number — and this flow at time capital T maps the ball inside the cylinder,
26:42
then necessarily little r is bigger or equal than capital R. Exactly the same statement as before. Now, we see that even by stating the problem, we immediately uncover an issue that needs addressing. Namely, what are we doing? We want to say that for all initial data in this ball, we can define the flow up to time T.
27:07
And this was just some arbitrary ball, so the question arises, when is NLS well-posed on L2? Well, nowadays we know the answer to that question very, very well. So NLS is well-posed on L2 of Rd precisely when p is smaller or equal than 4 over the dimension d.
27:34
So how does that work? Let me just quickly review this. If p is strictly less than 4 over d, then the problem is subcritical.
27:44
And using contraction mapping together with Strichartz estimates, one can construct a solution locally in time, with the time of well-posedness being bounded from below by a negative power of the L2 norm of the initial data.
28:01
So using this and iteration, one can immediately construct global-in-time solutions. In particular, the flow is going to be defined for all times t. So how do you do that? Well, you start with your initial data at time t equals 0, and you run contraction mapping together with Strichartz estimates, and you solve the problem. You construct the solution up to this time of well-posedness over here.
28:22
But now when you sample your solution over here, it has exactly the same mass, exactly the same L2 norm, as at the initial time, because mass is conserved. So you start your contraction mapping over here, and you solve it for another time of well-posedness. And you keep on going, constructing a global solution. Now, if, however, the power p is 4 over d, then the problem becomes critical.
28:48
And in this case, the time of local well-posedness for which you can solve the equation is a function of the initial data itself. The scaling symmetry tells you that this time cannot depend on the L2 norm of the initial data.
29:05
In fact, it depends on how concentrated the initial data is, with the intuition that the more concentrated the initial data, the shorter the time of existence. So this naive procedure over here is not going to allow you to construct global in time solutions, because maybe with every single iteration, the solution gets more concentrated,
29:22
so you can only solve it for shorter amounts of time. And this naive procedure will not help construct global solutions. So in this case, global well-posedness is a deep result of Dodson. And it says the following thing. For arbitrary initial data, u0 in L2, there exists a unique global solution, u, to the mass-critical NLS.
29:58
So i del t plus Laplacian, u, is the absolute value of u to the 4 over d, times u.
30:04
The initial data is u0. And this solution obeys uniform space-time bounds. So the integral over R, the integral over Rd, of the absolute value of u to the power 2 times d plus 2, over d, dx dt,
30:23
is bounded by a constant that depends only on the mass of the initial data. So in particular, this theorem allows us to consider flows for arbitrary amounts of time. Oh yes, this is rd.
30:45
Thank you. Thanks. OK. So to make life interesting, we're going to consider the non-squeezing result for the mass critical problem.
31:05
So we're going to work with the mass-critical NLS. And to keep notation simple, we're just going to look at the cubic NLS in two dimensions, which is mass critical. Although the method that I'm going to describe applies equally well to all the other dimensions.
31:26
So from now on, NLS for us is just the cubic NLS on R2.
31:42
And the theorem I want to talk about is the following. This is joint work with Rowan Killip and Xiaoyi Zhang. And it says the following thing. Assume that you're given a bunch of parameters. So little r, capital R, these are positive numbers, finite.
32:04
Z star is an element of L2 of R2. l is an element of L2 of R2 of length 1. Alpha is a complex number, and T is a positive number, a positive time.
32:22
If the flow at time capital T of the ball — this is the ball in L2 — lands inside the cylinder, then necessarily little r is bigger or equal than capital R.
32:47
Now, the incentive for us to consider this problem came from attending a talk during the fall semester at MSRI, last fall. A talk in which Dana Mendelson presented her symplectic non-squeezing result for the cubic Klein-Gordon equation on the three-dimensional torus.
33:07
At the end of her talk, there were several questions from the audience about the existence of a symplectic non-squeezing result in infinite volume. And at that time, indeed until this theorem, all existing symplectic non-squeezing results were in the periodic setting.
33:25
And we're going to see that there is a good reason for that. So that got us interested in this problem. In particular, we were wondering whether there is an intrinsic obstruction to proving a symplectic non-squeezing result in infinite volume, or was that just an artifact of the methods used thus far?
33:41
So let me quickly review a little bit of history. The very first symplectic non-squeezing result for a Hamiltonian PDE is due to Kuksin, who proved symplectic non-squeezing results for flows of the form: a linear part,
34:05
where the linear part is assumed to have discrete spectrum, and plus a smooth compact perturbation. And he offered examples of such flows on tori.
34:27
Then the next entry in our history is Bourgain. He had two contributions. The first one was a paper in which he gave more examples of flows that fall under Kuksin's framework,
34:41
and another one in which he proved non-squeezing for the cubic NLS on the torus, which does not fall under Kuksin's framework. Then the I-team — Colliander, Keel, Staffilani, Takaoka and Tao — proved symplectic non-squeezing for KdV on the torus.
35:02
Recently there is a paper of Hong and Soonsik Kwon — in the audience — who re-proved this result of the I-team, dispensing with the use of the Miura transform, and they also proved symplectic non-squeezing for a system of coupled KdV equations, again on the torus.
35:23
We have a result by Roumégoux, who proved symplectic non-squeezing for the Benjamin-Bona-Mahony equation, a close relative of KdV, again on the torus, by proving that this equation falls under Kuksin's framework.
35:43
Finally, let me just write it here, we have a result by Mendelson for the cubic Klein-Gordon equation on the three-dimensional torus.
36:01
Mendelson's result is a critical result in the same way that ours is a critical result. Here is what I mean by that. The regularity needed to define the symplectic form coincides with the scaling-critical regularity for her equation. In that case, we only know local well-posedness for solutions
36:22
in the critical space, with the time depending on the profile of the initial data. In order to prove her symplectic non-squeezing results for arbitrary times T, she assumes that the cubic Klein-Gordon equation is globally well-posed, with uniform space-time bounds.
36:41
She assumes more than that. She actually assumes that various frequency-truncated versions of the equation are globally well-posed with uniform space-time bounds. That assumption is stronger than the initial assumption. Well-posedness for the frequency-truncated equations implies well-posedness for the cubic Klein-Gordon. But the reverse is not true.
37:01
I'll tell you more about that later, because it actually has a bearing on our theorem as well. So what is so special about this periodic setting? To formulate the problem, you don't need global well-posedness. Well, you say, give me a time, and I want to flow a ball, an arbitrary ball.
37:24
If you take arbitrary, okay. You want to imitate Gromov, right? You want the result as general as possible. You can always restrict the size of the ball. We're going to see soon why that is not the case in the critical case, because scaling tells you that you're going to have to do it uniformly, globally in time.
37:43
I'll point that out when I get to it. It's because you will need to prove stability of your finite-dimensional approximation. In order to do that, you're going to have to work globally in time.
38:07
What's special about the periodic setting? Very briefly, in the periodic setting,
38:25
what one can do is take the solution and express it as a superposition of plane waves. So this is just a sum. Let's say, in the periodic setting, Td: I'm summing over k's in Zd, u hat of k, e to the i k x.
38:47
So if one takes the solution and truncates it to finitely many frequencies, then one obtains a finite-dimensional Hamiltonian system. So truncating to frequencies smaller or equal than N,
39:11
either by using a sharp projection in the case of Bourgain, or a nice smooth projection in the case of the I-team, one gets a finite-dimensional system.
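In the periodic setting this projection is completely concrete. Here is a toy pure-Python illustration, not from the talk, of a sharp frequency cutoff in one dimension, just to make the projection onto frequencies |k| <= N tangible:

```python
import cmath

def fourier_coeffs(samples):
    # Discrete Fourier coefficients of a periodic sequence of length L.
    L = len(samples)
    return [sum(samples[m] * cmath.exp(-2j * cmath.pi * k * m / L)
                for m in range(L)) / L
            for k in range(L)]

def truncate(samples, N):
    # Sharp projection: keep only the modes with signed frequency |k| <= N.
    L = len(samples)
    coeffs = fourier_coeffs(samples)
    out = []
    for m in range(L):
        val = 0j
        for k in range(L):
            kk = k if k <= L // 2 else k - L  # signed frequency
            if abs(kk) <= N:
                val += coeffs[k] * cmath.exp(2j * cmath.pi * k * m / L)
        out.append(val)
    return out

L = 16
low = [cmath.exp(2j * cmath.pi * 1 * m / L) for m in range(L)]   # frequency 1
high = [cmath.exp(2j * cmath.pi * 5 * m / L) for m in range(L)]  # frequency 5

kept = truncate(low, 2)     # frequency 1 <= 2: passes through unchanged
killed = truncate(high, 2)  # frequency 5 > 2: annihilated
```

The range of this projection is a finite-dimensional subspace, which is exactly why the truncated flow is a finite-dimensional Hamiltonian system.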
39:27
But on finite-dimensional systems, Gromov's theorem applies. So Gromov is going to tell me that non-squeezing holds for the finite-dimensional system. So all that one needs to do is prove that solutions to the finite-dimensional system
39:43
are a good approximation to the solutions to the full equation, in order to deduce that non-squeezing there implies non-squeezing here. Moving from the periodic setting to the Euclidean setting, it's not clear how one should define a finite-dimensional approximation.
40:05
In fact, that was one of the key challenges that we had to overcome. How do you define a finite-dimensional approximation? Moreover, because the Laplacian on Euclidean space has absolutely continuous spectrum, one cannot even find a finite-dimensional subspace of L2 that is left invariant,
40:25
even by the linear flow. What did we do? Let me just sketch the proof.
40:42
We argue by contradiction. So assume that we have parameters, as in the theorem, but we take little r to be strictly less than capital R, and the flow, a time capital T, maps this ball inside the thinner cylinder.
41:06
We'd like to derive a contradiction. So what are we going to do? We are going to use a frequency-truncated large-box approximation to our NLS. I'm going to choose parameters: frequency scales going to infinity,
41:23
and spatial scales going to infinity, and I'm going to consider the following finite-dimensional system. Let me call it NLS n: i del t plus Laplacian, u n. What do I do? I insert a projection in the nonlinearity to frequencies smaller or equal than N n,
41:45
and I truncate every single copy of my solution to frequencies smaller or equal than N n, to make sure that this is Hamiltonian. I'll write down the Hamiltonian in just a second. Where do I pose this problem?
42:01
Well, t is going to be in R, but x is going to be in T n, which is R2 mod L n Z2. So I'm posing this problem on ever larger tori. I take more and more frequencies, I project on higher and higher frequencies, and I pose the problem on larger and larger tori.
42:23
I'm taking my initial data, u n at time 0, to be u 0 n, a function in H n. What is this? These are the functions f in L2 of this large torus,
42:41
with the property that they do not have frequencies larger than 2 N n. So I have made the system finite dimensional. As I promised you, this is a Hamiltonian system. Let me just write down quickly the Hamiltonian.
43:08
The Hamiltonian is H N of U N is half of gradient of U N squared plus 1 over 4,
43:25
the projection to frequencies smaller or equal than N n of u n, to the power 4. Now, because this is a finite-dimensional Hamiltonian system, Gromov's non-squeezing theorem applies.
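For the record, the Hamiltonian flow of this truncated Hamiltonian reproduces exactly the truncated equation above. A sketch of the computation, writing P for the projection to low frequencies (which is self-adjoint) and using the same slot convention as before:

```latex
H_n(u_n) = \int_{\mathbb T_n}\tfrac12\,|\nabla u_n|^2
          + \tfrac14\,\bigl|P u_n\bigr|^4\,dx,
\]
whose derivative, after integrating by parts and using $P^* = P$, is
\[
dH_n(u_n)\,v = \operatorname{Re}\!\int_{\mathbb T_n}
   \Bigl(-\Delta u_n + P\bigl(|P u_n|^2\,P u_n\bigr)\Bigr)\bar v\,dx,
\]
so Hamilton's equation gives
\[
i\,\partial_t u_n + \Delta u_n = P\bigl(|P u_n|^2\,P u_n\bigr).
```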
43:42
So one can find witnesses to non-squeezing. So I can find u 0 n living in the ball centered at z star of radius capital R, where this ball is in H n, such that the solution corresponding to this initial data,
44:06
to NLS n at time capital T, lies outside a cylinder. I'm going to take my cylinder to be a little bit bigger. So this is bigger than, let's say, little r plus capital R, over 2.
44:24
Now, you see that there is a little bit of fudging going on there. Z star and l are just some elements of L2. So in particular, there's no reason why z star should live over here. However, I can take z star and I can project it to frequencies smaller or equal than N n,
44:43
and as little n goes to infinity, I'm making a smaller and smaller error, by the monotone convergence theorem. And because I have a positive distance between little r and capital R, at some point that error becomes acceptable. So I'm not going to write the projection over here.
45:01
Just bear in mind that there is one, to make things perfectly true. Similarly, over here, by making an acceptable error, I can replace l by a compactly supported function; and as little n goes to infinity, that function eventually fits inside a fundamental domain of the torus T_n^2.
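As a small sanity check on this truncation step (not from the talk: a one-dimensional numerical stand-in, with a hypothetical profile playing the role of z_*), the L^2 error of the sharp cutoff P_{≤N} z is, by Plancherel, just the l^2 mass of the discarded spectral tail, and it tends to zero as N grows:

```python
import numpy as np

def truncate_frequencies(z_hat, N):
    """Sharp Fourier projection P_{<=N}: zero out all modes with |k| > N."""
    k = np.fft.fftfreq(len(z_hat), d=1.0 / len(z_hat))  # integer frequencies
    out = z_hat.copy()
    out[np.abs(k) > N] = 0.0
    return out

# A fixed L^2 profile (stand-in for z_*) with a smoothly decaying spectrum.
rng = np.random.default_rng(0)
n = 1024
k = np.fft.fftfreq(n, d=1.0 / n)
z_hat = (rng.normal(size=n) + 1j * rng.normal(size=n)) / (1.0 + np.abs(k)) ** 2

def relative_l2_error(N):
    # Plancherel: the L^2 error of the truncation is the l^2 norm of the tail.
    tail = z_hat - truncate_frequencies(z_hat, N)
    return np.linalg.norm(tail) / np.linalg.norm(z_hat)

errors = [relative_l2_error(N) for N in (4, 16, 64, 256)]
```

The positive gap between little r and capital R in the argument above is exactly what absorbs this vanishing error.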
45:24
So what do I want to do? I want to take these witnesses to non-squeezing, and I want to produce a witness to non-squeezing for our equation, the cubic NLS on the torus. On R^2, sorry. So what is the strategy? The naive strategy is the following.
45:43
Take your initial data. It's mapped by NLS_n to u_n at time T. And let's take weak limits. I recover u_0^infinity over here. I recover u_T^infinity over here.
46:02
And let us assume that we can prove somehow that these two weak limits are related by the NLS flow. What do I mean by that? That this function is the solution to the cubic NLS on R^2 with this initial data, at time capital T.
46:21
Now, if I can do that, then I am done. And the reason why is that the complement of the cylinder and the ball are both closed under weak limits. So these being witnesses to non-squeezing will imply that this is a witness to non-squeezing. All right, so can I make this strategy work?
46:44
Well, if you think about it for a few minutes, you will see that even the simple step of passing to the limit and extracting those weak limits over there is a little bit controversial. And the reason why is because every single element in the sequence and the limit itself live in different Hilbert spaces.
47:03
The ones above live in L^2 of the torus and the last one lives in L^2 of R^2. So how do I pass to the weak limit? Well, what can you do? If you have a function on the torus, how can you embed it as a function on the plane?
47:26
Let's say that this is T_n and this is our initial data, u_{0,n}. Well, what do you do? You cut the torus, you unwrap it, and you embed the solution in the plane. However, you have to be very careful where you cut the torus. You cannot cut the torus at a point where, let's say, the initial data has a bubble of concentration, a bubble of mass.
47:48
And the reason why is that if you cut it over there and you unwrap it on the real line, you get two half bubbles. And the solution to the equation with one bubble on the torus looks nothing like the solution on the plane with two half bubbles.
48:03
So you're going to have a lot of trouble proving that the weak limit is going to be a solution to NLS. Because of that, we have to cut at a point where the solution doesn't have a lot of mass. In fact, for the same reason that I just explained, not only do you
48:20
have to cut at a point where the initial data doesn't have a bubble of concentration, but also at a point where the solution after time capital T doesn't have one, for exactly the same reason. So where do we cut? Well, choosing the torus to be sufficiently large, by taking L_n to be really, really large, and using a pigeonhole principle, I can find a region of the torus where the initial data has tiny mass.
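The pigeonhole step can be illustrated with a toy computation (my own sketch, with an invented mass profile): chop the torus into M equal windows; at least one window carries no more than 1/M of the total mass, so that is a safe place to cut:

```python
import numpy as np

def low_mass_window(density, num_windows):
    """Pigeonhole: return the index of an equal-size window of a periodic mass
    distribution carrying no more than (total mass) / num_windows."""
    window_mass = np.array([c.sum() for c in np.array_split(density, num_windows)])
    i = int(window_mass.argmin())
    # The minimum is automatically below the average; that is the pigeonhole.
    assert window_mass[i] <= density.sum() / num_windows + 1e-12
    return i

# |u_0|^2 along a one-dimensional slice of a large torus:
# one concentrated bubble of mass near x = 0.3, plus a small background.
x = np.linspace(0.0, 1.0, 10_000, endpoint=False)
density = np.exp(-(((x - 0.3) / 0.01) ** 2)) + 1e-3
cut = low_mass_window(density, num_windows=50)
```

With 50 windows of width 0.02, the chosen window stays away from the bubble near 0.3 (which occupies windows 14 through 16), so cutting there never slices a bubble in half.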
48:45
And that's where we're going to cut. We're going to prove that we have an almost-finite speed of propagation for the equation: because the initial data doesn't have a lot of mass over here, the solution after time T is not going to have a lot of mass over here.
49:03
In order to do that, we're going to take the torus to be really, really large compared to the frequency scale N_n. In particular, L_n is going to be much larger than N_n times T, so that the solution doesn't have time to wrap around the torus and put a lot of mass over here.
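The heuristic behind the condition L_n ≫ N_n T is the group velocity bound for the linear flow (my paraphrase, not written on the board):

```latex
e^{it\Delta}\,e^{i x\cdot\xi} \;=\; e^{i(x\cdot\xi \,-\, t|\xi|^2)}
\quad\Longrightarrow\quad
v_{\mathrm{group}}(\xi) \;=\; \nabla_\xi |\xi|^2 \;=\; 2\xi .
```

So a solution with frequencies at most N_n propagates with speed at most 2N_n, and in time T its mass travels a distance of order N_n T, which is much smaller than L_n: it cannot wrap around the torus.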
49:21
But are you cutting in Fourier space or in physical space? Physical space. But don't you have huge velocities? Well, for the velocity, remember there's a projection to frequencies smaller than or equal to N_n in the nonlinearity. Eventually it will go to infinity. Yes, but L_n is going to go to infinity much faster.
49:44
That's the whole point: L_n is going to be much, much larger than N_n times the time T where you want to prove non-squeezing. All right, so that takes care of the weak limits. Now the next issue we have to deal with is stability.
50:05
This is where I owe you an answer. So what do I mean by stability of the finite-dimensional approximation? I mean that as you increase the quality of your approximation, you get uniform space-time bounds for the solutions.
50:22
And that is going to give you a fighting chance to prove that the limit is indeed a solution to NLS. Now this turns out to be a very hard problem, because of criticality. To make the discussion a little bit simpler, let us consider stability for this equation where we dispense with the geometry.
50:42
Rather than posing this equation on tori, let's just consider it on Euclidean space. Even then, it's non-trivial. So let us ask about stability for NLS_n, but posed on R^2.
51:02
Can we solve this problem? Now, because this equation has a scaling symmetry, you can use that scaling symmetry, and you will see that stability for NLS_n on R^2 is equivalent to uniform space-time bounds for the following equation.
51:20
I can replace the N_n by 1. Of course, in doing so there is a vestige of the scaling that I just used: I now have to find uniform space-time bounds globally in time.
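The scaling step being invoked can be written out as follows (my reconstruction, using the mass-critical scaling of the 2d cubic NLS). If u solves the frequency-truncated equation with cutoff N on R^2, set

```latex
v(t,x) \;:=\; N^{-1}\, u\!\left(N^{-2} t,\; N^{-1} x\right)
\quad\Longrightarrow\quad
i\,\partial_t v \;=\; -\Delta v + P_{\le 1}\!\bigl(|P_{\le 1} v|^2\, P_{\le 1} v\bigr),
\qquad \|v(0)\|_{L^2} \;=\; \|u(0)\|_{L^2}.
```

The frequency support of v is the support of u rescaled by 1/N, so the cutoff N becomes a cutoff 1, and in two dimensions the mass is scale-invariant. The vestige of the scaling is the time window: the interval [0, T] for u becomes [0, N^2 T] for v, and since N = N_n runs to infinity, a uniform bound on a fixed window is equivalent to a global-in-time bound for the cutoff-1 equation.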
51:41
Now, can we find uniform space-time bounds for this equation globally in time? Dodson's theorem dealt with the cubic NLS; what we're doing here is adding a projection to low frequencies. Surely you should be able to prove uniform space-time bounds for this equation? However, it turns out that this is a strictly harder equation.
52:02
Uniform space-time bounds for this equation actually imply Dodson's theorem, but the converse is not true. And the reason why is that one can embed solutions to the usual cubic NLS into the class of solutions to this equation by using scaling. So then you say: fine, you cannot use Dodson's theorem directly, but how about the proof, can you rescue the proof?
52:23
And the answer is no. The reason why is that these projections over here destroy the Morawetz monotonicity formula: they do not commute with the weights in the Morawetz formula, and there is no reason why the commutator should be small.
52:44
The previous authors also had to face this stability problem. Why was it easier before? What did they do? Well, in the sub-critical case, let's say these results,
53:01
the same method that gives well-posedness of the original equation also implies well-posedness and space-time bounds for the frequency-truncated equation. Basically, because one has the luxury of a Hölder inequality in time when deriving well-posedness. And what about the other critical result? What did Mendelson do?
53:25
Well, she assumed that the frequency-truncated equation is globally well-posed with uniform space-time bounds; this is one of her assumptions. So what do we do? Well, we developed a method to prove uniform space-time bounds, provided one modifies these projections slightly.
53:47
So what is the modification? They are no longer the usual Littlewood-Paley projections; rather, they are projections whose symbols decay very, very slowly. What is the advantage?
54:01
Well, there are two cases you can consider: either the initial data is very localized in frequency, or it is supported on a very large frequency band. If it is very localized in frequency, then it sees these projections basically as coupling constants, because they vary very slowly.
54:20
And because Dodson gives us uniform space-time bounds for the cubic NLS, we have space-time bounds for the cubic NLS with a coupling constant. Now what happens if the initial data is supported on a very large frequency band? Well, if it is supported on a large enough frequency band, then by the pigeonhole principle, somewhere in that frequency band I find a region carrying very tiny mass.
54:46
And I split the initial data into two bubbles, one to the left of the tiny-mass area and one to the right of it. Now, we use an induction on mass argument. How does that go?
55:02
Once I have split my initial data into two pieces, each of them has mass strictly less than the original mass I started with. So I can solve the problem globally in time with those two initial data, and I have uniform space-time bounds. Now, because the two initial data were well separated in frequency at the initial time, you can prove that the interaction between the two global solutions is weak.
55:24
And while the sum of the two global solutions is not a solution, it is almost a solution. So perturbation theory is going to give space-time bounds for the solution to this equation in that case. So: induction on mass when the data is supported on a very large frequency band, and simply Dodson's theorem when it is very localized in frequency.
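The frequency-splitting step can be sketched as follows (my own illustration on a discrete one-sided spectrum; the function names and band width are invented): locate the frequency band of least spectral mass, split the data at its midpoint, and check that each piece has strictly smaller mass, which is the hypothesis the induction needs:

```python
import numpy as np

def split_at_low_mass_band(z_hat, band_width):
    """Pigeonhole in frequency: find the sliding band of given width carrying
    the least spectral mass, then split z_hat at that band's midpoint into a
    low-frequency piece and a high-frequency piece."""
    power = np.abs(z_hat) ** 2
    csum = np.concatenate(([0.0], np.cumsum(power)))
    band_mass = csum[band_width:] - csum[:-band_width]   # mass of each band
    mid = int(band_mass.argmin()) + band_width // 2      # cut inside that band
    lo, hi = z_hat.copy(), z_hat.copy()
    lo[mid:] = 0.0
    hi[:mid] = 0.0
    return lo, hi

# Data supported on a large frequency band: two bubbles far apart in frequency.
freqs = np.arange(2048)
z_hat = (np.exp(-(((freqs - 100) / 20.0) ** 2))
         + np.exp(-(((freqs - 1900) / 20.0) ** 2)))
lo, hi = split_at_low_mass_band(z_hat, band_width=64)
mass = lambda w: np.linalg.norm(w) ** 2
```

By construction the supports are disjoint, so mass(lo) + mass(hi) equals mass(z_hat), while each piece carries strictly less than the total; the inductive hypothesis therefore applies to both pieces separately.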
55:46
Alright, so that's what we do. We come over here and we modify this projection simply by rescaling. So what we did so far, we obtained stability for this equation, but now posed on R2.
56:02
So what do I have to do? I have to take the solutions that live in the plane, wrap them around the torus, and prove that they are approximate solutions to the problem on the torus. Now, in order to have a perturbation theory good enough to do that, I need a perturbation theory in critical spaces.
56:29
Which means that I need Strichartz estimates at the critical regularity, L^2, right? Critical Strichartz estimates. However, Bourgain tells me that they fail: there are no Strichartz estimates at the critical regularity.
56:41
So what do we do? Well, we prove Strichartz estimates at the critical regularity, but they are again adapted to our parameters, L_n and N_n. If we take L_n to be much, much larger than N_n, then the solution doesn't have time to wrap around the torus, and you can prove Strichartz estimates in that setting. So: critical Strichartz estimates on the torus, if you want.
57:11
All right, so I'm a little bit out of time. Can I have two more minutes to finish? Yeah? All right, so the very last thing we have to do is we have to prove that these two weak limits over here are indeed related by NLS.
57:30
We basically have to prove that a weak limit of solutions to NLS is a strong solution to NLS. So what we need is well-posedness in the weak topology.
57:46
Now, in the sub-critical setting, well-posedness in the weak topology goes back to work of Kato. In the critical setting, the only result we were able to find in the literature is a result of Bahouri and Gérard, who proved weak well-posedness for the energy-critical wave equation.
58:04
And in order to do that, they used the concentration compactness principle they developed for that equation. Now let me take just one minute to explain why you need something as heavy as concentration compactness to do that. So consider the following scenario.
58:20
Let's say that in frequency your initial data looks like two bubbles, one supported around 0 and one supported around N_n, and N_n runs off to infinity. Then you pass to the weak limit. All you recover is the initial data that lives at the origin. So weakly, this converges to u_0^infinity.
58:44
Now, in the sub-critical setting, the high frequencies are weak. In particular, they are small in all the relevant norms. And it's not hard to prove that the solution with this initial data converges weakly to the solution with that initial data. Now, in the critical setting, all frequencies are equally strong.
59:02
So in particular, the initial data can have half its norm over here and half its norm over here. So in order to prove that the solution with this initial data converges weakly to the solution with that initial data, what we have to prove is that asymptotically, the solution to the problem with this initial data is the sum of two solutions,
59:24
one with this initial data and one with this initial data. And this is precisely what the linear profile decomposition does. It gives you an asymptotic principle of superposition for a nonlinear equation. Now, we also use concentration compactness to do that, but in our setting it's a little bit more complicated
59:46
because there's also a change of geometry. This problem is posed on the torus, while the problem we want to solve is posed on the whole plane. So there's a change in geometry. And of course there is a change in equation because the...
01:00:00
The Laplacian on the torus is nothing like the Laplacian on Euclidean space, okay? So thank you. Sorry for taking so long. Any questions, comments?
01:00:20
I have a comment, if no one minds, which is that there's a connection between your talk and the talk on Monday afternoon of Laurent Thomann, one of whose corollaries was growth of Sobolev norms. That's right. Right, that was actually the original motivation for Kuksin to prove his result, right?
01:00:41
He argued that the non-squeezing theorem measures, in a way, how weakly turbulent a flow can be, right? It proves that the energy cannot fully evacuate the low and middle frequencies, right? It cannot run to high frequencies. It can for one solution, but it cannot do it uniformly. Exactly. Not uniformly on balls.
01:01:01
Okay? Actually, can you elaborate on this? What exactly is the connection between the non-squeezing theorem and weak turbulence? If I'm allowed to make my comment: take a ball in L^2, and you'd like to ask, can I uniformly make the Sobolev norm grow?
01:01:24
So then you want to say, that's like asking: is there a map from this ball into the cylinder? Well, the cylinder constrains all the low frequencies to be less than little r,
01:01:44
but leaves all the high frequencies free. So can you map the ball of radius big R uniformly into a cylinder of radius little r, where the low frequencies are constrained by little r? The answer is: only if little r is bigger than big R. That is, you cannot uniformly
01:02:01
send energy to higher frequencies. Just take l to be a character, and you get that statement: the condition that the Fourier coefficient of the function at a single frequency lies within a disc, a smaller disc, is exactly a cylinder.
01:02:22
So you're asking whether you can say something better about the resolution of a single frequency, even if you're willing to give up on all the other frequencies. And the answer is no, right? Not uniformly on balls. But your result is actually quantitative.
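Concretely, taking l to be a character as suggested, the statement under discussion would read (my paraphrase, not a verbatim formula from the talk): for any center z_*, any target value alpha for the Fourier coefficient at a fixed frequency xi_0, any time T, and any radii 0 < r < R, the flow admits a witness escaping the cylinder:

```latex
\exists\, u_0 \in B_{L^2}(z_*, R)
\quad\text{such that}\quad
\bigl|\,\widehat{u(T)}(\xi_0) - \alpha\,\bigr| \;>\; r .
```

So no matter how the ball of data is chosen, one cannot uniformly pin down even a single Fourier mode at time T to accuracy better than the initial radius; this is the precise sense in which energy cannot be sent to high frequencies uniformly.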
01:02:40
For us it is quantitative, though not more quantitative than what I wrote. But you can think of this result on the plane as a statement about scattering as well, right? So what does scattering intuitively tell you? This equation scatters, right?
01:03:01
Dodson showed us that, for the cubic NLS on R^2. So intuitively, it means that the energy leaves any compact set; it just runs away to infinity. What the non-squeezing theorem says is that it cannot do this uniformly on balls. Any other questions or comments?
01:03:22
No? The wave operators are symplectic. So doesn't scattering reduce the question to a linear setting? That's not exactly the same non-squeezing, but... I'm sorry, what are you after?
01:03:41
When you have a symplectomorphism given by the wave operators, to the outgoing waves, right? Scattering linearizes the problem, and for a linear problem you have non-squeezing, because it's linear. So that gives some non-squeezing. In terms of scattering, yes.
01:04:01
Well, it completely distorts the ball and the cylinder. It distorts the ball and the cylinder, but actually non-squeezing should not just be about balls and cylinders: you should be able to take any set and the symplectic capacity of the set. So there is a more general picture, which I think is not impossible.
01:04:23
Well, it's not really clear how to define a symplectic capacity in infinite dimensions. I thought that symplectic geometers have been trying to do that for a while. But we do have some tools that you explained to us.
01:04:40
I can't say anything about arbitrary sets. Any other questions? No? Let's thank the speaker again.