
6/7 The energy critical wave equation


Formal Metadata

Title
6/7 The energy critical wave equation
Series Title
Part
6
Number of Parts
7
Author
License
CC Attribution 3.0 Unported:
You may use, modify, reproduce, distribute, and make the work or its content publicly available, in unchanged or modified form, for any legal purpose, provided that you credit the author/rights holder in the manner specified by them.
Identifiers
Publisher
Publication Year
Language

Content Metadata

Subject Area
Genre
Abstract
The theory of nonlinear dispersive equations has seen a tremendous development in the last 35 years. The initial works studied the behavior of special solutions such as traveling waves and solitons. Then, there was a systematic study of the well-posedness theory (in the sense of Hadamard) using extensively tools from harmonic analysis. This yielded many optimal results on the short-time well-posedness and small data global well-posedness of many classical problems. The last 25 years have seen a lot of interest in the study, for nonlinear dispersive equations, of the long-time behavior of solutions, for large data. Issues like blow-up, global existence, scattering and long-time asymptotic behavior have come to the forefront, especially in critical problems. In these lectures we will concentrate on the energy critical nonlinear wave equation, in the focusing case. The dynamics in the defocusing case were studied extensively in the period 1990-2000, culminating in the result that all large data in the energy space yield global solutions which scatter. The focusing case is very different since one can have finite time blow-up, even for solutions which remain bounded in the energy norm, and solutions which exist and remain bounded in the energy norm for all time, but do not scatter, for instance traveling wave solutions, and other fascinating nonlinear phenomena. In these lectures I will explain the progress in the last 10 years, in the program of obtaining a complete understanding of the dynamics of solutions which remain bounded in the energy space. This has recently led to a proof of soliton resolution, in the non-radial case, along a well-chosen sequence of times. This will be one of the highlights of the lectures. It is hoped that the results obtained for this equation will be a model for what to strive for in the study of other critical nonlinear dispersive equations.
Transcript: English (automatically generated)
So the last time we showed how to extract the scattering profile in the infinite-time case.
And now, today and next time, we will sketch the proof of the decomposition in the finite-time blowup case. So we have a solution that blows up at time 1, let's say, that remains bounded in H1 cross L2 up to time 1.
And we want to now produce a decomposition into solitons. So that's the task. So in the notes, there's a first step in which we prove the decomposition with a weaker error than what we'd like.
And then we will go through several stages to improve the error.
And in order to improve the error, sometimes we will have to slightly tune the sequence of times. And at some point we don't tune the sequence of times anymore, but then we show better and better properties of the error.
Okay? Now the first step is inspired by work on wave maps. This goes back to the work of Grillakis on wave maps.
And it's what we call a Morawetz estimate, or a Morawetz identity. The fact that there's a way to use ideas from wave maps for the energy critical wave equation
goes back to work that we did with Côte, Lawrie, and Schlag. So this is the approach that's presented in the notes. But today I'm going to present a different approach,
which I think is more concise conceptually, and it allows, I think, hopefully, for more progress as we move along. So because of that, we'll use the blackboard for the beginning.
And then in the notes is the other proof, so you will be able to read that. So, as for the blow-up time, we can always make it be one by scaling.
That's not a problem. So we're going to prove the following claim.
And we call this the crucial Morawetz estimate. Although in the new form in which we're going to do it,
we do not recognize this as coming from a Morawetz identity. And in fact, I think you can recover the usual Morawetz identities even in the wave map case from this point of view. So the claim is that we assume T plus equals to one,
and that the origin is a singular point.
So it's a point where even locally I cannot continue the solution anymore.
We introduced this notion earlier. So what it means is that it is not a regular point, and a regular point is a point where if you take integrals over small balls around that point, you get that to be uniformly small for all times up to one.
Those are the regular points. And this is a singular point, so it's a point where there is concentration. That's what that means. And let me assume that it's spaced by one from all the other singular points. Since there are finitely many, I can always pretend that.
This is just so as not to have to write more. It's not important. Then, there is a constant C, such that for zero less than T up to one,
no, maybe not, let's say a half. So this half is symbolic, and it depends on the fact that the other singular points are a distance further than one. Otherwise, you have to go sufficiently close to one.
So we have an estimate, and the estimate is that the integral from T to one of the integral of X less than one minus T of the following expression.
So we get a logarithmic estimate, but what's crucial here is that,
well, first of all, this integral is finite. That's already very good. What is interesting here is that the power there is less than one. So the original proof had the power three-quarters.
We can get the power one-half using this approach. Now, why is the power one-half interesting? Why is the power less than one interesting? It's because the finiteness of this, suppose that this was just bounded,
this inner integral. Then, of course, the integral up to one minus T would behave like the logarithm of one over one minus T to the power of one.
But somehow we're only getting the power one-half, so that means that in some sense the inner integral is vanishing at some rate as t tends to one. And this power is dimension-dependent.
So the number is different in dimension three, and this has to do with the scaling of the equation. That's exactly what it is. Now, in wave maps, this factor doesn't appear, and that's because there's no factor in front when you rescale wave maps.
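(Since the blackboard is not visible in the transcript, here is a plausible rendering of the claim; the signs are inferred from the self-similar computation later in the lecture and the endpoints from the remarks above, so both are assumptions, while the weight, the square, and the exponent one-half are as stated:

\[
\int_{1/2}^{T}\int_{|x|\le 1-t}\Big(\partial_t u-\frac{x\cdot\nabla u}{1-t}-\frac{u}{2(1-t)}\Big)^{2}\,\frac{dx\,dt}{1-t}\;\le\;C\Big(\log\frac{1}{1-T}\Big)^{1/2},\qquad \tfrac12\le T<1,
\]

the earlier argument giving the same bound with exponent three-quarters in place of one-half.)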
Okay? So that's how this thing is working. Now, the proof of this that I'm going to show now is a very, from my point of view, is a very satisfying one, because it uses the ideas that we've already seen
that come from the work of Merle and myself, where we showed that there cannot be a compact solution which has self-similar scaling. Okay? And if you recall the proof of that, which was given,
I don't know, a month ago or something like that, it used the introduction of self-similar coordinates and then some integration by parts. Now, the reason one could do that in the compact situation
was that the boundary values on this inverted cone, let me put the inverted cone here, the boundary values of a compact solution are forced to be zero. And therefore, we had some boundary terms that disappeared
because that was zero. Now, in this case, we can no longer assume that the thing is zero, so we have to somehow handle these boundary terms. And these boundary terms are huge. So we will see.
So I'm going to prove the claim. So the first step is what we call the energy flux estimate.
And we recall we have this solution that blows up at t equal to one. So this is t plus equal to one. We recall that there is a V of t which is a regular solution at t equal to one.
So this is not one that blows up, but it's one that moves continuously up to t plus equal to one
with the property that the support U of t minus V of t is contained in the inverted cone.
So that outside the inverted cone, our solution looks like a regular solution. And we saw that this was a consequence of finite speed of propagation, basically. So what you do is you consider the weak limit as you tend to one,
and you now solve the nonlinear wave equation with that initial data at time one. And that's the V. So we have the object.
Now, since V is a regular solution, we have two facts. Of course, the norm, let me just put it like that.
So the V arrow and U arrow notation means the pair U, d dt U and V, d dt V. OK, so it's just a shorthand. So this is finite, but moreover, the spacetime norm, the Strichartz norm, is finite.
Because this is a regular solution, so by the finite blow-up criterion,
that norm is finite, because it doesn't blow up. OK? Is this OK? In the last one, x is in the cone or everywhere?
No, x is everywhere, because this is a regular solution. So this is what this local well-posedness theory gives you. OK? So the next remark I'm going to do about the V is that,
as a consequence of these two properties, and this is a very remarkable fact, but it's trivial, right?
From the two other properties and the Leibniz rule. Write V to the sixth, you take d dt, you get V to the fifth times d dt V. You do the norm in x, you do Cauchy-Schwarz,
you get the L10 norm to the fifth times the L2 norm, and then one factor is in L1 in time and the other one is in L infinity in time, so this matches. But now, this implies that V to the sixth belongs to L1 in t and x
in the boundary of the cone d sigma. That's the standard trace theorem. If you have the t derivative in L1,
you don't need to even invoke a theorem, you just use the fundamental theorem. But on the boundary of the cone, V to the sixth equals U to the sixth by this support.
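(A sketch of the bookkeeping being described, assuming the Strichartz norm in question is the usual \(L^5_t L^{10}_x\) norm for the three-dimensional energy-critical equation:

\[
\partial_t\big(V^6\big)=6V^5\,\partial_t V,\qquad
\big\|V^5\,\partial_t V\big\|_{L^1_t L^1_x}\le \big\|V^5\big\|_{L^1_t L^2_x}\,\big\|\partial_t V\big\|_{L^\infty_t L^2_x}
=\|V\|_{L^5_t L^{10}_x}^{5}\,\|\partial_t V\|_{L^\infty_t L^2_x}<\infty,
\]

so \(V^6\) has an integrable time derivative, and the fundamental theorem of calculus in \(t\) gives the integrability of its trace on the lateral boundary of the cone.)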
So therefore, now we've proven that U to the sixth belongs to L1.
And this is huge. This tells us a lot. Because now we combine this with the flux. So I'll write down the flux. Well, maybe I'll write this explicitly here.
It's the same, I'm just writing this in coordinates. So now I'm going to write the energy flux.
The energy flux is the following identity. So let's take t1 and t2 less than 1.
So the energy flux is fundamental in the study of wave equations. And this is a well-known identity.
This is the standard flux identity. And you prove this by calculating d dt of the density of the energy and then using the fundamental theorem and the divergence theorem.
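(Written out, with \(e(u)=\frac12(\partial_t u)^2+\frac12|\nabla u|^2-\frac16 u^6\) and \(d\sigma\) the surface measure on the sphere \(\{|x|=1-t\}\), the standard flux identity for \(\partial_t^2u-\Delta u=u^5\) on the truncated backward cone should read, for \(t_1<t_2<1\),

\[
\int_{|x|\le 1-t_1}e(u)(t_1)\,dx=\int_{|x|\le 1-t_2}e(u)(t_2)\,dx+\int_{t_1}^{t_2}\int_{|x|=1-t}\big(e(u)-\partial_t u\,\partial_r u\big)\,d\sigma\,dt,
\]

where \(\partial_r=\frac{x}{|x|}\cdot\nabla\); completing the square in the lateral integrand is what is discussed just below. The exact normalization of the lateral surface measure may differ from what is on the board.)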
There's nothing more than that. Now, one of the big difficulties in the focusing wave equation comes from the fact that this object in here is not non-negative. It can possibly change sign. And this is what causes a lot of the difficulties that we see.
But this observation tells you that you can control the negative parts, the key thing. We can control the negative part because it's exactly on the boundary of the cone
where your blowup solution equals a regular solution. So the corollary of this is that...
Why? Why is this true? Well, we use this identity by the fact that the h1 cross l2 norm is bounded. This term is bounded. This term is bounded. By the previous thing, the l6 term is bounded. So what I have is bounded.
I'm just worried that there is a square here and not in the first line. There's a square where? I'll try the bracket here because it's in the first line of the group. It's because...
Here, this shouldn't be there. You're worried because I made a typo. Clearly, this thing is non-negative because it's a half here.
We're completing the square. Another way of writing this, which will sometimes be used, is that...
I'll introduce some notation.
I hope you can read this where, by definition, this is the tangential part of the gradient.
This is the definition of that one. The reason why this follows from that is that if you just expand this square, you get what's on top.
It's telling you some control on the boundary of these objects. I will need one other term, which is a Hardy term here.
I will add this to the corollary.
This is also bounded. This needs a proof.
You can't just... You'll see things work very neatly here because... Let me define f to be this function.
This is a function only of x. I'm going to use Hardy on this function of x. Let's calculate the gradient of this function of x.
This holds just by the chain rule. You see that exactly the gradient of this f is the flux.
Since the flux is in L2, this gradient is in L2. Now I can use Hardy. On the boundary of the cone, 1 minus t equals the absolute value of x.
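(A plausible reconstruction of the Hardy step, since the definition of f is only gestured at and is therefore an assumption here: take \(f(x)=u(1-|x|,x)\), the trace of \(u\) on the lateral boundary. By the chain rule,

\[
\nabla f(x)=\Big(\nabla u-\frac{x}{|x|}\,\partial_t u\Big)(1-|x|,x),\qquad
|\nabla f|^2=\Big(|\nabla_T u|^2+\big(\partial_r u-\partial_t u\big)^2\Big)\Big|_{t=1-|x|},
\]

which is exactly the flux density, so \(\nabla f\in L^2\). The three-dimensional Hardy inequality \(\int \frac{f^2}{|x|^2}\,dx\le 4\int|\nabla f|^2\,dx\) then controls \(u^2/(1-t)^2\) on the lateral boundary, because \(|x|=1-t\) there.)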
Now we've got the three things that we want. Did I call this step one? I did? Now I'm going to go to step one prime.
Step one prime introduces self-similar coordinates, which we did already when we studied compact solutions in the ground state conjecture.
This will be the same. Now suppose that x is less than 1 minus t.
In between these two sets of lectures, I was giving another set of lectures
where there was a blackboard with a hole and I lost the eraser. It went down the hole. The whole thing ground to a halt. Anyway. So inside the cone, y is x over 1 minus t,
so that y is less than 1. And s is minus the log of 1 minus t. So our new universe now is the s, y variables. And this is, as I mentioned in previous lectures,
this is the analog of the parabolic self-similar variables that were introduced by Giga and Kohn.
And that's what we use here. So now we define w of ys to be 1 minus t to the 1 half u of x t.
This was just like we did earlier. And this 1 half power of 1 minus t is responsible for the coefficient 1 half in front of the u. That's where that comes from.
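(For reference, the self-similar change of variables just described is

\[
y=\frac{x}{1-t},\qquad s=-\log(1-t),\qquad w(y,s)=(1-t)^{1/2}\,u\big((1-t)y,\,t\big),
\]

so \(|y|<1\) inside the cone and \(s\) runs from \(0\) to \(\infty\) as \(t\) runs from \(0\) to \(1\).)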
We'll get to that. This is just information. It's not something that's supposed to be obvious. OK. So now we introduce some weights.
And this weight obviously blows up at y equals to 1, which corresponds to the boundary of the cone. And that's why when we were dealing with compact objects,
we needed to have things that vanished on the boundary of the cone. OK. But now all we're going to do is regularize this weight,
work with this weight instead, and then choose epsilon appropriately. So now we have s goes from 0 to infinity now by this change of variables.
When t goes between 0 and 1, s goes 0 to infinity. And y is less than 1 because we're looking inside the cone. OK. And now I write the equation for w, which we have already seen.
The equation is the same as before.
OK. So this is the equation that W verifies. So it is a wave equation, but it has a nonlinear wave equation
that you cannot avoid. But it has some interesting features. First, the elliptic part degenerates as y goes to 1. It's a degenerate elliptic equation. And the second thing is that there's this extra term here.
What I didn't do is copy this properly. There's a gradient and a DDS. So it's a second order derivative. And that's a kind of parabolic type term.
And that's why this becomes a little bit parabolic in some similar coordinates. Anyway, so this is the equation. And I'm just going to tell you two more things.
This we can compute in terms of u.
So, the s derivative of w.
Oh, I'm sorry. Should I lower this? Oh, maybe we need light there.
It's OK now? Oh, it should be OK now. OK, so if we look at this expression here, you see that it corresponds exactly to what I have inside my integral.
I just have to divide by the power 1 minus t to the 3 halves. And so that's the point, that this expression inside is a self-similar time derivative.
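(A worked line confirming this: differentiating the definition of \(w\) in \(s\), with \(x=(1-t)y\) and \(dt/ds=1-t\), gives

\[
\partial_s w(y,s)=(1-t)^{3/2}\Big(\partial_t u-\frac{x\cdot\nabla u}{1-t}-\frac{u}{2(1-t)}\Big)(x,t),
\]

so the squared expression in the claim is \((1-t)^{-3}\,(\partial_s w)^2\), i.e. the integrand is indeed a self-similar time derivative.)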
OK? You first regularized the rho and then you take the limit? I haven't... We haven't done anything yet. I have not done anything yet. This is just the formula. There's no epsilon in this formula. Right. And this equation has the rho. It doesn't have the rho epsilon.
OK? Which is singular at y equals to 1. And then I'll tell you when the rho epsilon comes. And then I put the gradient in y.
This one is much simpler, right? It's just...
So, from these two formulas, we get that the H1 norm is bounded.
OK? Because that's just the change in measure. Right? We know what d dy is. We know what d dy is.
It just gives you the right factor. This just comes from the fact that each one of these objects, once you weight it appropriately, is in L2. And for u, you have to use the regular Hardy inequality.
Now, I tell you what the flux estimate gives me, together with this estimate. The flux estimate, together with this estimate, gives me
that the integral from 0 to infinity, the integral over absolute value of y equal to 1, of ds w of (y, s) squared, d sigma in y, ds, is at most some constant.
Why is that? Because, well, d ds has these terms in it. And this one, once you change the variables, corresponds exactly to the flux term.
It corresponds to the first term there. And this one corresponds to this term. So it's just a change of variables.
So, you see the boundary values may be bad, but there's something good about them. The s derivative can be integrated.
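(In symbols, the statement just made is presumably

\[
\int_0^{\infty}\int_{|y|=1}\big(\partial_s w(y,s)\big)^2\,d\sigma(y)\,ds\;\le\;C,
\]

since on the lateral boundary \(\partial_s w\) collects exactly the \((\partial_t u-\partial_r u)\) combination controlled by the flux and the Hardy term \(u/(1-t)\), which were bounded above.)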
And finally, my claim boils down to the following.
And this is just the change of variables
and the formula for d ds. You just have to change variables in the y and in ds. You see, the d ds change of variables produces the factor of 1 over 1 minus t
because s is the log. So this all fits, right? It's all fitting perfectly. So now, how do I prove the estimate?
And now, maybe, can people see here? So now what we do is we introduce an energy. That's step two of the proof of the claim.
So I define e epsilon of s. So this is an energy which now depends on epsilon.
So that's my definition. So it's a definition.
Now this is the natural, instead of having rho epsilon, I had rho. This would be the natural energy associated to this equation. If you multiply by dw ds and integrate by parts
and assume that all the boundary terms are 0, you would get exactly this. You have to do ds w times rho. Multiply and integrate by parts, which is what you always do for the wave equation. That's how you deduce the energy. If you do that, you get the energy for epsilon,
and all the boundary terms disappear, you get the energy for epsilon equal to 0. Now, unfortunately, the boundary terms don't disappear for us because nothing is 0. And second, we don't know that when epsilon is 0, this expression is finite.
Because it's singular at y equals 1. And why should it be finite? That's the energy you get when you do the first line of the equation? No, the second one. These two terms combine to give that. That's also included? Yes, it includes everything.
So you have to be a little patient when calculating. But everything cancels out and it gives you that. But for us, this rho is too singular because we don't have 0 in the boundary. In the case when we were working with compact objects,
they were 0 in the boundary, and we could use the E0 energy. And that's what we did in our paper. So now, you put the epsilon. The interesting thing is the time derivative,
the s derivative of epsilon. That's what you need to look at. And now, the amazing thing is that there's a clean formula. So this is the very nice thing that happens here.
Okay, so this is the formula. And I guess we were very surprised when you could actually calculate it. And it's very compact, right?
So let's look a little... So how do you prove this first? You do the same as you would do trying to show that the energy for the wave equation is constant in time. You multiply the equation by w times rho epsilon.
But ws, I mean, the s derivative of w times rho epsilon, and you integrate by parts. Is there a square on epsilon in the second term? No, it's a power one. That's strange, because the equation is the same for epsilon and minus epsilon.
Then some of the signs will change. But it is the power one because if you look at... The equation is the same whether epsilon is negative or positive, right? Yes.
Oh, I see what you're saying. Let me think about it. It would be absolute value, epsilon. What you get is an epsilon squared square root.
No, it is epsilon. It is not. The reason that it is epsilon is what happens with this when you go to y equal to one. It's exactly epsilon squared square root.
So the way you prove this is you multiply by d ds w times rho epsilon and you integrate by parts. Of course, if you're going to do it at home, take your time because most of the times you will get it wrong.
I know by experience. But this is the formula. Now let's understand this formula a little bit. What we want...
First, let's assume that w was zero on the boundary, like the old case. When w is zero on the boundary, d ds w is also zero because it's a cylinder. So this term would disappear.
This term, you make epsilon equal to zero, goes to zero. This term disappears. And this term, you get one plus epsilon squared, so you get one. Now when we integrate, if the energy is bounded,
you get the integral of this thing, and the integral of this thing is exactly what you're trying to control. So this is the right kind of thing.
We want to integrate, and so the best thing to integrate is the derivative because by the fundamental theorem you can integrate. In our case, when we integrate in s,
this term is the one we already know from the flux. The integral of this term is convergent by the flux. So that's somehow what saves you here.
You have the flux, and therefore, I think I wrote it here. Then the integral is convergent. That was the flux estimate. Of course, I'm not going to make epsilon zero.
If I make epsilon zero, I die. This is something obvious. You said if we pretend to take epsilon zero, we're left with the first term, and then we're integrating.
If e is bounded... What happens with y? y got integrated. There's no... It's just an energy, and then we integrate it. Then by this magical identity, you control this integral
because this is a positive quantity, which is what you really want to control. So that's how this magic works. Now what do we do when we don't have the boundary terms?
The first observation... I'm going to use all the blackboards and then move on back to the transparencies. The first observation is that the energy is bounded by one over epsilon.
Why? Because... I'll just... I'll write the formula. There's only one remark because of this.
In the definition, this is rho epsilon,
and the rest is covered by... And this term, when I integrate it, is bounded.
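(A minimal sketch of this bound, assuming the regularized weight has the form \(\rho_\varepsilon(y)=(1-|y|^2+\varepsilon^2)^{-1/2}\), which is an assumption but is consistent with the \((1+\varepsilon^2-|y|^2)^{3/2}\) weight appearing below:

\[
\sup_{|y|<1}\rho_\varepsilon(y)=\frac{1}{\varepsilon},
\]

and every term of \(E_\varepsilon(s)\) is such a weight times a quantity that is bounded in \(L^1(dy)\) uniformly in \(s\), by the \(\dot H^1\times L^2\) bound on \(w\) together with Hardy; hence \(|E_\varepsilon(s)|\le C/\varepsilon\) uniformly in \(s\).)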
So if I do an integral here, the integral of this is going to be the e epsilon at the two endpoints, which is one over epsilon. This is a constant times the one over epsilon.
And then I have these two terms. This is the one I like, and this one is a horrible term. But it has the redeeming feature that there's an epsilon squared in it. There's an epsilon squared, and it's mixed.
It has a gradient and the d ds. Now we're going to just use Cauchy-Schwarz. We will integrate between S1 and S2 and use the fundamental theorem.
Then 1 plus epsilon squared, integral between S1 and S2, integral y less than 1, ds w squared,
1 plus epsilon squared minus y squared to the 3 halves, will be less than or equal to C over epsilon. That comes from this term and this term.
And then plus.
Now I'm going to do Cauchy-Schwarz, keeping this weight. But the epsilon squared I will keep with this term, which is the one I don't like. And the other one I leave alone. Then I put a small constant in front of this one that I absorb by the left.
And what I'm left with is the following. C over epsilon plus. Now here I will get an epsilon to the fourth when I square.
And in the denominator I have a 1 over epsilon cubed. Right? So I get an epsilon to the 1 over an epsilon. Then I have the length of the integral,
because I'm not using anything on that. I'm just doing it in y. And that's it.
And maybe instead of 1 here I have 1 over 8 after I hit that one term.
I just did Cauchy-Schwarz. I kept these two things together. Pardon me? The last constant C, is it necessary, or what is it there for? It comes from the Cauchy-Schwarz.
Sorry, why do you need the 1 over 8? What's that? Why do you need 1 over 8? Oh, you lose a little bit. It's just a symbol for something smaller than 1. OK? I don't care what constant I have as far as it's a universal constant.
So all I'm trying to say here is that I lose a little bit in the constant here, and I increase a little bit in the constant there. What information do you have on the y dot gradient? The y is less than 1, so I can throw out the y.
And the gradient is in L2. So this is not an infinity problem. OK? So the y doesn't hurt me. What hurts me is that I can't integrate in S. But now I'm going to... So once I have this bound,
the first thing I say, OK, I'm going to throw away this thing. After all, it's bigger than 1. Right? It's only bad. I mean, it only grows. So if I bound it from below, I can throw it away.
And now I just choose epsilon to be...
You make the two terms equal. And that's the proof.
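(Assembling the steps just described, with schematic constants: integrating \(\tfrac{d}{ds}E_\varepsilon\) between \(S_1\) and \(S_2\), using \(|E_\varepsilon|\le C/\varepsilon\) at the endpoints, the flux bound on the boundary term, and Cauchy-Schwarz with a small constant on the mixed \(\varepsilon^2\) term, one should arrive at something like

\[
\int_{S_1}^{S_2}\!\!\int_{|y|<1}\frac{(\partial_s w)^2}{(1+\varepsilon^2-|y|^2)^{3/2}}\,dy\,ds
\;\le\;\frac{C}{\varepsilon}+C\,\varepsilon\,(S_2-S_1).
\]

Choosing \(\varepsilon=(S_2-S_1)^{-1/2}\) balances the two terms and gives the bound \(C(S_2-S_1)^{1/2}\); since the weight is bounded below on \(|y|<1\) for \(\varepsilon\le1\), the unweighted integral of \((\partial_s w)^2\) obeys the same bound, which is the claim after undoing the change of variables.)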
So is this OK? So you just have to have faith. Because after all, the computations are not difficult. But you have to think that it will work.
Right? I mean... But the key point that we realized later was the fact that the d ds of this w has this direct relationship with the Morawetz thing.
This has to do with the scaling of the equation. And this does work also for wave maps. I mean, this kind of argument gives you control of the Morawetz-type
estimates in wave maps, which, you know, are in the works of Sterbenz and Tataru and of Tao, and they're very important. And this gives another way of looking at that, too.
Okay. And all this computation has nothing to do with criticality. You could have it without criticality? No, yes, no. It's the scaling playing against the criticality. I mean, otherwise you get other numbers. Then the thing that is a little bit...
The only thing that you have is the W5 in the equation, no? No, but it's also what is the expression that appears here. You see, because this will always appear. So what I will say, though, is that you are right to a point.
This can be used in subcritical cases, too. In fact, Zaag and Merle used this in some subcritical cases. Now, what it buys you is unclear. Anyway, I think this is a new perspective on this kind of argument.
Okay, and if you look at the paper... I will flash the slides now.
So, oh, one thing we're going to do next.
The reason I used t plus equal to one here is because the calculations were done in my paper with Merle at t plus equal to one.
But going on, it's more convenient to, instead of using this picture, to use this picture, but this being the blow-up point.
It's just a little bit more convenient. And the effect it has is that this minus here becomes a plus because you're pointing in the opposite direction. Other than that, it's all the same. But I didn't want to redo all the calculations, so I just borrowed that.
Okay, so this is what I'm saying here. And I'm reminding you about the singular set and the weak limit. But now the blow-up point is t equals zero.
Okay? So the picture is that one.
So the estimate now had the power three-fourths. We've got it now with the power one-half, but they are equally good for what I'm going to do next. But one-half is better than three-fourths because it's smaller. And hopefully there's further improvements. And the only thing different now is instead of one minus t, you have t.
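(In the picture used on the slides, with the blow-up point at \(t=0\) and \(t\) decreasing to \(0\), the estimate presumably reads, schematically,

\[
\int_{t_1}^{t_2}\int_{|x|\le t}\Big(\partial_t u+\frac{x\cdot\nabla u}{t}+\frac{u}{2t}\Big)^{2}\,\frac{dx\,dt}{t}\;\le\;C\Big(1+\log\frac{t_2}{t_1}\Big)^{3/4},
\]

with the exponent three-quarters from the original proof, or one-half from the argument just given; this is the form applied below with \(t_1=4^{-J}\), \(t_2=2^{-J}\).)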
And that's the point of using that picture. t is nicer than one minus t. And the sign is the opposite. So I'll go through the proof here.
It won't be complete, because it's a very lengthy thing and you don't really see clearly where things are coming from. I think this calculation is quite illuminating.
Okay? So now let's move on. So now we have to improve things. So the process is that you get something and the reason this is a very nice estimate
is I'm saying this again here is because this thing grows slower now in average than this log and so it has to vanish at some rate on average.
And so one can think of this as a Tauberian argument. Okay? So this is telling something about the Cesàro sums, and we want something about the object itself, so that's where we need to pass to a subsequence.
The classical Tauberian argument says that if you have Cesàro convergence of the averages, then along a subsequence you have convergence. Okay? And this is the reason why we now need to pass to a subsequence. Okay?
So it's actually a very classical thing and it's very easy to trace why. Okay? So let's move on then. So now we're going to do the actual Tauberian argument in this context.
Of course we're going to use a bit of modern technology to do that. We're not going to follow the original things. We're going to use the Hardy-Littlewood maximal function, for example. That actually comes in very handy here. Okay? So now there's some real variable type of arguments.
The next lemma tells me that I can find the sequence. Remember the time is now going to zero instead of to one. So now I can find a sequence of times
and I'm going to get two different sequences of times. And if you recall in the proof of the extraction of the scattering profile, we had two sequences of times. One was Tn and the other one was Tn minus alpha over 10.
Because we needed to do some integration and we needed to control both endpoints in the integration. And the reason for this is similar here. And what we will do is first produce these objects
such that the averages are actually going to zero and not only that but something enhanced where you can also integrate a little bit bigger in T and still take the average and a little bit bigger in X, still take the average and those go to zero.
And we want to choose these guys so that they are actually separated. You think of an interval and you split it into three parts and one of them is one extreme part and the other one is the other extreme part. So this is a very standard thing that you do in this kind of problem.
Except the numerology, I don't know. Don't pay too much attention about the fractions. It's the point of the thirds of the interval. So there's a separation between them and they're of the same size.
So we're going to produce two sequences with these two properties. Why is it important that they are the same size? Because you need that there are differences of the size of each one.
The difference is the length of the interval. You need that the difference be of the size of the length of the interval. So how do we do this? So now we're going to use this Morawetz estimate first.
So we pick a big capital J and we look at t1 equal to 4 to the minus j and t2 equals 2 to the minus j. We will apply it in that case. So then t2 over t1 log gives me j to the three-fourths.
We could get j to the one-half but we don't care at this point. Okay? And then I'm splitting it into the intervals. So this is just a splitting of the interval between 4 to the minus j and 2 to the minus j into equal size intervals. Now since the sum is smaller than this by pigeonhole,
there's one that's smaller than j to the minus one-quarter, which is j to the three-quarters divided by j, because there are j of them. So this is just a pigeonhole argument. If all of them are bigger than this, I have j of them.
I have bigger than j times j to the minus a quarter and that's j to the three-quarters but it is smaller. So one of them is big. One of them is small. If they're all big, I get a contradiction. Okay? So there's one little j like that.
And now I look at the sequence made of the chosen mu j's, which lie between 4 to the minus j and 2 to the minus j, and I choose a decreasing subsequence. Okay? This is a sequence that's going to zero and so on.
Three-fourths is because you are using the estimate with the log of three-fourths. Right. That's exactly... Otherwise it's one-half. Right. Okay? Yeah. And this is precisely the way that you use such an estimate where instead of having... If you had j here, you couldn't do this.
There's no way to gain. Okay? So this is what we do. We get to the j to the minus a fourth and then we can make this decreasing and we get that this will tend to zero.
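(The pigeonhole arithmetic, spelled out under the assumption that the splitting is into the \(J\) dyadic pieces \([2^{-J-j},2^{-J-j+1}]\), \(j=1,\dots,J\), which all have the same size with respect to \(dt/t\): if every piece contributed more than \(2C\,J^{-1/4}\), the total over \([4^{-J},2^{-J}]\) would exceed

\[
J\cdot 2C\,J^{-1/4}=2C\,J^{3/4},
\]

contradicting the Morawetz bound \(C\,J^{3/4}\); so some piece \([\mu_J,2\mu_J]\) contributes at most \(2C\,J^{-1/4}\), and these \(\mu_J\), after passing to a decreasing subsequence, are the chosen times.)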
All right? So now we did the first part. Now I'm going to call g of t this function of t, which is the thing that I control by the Morawetz estimate
and which I just gave better control. And now I know that these averages of g of t tend to zero. That's how I chose the mu j's. So since they tend to zero after passing to a subsequence and relabeling the indices,
I can assume that they are four to the minus j. Okay? This is just a simple argument. And now I look at the Hardy-Littlewood maximal function of g times chi of mu j up to two mu j.
Okay? So that's a function. And now I'm going to use the weak type one-one inequality. So the measure of the set of t in mu j and two mu j where the maximal function is bigger than two to the minus j is less than or equal to two to the j times four to the minus j
times mu j which is the L1 norm. So that's why I have four here and two here and left with the two. Okay? So this is just the weak type one-one inequality. So what this tells me is that the set where it's big
is getting very, very small. So there are a lot of t's in which it is small. And that's how I chose my t primes. Okay? And I have a lot of room
because I have a very small fraction of the total. Okay? And so that's why we can choose them like that. And now the G, remember, was integrated only up to t.
Why can I go up to c times t? Because between t and c times t, u equals v. And v is a regular solution and everything is going to zero for the regular solution. And the maximal function is exactly what you need
to control these things. This is exactly the maximal function. It's just the definition of the maximal function.
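(A sketch of this step, with \(g\) the inner space integral and \(M\) the Hardy-Littlewood maximal operator: by the weak type \((1,1)\) inequality,

\[
\Big|\Big\{t\in[\mu_j,2\mu_j]:\;M\big(g\,\chi_{[\mu_j,2\mu_j]}\big)(t)>2^{-j}\Big\}\Big|
\;\le\;C\,2^{j}\,\big\|g\,\chi_{[\mu_j,2\mu_j]}\big\|_{L^1}\;\le\;C\,2^{j}\cdot 4^{-j}\mu_j\;=\;C\,2^{-j}\mu_j,
\]

a vanishing fraction of the interval length \(\mu_j\); and at any \(t\) where the maximal function is at most \(2^{-j}\), the averages of \(g\) over small intervals around \(t\) are at most \(2^{-j}\), which is the enhanced smallness used for the chosen times.)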
So now we have our two sequences of times. We will first use just each one of them, and then we will put them together.
Okay? So the first step is to use each one of them. And we will get a decomposition for one and a decomposition for the other. Then we will use those two decompositions to get some improved estimates, and then we will find a new sequence of times.
So you have to be doing things gradually. You can't get everything at once. So now we can give a preliminary decomposition.
So suppose we have a sequence Tn such that this condition star holds. Then we can have a preliminary decomposition.
So we have a J0, scales, centers, strictly less than Tn, Lorentz parameters, which are less than one, traveling waves, such that u is the regular part,
plus the modulated solitons, plus an error. But in what sense is this an error? It's only an error in the sense that its L6 norm goes to zero. Remember, I want the h1 cross L2 norm going to zero.
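(Schematically, and with the normalizations being an assumption on my part, the decomposition being claimed is of the form

\[
\vec u(t_n)=\vec v(t_n)+\sum_{j=1}^{J_0}\Big(\lambda_{j,n}^{-1/2}\,Q_j^{\ell_j}\Big(\frac{x-c_{j,n}}{\lambda_{j,n}},0\Big),\;\lambda_{j,n}^{-3/2}\,\partial_t Q_j^{\ell_j}\Big(\frac{x-c_{j,n}}{\lambda_{j,n}},0\Big)\Big)+\big(\epsilon_{0,n},\epsilon_{1,n}\big),
\]

where the \(Q_j^{\ell_j}\) are traveling waves with velocities \(|\ell_j|<1\), the scales and centers satisfy \(\lambda_{j,n}\le C\,t_n\) and \(|c_{j,n}|\le C\,t_n\), and \(\|\epsilon_{0,n}\|_{L^6}\to0\).)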
But first I'm going to just get the L6 norm going to zero. So this is the first step. So I'll explain how you do this. The L6 norm is only for epsilon-0, not epsilon-1. Right. Epsilon-1 is not in L6.
But epsilon-0 is. OK? Can you prove smallness instead of just norms? For free or not? No. No, we have to work very hard for the Strichartz norm.
We'll get that at the very end. Which is even less than mh. Yes. So what we will do, well, you will see. But this comes at the end. Well, no, at the step before the last one. And then from the Strichartz norms, we have to go to the actual energy norm.
OK? But you will see, this we'll see next time, that a lot of extra things are needed for that. Let's go with this for now. Let's try to see this. I mean, this is the starting point.
OK, so how do we prove this? We're going to use our inequality star. Remember this. More or less type inequality, but now we chose the interval as well. So the first point is we pick a cutoff function phi, which is 1 on B3 and support it on B4.
And now I'm going to look at u0n, u1n, which is u times phi of x over tn. OK? So I truncate my u. At time tn. The tn is the sequence of times for which I have this enhanced Morawetz estimate.
OK. Now I claim that it's enough to prove the decomposition for this sequence. Why is that? Because the difference, this thing equals u minus v plus an error that goes to zero.
Why is that? OK, so let's start. Basically there are two cases. Suppose that we look at x bigger than tn. OK? If x is bigger than tn, u of tn and v of tn are equal. So this part is zero.
What happens with this guy? If x is bigger than tn, u is v. So v is a regular solution. And so for a regular solution, since I'm truncating up to a small size tn, it goes to zero. So that's the part where x is bigger than tn.
How about the part where x is smaller than tn? When x is smaller than tn, this guy is equal to u because of the choice of phi. So all I'm left here is v.
But v is regular and it's in the region where x is smaller than tn, so everything goes to zero. So that's how you prove this statement. So that tells me that all I have to do is handle this term.
R0 is the radius within which you don't meet any other singular point. You can think of it as 1, if you want. So that they don't interfere with each other.
So now we go on. So, I'm sorry, but we have to use the profile decomposition at this point. So what we do is we do a profile decomposition. We decompose our sequence into these blocks.
They are linear blocks. Plus an error which tends to zero in the dispersive sense. So far so good. And I'm going to divide the t's, tj's.
I'll assume that they're either identically zero or the limit goes to plus or minus infinity. I always can do that. And I have, of course, the pseudo orthogonality conditions.
The first observation is that u0n, u1n goes to zero outside x bigger than tn. Why is that? Because it's u cut off, more or less, at tn. And outside tn, u is v.
So this is really v. And since I have the cutoff on v, the cutoff up to 3tn, then this is small. So this is true. So since the u0n's and since the u1n's are localized in x less than or equal to tn,
that gives me some control of the parameters in the profile decomposition. Okay, so this goes back to Bahouri and Gérard. You just have to take my word for this.
So what you get for free from this Bahouri-Gérard, the accent got put in the wrong place, never mind. It was not a French-speaking typist. Okay, so what you get immediately from this, from the localization property,
you get that all the scaling parameters are controlled by a constant times t sub n. And the translation parameters are controlled by t sub n, because away from t sub n, nothing is happening. The thing is just zero. So everything is controlled like that.
And the other thing that you get, and this is kind of handy, I mean, it's not too terribly important, but it is handy here, is that this limit is non-zero for at most one index j. There can be only one where the lambda jn equals t sub n.
t sub n is what localizes the sequence, okay? And we will call it, pardon me? I'm sorry? There's only one for which this is not zero, okay? So there's only one for which the scaling is self-similar, okay?
That's what this means. And for that one, if it existed, we could change the profile again to make them equal by rescaling the profile. And now by extraction, this sequence is bounded, right?
Because of this property. So the limit will always exist. I'm not saying now that it's less than one yet. It's bounded. So I can assume by extraction that all these limits exist.
Now we will divide the profiles into three cases. The first one is, suppose that there is a tj0, that there is a j0 for which these two things are equal, okay?
What we will do is, in this case, the associated nonlinear profile is actually exactly self-similar and with compact support. And because of the theorem that Merle and I proved, those things cannot exist.
So there can be no self-similar profile. That's the first case. I will explain how you prove this, okay? I'm just trying to show you the big structure of the proof.
The second case is still that tjn is zero, but the lambda j is much smaller than tn. Remember, the j is always bounded by a constant times t sub n, okay? So they're either comparable or lambda j is much smaller.
And the last case is when tjn is not zero, but when it is not zero, it means that tjn over lambda jn goes to infinity, right? Because we said that we could change the tjns to be either zero or going to infinity, okay?
We will see that the profiles from case three can be put into the error. They automatically have small l6 norm, okay? I will explain that in a second. So we will not need to take care of these profiles.
But in this case, in case two, we will show that the lj's are all less than one and that the profile is actually a traveling wave. And once we have this, the decomposition follows, all right?
Okay, so once this is done, the result follows. For case three, the point is that for any linear solution, in the energy space, the l6 norm goes to zero
as t goes to infinity. And if minus tjn over lambda jn goes to infinity, that's where the profile, the linear profile, is evaluated at. So each one goes to zero, but we can combine them because we have a Pythagorean expansion
for the l6 norm to the sixth, right? And this is the proof that the l6 norm goes to zero. It uses finite speed of propagation, this proof. There are other proofs. This proof just uses finite speed of propagation
and the dispersive estimate. So because of these things, the third case can be moved into the error term.
So we just have to understand the first two cases. Now the other point is that in the second case, the case where we have the q's, there can only be finitely many and the number you can have depends only on the H1 cross L2 norm. Because if you recall, for all nonlinear elliptic solutions,
their gradient has a lower bound. The lower bound is the lower bound of w. So if you had too many of them by the Pythagorean expansion, you would violate the boundedness of the H1 cross L2 norm.
So that always gives you an upper bound on how many of the q's you can have. So now I'm going to try to explain why in the first case we have the sub-similar case,
which is not possible, and in the second case we get the solitary wave. So case one. So we're in case one. So that means that λj0n equals Tn.
And Tj0n is zero. Now, in this case, it's not hard to show that the Cjn0, since they're going to zero, you can take them all equal to zero by changing the profile
and that the linear profile and hence the nonlinear profile have compact support of size one. So after using properties, standard properties
of the profile decomposition and the approximation theorem, you can see that the Morawetz estimate gets inherited by this nonlinear profile. And what we get is that the t derivative, plus the x over t dot gradient term, plus the u over 2t term,
has to be identically zero, because you make the n parameter go to infinity. And the scaling gives you this. And this is true not everywhere but in a sufficiently big region. So now we get that this thing is identically zero.
Now, this is a first-order equation. So you can integrate it by characteristics. And so now you can say what this means. So we have this first-order equation,
and there's only one kind of solution. This is the formula for the solution for some function C. And since uj zero was compactly supported, C is compactly supported.
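(The characteristics computation alluded to, in the blow-up-at-zero picture and with the signs as above: the equation

\[
\partial_t U+\frac{x\cdot\nabla U}{t}+\frac{U}{2t}=0
\]

has characteristics \(x=ct\), along which \(U\) decays like \(t^{-1/2}\), so its general solution is \(U(x,t)=t^{-1/2}\,C(x/t)\) for some function \(C\); that is, \(U\) is exactly self-similar.)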
Now, we also know that uj zero is a solution of the nonlinear wave equation. So this, since it equals it, is a solution of the nonlinear wave equation. So this is a similar solution to the nonlinear wave equation with compact support. The theorem of Merle and I shows that it's zero.
So these profiles couldn't have appeared because they lead to a contradiction. Now, let's look at the other profiles, case two.
And let's say that the one we're looking at is the first one. Now, T1n is identically zero. Lambda 1n is much, much smaller than Tn. And L1 is this limit. Now, because of the fact that the lambda 1n is so much smaller than the Tn, this term does not contribute.
When you plug it in, that will go to zero by the scaling. And then you can prove that x over t, which is what L1 is the limit of, goes like c1n over Tn. The profile is concentrating around c1n.
So x over T becomes L1. So in the limit, in this nonlinear profile decomposition, you get that this guy verifies that this is zero.
So that means that now the first order equation is this first order equation. So that means that it's a traveling wave. And it's a traveling wave solution. And we had proved with Duyckaerts and Merle that traveling wave solutions are exactly solutions of the elliptic equations.
Lorentz transformed according to this direction. So you have to use that theorem to show that L1 has to be less than 1. What we showed is that if you have a traveling wave, the speed has to be strictly less than 1, and it's a solution of the elliptic equation,
Lorentz transformed. So that's how we get that this guy is the traveling wave. And so this now just gives the decomposition in this preliminary form.
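(In this case the limiting first-order relation is a transport equation: if

\[
\partial_t U+\ell_1\cdot\nabla U=0,\qquad\text{then}\qquad U(x,t)=U(x-\ell_1 t,\,0),
\]

a traveling wave with velocity \(\ell_1\); and the classification of traveling wave solutions of the nonlinear wave equation then gives \(|\ell_1|<1\) and identifies \(U\) with a Lorentz transform of a solution of the elliptic equation \(-\Delta Q=Q^5\).)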
But it gives it for the two sequences of times that we constructed, because it gives it for any sequence of times for which this average goes to zero. This Morawetz-type average. And the reason you want these averages for each t is that in this computation of these limits,
you want to take weak limits. And so you need to use compactness. And if you go to two variables, L2 loc is compact if you're contained in H1 in space-time. And that's the reason why you need to pass to the averages.
To be able to use that local weak convergence gives strong convergence locally. But that works. In fact, this is something that
we had used with Duyckaerts and Merle in our first paper in the radial case, this trick of passing to two variables instead of one. Now we're going to improve the bounds on the errors
using virial identities. Okay? But we'll have to switch the sequence of times. So how do you do this? So the claim is now there is another sequence of times
such that for this sequence of times, we have a decomposition. And the error, in addition to having the L6 norm going to 0, has some further good properties.
The tangential part of the gradient goes to 0. Remember that outside t sub n, everything goes to 0. The gradient outside t sub n goes to 0, that's no news. But for any slightly smaller ball, it also goes to 0.
So the energy is concentrating near the boundary of the ball. The same is true for the t derivative. And then there's a third or a fourth property which is fundamental, which is that your solution is becoming outgoing.
There's a relationship between the spatial derivatives and the time derivatives. And this is precisely the expression in the Morawetz formula.
And that is going to 0. So that's the first thing that you do. So I will show you now how to do this. But I want to address your question about the Strichartz norm.
The Strichartz norm doesn't come at this time. What happens is that once you have these properties, then you can prove that the Strichartz norm goes to 0. But not before you have these properties. But in the decomposition, it comes with a small mess.
But there's the extra profiles that we're throwing in there. Remember that we threw in the profiles that were scattering profiles. That's the point is we have to kill those. And we kill those by using this.
The proof of that, I don't know how much of that I will describe, is in the same spirit as the extraction of the scattering profile. And what allows you to make that succeed is the fact that you have this property.
So you go back to that proof and you can extract what you're using. And it's precisely that you have these properties. So that's how you kill those profiles at infinity. All right. But let's not skip steps. Let's do this.
So that's what we're going to do today. The last thing we're going to do today is this. So to do this is where we need the two times. So what we will do is prove a further estimate at these two times for which we have the decomposition.
And the only way we can prove it is by using that you already have the decomposition, but fortunately it's enough to only have the L6 norm going to zero. Okay? So we recall that we have this.
That's what we get from the Morawetz estimate using the maximal function. And then we have these other two times for which we can put the maximal function here.
This is what we have. And at each one of these two times, therefore, by the previous theorem, we can do a soliton decomposition where the error now goes to zero in L6.
Okay? So we have all of this, all the orthogonality conditions, and this goes to zero in L6. Okay? Now I will use improved L2 estimates at those two times in which I have this decomposition.
Okay? So I take a small epsilon and I'm going to estimate the L2 norm of my solution in that ball of radius t_{i,n} on which I have the decomposition.
So in the decomposition I have the v, I have the epsilon, and I have the solitons. And for the solitons, I split the L2 norm into the part inside this union
and the part outside this union. Okay? And those are the balls centered at the centers of the solitons with radius epsilon t_{i,n}, where the epsilon is the number that's given.
Now remember that I have an a priori bound on how big the j's are. They were a fixed number that depends only on the L2 norms, the supremum of the L2 norms.
Okay. So this part is just the decomposition and the triangle inequality. Now I'm going to use Hölder's inequality to go from the L2 norm to the L6 norm. So I go to the L6 norm, and if you do the calculation, because it's three dimensions, it's L6, the volume of the ball is t_{i,n} cubed,
you get t_{i,n} here and t_{i,n} here. Okay? That's just Hölder, the passage from L2 to L6.
Now, here I do Hölder in each one of these balls. It's the union, I use each one of the balls, and the balls have radius epsilon t_{i,n}, so I get epsilon t_{i,n} here.
Remember, these guys are completely orthogonal, so I can treat them as if the L6 norm of the sum is the sum of the L6 norms. Okay? So this is fine. And then for the last term outside, I just use the fact that this is of radius t_{i,n} and I get that.
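For reference, the Hölder step is the following elementary estimate in three dimensions (with c an absolute constant): since \(\tfrac12 = \tfrac16 + \tfrac13\),
\[
\|f\|_{L^2(B(0,R))} \le |B(0,R)|^{1/3}\,\|f\|_{L^6(B(0,R))} = c\,R\,\|f\|_{L^6(B(0,R))}.
\]
Applied with \(R = t_{i,n}\) it produces the factor \(t_{i,n}\); applied on a ball of radius \(\epsilon\, t_{i,n}\) around the center of a soliton it produces the factor \(\epsilon\, t_{i,n}\).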
Now I'm going to recognize what happens to each one of these guys. This norm, because v is regular, goes to zero with n because I'm shrinking the support. This one, I know that goes to zero with n
because that's what I proved in my decomposition. That's the property that I have. This one, well, this is just a uniform constant now because I know how many I have and this one, remember, we have good pointwise bounds and the gradient pointwise bounds.
Now this one, because I'm away from the center by an amount epsilon t_{i,n}, when I look at the L6 norm, I gain a power of epsilon. And so I have either little o of t_{i,n} for this term and this term,
or epsilon t_{i,n} for this term and this term. So the conclusion, since this is true for each epsilon, is that this ratio goes to zero, right?
Because I had little o of t_{i,n} and epsilon t_{i,n}, but epsilon is arbitrary. So what I've now shown is that this L2 norm is going to zero with n,
faster than t_{i,n}. And this is something you want, and you cannot prove it unless you have the decomposition already. But fortunately, you only need the L6 norm on the error.
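In symbols, the conclusion of this step is (with the notation t_{i,n}, i = 1, 2, for the two sequences of times):
\[
\frac{1}{t_{i,n}}\,\big\|u(t_{i,n})\big\|_{L^2(\{|x|\le t_{i,n}\})} \longrightarrow 0 \quad \text{as } n \to \infty,
\]
and this improved \(L^2\) smallness is available only at the two times where the preliminary decomposition, with its \(L^6\)-small error, holds.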
Now we're going to improve. Now we're going to use that we have two times, and I'm going to do a virial-type argument. So to do the virial argument, what I do is I multiply my equation by u and integrate over |x| less than t, and t in this interval.
So because I multiplied the equation, which is zero, by something, I get zero. So I get all of these terms. And now I'm going to integrate by parts in t and in x.
When I integrate by parts in x, what do I get here? No, it is a u. What I get is the flux term times u; that's from the divergence in x. And then from the t terms, I get u times the t-derivative of u at the two end times.
And then I get what I didn't integrate by parts, which is this solid integral. So now I'm going to show that these three terms go to zero
faster than t_{2,n} minus t_{1,n}. This is where you want that t_{2,n} minus t_{1,n} is of the size of each one. Pardon? In the first identity, the u times the t-derivative of u comes from the integration by parts?
No, no, because this is an identity. When we take the second derivative, it falls here, or it falls here, and then I subtract. And here, the Laplacian falls here, or else I have the gradient squared, and then I add it.
And then I get u to the sixth, because that's u to the fifth times u. So really it's literally just zero; I haven't integrated anything yet. And now I integrate.
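For concreteness, here is the pointwise identity behind this, written for the three-dimensional equation with the nonlinearity written as \(u^5\) (a sketch; the region of integration is the truncated cone \(\{|x| < t,\ t_{1,n} \le t \le t_{2,n}\}\)): multiplying the equation by \(u\),
\[
0 = u\big(\partial_t^2 u - \Delta u - u^5\big)
= \partial_t\big(u\,\partial_t u\big) - (\partial_t u)^2 - \nabla\cdot\big(u\,\nabla u\big) + |\nabla u|^2 - u^6 .
\]
Integrating over the truncated cone, the divergence terms give the lateral flux term on \(\{|x| = t\}\) and the terms \(\int u\,\partial_t u\,dx\) at the two end times, while the remaining solid integral is \(\iint\big(|\nabla u|^2 - (\partial_t u)^2 - u^6\big)\,dx\,dt\).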
So the object of this is to try to show that this in average is small. And so I do that. I look first at the surface integral, which is this guy.
I use Cauchy-Schwarz, and then I get the flux to the power one-half, and I get the integral of u squared to the power one-half. Here I could have used the Hardy term, but instead I use the L6 term by using Hölder here.
I get the length, and a little o of one, so I get little o of t_{2,n} minus t_{1,n}. Okay, so that, because of the control of the flux, gives me that this goes to zero, faster than the average goes to zero.
For this term, where do I get the gain? I get the gain from the u squared. But remember that this term appears only at the endpoints of the time interval. And it's at the endpoints of the time interval where I already know the decomposition, and so I can prove this improved L2 estimate. So I get it.
So the conclusion is that the average of this quantity tends to zero. And I also had this.
And after passing to a subsequence and renumbering, something that goes to zero can be four to the minus n, and another thing that goes to zero can be eight to the minus n. And eight is twice four. Okay? All right, so what do we do next?
This one, you know, you can handle, because it's a coercive quantity; it's a bigger than or equal to zero thing. This guy is not a coercive quantity. So now we have to use another argument. And the argument that we use to handle these non-coercive quantities
is an argument that Hao Jia and I had in a previous paper on wave maps in the equivariant case. Okay, so we pulled back another argument here.
So are you okay to go for five more minutes? Yeah? Okay. Let's go for five more minutes, see what happens. If people want me to repeat next time, I'll repeat some of the things.
Okay? Oops. So I look at this guy, and again, I use the maximal function, and I get that the maximal function, where it's bigger than two to the minus n,
is smaller than four to the minus n times the length of the interval. Okay, same argument as before from this estimate.
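One plausible bookkeeping for this step, assuming the corresponding average has been made at most \(8^{-n}\) after the renumbering above: by the weak type \((1,1)\) bound for the maximal function,
\[
\big|\{t : M(t) > 2^{-n}\}\big| \le \frac{C}{2^{-n}}\int_{t_{1,n}}^{t_{2,n}} (\cdots)\,dt
\le C\,2^{n}\,8^{-n}\,(t_{2,n}-t_{1,n}) = C\,4^{-n}\,(t_{2,n}-t_{1,n}),
\]
so the set where the maximal function exceeds \(2^{-n}\) has measure at most a constant times \(4^{-n}\) times the length of the interval.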
So now I'm going to see what to do with the other term, which initially looks like a bit of a bad term. We know that this thing is bounded, so at least we know that. And what I'm saying is that this bound, together with this average bound,
implies that the set of points where this object is small is big, is substantial.
Okay? And this is a real variable argument. I mean, it's the standard kind of probabilistic argument, but the thing that you have to do, which is a little bit different than usual, is that you don't know that this thing has a sign. But you replace that by the fact that you know that it's bounded.
Okay? So you have to split the thing into the positive and the negative part, and then argue with those. And I gave the argument here. But let's assume that we believe this.
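The real-variable lemma behind this can be stated as follows (a sketch of the standard argument, with hypothetical names \(f\), \(C\), \(\delta\)): if \(|f| \le C\) on an interval \(I\) and \(\frac{1}{|I|}\int_I f \le \delta\), then for every \(\epsilon > 0\),
\[
\big|\{t \in I : f(t) \le \epsilon\}\big| \ge \frac{\epsilon - \delta}{C + \epsilon}\,|I|,
\]
because with \(A = \{f \le \epsilon\}\) one has \(\delta|I| \ge \int_I f \ge \epsilon\big(|I| - |A|\big) - C|A|\). In particular, as \(\delta \to 0\) the set where \(f\) is small occupies a definite fraction of \(I\), even though \(f\) has no sign.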
This is really a probabilistic argument, a real variable argument. So then since this, the measure where it's big is small, and the set of points where this is small is large,
we can combine and find a sequence of points where both things happen at the same time, where both favorable events happen at the same time. That the maximal function goes to zero,
and this thing here maybe will not go to zero, but the limit will be less than or equal to zero, because I have a sign here. So from these two things, okay, so here I'm telling you,
you show that the maximal function, you can find the sequence at which the maximal function goes to zero, and the lim sup of this thing is negative, because it will be smaller than 2 to the minus n for every n, so it will be less than or equal to zero.
The lim sup. So now what happens? Now what happens is that at this sequence of times, because I know this, I can do the decomposition,
but just with an L6 error. Now I will use the fact that for a solitary wave, this thing is essentially zero. So I kill all the solitary waves.
So that tells me that the error has to have this property. Now remember, the error goes to zero in L6. So the L6 part is thrown out.
So here is the fact. For any elliptic solution and any Lorentz parameter, this quantity is zero. So this is a crucial identity that needs to be used here.
And how do you prove that? Well, you calculate. We know that this is zero from the elliptic equation, and then you see how the Lorentz transform affects this. And if you're patient and you do the calculation, you get that this is always zero.
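Assuming the quantity in question is \(\int\big(|\nabla Q_\ell|^2 - (\partial_t Q_\ell)^2 - |Q_\ell|^6\big)\,dx\) (a guess consistent with the solid integral in the virial identity above), the calculation goes like this for a boost of speed \(\ell\) in the \(x_1\)-direction, \(Q_\ell(x,t) = Q\big(\tfrac{x_1 - \ell t}{\sqrt{1-\ell^2}}, x_2, x_3\big)\): at \(t = 0\), changing variables \(y_1 = x_1/\sqrt{1-\ell^2}\),
\[
\int\Big(|\nabla Q_\ell|^2 - (\partial_t Q_\ell)^2 - |Q_\ell|^6\Big)dx
= \sqrt{1-\ell^2}\int\Big(|\nabla Q|^2 - |Q|^6\Big)dy = 0,
\]
since the \(\tfrac{1}{1-\ell^2}(\partial_1 Q)^2\) and \(\tfrac{\ell^2}{1-\ell^2}(\partial_1 Q)^2\) terms combine to \((\partial_1 Q)^2\), and \(\int|\nabla Q|^2 = \int|Q|^6\) follows formally from multiplying \(-\Delta Q = |Q|^4 Q\) by \(Q\) and integrating by parts.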
Using the orthogonality of the parameters and the fact that the L6 norm goes to zero, and for the regular part, there's no contribution because it's regular and this is on a smaller and smaller set, so this goes to zero.
Then we get this thing. And the next thing is, of course,
we use the Morawetz thing. That tells us already that this goes to zero. Because we know that this, divided by t_n, goes to zero in L2 as well: whenever you know that the L6 norm of the error goes to zero, you get that this goes to zero,
then you get rid of this thing. Then you know that this thing is going to zero, and now we see what effect that has on the solitons. Remember that for the solitons, because they're traveling waves, this is zero. And you know that there's a connection between the ell's, the c's, and the t_n.
Therefore, and you know that this thing is concentrating, x over t_n is really ell_j plus a small error in the place where the soliton is concentrating. So that tells me that when I do this calculation on the soliton,
I essentially get this plus a little error, but this is zero because it's a soliton.
So here we're using where the Lorentz parameter and the translation parameter are linked to get this. So I think I'm going to go one more step.
So what we conclude at the end of this is that for this error, we have these two facts. Because on the solitons, you get zero. And on the regular part, you get zero. And you had it on the U.
So then I'll show you next time how from these two facts, we can get all the conditions that I stated at the beginning. So we will start with that next time, and then we will see that from all of these conditions,
you can eliminate the profiles that scatter to infinity in the error, too. And so then you'll get that the error also has its Strichartz norm going to zero. And then you have to come up with an argument to show that the energy norm goes to zero,
because that's what we're looking for. And here comes a new channel of energy argument. Remember that when we discussed the radial case, I said that the channel of energy is not true in the non-radial case. For the linear equation. What happens is that for the linear equation,
once you have the additional properties that these errors have, you can prove a channel of energy. And I will show you how to do that next time. And that's how you then show that the thing goes to zero in energy. But you need all of this preparation
to be able to get to the channel of energy. And it all matches. It was meant to be. OK, so we stop for now then. Thank you for your patience.
This works the same way for 3, 4, 5 and 6. Now, for higher than 6 there's a problem having to do with how much smoothness the nonlinearity has.
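Presumably the smoothness issue is the following (stated here as an aside): the energy-critical nonlinearity in dimension \(N\) is \(f(u) = |u|^{4/(N-2)}u\), and
\[
f'(u) = \Big(1 + \tfrac{4}{N-2}\Big)|u|^{4/(N-2)} .
\]
For \(3 \le N \le 6\) the exponent satisfies \(4/(N-2) \ge 1\), so \(f'\) is locally Lipschitz; for \(N > 6\) it is only Hölder continuous of order \(4/(N-2) < 1\), which is what makes the higher-dimensional case more technical.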
And then it should still be true, but it will be more technical. And I don't think we want to be more technical than what we are already. But it's only that point. So in your normalized estimate you don't really need that factor on the right? You need anything that would give you the little o?
Yes, there's quite a bit of room there. But I should say that the better estimate that you get there, the more control you will have on the sequence of times that you can pick. And the more control you have on the sequence of times that you can pick,
the more chance you have of passing to a general sequence. So that is the real reason why we're desperately trying to get rid of that log even. We would like to show eventually that this infinite integral is convergent.
And that may not exactly be true, but there will be some version of that that will be true. And then if that is true, then you can choose a lacunary sequence of times.
And if you can choose a lacunary sequence of times, then you have much more of a chance of passing to a general sequence. But many years may pass until that happens. What? Well, yeah, but it's been 10 years.