
4/4 On the Mathematical Theory of Black Holes


Formal Metadata

Title
4/4 On the Mathematical Theory of Black Holes
Series Title
Part
4
Number of Parts
4
Author
Contributors
License
CC Attribution 3.0 Unported:
You may use, change and reproduce, distribute and make the work or its content publicly accessible, in unchanged or changed form, for any legal purpose, provided that you credit the author/rights holder in the manner specified by them.
Identifiers
Publisher
Release Year
Language

Content Metadata

Subject Area
Genre
Transcript: English (automatically generated)
So this is a continuation of my lectures, so let me first review what we discussed in the introduction, in which I talked about the final state conjecture. The final state conjecture is a general conjecture about the large-time behavior of general solutions of the Einstein equations in the asymptotically flat regime. It is a huge conjecture which contains many other simplified cases, you could say, but each one of these cases is a huge conjecture in itself. Rigidity is the statement that the only stationary solutions of the Einstein equations in vacuum are the Kerr family, and I talked a little bit about this. Stability is what we are talking about now, which is: if you make small perturbations of Kerr, you stay close to Kerr. As a particular case which is now understood, I talked about the stability of Minkowski space and, sort of, the ideas behind it. I'll mention more as I go, and then today I'll talk about black hole stability. I started already. The conjecture is
this one: if you look at the picture of the Kerr solution — this is a Kerr solution, you look at the exterior of a Kerr solution, this is the horizon, this is scri (null infinity) — you have a space-like hypersurface, and if you look at the induced metric on that space-like hypersurface, you look at the initial data set corresponding to Kerr and make a small perturbation, the conjecture is that you are going to converge to another Kerr solution. The word "another" is very important, because the final state is going to be different from the original state, and of course it is by itself a huge mathematical difficulty to find these final states. So, as I mentioned in fact a few times: these are of course the Einstein equations in vacuum, and g(m,a) is a Kerr metric, so it depends on two parameters, m and a. If you differentiate the family with respect to m, you get a solution of the linearized equations — the linearized Einstein equations — so dg(m,a)/dm is a solution of the linearized equations with zero on the right-hand side, which is non-trivial. In other words, you get essentially a bound state for the linear equation, and the same thing if you take the derivative with respect to a. So these are non-trivial solutions of the linearized equation corresponding to essentially zero eigenvalues, and you expect this to create a lot of problems. In fact you get even more problems, because due to diffeomorphism invariance you can do variations relative to diffeomorphisms, and you get a huge set in the kernel. So you find that the kernel of this linearized Einstein equation has this plus this; in other words the full dimension of the kernel is actually 4 times infinity plus 2. So it's a huge, huge thing, and that of course makes life very difficult. So now, I talked a little bit about
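The kernel computation sketched above can be written schematically as follows (an editor's sketch; signs and normalizations vary across the literature):

```latex
% The Kerr family g_{m,a} solves the vacuum Einstein equations:
\mathrm{Ric}(g_{m,a}) = 0 .
% Differentiating in the parameters m, a gives zero modes of the
% linearized operator L:
L\Big(\tfrac{\partial g_{m,a}}{\partial m}\Big) = 0, \qquad
L\Big(\tfrac{\partial g_{m,a}}{\partial a}\Big) = 0, \qquad
L h := \tfrac{d}{ds}\Big|_{s=0}\,\mathrm{Ric}(g_{m,a} + s\,h).
% In addition, for any vector field X the Lie derivative
% \mathcal{L}_X\, g_{m,a} is a pure-gauge zero mode (diffeomorphism
% invariance), which is why the kernel is infinite dimensional.
```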
the geometric framework, so let me recall it very fast. First of all, this is something very general — Einstein equations in vacuum, and not just Einstein equations in vacuum. You start with a null pair.
This is very important because the null directions in general relativity are fundamental, so you want a geometric description that reflects that fact. So you start with a null pair.
So this is a null vector, this is another null vector, and you normalize them so that the metric satisfies g(e3, e4) = minus 2. Then you look at the horizontal structure induced by this. In other words you look at the space perpendicular to e3, e4, and this does not have to be
integrable. Sometimes it is integrable and that's very useful but in general it's not integrable which creates additional difficulty but very interesting mathematical difficulties. And you define a null frame then to consist of the null pair plus an orthonormal basis
of this space which again does not have to be integrable. So at every point you have a collection of vectors of this type, two null vectors and then the ones which are orthogonal. Of course this space is obviously space-like.
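The normalization just described can be summarized as follows (a sketch using the conventions stated in the lecture):

```latex
% Null pair (e_3, e_4) with the lecture's normalization:
g(e_3, e_3) = 0, \qquad g(e_4, e_4) = 0, \qquad g(e_3, e_4) = -2 .
% Horizontal space: the orthogonal complement of span\{e_3, e_4\},
% spanned at each point by an orthonormal pair (e_1, e_2):
g(e_a, e_b) = \delta_{ab}, \qquad
g(e_a, e_3) = g(e_a, e_4) = 0, \qquad a, b \in \{1, 2\}.
```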
When you have the frame you look at the connection. So you define the connection and as I mentioned last time you really have to decompose a connection into various components relative to the frame and you give them names and if anybody wants I can repeat what
is the definition of this one. You do the same thing for the curvature and you get alpha, beta, rho, rho star, beta bar, alpha bar. So of course, for those who know the Newman-Penrose formalism, this is like Psi 2, I guess — it depends on how you set it up. So these are like Psi 0, Psi 1, Psi 2, Psi 3, Psi 4. Exactly. And everything is real here, right?
It's complex. Correct. So this is in a sense more geometric because I don't need to pick up a particular frame. So all these definitions are independent of this frame that I pick up here. But of course I can also complexify. There is a simple relation between these various descriptions.
And of course it helps when you talk about care, it helps to look at rho plus i rho star in fact. So even in our formalism this is. Then you write down main equations. In other words, you write down the Cartan equations which is derivatives of gamma plus gamma times gamma gives you the curvature. So this
is one system of equations at the level of the gammas. And then you have the Bianchi identities for R. All right, so the main equations are the Cartan equations plus the Bianchi equations. And there's a thing that is important to mention, which I mentioned last time: the S-foliations.
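The two systems just named can be written schematically (a sketch; Gamma stands for the connection coefficients, R for the curvature components relative to the null frame):

```latex
% Null structure (Cartan) equations, schematically:
\partial \Gamma + \Gamma \cdot \Gamma = R ,
% coupled to the null Bianchi identities, schematically:
\partial R + \Gamma \cdot R = 0 .
% Together these form the main system: solve for (\Gamma, R)
% relative to the chosen null frame.
```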
Of course, so these are foliations induced by E3, E4. Now if this thing is not integrable, you cannot talk about foliations. But very often this, for example,
in Schwarzschild or Minkowski space you can pick the null frame so that this is actually integrable. It gives you two-spheres, for example, topologically two-spheres, and these are the S-foliations. So an S-foliation means that at every point in the space-time that you consider you have, for example,
null light cones going in this direction, another null cone going in this direction, and the intersection is a two-sphere. So let's say this is u equal constant, this is u-bar equal constant. Then when they intersect, you actually get a two-sphere,
S(u, u-bar). And of course you then have a frame at every point, a frame which is generated by the family of light cones. So you have a frame on S, you have a vector which we call e4, and one which we call e3. So at every point on the sphere you have such a frame. And so S-foliations play a very important role in the stability of
Minkowski space, as I mentioned last time. All right, so this is the Kerr family again, in Boyer-Lindquist coordinates. So these are the coefficients of the Kerr solution.
Obviously we discussed stationarity, axisymmetry and so on and so forth. Anyway, here of course I want to put in evidence that there exists this null pair, e3, e4, which is defined in terms of the Boyer-Lindquist coordinates, and this is called the principal
null direction. So it's a principal direction because it has some remarkable properties, which I will review again. Anyway, here are the basic quantities, again expressed now as we said, for example, the definition of chi a b is this one. You take e a and e b, which are
the ones perpendicular to E3, E4, and you take the derivative E4. Now this quantity, which has a geometric significance, is symmetric if the span is integrable, but otherwise it's not. So otherwise, for example, in Kerr, if you take, so Kerr is an obvious example,
if you take E3, E4 to be this null pair that I had earlier, then this is not integrable in this case, so the space is not integrable, and therefore these quantities are not symmetric and you get a lot of components. And again, it's very important to keep track of the
components because they have different behavior. The curvature components are defined very easily alpha has two E4s, beta has two E4s and one E3, and rho has two E4, two E3, and so on and so forth by symmetry if you interchange E3 and E4. The basic equation again are the null structure
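The definitions sketched in words above can be written out as follows (one common convention; D denotes the spacetime covariant derivative, and normalizations differ between references):

```latex
% Null second fundamental forms:
\chi_{ab} = g(D_{e_a} e_4,\; e_b), \qquad
\underline{\chi}_{ab} = g(D_{e_a} e_3,\; e_b).
% \chi_{ab} is symmetric precisely when the horizontal distribution
% is integrable; in Kerr, with the principal null pair, it is not.
% Sample curvature components ("two e4's", "two e4's and one e3", ...):
\alpha_{ab} = R(e_a, e_4, e_b, e_4), \qquad
\beta_a = \tfrac{1}{2}\, R(e_a, e_4, e_3, e_4), \qquad
\rho = \tfrac{1}{4}\, R(e_3, e_4, e_3, e_4).
```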
equations, which relate the derivative of gammas to the curvature, and there are some type of equations which are derivative of gamma is equal to curvature plus derivative
of gamma plus gamma times gamma, so this is just a very sort of very simplistic description of what the equations look like. And then the null Bianchi equations, which are equations for components of the curvature, which formally look like this, a derivative in the E4 direction, so this is the E4 direction, this is the E3 direction,
derivative in the e4 direction is derivative of r plus gamma times r, and so on and so forth. So the way to think about these equations — the way we think about both the stability of Minkowski space and the stability of black holes — is that the e4 equations can be viewed as equations along geodesics, null geodesics,
or null curves if they are not exactly geodesics. And somehow if you know already r, so if you have information about curvature,
you can somehow hope to integrate this transport, so this can be viewed as transport equations. Of course they are more complicated because they could be derivative, this is a derivative on the right hand side. Anyway, but sort of very roughly one can think of it as transport equations. This, one can think about it as some kind
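This transport-equation point of view can be sketched as follows (schematic only; the actual right-hand sides carry angular derivatives and lower-order terms):

```latex
% e_4 Bianchi equations viewed as transport equations along the
% integral curves of e_4 (null geodesics, or null curves):
\nabla_4 R = \mathcal{D}\, R + \Gamma \cdot R ,
% so, given control of the right-hand side, one hopes to integrate
% as for an ODE along the curve, parametrized by s:
R(s) = R(0) + \int_0^s \big( \mathcal{D}\, R + \Gamma \cdot R \big)\, ds' .
```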
of elliptic equations on the leaves of the foliation, provided that the foliation is integrable. And then these equations one has to really understand in a very different way, so these are much more complicated; these are the equations where the hyperbolic nature of the Einstein equations has to be taken into account.
And in the case of Minkowski space, we said that somehow these types of equations have to be understood from the point of view of doing generalized energy estimates,
using the symmetries of Minkowski space, or of something close to Minkowski space, in order to derive decay estimates and so on. So this is sort of the main part, in fact, of any kind of construction of solutions of the Einstein equations. All right, so now the crucial fact in Kerr is that, relative to the principal null directions e3 and e4, all components of the curvature
are zero with the exception of rho and rho star, which are given simply by this very nice expression. And then if you look at the Ricci coefficients, again some of the Ricci coefficients
are zero, but not all; there are still lots of components which are non-zero in Kerr. In Schwarzschild, in addition, you get that the horizontal space of e3, e4 is integrable, and that's very nice because now you can do a little bit of Hodge theory on the two-surfaces, which plays a fundamental role in the stability of Minkowski space.
And then you have also that rho star, which is this one, is equal to zero. So in Schwarzschild, you just get one component of the curvature, which is this rho, which is 2m over r cubed, so minus 2m over r cubed, so it's also very easy to calculate.
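The "very nice expression" referred to above can be recorded as follows (the sign convention for the imaginary part depends on the orientation chosen, so treat this as a sketch):

```latex
% In Kerr, relative to the principal null frame in Boyer-Lindquist
% coordinates, the surviving curvature components combine into
\rho + i\,\rho^{\ast} = -\,\frac{2m}{(r + i\,a\cos\theta)^{3}} ,
% which in Schwarzschild (a = 0) reduces to the single component
\rho = -\,\frac{2m}{r^{3}}, \qquad \rho^{\ast} = 0 .
```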
And in addition, you get other components which are zero, so other components of the Ricci coefficients which are zero in Schwarzschild: these are eta and eta bar. In fact, the only non-vanishing components of gamma in Schwarzschild are trace chi, trace chi bar, omega and omega bar. So these are connection coefficients which, if you don't remember,
it doesn't matter. The important thing is to note that if you use a principal null frame, many things really vanish; that's why principal null frames are so important. In Minkowski space — so, once again, you go from Kerr to Schwarzschild to Minkowski — you get even more of
a simplification. In addition, you get that all components of the curvature are zero, and that these two components of the Ricci coefficients are zero. So in fact the only non-trivial components are trace chi and trace chi bar, which have a very simple geometric meaning. So that's the situation in Minkowski space. Is there not a choice in Kerr of a different e3 and e4 where they are integrable, the orthogonal thing? Because the Boyer-Lindquist coordinates have a sphere, so I can take...
the orthogonal thing? Because the Boyer coordinates have a sphere, so I can take... Yeah, sure, but... The so-called Kino-Slate, that's what I'm talking about.
Well... Because you want the symmetry here. In order to still get the integrability, you mean. But you lose the diagonalization. Yes, you lose something. You lose and get something else. Yeah, sure. So there is always a trade-off that you might want to use. Yes, that's true. But still, I believe that these are
fundamental. I mean, maybe you want to construct, maybe you want to have this principal null frame or something close to the principal null frame, and from it you construct the other one, which is integrable, right? I don't know. Is the null frame null, which is integrable? Yeah, yeah, yeah. Certainly. Is the principal real? How do you define principal?
Principal is defined, of course, in terms of Kerr. So it's principal in Kerr in the sense that the curvature diagonalizes with the exception of rho and rho star: all components of the curvature are zero except rho and rho star. Okay, I thought there were other frames which were symmetric between past and future, but integrable — but maybe I'm misremembering.
Well, I mean, we can talk. All right, so here is now the point of perturbations. So I want to perturb, of course, Kerr. So in a simplest possible approximation, I want to think about having a solution of the Einstein equations
where there exists some frame, right, which is close to the principal null frame of Kerr, say, and such that all components which are zero in Kerr relative to the corresponding frame
are now O of epsilon, right? So I have an O of epsilon perturbation of E3, E4 of Kerr, and then therefore this is an O of epsilon perturbation of these various components of curvature and Ricci coefficients. So this is a definition. So it's a very simple definition. It's very naive, of course, but that's the simplest I can think of,
of what you mean by an O of epsilon perturbation of the spacetime. Now, the problem is, of course, you don't know which frame you are talking about. In fact, there are infinitely many frames that you can use. So if I have one frame which is good for which I have this, I can make a frame transformation.
I can make a general frame transformation which takes a null frame into another null frame. In other words, I go from e3, e4 to e3 prime, e4 prime; they change like this, and e_a prime changes like this. And then these ones will also change, and I get another O of epsilon.
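A general null frame transformation of the kind described can be sketched as follows (shown only to first order in the transition functions f, f-bar and the scalar lambda; the exact formulas carry quadratic corrections):

```latex
% General null frame transformation (leading order; "\dots" stands
% for quadratic corrections in f, \underline{f}):
e_4' = \lambda \big( e_4 + f^{a} e_a \big) + \dots, \qquad
e_3' = \lambda^{-1} \big( e_3 + \underline{f}^{a} e_a \big) + \dots,
\qquad
e_a' = e_a + \tfrac{1}{2}\,\underline{f}_a\, e_4
           + \tfrac{1}{2}\, f_a\, e_3 + \dots
% The transformation preserves the nullity of e_3', e_4' and the
% normalization g(e_3', e_4') = -2.
```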
So in other words, there are infinitely many possibilities. Of having frames like this. So which one do I choose? And of course, as I mentioned last time, the gauge condition... If I don't have a correct gauge condition, I have no chance to prove stability of the care solution. So the gauges are fundamental. So finding the correct gauge is really the heart of the problem.
Okay, so now the remarkable fact about this: if you look at the way these things transform — so I want to calculate how every component transforms relative to these frame transformations — I find something remarkable. I find that alpha and alpha bar are O of epsilon squared invariant.
In other words, if I start with a frame, I calculate alpha and alpha bar in that frame, and I make this change, I observe that the difference between alpha prime and alpha is O of epsilon squared; the same with alpha bar. So these are O of epsilon squared invariant, and in a certain sense they do not depend on the choice I make.
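The invariance statement can be put schematically like this (a sketch under the smallness assumptions stated in the lecture):

```latex
% If the perturbation and the transition functions f, \underline{f},
% \lambda - 1 are all O(\epsilon), then under any frame change
\alpha' = \alpha + O(\epsilon^{2}), \qquad
\underline{\alpha}' = \underline{\alpha} + O(\epsilon^{2}),
% i.e. \alpha, \underline{\alpha} are gauge invariant to leading
% order, while components such as \beta pick up O(\epsilon) shifts
% through the nonvanishing background \rho in Kerr.
```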
That's extremely important, as we shall see. At which stage do you construct a full coordinate system? Because the frame is not a solution at the end. Yeah, you also construct. You do it at the end? Yeah, essentially at the end.
Well, you have to do everything at once, in a sense. I mean, you cannot... Yeah, non-linear equations, everything has to... So in any case, the other observation is that for perturbations of Minkowski space, all curvature components are O of epsilon squared invariant. In other words, in perturbations of Minkowski space,
rho and rho star are also O of epsilon, and if I do these kinds of transformations it's very easy to see that all components of the curvature are O of epsilon squared invariant, and this is one major simplification in the stability of Minkowski space. All right, and I talked a little bit about it last time.
So last time I talked about the stability of Minkowski space; let me maybe mention a few things about it. The fundamental point in the stability of Minkowski space is that, exactly because the curvature is almost invariant — it is O of epsilon squared invariant relative to perturbations —
I can look at the Bianchi identities. So we have, say, dr and this dr equal to zero. So if you remember that I said that this kind of pair, the Bianchi identity pairs, can be viewed as some kind of Maxwell equation.
There was also an energy type momentum tensor which has four indices, and such that the divergence of it is equal to zero. And then therefore you can construct energy norms
and analyze these equations like a Maxwell system, use the symmetries of Minkowski space, using in fact approximate symmetries, because obviously perturbations don't have symmetries anymore, but they have approximate symmetries, and that's how you treat the hyperbolic character of the Einstein equations in Minkowski space.
Once I understand this, then I have, of course, this has to be done together with something else, which is a construction of a frame, and you construct by using a time function t and an optical function u.
So in other words, a time function which is maximal, so this is a maximal time function, and u verifies the eikonal equation. So in fact, actually, we solve the Einstein equations, so this is important. You solve the Einstein equations together with this u,
so g alpha beta d alpha u d beta u is equal to zero. So you really have to think that you solve both, and of course you also solve for t, in the original proof of the stability of Minkowski space. So in other words, you have to construct these together,
and then once you have these two functions, they give you a foliation, because light cones, of course, intersect with time, so u equal constant intersects t equal constant in a two-surface, and then I can use these two-surfaces and u and t in order to construct a null frame,
which is perpendicular to the sections. This would be the section s t u, and then you define the connection coefficients from it, and then, very importantly, you construct vector fields. You construct vector fields which are built based on the frame,
and you use these vector fields, in fact, to take Lie derivatives of the curvature here. So you commute the vector fields with this, and, well, you'll get lower-order error terms, and so on and so forth.
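As a sanity check on the eikonal equation for the optical function mentioned above, here is a small symbolic sketch. It uses the fixed Schwarzschild metric rather than a dynamical one, and the standard tortoise coordinate; both are assumptions for illustration only:

```python
import sympy as sp

t, r, M = sp.symbols('t r M', positive=True)
f = 1 - 2*M/r                          # Schwarzschild factor (assume r > 2M)
rstar = r + 2*M*sp.log(r/(2*M) - 1)    # tortoise coordinate: dr*/dr = 1/f
u = t - rstar                          # candidate outgoing optical function

# Inverse Schwarzschild metric: g^tt = -1/f, g^rr = f; angular terms drop
# out since u does not depend on theta, phi.
eik = -sp.diff(u, t)**2 / f + f * sp.diff(u, r)**2
eik = sp.simplify(eik)
print(eik)  # 0: u = t - r* solves g^{ab} d_a u d_b u = 0
```

The level sets of u are exactly the outgoing light cones of the discussion above.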
That's more or less what I explained last time. The very important idea behind all this, which I explained last time, is that you can get decay not by using fundamental solutions, which is a very complicated thing,
and you run into lots of difficulties if you try to use it, but rather this vector field method, which I'll mention more later. It's a robust method which allows you to derive decay and at the same time to derive energy estimates and so on and so forth.
In any case, let me turn to the differences between Kerr stability and stability of Minkowski space. Some null curvature components are non-trivial, and as a consequence, you cannot use this Bianchi system anymore. The Bianchi system will fail because there will be bound states for it.
If you try to solve this, you run immediately into trouble. So this kind of methodology unfortunately doesn't work. All other null components of the curvature tensor are sensitive to frame transformations. This is what I mentioned earlier. Alpha, alpha, bar are invariant,
but unlike Minkowski space, where all curvature components are invariant up to epsilon squared, this is not the case here. Principal null directions are not integrable. That's another huge difficulty. And then you have to track dynamically the parameters of the final Kerr
and the correct gauge condition. This is of course the most difficult part, because if you are not in the correct center of mass frame, you don't have decay, and therefore you cannot conclude anything about the nonlinear equations.
You cannot close. Finding the correct center of mass frame and finding a way to track down the final parameters is one of the main difficulties in proving Kerr stability, which of course you don't have in Minkowski space.
And then finally, this is another thing, which is that even if you look at very simple equations, the wave equation in Kerr for a scalar phi, they just look at the simplest possible equation, which you can think of it as some kind of simplified linearization.
I mean, it's much simpler of course than the full linear system satisfied by the linearized equations. But you can start looking at this, and this already has lots of difficulties, as I shall discuss. All right, so these are the things that you have to worry about if you want to prove the stability of black holes.
So are there any questions? All right, so let me continue then. Okay, so there has been a lot of progress, and I'll try to talk about the most important steps.
And of course, obviously I will not be able to review absolutely everything that has been done. But in my view, the most important conceptual contributions to the understanding of the stability problem
starts with Teukolsky, who proved in 1973, again based on the work of many other people from before, which I'm not going to talk about. In any case, he showed that these extreme curvature components, alpha and alpha bar, which are O of epsilon squared invariant, and which are already more interesting than anything else,
verify, again up to O of epsilon squared error terms, decoupled linear wave equations. So in other words, in linear theory, alpha and alpha bar verify some equations.
Of course, the equations are not so simple, because you could have terms of first order, and you could have terms of second order, where you have derivatives multiplied by something. So these are the equations, and of course, something similar happens for alpha bar. But in any case, they are not coupled to the other components of the curvature,
and they are not coupled to the Ricci coefficients, and so in that sense, these are quite remarkable. And it turns out it's not so difficult to show this. It has something to do with these invariance properties of alpha and alpha bar. But the unfortunate thing about alpha and alpha bar
is that the equations are non-conservative. You cannot find a good conservation law here. They are not derivable from a Lagrangian. Still, they are useful, for example, in terms of showing that there are no exponentially growing modes. So you can analyze this
and show that there are no exponentially growing modes, but this by itself, as we discussed many times, is far from being enough to do anything in terms of nonlinear equations. But in any case, this has led to Whiting's result of 1989, where he showed that the Teukolsky linearized equations
have no exponentially growing modes. Some of this had been done by Teukolsky for a few modes, but Whiting was able to do it for all modes. I think this was the main contribution of Whiting. And then, later, there was Shlapentokh-Rothman in 2014, who actually proved a slightly stronger result than Whiting.
He showed some kind of quantitative mode stability for this equation. This result was then used in the remarkable work of Dafermos, Rodnianski and Shlapentokh-Rothman in 2015, which used a new vector field method I'll mention in a second,
together with Shlapentokh-Rothman's result, to deduce quantitative decay estimates for this. So what I mean by quantitative decay: remember that I said many, many times, it's not enough to just show that these equations are well behaved. You have to actually derive quantitative decay,
which you can later hope to use in order to close the non-linear terms, because you have to control the non-linear terms. For that, you need decay. Anyway, so this type of result... for this, I'll mention a little bit more in a second, so maybe I'll continue right now. Is this related to the fact that there are transformations
between the Teukolsky equation and the Regge-Wheeler equation? Yes, I'll mention it in a second. Yeah, yeah, this I'll mention. All right, so this is what I'll call the first sort of important conceptual breakthrough. Another important breakthrough is the classical vector field method.
So while the first one is due to physicists, this one is due to mathematicians. The classical vector field method is a non-perturbative method based on using the continuous symmetries of Minkowski space and adapted higher-order energy estimates,
which you build by using the symmetries to derive robust uniform decay and peeling. So this is what I discussed last time: if I have, say, in the simplest case, just the wave equation in Minkowski space, of course you could derive the decay properties of the solution using the fundamental solution,
but that is very hard to reproduce if you have a perturbation of the metric here. In exchange, the vector field method is a method where you commute the equation with a class of vector fields, which correspond to the symmetries of Minkowski space, do energy estimates,
and then from the energy estimates you get the decay. Okay, so this is sort of a robust way to get decay without doing expansions or anything of that sort. All right, so that led also to peeling. In other words, you don't just get the decay estimates, you also get that various derivatives
of the solution of the wave equation has different decay properties, and this method also can be generalized to the Maxwell equation and more complicated system of equations where you get the peeling corresponding to that, again, without using the fundamental solution in any way.
You're just using the symmetries of Minkowski space. Then there was another important thing, in connection with the classical vector field method, which is the null condition: a structural, gauge-dependent condition on the quadratic part of a nonlinear system of wave equations, which ensures global regularity.
So you identify a certain structure of the quadratic terms; it suffices to look at the quadratic terms in the nonlinearity, and you can immediately see, it's very easy to see, whether the null condition is verified or not in that system of coordinates. But in general, as I mentioned last time,
if you have a more complicated system, this null condition very much depends on the gauge choices you make. So in some gauge you can have a null condition, in another gauge you may not. So this is a gauge-dependent thing. Then there is the nonlinear stability of Minkowski space, which is based on these ideas, so it uses generalized energy estimates,
approximate symmetries, and this allows you to get decay estimates for the curvature tensor, and once you get decay estimates for the curvature tensor, you also get them for the gammas by using the Cartan structure equations. Anyway, so that's roughly the classical vector field method in a nutshell.
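The textbook model pair illustrating the null condition (this example is standard, not something specific to the lecture):

```latex
% Violates the null condition: small-data solutions can blow up
% in finite time (F. John):
\Box \phi = (\partial_t \phi)^2 .
% Satisfies the null condition: small-data global existence
% (Klainerman, Christodoulou):
\Box \phi = (\partial_t \phi)^2 - |\nabla \phi|^2
          = -\, m^{\alpha\beta}\, \partial_\alpha \phi\, \partial_\beta \phi .
```

The second nonlinearity is a null form: it vanishes when both derivative factors are tangent to the same null direction, which is exactly what tames the slowly-decaying "wave zone" interactions.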
Here you use the fact that these are Killing vectors of the... Minkowski space. Can't you use the fact that there is a Killing tensor for Kerr? Well, people are starting to use these sorts of things, but it's not yet clear how far you can go.
Yes, absolutely, this has been used. So Andersson and Blue have used these sorts of things. All right, so now... So this is now the new vector field method. So when you talk about black holes, the situation is more complicated, because you don't have enough symmetries.
So the symmetries of the Kerr solution are much fewer than the symmetries of Minkowski space. So the same sort of thing: you can still use the symmetries, but that's not enough. You have to use something else, and that's sort of what the new vector field method is. Again, this was developed by mathematicians in the last 15, 16, 17 years, and it's based on...
So let me try to explain a little bit, because I think it's very interesting. So let's look at a picture of a black hole, right? So this is the exterior of the black hole. This is the horizon. This is scri. I'm looking at the region, maybe slightly inside the horizon,
I mean inside the black hole, which is r equal r_H. So let's say that, for simplicity, it's Schwarzschild. In Schwarzschild, r equal r_H is r equal 2m; this is the horizon, r equal to 2m. Then there is this other thing here, r equal r star, which is r equal 3m.
So as I mentioned earlier, this corresponds to null geodesics which stay here forever, in the middle. And then there is scri, and so on and so forth. So suppose you want to analyze just the wave equation in Schwarzschild, box phi equal to zero.
This is the simplest possible linear equation that you could want, and you realize that you have a lot of new difficulties that you didn't have in Minkowski space. The simplest is a difficulty exactly along the horizon, which is due to the fact that on the horizon, the vector field dt,
let's call it capital T, becomes null. If you look at the energies associated to this, the energy is always constructed based on this vector field, on d over dt. If you look at the corresponding energy, you get a degeneracy. So you get a degeneracy at the horizon.
So then you have to do something. So the other thing that you have is this trapped null geodesics. This trapped null geodesics leads to a huge difficulty, which is natural because you expect in geometric optics, you expect that there has to be something wrong here.
And in fact, if you look at the energy estimate that you want to do here, you'll see that there will be a degeneracy here, exactly along r equals 3m. And then, of course, there are all the issues at infinity. But in this part, you can argue as in Minkowski space. So the new methodologies that people have discovered
is that somehow it pays to still look at vectors. So it's still based on vector fields. You don't use a fundamental solution. It's still based on vector fields. But you construct a new class of vector fields, which are not necessarily causal. So for example, you can find a good vector field,
which takes into account the region near the horizon. These are called redshift-type vector fields, which were introduced by Dafermos and Rodnianski. Then there is a region here at r equal r star, which is really the most difficult because of this degeneracy. This has been taken into account by lots of people
in the last 15, 16 years, and has led to methods based on the so-called Morawetz vector field. So these are global estimates for solutions
of the wave equation, which degenerate here and degenerate here. And because of this, you have to combine this type of vector fields. You have to combine it with vector fields, which are good here in this region, and vector fields, which are good here in this region.
And this is sort of a much more engineering type of approach than the old vector field method, which was more global. Here, in every region, you find something that works, and then you put them together somehow. You always have to have something which is global.
In this case, it's the Morawetz estimate. But again, the Morawetz estimate may degenerate here and here, and therefore you have to combine it with something in this region and something in this region. So again, as I said, this region is more like the case of Minkowski space. Anyway, this was sort of a design type of vector fields,
which led to the ability to prove results for the wave equation in Schwarzschild. If you have Kerr, it's even more complicated. So instead of Schwarzschild, you have Kerr; it's even more complicated, because near the horizon
there is, in fact, an entire region, which is called the ergoregion, which I mentioned a few times, in which this vector field T actually becomes space-like. And that leads to many more analytical difficulties,
which have been resolved. In particular, they have been resolved in this result that I mentioned here. Okay, so that's the situation. The new method has emerged in these last 15 years in connection with the study of boundedness and decay for this type of equation. There were many partial results,
starting with Soffer and Blue in 2003. So this is already 15 years. And then many, many others. But the final result was proved by Dafermos, Rodnianski and Shlapentokh-Rothman, which deals with the full range a less than M in Kerr. All right, so now, the third breakthrough,
important breakthrough in our understanding today, is the result, so this is what you mentioned, the result of Chandrasekhar, that there exists a transformation which takes this alpha, which verifies a Teukolsky equation, which is non-conservative, into a tensor,
a new tensor P, which can be calculated from it. In fact, it involves two derivatives of alpha. You need two derivatives of alpha to get P. And this P verifies a Regge-Wheeler type equation. In other words, it's a wave equation for P plus a potential times P equal to zero.
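To see the connection between the Regge-Wheeler equation and the trapping at r equal 3m discussed earlier, one can locate the peak of the standard Regge-Wheeler potential numerically. The potential below is the standard one; the crude grid search is only an illustrative sketch:

```python
# Regge-Wheeler potential V(r) = (1 - 2M/r) * (l(l+1)/r^2 - 6M/r^3), M = 1.
def rw_potential(r, l, M=1.0):
    f = 1.0 - 2.0 * M / r
    return f * (l * (l + 1) / r**2 - 6.0 * M / r**3)

def peak_radius(l, M=1.0, n=200_000):
    # grid search on (2M, 6M); fine enough to locate the maximum to ~1e-4
    rs = (2.0 * M + 1e-3 + (4.0 * M) * k / n for k in range(n))
    return max(rs, key=lambda r: rw_potential(r, l, M))

peak2 = peak_radius(2)     # l = 2: peak near r = 3.28 M
peak20 = peak_radius(20)   # higher l: peak approaches the photon sphere
print(peak2, peak20)
```

For large l the peak tends to r = 3M, matching the geometric-optics picture of null geodesics trapped on the photon sphere; this is the source of the degeneracy in the Morawetz estimates there.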
So this is an observation which was already made by Chandrasekhar, but the full use of this transformation, in terms of actually getting real estimates, is due to Dafermos, Holzegel and Rodnianski in 2016,
where they derive a physical-space expression for this transformation (Chandrasekhar's original transformation was based on modes), but anyway, this is maybe not so important. The more important thing is that they used the equation, they used their methodology,
the methodology that has emerged in these last 15 years that I mentioned earlier, in order to analyze the decay rates for P for this equation. They get uniform decay rates for this. And then once you have a full understanding of P, you can go back, you can revert and go back to alpha
and get estimates for alpha. And of course also estimates for alpha bar because there is something similar from alpha bar to P. Anyway, and then this is used as a first step to prove the linear stability of Schwarzschild, which I mentioned in a second. Okay, then recently,
there have been even more interesting developments, which is that something similar can be done to control the Tchaikovsky equation, even in care if A is sufficiently small. So for sufficiently small, there is now a way of complementing this observation of Chandrasekhar or something slightly more complicated,
but which still allows you to analyze and to get the decay estimates for this type of equation where you are already, sorry, in other words, you start with alpha Tchaikovsky equation and you get a system, something more complicated,
which you can analyze and then you can go back and get estimates for alpha. So this can be done now for care for small a, which is clearly very important. By the way, contrary to Rodion's key, Tchaikovsky is written with a Y at the end. Ah, okay, sorry. Okay, good, I will change.
All right, so these are, again, new results of Dafermo Sotsegar-Onyanski and a student of Anderson in last year called Ma. Okay, so now, Linea Estabito Schwarzschild,
so this is: once you understand this type of transformation, you can show that Schwarzschild (Kerr with a equal to zero) is linearly stable, quantitatively, in the sense that you get real decay estimates,
uniform decay estimates, which is immensely important if you want to do non-linear theory, once we mod out the unstable modes related to the two-parameter family of nearby stationary solutions (this is what I mentioned at the beginning) and linearized gauge transformations.
So in linear theory, of course, it's much easier to do this, but once you do that, you can show that everything decays appropriately. You get bounds and decay for all the quantities. Okay, so this is done by using
the Chandrasekhar transformation. You derive from it an equation which, as I mentioned, you can analyze, and then from it you get alpha and alpha bar. And then once you have this, then you see, these are gauge independent in some sense, alpha and alpha bar. At least in linear theory, they are completely gauge independent, but not the other quantities.
So the other curvature quantities are not, and also, of course, the Ricci coefficients. So reconstruction means that you have to now find appropriate gauge conditions. So this is only going to work if you now impose gauge conditions. You find appropriate gauge conditions to derive bounds and decay for all other quantities
of the linearized Einstein equations of Schwarzschild. So this is basically what's done. All right, so now there are some additional results based on different approaches by Hung, Keller, and Wang in 2017, and then based on wave coordinates in
Hung, Johnson in 2018 by Hung and Johnson. Okay, so summary of what we understand so far. We have tools to control, in principle, the main curvature quantity p. So remember, p is obtained by going from alpha to p by sort of a second-order operator.
All right, so this verifies a nice equation. But of course now, in nonlinear theory... so this Chandrasekhar transformation yields a nice equation here, with zero in linear theory. But in nonlinear theory, of course, there will be a huge number of terms
on the right-hand side, which are very complicated, in fact. But at least we know that they are going to be quadratic. So they are quadratic in small quantities which vanish in Schwarzschild. All right, so this we have tools in principle to control the invariant quantities alpha alpha bar,
because again, if I know p, I can go back to alpha alpha bar. So what remains to be done? Find quantities that track dynamically the mass and angular momentum. Find an effective dynamical method to fix the gauge problem. Determine the decay properties of all important quantities and close the estimates of the full nonlinear problem.
So now what time is it? Let's see. Ah, okay, so I can go on. All right, so let me talk a little bit about the nonlinear problem.
Going from linear to nonlinear, so we understand something about linear stability of Schwarzschild. And now we want to go to nonlinear.
And in the first approximation, you may also want to do first Schwarzschild, because it's a little bit simpler. As we have seen, Schwarzschild is much simpler. All right, so now the major difficulty, I mean, there are lots of difficulties, of course, to go from linear to nonlinear.
But in particular, one of the unpleasant things about doing nonlinear theory, nonlinear stability of Schwarzschild, is that if I start with initial data which are close to Schwarzschild, I'm not going to converge to Schwarzschild again. I'm going to go to the final state which will also have angular momentum, right?
So the final state, AF and MF, even if I start with 0M here, a perturbation of 0M, I will converge to a final state which has an angular momentum. And therefore, I cannot really study,
it doesn't seem like I can study stability of Schwarzschild without understanding the full stability of Kerr, at least for small a. And then Kerr has many other complications, and you would like to separate complications because otherwise you will never be able to do anything if you try to do everything at once. So we really want to separate Schwarzschild still.
So the question is, is there a way to impose conditions so that the final state is still Schwarzschild? So it turns out that there is a simple way to do that, which is by imposing some symmetries on the solution.
So symmetries, and that's what I want to talk now. So I want to assume that my initial data have certain symmetries. So in other words, I want to look at the restricted stability of Schwarzschild. And let me recall a little bit
how you take into account symmetries if you are in general relativity, so for solutions of the Einstein equations. So let's assume that we have a spacetime which verifies the vacuum equations,
but I want to assume also that there is a Killing vector field Z. So Z would be a Killing vector field, and I want it to correspond to a rotation, in fact. So assume that I have a rotational Killing vector field. Then there is a very general construction,
which is done by taking this g of ZZ, which you call X, and forming an Ernst potential. So the Ernst potential is a scalar, X plus iY.
Y can also be defined very easily: you take the derivative of Z, contract with the spacetime volume form, and contract with Z again; schematically, d mu Y equals epsilon mu nu rho sigma Z nu D rho Z sigma, so the derivative of Y is the twist of Z.
Okay, so once you have that, it's very easy to see that this combination, which is called the Ernst potential,
verifies a wave equation. So you get X times box phi equals D mu phi times D_mu phi, where phi is X plus iY. And moreover, you can also show that the original metric can be reduced
into a component h which is now only 2 plus 1 dimensional and this complex scalar phi, and that together they verify a system of wave equations like this. So Ricci of the reduced metric is expressed in terms of the phi, and the wave equation with respect to the metric h of phi
verifies an equation like this. All right, so this is the simplest thing where you assume axial symmetry and you get a simplified system of equations. But this is not good enough because in reality I want to start with Schwarzschild. In the case of Schwarzschild, y would be 0. In fact, it turns out that
if you start with y equal to 0, you stay 0. So that's what is called polarization. So axial symmetric polarized means that I also assume that y is equal to 0. And then if you start with this component 0 initially, it stays 0 for all time. That's easy to see.
And therefore, you ensure that you stay polarized for all times. And in that case actually, so in the case of polarization, the metric, the spacetime metric takes this very simple form which is x times d phi squared plus g a b dx a dx b.
In other words, in coordinates t, r, theta and phi. So you see that, relative to the metric, this component is completely decoupled from the other components.
And therefore, I can think about this as being my true metric now, which is only two plus one dimensional. It's Lorentzian and it's two plus one dimensional. So there is some kind of reduction to a lower dimensional situation. And the equations for the curvature of this metric,
so I'm looking at the Ricci curvature of this reduced metric, are coupled to the scalar through this simple equation, involving the second derivatives D_a D_b of the scalar, and the scalar itself verifies a wave equation with respect to the reduced metric. This is the kind of coupled system that you have to satisfy.
You also see that the scalar curvature of this metric is equal to 0, from this very simple fact. This is what I want to do. The important thing here is that you stay in Schwarzschild. So if I start in Schwarzschild,
where Y is indeed equal to 0, I stay in Schwarzschild. This is the simplification which really allows me to talk about the stability of Schwarzschild. Actually, as we shall see, it turns out that these equations are really not that important. You might think that this is what I should use now,
because I have simplified the equation, I have just a simple wave equation here, and then all the metric g can be recovered from this equation, if I know phi. Of course this system is coupled, but it turns out that actually it's not very helpful.
Everything that I'm going to discuss now is done in general, more or less. Only you have to remember that at some point, in some situations, I have to take into account polarization. Polarization is going to be used, but most of the time it's not used.
Most of the time I have to use the same kind of thing that I would have to do in any other stability problem, for Kerr, for example. All right, so this is maybe the statement of the result, and maybe we can take a short break. Here is a result that I have with Jérémie Szeftel, which is that small axially symmetric polarized perturbations,
polarized in the sense of given initial conditions, of an exterior Schwarzschild metric, have maximal future developments converging to another exterior Schwarzschild metric, which is given by this final mass m infinity,
which is of course different from the one I started with. And this is a picture I'll talk more about after the break. This is a picture of the space we construct. Actually, we start with initial conditions on two null hyper surfaces.
I'll explain why we are allowed to do that after the break. So you have to imagine that you have initial data here and here, and you construct the space-time all the way to scri (null infinity). Scri is complete.
From scri you see the horizon. Of course, the horizon you can only find after you have constructed the whole space-time, because it has to come from this point at infinity on scri. So you construct the horizon. Here you have, let's say, something like an apparent horizon, maybe.
And then in addition, I have to construct this time-like surface t, which has a certain role which I'll mention next time. That's basically the Penrose diagram of the space-time we construct.
And now I think it's a good time to take maybe three or four minutes break, and then I'll explain this result. So this is the statement, and I'll try to describe more in detail what's going on. The geometric features of this construction is, first of all,
you have an optical function, which is u. So the optical function, I recall, is a solution of the Eikonal equation. But of course, whenever you talk about... Maybe I should erase things here, because there are too many...
I don't know yet the solution, so being optical means what exactly? Yeah, so let me explain.
So as I mentioned earlier, we have to solve the Einstein equation together with some...
Sorry: g alpha beta d alpha u d beta u is equal to zero. And in fact, there will be two such functions, u bar also.
Namely, the way to think about it is that when I solve this equation, of course I have to initialize it somewhere. This one we initialize it on the initial data, because we have an initial data set, but this one has to be initialized also somewhere, and we initialize
here at i+. Now in reality, we shall see that it's actually initialized in physical space, but in a first approximation, you can think about it being initialized here on i+, on scri, and then the level sets of u are at 45 degrees, and they go all the way to where they meet this T.
So T is some time-like surface, which is not too far away from R equal to 2m, which is 2m0, which is the original Schwarzschild. We can still talk about the Schwarzschild. This is of course not the event horizon anymore of the space time we construct, but it's an event horizon of the one we started with. So T is somewhere to the right of the
horizon, and it will be below 3m. We make sure that it is below 3m, though it's not that important. But in any case, we certainly take it below 3m. So again, U is initialized here,
and then from here when I reach T, I start the U bar, which goes in this direction. So of course, we cannot go all the way with U, because you would not be able to cover the entire
region of interest. So as a consequence, we just go to T, which is the time-like surface, and then we move in this direction. So this will correspond to U bar. So optical function U bar in M int. So M int is whatever is to the left of T. M x is whatever is to the right of T. So T
is some kind of time-like surface which is used in order to distinguish between a region where we have to go this way and the region where we go this way. So we have an outgoing geodesic variation from here and an incoming geodesic variation from here,
and we define null frames. You can define null frames here based on this null geodesics. And if I have a light cone, I have the null geodesics, and let's call it, say, E4,
and then I can define an affine parameter s with e4 of s equal to 1. So e4 of s equal to 1 will give you a foliation of all these light cones, and therefore we'll have a null frame
here and the same thing here. We'll have a null frame here. So this is a way to define the gammas in the exterior region, gamma in the interior region. Okay, so that's enough about this. Now, in reality...
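Schematically, the objects just described can be summarized as follows. This is a sketch in the standard double-null notation; signs and normalizations may differ from the conventions used on the board.

```latex
% The optical functions solve the eikonal equation:
g^{\alpha\beta}\,\partial_\alpha u\,\partial_\beta u = 0, \qquad
g^{\alpha\beta}\,\partial_\alpha \underline{u}\,\partial_\beta \underline{u} = 0.
% e_4 is the null geodesic generator of the outgoing cones, with affine
% parameter s normalized by
e_4(s) = 1,
% and the adapted null frame (e_1, e_2, e_3, e_4) on the level spheres satisfies
g(e_3, e_4) = -2, \qquad g(e_a, e_b) = \delta_{ab}, \qquad
g(e_3, e_a) = g(e_4, e_a) = 0, \quad a, b \in \{1, 2\}.
```

The level sets of u foliate the exterior region M ext by outgoing cones, and those of u bar foliate the interior region M int by incoming cones, matched along the time-like surface T.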
The data are characteristic Cauchy data? Yeah, okay, so this is something that I will explain in a second why we are allowed to do that. The reason is that if I start with a space-like hypersurface, sigma zero, then I know from a result with Nicolo that if I go sufficiently far
towards infinity, so this corresponds to I0, if I go sufficiently far towards I0, then the data here becomes sufficiently small so I can construct a piece of spacetime
all the way to a null cone. Therefore, this part we can assume that it's already understood, and then this part here, from here, also it's sort of a finite region where, again, instead of looking here, starting with a space-like hypersurface, I can look here.
So therefore, I can assume that my data is given on null hypersurfaces. Okay, so the data again is here and here. The important point now is that the space-time is constructed by a bootstrap procedure. I don't know yet that I can reach infinity,
so I have to keep enlarging my space-time until I reach infinity. This is done by a bootstrap, so somehow you have to think about the fact that at any given time the space-time under consideration is only this space-time.
So I'm only going up to a finite u bar star and a finite u star. But then, in addition, I have a space-like hypersurface which we call sigma star, which is this one, and the initialization, instead of being done here, because I have not yet reached
scri, I'm going to do it from here. In other words, I construct the u foliation this way and the u bar foliation this way. So the space-time is constructed like this, and then I'm going to enlarge the space-time. I'll show that if I reach a certain
stage, in reality, because of my a priori estimates, I can go a little bit further. So this way I go all the way to infinity. So the idea of the bootstrap is that you make certain assumptions and then you show that actually you can do much better, and therefore there is no reason to stop here. You can go further. That's how it works.
The key features of the construction are these: first of all, the Hawking mass plays a fundamental role in defining the final mass. This is a well-known concept in general relativity.
You define it by using trace chi and trace chi bar. So remember that we have all these quantities, chi, chi bar, eta, zeta, eta bar, and so on and so forth, and trace chi, trace chi bar are just simply the traces of chi and chi bar. Of course, the situation
in which we are now is one in which we have a foliation by two surfaces. So all these quantities can be easily defined. Then the Hawking mass is obtained by taking
this quantity, mh over r. So r, I should say, on any two-surface, you define r to be the area radius. In other words, 4 pi r squared is equal to the area of the corresponding two-surface at that point, at that particular point. So at every point you
have also an r, and therefore this is defined this way. Of course, this is the integral on the corresponding surface, so I have a surface here, take the corresponding surface, I take the integral, this defines the Hawking mass. By the way, Yau is supposed to have defined an improved version of the quasi-local mass. Has it been useful in mathematics or not yet, or do you think it is not useful? I don't think it's useful. I mean, I will explain to you why it's not useful. I don't think it's useful. But don't tell it to him, because he'll get very upset. Ah, sorry. Okay, so let me say it again.
Yeah, so it's somehow, you have to tie it to real constructions, otherwise it's too general. So in that sense, it's interesting, but it's hard to imagine at this stage how
it will be useful. So in any case, it's not useful here. So m infinity, you can then define, once you go all the way to infinity, to scri in other words, you can define the final m to be just the limit
of mh. So on any u, you get an mh here, and you take the limit in this direction as u goes to infinity, and then you get the final mass. So the final mass is what you get here.
But of course, after you have constructed the whole thing. But this can be defined. The beautiful thing about the Hawking mass is that you can define it locally. You can derive equations for it which are quadratic on the right hand side, so they are very robust.
And the fact that the limit exists, once you have constructed the space... of course, you have to construct the whole space time to do that. But once you construct the space time, you immediately identify the Hawking mass, sorry, the final mass, which is this one, by taking the limit. So again, you take the limit as r goes to infinity, and then you take the limit as u goes to infinity, and you get the final mass.
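For reference, the Hawking mass and the limits just described can be written as follows. These are the standard definitions; normalizations may differ slightly from the board.

```latex
% Area radius of a 2-surface S:
4\pi r^2 = \mathrm{Area}(S).
% Hawking mass of S:
m_H(S) = \frac{r}{2}\left(1 + \frac{1}{16\pi}\int_S \operatorname{tr}\chi\ \operatorname{tr}\underline{\chi}\right).
% Final mass: first r \to \infty along u = const, then u \to \infty:
m_\infty = \lim_{u\to\infty}\ \lim_{r\to\infty}\ m_H\big(S(u,r)\big).
```

In Schwarzschild, m_H recovers the mass parameter m on any sphere of the standard foliation, which is why it is a natural candidate for tracking the final mass.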
Okay, now here is the most important part. The most important part is how to construct u. Because now I have to be more specific. Before I said they constructed from infinity, but in reality you construct from sigma star. So I have to make choices on sigma star
to initialize u. In fact, actually, I even have to construct sigma star, as it turns out. Right, this space-like hyper surface. And I want to do it, and this is the concept that we introduced, that this space-like boundary is foliated by
what we call GCMS, generally covariant modulated spheres. So I don't know, if you don't like the name, please tell me, because we can still change. So generally covariant modulated spheres.
So modulation makes sense, right? Generally covariant makes sense, so the two together make sense. So what does it mean? It means that you use the full degrees of freedom of the covariant group of diffeomorphisms in order to fix spheres on which certain key quantities associated to these things pick up specific values,
right? Like zero, for example. I would like to make certain things equal to zero. And you know, the reason is very simple, because you want to go, you see, you go in this
direction, right, in order to construct, I mean to get estimates for the Ricci coefficients everywhere up to scri. But you know, if I start badly here, there is no way I can derive anything. So I have to initialize, I have to find good initializations on sigma star,
or good initializations on scri in some sense, right? But now you have to be very specific: what are these good initializations? So this is what we call GCMS spheres. And let me, I'll say a few more words about this. OK, so here is what this is. So this requires more of an explanation. You could just call them, you know,
nice spheres, or good spheres. People use good spheres. Yeah, but this is more impressive, I think. GCMS. No? Well, OK, we'll discuss it at lunch.
But OK, so you see, yeah, so you have a two sphere, and then the corresponding light cone that starts from it, right? And I want to arrange these things, so certain key
quantities are zero. So for example, on any two surface, there are certain operators, which are called Hodge operators, which are elliptic operators, which come naturally in
the equations. So if you write down the actual equations in terms of, by the way, you don't quite see it in the Newman-Penrose formalism. I mean, it's much better to use a geometric approach to see the character of these equations. But anyway, you can see it in any, in the end, you can see it everywhere. Yeah, yeah. Right, but unless they are integrable, unless the distributions are integrable, they are not going to be able to use them. Here they are integrable, so these are, OK, anyway, so the operators are defined like this.
So there is a d1, d2, d1*, and d2*. So d1 takes one-forms, it takes one-forms into scalars. And d2 takes two-tensors, well, symmetric, sorry, symmetric traceless ones.
Because this is what comes up when you write down the Einstein equations in the Newman-Penrose formalism, if you want. If you write down the equations,
you get symmetric traceless two tensors. And d2, so d2 will take, say, a tensor like this, psi a b, which is symmetric traceless, and takes covariant derivative db. So psi goes into this. So it takes, in other words, a two tensor into one tensor. So this is one form.
The one forms are not transverse, they do not satisfy divergence equals zero, they are just one form. Yeah, just one form, yeah, absolutely. Yeah, okay. So this is precisely what the f. Yeah, right, it's just that they are not integrable, so you cannot use it,
you cannot use it as elliptic system because they are not integrable, right? Usually, in situations, in our case, yeah. So what you are saying is that I could use those definitions. I could use, so they correspond exactly to those operators.
Yeah, this I agree, yeah, absolutely. This I agree, yeah, right. It's just that the way we do it, we have a more geometric description, but anyway, this is kind of irrelevant. So once you have d1, d2, you can take the duals, d1 star and d2 star, right? They will go from scalars to one-forms, so d1 star goes from scalars to one-forms,
and d2 star goes from one-forms to two-tensors, right? Okay, so the operators d1, d2, d1 star, d2 star. So these, you can say, are coercive on spheres.
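Schematically, on a topological 2-sphere S with induced metric gamma, these Hodge operators act as follows. This is a sketch in the notation standard in the subject; sign conventions vary between accounts.

```latex
% d1 acts on 1-forms f, producing a pair of scalars:
\mathcal{D}_1 f = \big(\operatorname{div} f,\ \operatorname{curl} f\big),
% d2 acts on symmetric traceless 2-tensors \psi, producing a 1-form:
(\mathcal{D}_2 \psi)_a = \nabla^b \psi_{ab},
% and their L^2-adjoints take scalars to 1-forms, and 1-forms to
% symmetric traceless 2-tensors, respectively:
\mathcal{D}_1^{*}(h, h^{*})_a = -\nabla_a h + \epsilon_{ab}\nabla^b h^{*},
\qquad
(\mathcal{D}_2^{*} f)_{ab} = -\tfrac{1}{2}\big(\nabla_a f_b + \nabla_b f_a - (\operatorname{div} f)\,\gamma_{ab}\big).
```

On the sphere, d1 and d2 are coercive, while the adjoints d1 star and d2 star have non-trivial finite-dimensional kernels, spanned by low spherical-harmonic modes.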
So coercive, and these are not. So these have non-trivial kernels,
and this plays a very important role in this analysis. Okay, so anyway, what we want is that you take trace chi on this sphere, right? So I have on this here, I take this trace chi S, which corresponds to the trace chi of this,
and take d1 star of it, and then d2 star of it. So I can take, for example, d2 star of trace chi bar is equal to zero,
I can take d1 star of trace chi equal to zero, and then I can take also d2 star, d1 star of that mu. Again, mu depends on S, so it's equal to zero. So what is mu? Mu is a quantity, which I'm not going to write down,
because I don't think there's any point, but it's sort of the mass aspect function. I'm sure that you know what it is. So it's a mass aspect function, which is defined using, so it's a combination of rho and so on and so forth,
and some connection coefficients, but let me not be very precise here. But in any case, it's something at the level of the curvature. The only thing that you need... I can't quite explain why you take these ones,
but it's extremely important to impose some conditions. And you see, you take essentially something which is at the level of three. You have three such conditions, which correspond to the number of degrees of freedom of the transformations that I wrote down before, which go from one frame to another frame.
So again, this has to do with the fact that a priori there is no preferred frame; I don't have any way of choosing a particular frame. And it's right here that you make a choice. You make the choice by using the frame transformations. You make the choice in such a way that these three things are satisfied.
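Schematically, and following the published account of the GCM construction (the exact quantities and the treatment of the low modes are more involved than shown here), the three conditions are of the form:

```latex
% GCM conditions on the sphere S, fixing the frame freedom:
\operatorname{tr}\chi^{S} = \frac{2}{r^{S}}, \qquad
\mathcal{D}_2^{*}\,\mathcal{D}_1^{*}\big(\operatorname{tr}\underline{\chi}^{S}\big) = 0, \qquad
\mathcal{D}_2^{*}\,\mathcal{D}_1^{*}\big(\mu^{S}\big) = 0,
% with \mu the mass aspect function, schematically
\mu = -\operatorname{div}\zeta - \rho + \text{quadratic terms}.
```

The first condition is consistent with the remark made later in the discussion that trace chi S equals 2 over r S on these spheres; the other two kill exactly the part of the quantities that the frame transformations can reach.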
This leads to a huge Hodge system, which will involve a lot of, I mean, it's a very coupled system, which will be a system for f, f bar and lambda. So in other words, let me call it like this, you'll get some equation like this, d of f, f bar and lambda. And you show that it's coercive.
The most important thing is to show that it's coercive. To show that it's coercive, you also have to take into account something about these kernels, because the kernels of d1 star and d2 star are non-trivial. So you have to really mod out the kernels and so on. So there is a lot of work that needs to be done. But the idea is that you use the full number of degrees of freedom of your
gauge transformations, local gauge transformations, in order to construct such spheres. But at the end, the quantities here are scalars, trace of... So are these scalars at the end constant, I mean,
uniform on your sphere or not? Do they have a variation such... Yeah, so for example, trace chi S, it will be in fact 2 over r S. I didn't write it down, but there is also... So this one is constant, but of course, the other one can have a kernel, right?
But within the kernel, they are constant? They are fixed, but in order to fix them completely, you need something else, which I didn't... Yes, yeah, so they are totally... They are completely unique. They are uniquely defined in the end, right? Everything will be uniquely defined in the end for any of such...
Okay, so then of course, you have to construct, you have to show that such things exist, because the way... Okay, so let me say something about the bootstrap. So the way the bootstrap works...
Actually, let's look at the picture. The way the bootstrap works is that you assume that... You start with initial data, you use local existence, so you can always go a little bit,
and then you keep going until you reach a maximum. You cannot go any further, right? In principle, at some point, you might stop because there are singularities and so on and so on. If you have instabilities, you cannot go forever. So you assume that you stop somewhere, right?
But then, on the sigma star where I stop, I assume that the spacetime is such that on sigma star, I have this GCMS, so I have this condition satisfied on sigma star.
And as a consequence, I can get very good estimates, and this is of course a huge long step. I can get good estimates for all the connection coefficients and all the curvature in this region. And then because these estimates are good enough, I can extend the spacetime a little bit, so I can go slightly further.
I can use the u foliation to go a little bit further and a little bit further in this direction, a little bit further in this direction. So in other words, I construct a slightly bigger spacetime. But of course, as I construct this slightly bigger spacetime, it's not at all clear that the new boundary sigma star verifies the GCMS conditions.
Because I do an extension coming from here, there's no reason why this should be satisfied. So what I have to do in this region that I have constructed, where I have extended the previous spacetime, is to show that there exists a new sigma star which satisfies these GCMS conditions.
That's where you have to actually do most of the work to show that these things can be found. I'll come back to this in a second, but for the moment, this is clearly the most important part of the construction.
These GCMS are constructed, as I said, based on solving a large elliptic Hodge system, coupled with transport equations, as I will explain later on. Now, together with the knowledge of alpha, alpha bar, so again, remember, alpha, alpha bar in principle are determined from that p
that I mentioned earlier, which comes from the Chandrasekhar transformation, and which itself does not depend much on the gauge condition. I can imagine that at least in principle, this alpha, alpha bar are determined, and then together with this GCMS condition,
I can show that all other connection and curvature components are controlled. By controlled, I mean controlled with specific decay rates for each component of curvature and connection.
Sorry, just to understand these GCMS spheres again, are you saying here you show that there exists a frame, within the frame degrees of freedom, with the trace of chi satisfying this? Yes. Or do you also have to construct how the sphere is located in space?
Yeah, so there are two parts in the construction. First of all, again, I assume that I already have extended the previous space time. The previous space time ended in some sigma star here, which had GCMS,
but once I extend it, I don't have them anymore, so I have to construct a new one. So, in this region, I have a lot of control on the extended,
so let me call it gamma extension and curvature extension. So I extend the gammas from before in this region and R in this region, so I have a lot of control here, right? But what I don't have is GCMS. So what I show is that... And I have coordinates, yes.
It's essential, yeah. It's essential that I also have coordinates. So in other words, I have also coordinates here. You have to change the coordinates, yeah. So I'm going to change the coordinates in such a way that it will be like that, yes.
So these I should also call, right? So once I have these, I also have coordinates. It's not such a big deal. Now, I use everything that I have in order to construct a new GCMS. So how do I do that? I take a sphere of the old foliation, of the foliation which has been extended,
so this would be a sphere like that. I take its south pole and I construct a new sphere, which is GCMS. No, no, no, no, the whole sphere has to be constructed, of course, yeah, exactly. No, otherwise the frame will be just a linear theory,
but non-linearly I have to construct the whole sphere. So you construct the whole sphere and then you also construct the sigma star, which consists of GCMS. And this is your new boundary and then you go this way
and you show that this can continue forever. Okay, so the space time M, space like hypersurface sigma star, and the two geodesic foliations are constructed by a continuity argument,
which I already mentioned, starting with the initial data layer, right? So that's the one I said. The initial data layer was constructed in a joint work with Nicolo in 2001, 2003. And then you derive sufficient decay for gamma R,
in other words, for the connection coefficients and curvature coefficients. And then you close back to the main wave equation for P. So the whole point about this equation is that now I have something on the right hand side, exactly in the same way as what we discussed last time.
I had to solve this system of equations by taking Lie derivatives with various vector fields. When I commute, I'm going to get D of Lie X R and delta of Lie X R
is something which is very complicated on the right hand side, because it's something which depends on the deformation tensor of X and also curvature. And this, of course, could kill you, because these are some kind of system of wave equations. It's a Maxwell type system, but with the right hand side,
and the right hand side could be terrible. If you don't have enough decay, if I don't have enough decay for the right hand side, I will not be able to close. The same thing happens here. If I go back to this Chandrasekhar equation, actually it's a Regge-Wheeler type equation, because I'm in non-linear theory,
this term here, which is quadratic, can be still extremely complicated, and if I don't have enough information about gamma and R, I will not be able to close. And therefore, somehow, the essential point of the entire construction is that I have to derive sufficient information,
sufficient decay information about gamma and R, so that this error term here does not create any problem for estimating p. And of course, the estimates for p are connected back to estimates for gamma and R, and so on and so forth. Anyway, this is a usual kind of bootstrap.
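Schematically, the nonlinear Regge-Wheeler type equation for p being described has the following form. This is a sketch; the precise potential and the structure of the error terms are more involved in the actual work.

```latex
% Wave equation for p, with positive potential and quadratic error:
\Box_{g}\, p \;-\; V\, p \;=\; \mathrm{Err}[\Gamma, R],
\qquad V > 0, \quad V = O(r^{-2}),
% where the error is quadratic in the perturbation, schematically
\mathrm{Err}[\Gamma, R] \sim \Gamma \cdot R \;+\; \nabla(\Gamma \cdot \Gamma) \;+\; \ldots
```

This makes the circularity explicit: estimating p requires sufficient decay for the connection coefficients Gamma and curvature R, while the estimates for Gamma and R are in turn derived from p, which is exactly the bootstrap structure described above.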
All right, now let me mention at least some of the main statements in the theorem. So this is a little bit more precise. So you start with initial conditions in the boundary layer. Some norm, I'm not going to specify the norm here, because it's a little... I mean, there's no point.
These norms are relatively complicated, because they involve powers of r, and different components have different powers of r. So I start with initial data which is less than some epsilon zero. Epsilon zero has to be sufficiently small in order to prove stability. The conclusion is that there exists a future globally hyperbolic development,
with a complete future null infinity I plus and a future horizon, which verifies the following. So now I want to say something about norms in space-time. So there are various types of norms. Again, I'm not going to make them precise, because they are too technical.
But it's maybe useful to remember that there will be norms in which I have pointwise decay for various quantities. In fact, these quantities will come here in a second. So this norm measures decay in various quantities, and k small refers to the fact that you take only a small number of derivatives.
Actually, it's important here. You have to take quite a lot of derivatives. I'm going to distinguish between small and k large. Well, this k large is much bigger than k small. So decay is only...
I only need decay, very precise rates of decay, for a small number of derivatives. K small can still be about 50 derivatives. I mean, I don't have to be very precise about how many derivatives I take, except to say that it is still a finite number at the end of the day.
Anyway, so this is a quantity which measures decay of my various quantities, and this is a quantity which measures, for a large number of derivatives, measures the energy, and that does not have decay. It has only powers of r. This is a norm which has only powers of r, no decay in it, in terms of weights,
and the whole thing has to be less than C epsilon zero. You see k small is half k large plus one. In particular, I'm saying something about these norms. In particular, this norm tells you that the curvature alpha and beta, for example, the highest components,
decay like one over r cubed times u plus 2r to the one half plus delta, where delta is small, and either like this or like that. So here you have more decay in u and... Anyway, this is somehow the rate of decay with respect to both r and u.
So in particular, if I'm on u equal constant, the decay is just r to the seven halves. This is exactly consistent with the stability of Minkowski space. That's exactly what we had; in the stability of Minkowski space, we had exactly r to the seven halves for this component alpha and also beta. And then there is the component beta bar, which decays only like one over r squared,
component alpha bar, which is the component that goes all... The radiative component. This is the radiative component. This is the one you see in LIGO, right? The only one you see in LIGO is this one over r, and then there are components of the Ricci coefficients, the kappa hat and zeta and so on and so forth.
They all have very precise rates of decay. It's extremely important to be very precise, exactly because of the reason I mentioned here, that you have to control this term at the end of the day. And the delta decay is strictly positive? And this is a strictly positive number, which is small. You can also take it larger, actually. But we didn't... But you can impose it?
Because recently I had discussions about this thing, that the decay in general of alpha bar would be one over u, not... Without the r? Without the delta... No, no, I'm speaking as a function of u. Ah, yeah. After multiplying by r, so you get... You're just u.
One over u for large u. And this is very important in four dimensions, that it's not larger, not faster than one over u. Four alpha bar. And maybe also chi bar, yes. Four alpha bar. So here, is it something you impose, that there is one plus delta? Because physically, the tail effects impose
that it cannot go faster than one over u. You mean for what, for the two-body problem or for... Yeah, for solutions linked... Okay, but which have quadrupole moments. At some stage. So I wonder whether you impose this as a choice of faster decay?
Anyway, it's consistent to do that. Yeah, it is consistent to do that, but it would be interesting to... Well, it has something to do with your initial conditions. Your initial conditions are such that you can also get that. But I'm curious about what you say, so maybe we should discuss.
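To summarize the exterior decay rates just discussed, schematically, with delta a small positive number (the additional u-decay of the weaker components is suppressed here; exact exponents follow the published account):

```latex
% Strongest curvature components:
|\alpha|,\ |\beta| \ \lesssim\ \epsilon_0\,
\min\!\Big(r^{-3}\,(u + 2r)^{-\frac{1}{2}-\delta},\ \ r^{-\frac{7}{2}}\,(u + 2r)^{-\delta}\Big),
% so on u = const they decay like r^{-7/2}, as in the stability of Minkowski space.
% Weaker components (additional decay in u suppressed):
|\underline{\beta}| \ \lesssim\ \epsilon_0\, r^{-2},
\qquad
|\underline{\alpha}| \ \lesssim\ \epsilon_0\, r^{-1}
\quad \text{(the radiative component)}.
```

The one over r behavior of alpha bar is what makes it the radiative component, the one effectively observed at large distances.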
Yeah, okay. So in any case, this is what you have. In the interior, everything decays like u bar to the one plus delta. So u bar, remember, is the optical function corresponding to the interior.
So these decays, they all have uniform rates of decay, which is normal in the interior. M infinity, as we said, is defined like this, and you get that M infinity is close to M0. So in other words, you don't get too far away from the original mass. On the future horizon, you can get an asymptotic of the future horizon,
r is equal to 2M infinity plus something which behaves like this. In M ext, rho is not... Remember that rho is obviously not small. It has to have a correction. This is the Schwarzschild value of rho, so you have to subtract it.
So you take rho plus this, it's less than these quantities, and so on and so forth. So there are all sorts of very precise... In fact, you have no choice when you do stability in general relativity. You have no choice. You have to be very, very precise with all components.
You have to get the correct decay, both in r and in u, with a lot of precision. Okay, now the coordinates. You asked me about the coordinates, so this is how the coordinates look. So you can construct coordinates such that the final metric has this form,
with M infinity here, and in the interior it has this form. You have the Bondi mass law formula. The Bondi mass, of course, is the limit of M u r as that goes to infinity. So this is the standard thing that you get.
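The Bondi mass law just mentioned reads, schematically (normalization conventions vary between accounts):

```latex
% Bondi mass at retarded time u, and the final mass:
M(u) = \lim_{r\to\infty} m_H\big(S(u,r)\big),
\qquad
m_\infty = \lim_{u\to\infty} M(u),
% mass-loss formula: M(u) is non-increasing,
\frac{dM}{du} \;=\; -\,\frac{1}{32\pi}\int_{S^2} |\Xi(u)|^2 \;\le\; 0,
```

where Xi is the news tensor, an appropriately r-rescaled limit of the shear at null infinity; in particular the final mass m infinity is no larger than the initial Bondi mass.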
Final Bondi mass is exactly the M infinity which we already discussed. You know that the Polish people say that it should be called the Bondi-Trautman, or Trautman-Bondi? Fortunately, Christian is not here, so he would have complained, I'm sure.
So do you do that? You call it Bondi-Trautman? There was a paper of Trautman before Bondi, that's true. Okay, doing the same thing. Ah, okay, then it should be called... But then it should be called Bondi. But then it should be called Bondi-Trautman. Okay, then I'll change. Okay, so this is the formula, the Bondi mass. Okay, so now the main thing is the intermediate steps.
So the theorem number one... Okay, so you have to start. It's a long... Unfortunately, the construction takes a lot of space to do it. But conceptually, it's not too difficult.
I mean, once you understand what's going on, it's not too difficult to describe. So in the first approximation, you start with initial data, right? So I have initial data which is less than epsilon zero. And I look at this equation, which was the Chandrasekhar equation, right? Which is, I call it q frak here, but it was before called P, right?
So P and q frak are the same thing. So I call it q frak because in the polarized case, this P, which is actually a two-tensor, reduces to something simpler. Okay, so you show that solutions of this equation
verify this norm, which... This is a norm which involves everything, including decay. The decay rate for q frak is less than epsilon zero. So again, this type of norm, I'm not going to write it specifically, but it's something that has to do with this kind of behavior, right?
Which I think I don't have to say much more right now. And here you have suddenly plus 20 derivatives. Right, because you have to do the bootstrap... Okay, sorry, I didn't mention the bootstrap. So you make a bootstrap assumption about...
So the bootstrap assumption... Let me write it here. So the bootstrap assumption...
Can you lose derivatives, is that why you start with many? Right, so I make a bootstrap assumption. So you see, you have an epsilon zero which corresponds to initial data, right? So this is something that I can make small, right?
And then I have a bootstrap assumption which is an epsilon, which is going to be larger than this typically, right? And also I have K small and K large. So the bootstrap assumption has to do with K small and K large.
So for example, these decay norms are only for K small, right? Okay, so now, however, when I... And then there is another one here. So these are decay norms and these are energy norms.
So energy type norms, right? Which are bounded in terms of this parameter epsilon. All right, so what I have to do now, you see, when I look at this equation,
it's an equation with error terms on the right-hand side. So in principle, these error terms will look like epsilon squared times some decay rates, which are very important in order to close, to be able to estimate this Q, in other words, to get this type of estimate. So you get automatically epsilon zero because epsilon squared,
I can always make it to be strictly less than epsilon zero. And therefore, somehow I beat the bootstrap constant. But at the same time, I lose a certain number of derivatives. In other words, sorry, I gain, excuse me.
So I gain derivatives because originally, I had the bootstrap assumption for K small and now I get actually K small plus 20. The reason I want to do that is because in the process, I keep losing derivatives. At the end of the day, I want to get back to exactly K small. In other words, I want to beat the bootstrap assumption.
I want to show that the bootstrap assumption is not only verified, it's you made an assumption, but at the end, you get something even better. And the norms are regularly weighted Sobolev, no? Yes, they are weighted. Yeah, they're weighted in R, weighted in R. Squares of derivatives. Correct, yeah, exactly.
And okay, so you see, that's why you want to get a little bit more in K small, because you are going to keep losing. So this is, in the first approximation, what you show: q frak has a good estimate in terms of epsilon zero, which is your good parameter, because this is what you control in terms of the initial data.
Now, the next two theorems show that once I have q frak, I also have alpha, alpha bar. So this is what I mentioned earlier: alpha goes into that P, and if I know P, I can go back and get estimates for alpha.
And I lose a certain number of derivatives when I do that. But nevertheless, I'm still larger than K small plus 15. And then I go, okay, so this is now the hard part.
Because here I have to use the GCMS constructions to get from these estimates for alpha, alpha bar to the estimates for all Ricci and curvature components. And you show that these are still bounded by epsilon zero: you lose another 10 derivatives, but in the end you get K small plus 5 derivatives, less than epsilon zero,
while the bootstrap assumption had K small less than epsilon. So you obviously have improved the bootstrap assumption. At this stage. And then you have to do something about this other norm. So this is the norm that involves the energy.
I didn't say much about this, and I'm not going to say, but you have to do something more. And then finally, you have to extend the space time. So you see, up to now, all these theorems concern the space time, which I call the bootstrap space time,
the one that ends in sigma star. And sigma star consists of GCMS, with these types of conditions, which I said are extremely important. So with those conditions, I'm able to derive all these estimates, which are improved estimates. They are better than the bootstrap. And now, since they are better than the bootstrap, it means I can go further.
Right, so these are theorems 7 and 8, which say that, well, first of all, you define U in R plus to be the set of values of U star such that an admissible space time exists for U up to U star, verifying BA. An admissible space time is a space time
that satisfies all the assumptions: the bootstrap assumptions, plus the fact that sigma star consists of GCMS. Right, that's exactly what I mean by admissible. Admissible is a space time which ends in sigma star, which consists of GCMS, and which verifies all the bootstrap assumptions. So then I look at the maximum value of U star
for which such a thing exists, which verifies these things. Then in theorem 7, because of the previous six theorems, and because I have improved the bootstrap assumption, I can show that there exists a delta 0 which allows me to go a little bit further.
And then once I go a little bit further, I show that in fact I can go for all time, because otherwise I reach a contradiction. That's basically the idea. All right, so here is the construction of GCM spheres. Let me go very fast over this; conceptually, I think, this is the most interesting new part. As I said, assume that you have a metric on your spacetime which looks like this, and assume that you have control on these coefficients: control on this, on this, and on this. That is very important. The control comes from the fact that when I extend my spacetime in Theorem 7, I do the extension in such a way that I still control all the metric coefficients and all the Ricci coefficients. Okay, so then I look at all the possible frame transformations. Now I have to be more careful: I have to look at the full set of transformations.
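For orientation, in this polarized setting the general null-frame transformations have, up to lower-order terms, the following schematic form (this is a reconstruction of the standard formulas, with λ = e^a, not a verbatim transcription of the slide):

```latex
\begin{aligned}
e_4' &= \lambda \Bigl( e_4 + f\, e_\theta + \tfrac{1}{4} f^2\, e_3 \Bigr), \\
e_3' &= \lambda^{-1} \Bigl( e_3 + \underline{f}\, e_\theta + \tfrac{1}{4} \underline{f}^{\,2}\, e_4 \Bigr), \\
e_\theta' &= e_\theta + \tfrac{1}{2} \underline{f}\, e_4 + \tfrac{1}{2} f\, e_3 + \text{lower-order terms},
\end{aligned}
```

with the scalars f, f̄ and a all of size ε.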
Actually, I wrote lower-order terms here, but even these lower-order terms are in fact important. So a general transformation looks like this: given f, f̄ and a, which are all of size ε, I get the general transformations of this type. All right, so now here is what I want to do. I start with S0, which is a surface corresponding to this foliation: there is a foliation by u and s, and for every fixed u and s there is a specific surface. I start with that surface, and I want to make a deformation of it, going from S0 to S. Here I am going to use polarization, because we have not done the construction in general; here we have actually used polarization. Polarization means that every deformation can be described in terms of functions U and S which depend only on the parameter θ; in other words, they do not depend on φ, because of the polarization. So then I have to construct U and S, and the frame. What I have to do is to find a frame, given by f, f̄ and a, corresponding to S, together with the capital U and capital S, such that the conditions I want, the GCM conditions, are verified on S.
Okay, so here is the proposition, given S0. This is actually a slightly different version of the GCM spheres, but it doesn't really matter; the idea is exactly the same. Here I assume that I have an S0 close to a small value of r, with r0 something like 2m0(1 + δH). And I am going to pass from the frame e3, e4, eθ to a new frame, a frame which is adapted. The important point is the following: I have S0, and I have a deformation, but on this deformation I want a frame e3^S, e4^S, eθ^S, while originally I had e3, e4, eθ. So at every point of the deformation I have the old frame, and I am trying to pass from the old frame to a new frame, but this new frame should be adapted to S. In other words, eθ^S should be tangent to S, while the other two should be transversal. So I have to make sure that my construction takes this into account. And I also want to have some GCM conditions. Again, it does not really matter exactly which conditions you choose.
The important thing is that I have three conditions: here, for example, κ^S = 2/r^S, and these other ones are 0. And then I define this adapted null transformation, which means that Ψ takes the original eθ to eθ^S, which is tangent to my surface. This leads to a compatibility condition that I have to write down; I am not going to write it here, because these equations are rather complicated. But in any case, everything can be expressed in terms of equations which tie U and S to f, f̄ and a. There is a system of equations tying U and S to a, f and f̄, and these are transport-type equations. The equations for a, f and f̄, which are the main ones, are in addition tied to the GCM conditions, and the GCM conditions give you an elliptic system. In other words, schematically, things look like this.
You have S0, you have the deformed surface S, you have U and S, and here you have f, f̄ and a, which solve an elliptic system, a complicated elliptic system, let us say, on S. These quantities are defined at every point of S, and they correspond to the transformation between the frames: at every point of this two-surface I have the old frame and the new frame, and the new frame is obtained from the old frame by this transformation. So the GCM condition becomes just this. In addition, there are equations of the type, say, U′ is connected to f and f̄ by some complicated equation, and similarly S′ is connected by a complicated equation; these are some kind of transport equations. So what I have to solve is a system: I have to find U, S and a, f, f̄ which verify a coupled system consisting of these transport equations, relating U and S to f and f̄, together with this elliptic system. That is what you have to do.
and this elliptic system. So this is what you have to do. So this leads to an iteration where you see, you have at every point, you have to iterate like this, at every point you have un, sn, an, fn and f bar n,
starting with a trivial q0, the trivial q0 is just a trivial deformation of s0 to s0. And then un, sn defines a map, so see, this is what's complicated, because for every iterate, you define a map from a0 to a surface sn,
so this is some deformation, and on it, I define now this elliptic system, which is this one. So on sn, I define an plus one, fn plus one, f bar n plus one to verify this elliptic system,
where d is some kind of a hodge system corresponding to this surface. And then I construct a new pair, so this is a way to define the new, at every point in my iteration, given that this is already known, I define an plus one, fn plus one, f bar n plus one
by solving this type of hodge system, and once I have these things, I find un plus one, sn plus one by solving a transport equation. So of course, what's difficult is that the sphere here, type n is different from the sphere at time n plus one, so I'm going to have a zero, I have an sn here,
and I have an sn plus one here, and somehow I have to compare these two surfaces, and of course, the only way to compare them is to take the pullback to a zero and compare the corresponding matrix on a zero, and so on and so forth. So it's a complicated procedure, but it's conceptually pretty clear what you have to do,
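The structure of the iteration just described, an elliptic solve on the current surface followed by a transport solve, repeated until the scheme contracts, can be illustrated by a deliberately simplified toy model. Everything in it is a stand-in: scalar functions on an interval replace the Hodge system for (f, f̄, a) on the deformed sphere and the transport equations for (U, S); only the alternating elliptic/transport structure and the smallness-driven contraction are meant to be faithful.

```python
import math

# Toy model of the coupled elliptic/transport iteration (illustrative only).
# The real scheme solves a Hodge-type elliptic system for (f, fbar, a) on the
# deformed sphere S_n, then transport equations for (U, S); here both are
# replaced by scalar stand-ins on the interval [0, pi].

N = 200                       # grid points in the angle theta
h = math.pi / (N - 1)         # grid spacing
eps = 0.1                     # coupling strength (small, as in the theorem)

def elliptic_solve(source):
    """Solve -f'' = eps * source on [0, pi], f(0) = f(pi) = 0 (Thomas algorithm)."""
    n = N - 2                                   # number of interior unknowns
    a = [-1.0] * n                              # sub-diagonal
    b = [2.0] * n                               # diagonal
    c = [-1.0] * n                              # super-diagonal
    d = [eps * source[i + 1] * h * h for i in range(n)]
    for i in range(1, n):                       # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    f = [0.0] * N                               # back substitution
    f[N - 2] = d[n - 1] / b[n - 1]
    for i in range(n - 2, -1, -1):
        f[i + 1] = (d[i] - c[i] * f[i + 2]) / b[i]
    return f

def transport_solve(f):
    """Integrate psi' = f with psi(0) = 1 by forward Euler (transport stand-in)."""
    psi = [1.0] * N
    for i in range(1, N):
        psi[i] = psi[i - 1] + h * f[i - 1]
    return psi

# Fixed-point iteration: psi_0 -> f_1 -> psi_1 -> f_2 -> ...
psi = [1.0] * N
for n in range(50):
    f = elliptic_solve(psi)                     # "elliptic system" on the surface
    new_psi = transport_solve(f)                # "transport equation" step
    delta = max(abs(x - y) for x, y in zip(new_psi, psi))
    psi = new_psi
    if delta < 1e-12:                           # the scheme contracts for small eps
        break

print("converged after", n + 1, "iterations; last increment:", delta)
```

For small eps each round trip of the two solves is a contraction, so the iterates converge geometrically; for eps of order one the contraction can fail, which mirrors the role of smallness in the actual argument.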
So this is a contraction argument, and so on and so forth, and I think this is probably a good place to stop.

And is it important that you are close to 2m? Because at some points there appears the condition that you are close to 2m.

No. This is a simplified version, in which you are close to 2m, but in what I described it actually happens far away.

Oh, okay. So the powers of r will then be important, and so on.

Yeah, right, that's true.

But I wonder, this cannot be too different from some of the things that you have to do in terms of the center-of-mass frame, right? Basically, you are taking into account that the center-of-mass frame changes with the dynamics. Where is the condition that you are mass-centered, that you have no dipole?
Well, it's exactly the GCM conditions, right? The GCM conditions are exactly that; that's exactly where you say that you are centered.

Because essentially some quantities are required to be uniform, without a dipole component that would correspond to a displacement of the center of mass, just to understand.

Right, exactly. But obviously, in your calculations you have to do that too, of course, at every point. Ah, it would be good to talk about this, to see.
Because in general, when you have a system which is radiating, with your sources, with bodies, the thing is going to recoil: you emit gravitational waves, and there is net gravitational-wave momentum emitted in one direction. So in the end your source is moving in one direction, and the gravitational waves compensate for this. So we do not keep the center of mass fixed in the gauge.

Of course, yeah. But here you keep it, you adapt your gauge.

We are not in a center-of-mass frame for the source, okay, if you want to describe the central part of the space. But here maybe you do not project out a linear momentum, because of the polarization property.

Yes, right. That's going to be more difficult, but it would be nice to talk to you about this.