
3/4 On the Mathematical Theory of Black Holes

Formal Metadata

Title: 3/4 On the Mathematical Theory of Black Holes
Part Number: 3
Number of Parts: 4
License: CC Attribution 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Transcript: English (auto-generated)
Maybe I should start. This is the third lecture on the mathematical theory of black holes. I discussed the so-called tests of reality for black holes, which
were to do with rigidity, then stability and collapse. These are all statements,
in fact, about the Einstein equations in vacuum. Rigidity is a statement about
stationary solutions: one looks at stationary solutions of the Einstein equations. In the asymptotically flat regime, we are interested in spacetimes which are asymptotically flat stationary solutions, and of course we find this explicit family of solutions which
is called the Kerr family. The question here is whether there are other stationary solutions besides
the Kerr family. Stability concerns the stability of Kerr relative to small perturbations, and collapse is the issue of how you can actually form these kinds of stationary solutions through the mechanism of collapse. In other words, you start with initial conditions. Everything here can be viewed from the point of view of the initial value
formulation. You start with initial data g0, k0, verifying the constraint equations, and in the case of stability you are interested in making small perturbations
of a Kerr solution, the initial data being that of Kerr, and you wonder whether the perturbations will destroy the original Kerr solution. And of course the issue of collapse is that again you
start with some initial data which are free of trapped surfaces and you form a trapped surface in the future, and as we discussed last time, trapped surfaces are a very good substitute for black holes. If you have a trapped surface, you will almost certainly have also a black hole.
Anyway, these are the things which I did before and then I started to talk about I talked a little bit about rigidity and now we're talking about stability. So this is a conjecture stability of the external Kerr. Again, you see here
the Kerr solution. This is the external part of the Kerr solution starting at the event horizon.
So r equal r plus is the event horizon, which is the boundary of the black hole region. That's the black hole, and this conjecture is interesting only outside the black hole. So in other words, you start with a Kerr. You take a spacelike hypersurface.
You look at the restriction of the Kerr metric to the spacelike hypersurface. That will give you an initial data set. You make a small perturbation of it. In other words, you change the initial data set by a little bit and you look at the evolution, and the conjecture is that the evolution
will converge to another Kerr solution. So you are starting with the original (a, m) Kerr and you get a new one at the end. There will be two Kerr solutions at the end? No, at the end, just one. In other words, you have started with something here
but at the end, you are going to get a different one. So it's not going to be the same Kerr. The horizon will change a little bit. It will not be the same horizon, and so on and so forth. This is a statement I made last time which I think Slava was not happy with, so I made
it a little bit more clear. What I'm saying here is that, sorry, this was not the statement actually. This was fine. I meant that, okay, so these were results which we'll discuss later
on in more detail so I'm not going to get into it. I said that lack of exponentially growing modes is not enough to conclude anything about the nonlinear stability. Of course, as an example, I mentioned the emergence of black holes or the emergence of turbulence.
I'll say more about this later on. We talk about the Kerr solution of course. Again, you see the explicit solution. The fact that it's stationary and axisymmetric, it's very explicit. In the case when the parameter a is equal to zero, you get Schwarzschild.
Here again is the way the Kerr solution looks. I mentioned that there are important things to remember. First of all, there are interesting values of r: when r equals r plus, the solution of this delta equal to zero, you get the horizon.
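For reference, a sketch of the quantities being pointed at here, in standard Boyer-Lindquist notation (the explicit metric itself is on the slide and is not reproduced here):

```latex
% Kerr: two parameters (a, m), with
\Delta = r^2 - 2mr + a^2, \qquad r_{\pm} = m \pm \sqrt{m^2 - a^2}.
% r = r_+ (the relevant root of \Delta = 0): event horizon;
% r > r_+ : exterior;  r_- < r < r_+ : interior of the black hole.
% a = 0 reduces to Schwarzschild, where r_+ = 2m.
```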
Then r larger than r plus is the exterior, and r less than r plus is the interior of the black hole. Scri is the boundary at infinity; this is obtained by a conformal compactification.
And r minus doesn't appear here because it's something that has to do with the interior of the black hole. You see it's here. Of course, there are lots of interesting things to be
said about the interior, but I'm not going to get into that. So again, here you see the exterior in more detail. You see again the horizon and scri, which is null infinity. You see the vector field T, which
corresponds to stationarity. So in these coordinates, the Boyer-Lindquist coordinates, it's exactly d over dt. And you see what happens here is that as you approach the horizon, T actually becomes space-like. And this leads to all sorts of phenomena, both physical
and mathematical. Another thing that I mentioned last time and I'll mention again later on is the presence of trapped null geodesics. In other words, a typical null geodesic
in this picture will be at 45 degrees and it's moving either towards infinity or it's moving in the black hole. In both cases, you are not going to see them anymore. So if it moves in the black hole, it will never come back. If it moves at infinity, it never comes back either. So those are good in some sense. Unfortunately, there are some other ones
which are called trapped null geodesics and which sit here for all time. So they sit in a region of bounded r for all time. So they don't go to infinity and they don't go to the black hole. And they lead to all sorts of issues that have to do, that can be seen already
from the point of view of... So the region r is less than 3m in the Schwarzschild. Right. So in Schwarzschild, it's exactly r equals 3m. So in Schwarzschild, it will be a hypersurface r equals 3m, which is here. But in Kerr, it's a little bit more complicated. So you can have
many trapped null geodesics in an entire region of r. I mentioned, and maybe I'll repeat very fast, I mentioned a general discussion about stability. We have non-linear equations. We have some stationary solution, which is phi zero,
and we perturb it. Orbital stability, we discussed, is the situation where the perturbation stays small for all time. Asymptotic stability means that the perturbation is actually going to zero, right? Linearized equations, we discussed: you look at the first term in the expansion. So you look at, essentially,
what is called the Fréchet derivative of N. Applied to psi, this is a linear equation, linearized around phi zero. And then again, you can have all sorts of discussions about mode stability, boundedness and quantitative decay. Mode stability is, for example,
the statement that there are no exponentially growing modes. What one does is decompose the solutions of the linearized equation into some kind of eigenvalue expansion. And then, for every mode, you can show that you have stability. In other words, you can show that the modes don't grow.
They don't become, for example, exponentially growing. But just having no growing modes does not even imply boundedness. In other words, you can have no growing modes for psi, and yet psi doesn't stay bounded for all time. That will, of course, create huge problems from the point of view of non-linear stability.
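A schematic version of the notions just discussed, in the notation used in the lecture (N the nonlinear operator, phi zero the stationary solution); this is a sketch, not a formula from the slides:

```latex
% Stationary solution and perturbation:
N(\phi) = 0, \qquad N(\phi_0) = 0, \qquad \phi = \phi_0 + \psi.
% Linearized equation (Fr\'echet derivative of N at \phi_0):
DN(\phi_0)\,[\psi] = 0.
% Mode stability: for separated solutions \psi = e^{\lambda t}\,\Psi(x),
% there is no mode with \operatorname{Re}\lambda > 0.
% This alone implies neither boundedness of \psi nor quantitative decay.
```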
To prove non-linear stability, you need what is called quantitative decay, and I'll explain that later on more explicitly. Then we looked at the possibility that there exists a family of stationary states around phi zero. So phi zero is just one among many continuum, in fact, of stationary solutions.
And then we saw that, at the linearized level, d over d lambda of phi lambda, evaluated at lambda equal to zero, is actually an eigenfunction corresponding to the zero eigenvalue. The same thing happens if you look at diffeomorphisms which keep the equations
invariant. In other words, for diffeomorphisms psi lambda, if phi zero is a solution, then phi zero composed with psi lambda is also a solution. Again, you differentiate and you get a huge kernel as a consequence. So this is what we discussed last time.
To prove non-linear stability, you have to do many things, but in particular, you have to really understand gauges. In other words, the fact that an equation is invariant under diffeomorphism leads to the need to actually find the correct
diffeomorphism. So that's one issue. Final state, you have to find the correct final state, and this can only be done dynamically. Anyway, we'll discuss about this again later on.
So this is Kerr stability now, in the case of the actual Einstein equations. I discussed the issue in general. In the case of the Einstein equations, the linearized Einstein equations are of course the linearization of Ricci equal to zero around a stationary solution, namely Kerr, which depends on these two
parameters. These are the linearized equations, and you see that the derivative of g with respect to m, in other words, if you vary the parameter m, you get a whole family of eigenstates corresponding to the zero eigenvalue, and the same thing if you differentiate with respect to a.
You get a two-parameter family of solutions. If you also look at the fact that the Einstein equations are diffeomorphism invariant, in other words, invariant relative to any diffeomorphism, then you also see that you have a huge kernel which corresponds to that. The full dimension of the kernel is four times infinity plus two.
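Schematically, the kernel being described can be sketched as follows (writing g_{a,m} for the Kerr metric and L for the linearized operator; this is a sketch in standard notation, not a formula from the slides):

```latex
% Linearization of Ric(g) = 0 around Kerr g_{a,m}:
L\,h := \left.\tfrac{d}{d\epsilon}\right|_{\epsilon = 0}
  \operatorname{Ric}\!\left(g_{a,m} + \epsilon h\right) = 0.
% Zero modes from varying the two Kerr parameters:
L\!\left(\partial_m g_{a,m}\right) = 0, \qquad
L\!\left(\partial_a g_{a,m}\right) = 0.
% Zero modes from diffeomorphism invariance (any vector field X):
L\!\left(\mathcal{L}_X\, g_{a,m}\right) = 0.
```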
Now let me start talking a little bit more concrete and start talking about the geometric framework that one needs in order to understand this problem. To start with, and I mentioned this earlier, in Lorentzian geometry more generally,
but certainly in general relativity, the directions which are important are null directions. They are important, why? Because they correspond to null geodesics and because most of the energy is transmitted along null geodesics.
Null geodesics are supposed to be much more important than time-like directions, especially from the point of view of stability: the decay of waves, for example, is very much dependent on
the behavior along null directions. Null directions are very important. Because of this, when you talk about frames, you start with two such null directions, E3 and E4; they are both null, and you also take g of E3, E4 to be equal to minus two.
So in other words, you normalize the frame. Now, once I pick up a frame, I should say not a frame, but a null pair, once I pick up a null pair, I can take the orthogonal complement to the null pair.
So at every point, I'm talking about something at every point. So at every point, I'm going to have a distribution if you want. So at every point p, I take the space perpendicular to these two. If I am in four dimensions, this is a two-dimensional plane. So it's a two-dimensional plane at every point, which is, of course, space-like.
And this is what I call a horizontal structure. So this is a horizontal structure. Now, this horizontal structure can be integrable or it may not. So if it's integrable, typically if it's integrable, it might generate two surfaces.
Like, for example, if you look at the intersection between two null cones, a null cone going this way and a null cone going this way, then at every point at the intersection, you have a two surface,
and you have automatically an actual null pair, which is given by this. One which is tangent to this null hypersurface, and the other one which is tangent to this null hypersurface. So a very natural way to define foliations is to take the intersection between a null hypersurface and another one,
or the intersection between a null hypersurface and, say, a space-like hypersurface. That will also give you an intersection. And again, you can talk about a vector going this way and another vector going this way, which is both orthogonal to these two surfaces. So anyway, what I wanted to say is that this horizontal structure can be integrable
or non-integrable, and we'll see examples of both. Okay, once you have this horizontal structure,
I can also take in the space perpendicular to this, I can take vectors e1 and e2, which are perpendicular to both of them, so they are in the horizontal structure, and I can, so they will be perpendicular to these two, and also, let's call this ea, a is one or two,
and I'm going to assume that g of ea, eb is delta ab. Okay, so this is, again, a normalization that I pick, and as a consequence, I get what is called now a null frame.
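The normalization conditions just stated, gathered in one place as a sketch in standard null-frame notation:

```latex
% Null pair and horizontal vectors (a, b = 1, 2):
g(e_3, e_3) = g(e_4, e_4) = 0, \qquad g(e_3, e_4) = -2,
g(e_a, e_3) = g(e_a, e_4) = 0, \qquad g(e_a, e_b) = \delta_{ab}.
% The null frame is (e_1, e_2, e_3, e_4); at each point p, the span of
% e_1, e_2 is the horizontal structure, i.e. the orthogonal complement
% of the null pair.
```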
So a null frame consists of this pair of e3, e4, and then these are the vectors ea. Here I wrote it with capital A, but there I wrote it with little a, doesn't matter. Okay, all right, so now once you have the frame, we do what's done always in geometry. You look at the connection coefficients, connection coefficients, right?
Okay, so how do you define the connection coefficients? Well, typically, you take the derivative, the covariant derivative, with respect to, say, e alpha, so e alpha can be e1, e2, e3, and e4, right?
So alpha, in other words, stands for these indices, and I take D alpha of e beta, and then another one, e gamma, and I take g of the two, right?
So this is a vector field, and I pair it with another vector field, and this is what is called the coefficient gamma, with indices alpha, beta, gamma, right? So these are Christoffel symbols.
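Written out, the definition just given reads (standard notation; D is the covariant derivative and the indices run over the frame):

```latex
% Connection (Ricci) coefficients relative to the null frame
% (\alpha, \beta, \gamma = 1, 2, 3, 4):
\Gamma_{\alpha\beta\gamma} = g\!\left(D_{e_\alpha} e_\beta,\; e_\gamma\right).
```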
All right, so you get the Christoffel symbols. In this particular case, when you talk about null frames, you can identify various Christoffel symbols. So this is something quite different from Riemannian geometry,
where typically in Riemannian geometry, all directions are the same. So it doesn't matter too much. You don't need to identify specific connection coefficients, typically. But here you do, and so let me write down some which are extremely important.
So for example, if I look at this, say, e4, if I look at e4, which is a null vector, and I take, say, ea, okay, so I take the ea's in this direction,
so this is, remember, a is one or two. If I take ea of e4, and then eb, so again, eb is like the same, b is one or two. So this is a connection coefficient, as you see here,
which is called chi a b. It's called a null second fundamental form, because typically, whenever you have a two surface, in other words, if this distribution here is integrable, and it corresponds to a two surface,
then this is a null second fundamental form. I mean, it's a second fundamental form in the usual sense, and it's null because it corresponds to e4. So I can do the same thing symmetrically with e3, and then I get what is called chi bar a b, right? And again, this is a null second fundamental form, so you have a null second fundamental form in the e4 direction,
and a null second fundamental form in the e3 direction, okay? So these are chi and chi bar, which you see there. You have chi, and I didn't write chi bar, but that's by symmetry. And then there are many others. So for example, I wrote there, say, xi. This xi is g of D e4 e4, paired with e a.
So this is a vector with one index. Yeah, by the way, I should say here something which is very important: you see this a and b, you can have the indices (1,1), (1,2), (2,1), and (2,2).
Now, in that particular case, when the distribution is integrable, in other words, when these two things here are integrable, then actually this second fundamental form,
like any second fundamental form, has to be symmetric. It's easy to see that the symmetry comes from the fact that this is integrable. In general, it's not true. So in most cases, these components will be equal by symmetry, but not necessarily always, because as we'll see in
examples in a second, in interesting situations you may not have integrability. In any case, that's the situation. This is now a vector on the two-sphere, or on the horizontal structure if you don't have integrability. And the same thing for xi bar, where you replace e4 by e3: g of D e3 e3, paired with e a.
So for example, if e4 is geodesic, that is, if D e4 e4 equals zero, then these coefficients will automatically be zero, which is again something that you can choose to arrange in various situations. You can make them zero or not.
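Collecting the coefficients just defined, as a sketch in standard null-frame notation (underlines mark the e3-direction analogues; some references include extra factors of 1/2 in the normalizations):

```latex
% Null second fundamental forms (a, b = 1, 2):
\chi_{ab} = g\!\left(D_{e_a} e_4,\, e_b\right), \qquad
\underline{\chi}_{ab} = g\!\left(D_{e_a} e_3,\, e_b\right),
% symmetric when the horizontal structure is integrable;
\xi_a = g\!\left(D_{e_4} e_4,\, e_a\right), \qquad
\underline{\xi}_a = g\!\left(D_{e_3} e_3,\, e_a\right).
% If e_4 is geodesic, D_{e_4} e_4 = 0 and hence \xi = 0.
```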
All right, so as you see, you can do many other combinations and you get all the other connection coefficients. We call it connection coefficients or Ricci coefficients, gamma. Okay, now what about the curvature? So look, so these are Christoffel. You said Ricci. Sorry?
No, Christoffel or Ricci? Yeah, Christoffel is usually used for coordinates, right? And Ricci is used for frames. It's more or less the same thing, but here we have a frame, and in the other case you have coordinates, right? Okay, so next you go to curvature, and the curvature has four components,
I mean, it's a four-tensor, right? And because Ricci is equal to zero, if you are in one plus three dimensions, you have here exactly 10 independent components relative to the frame. There should be exactly 10 components.
And this is the Weyl tensor, exactly, right. All right, so then you write down, again, various possibilities. I can take, for example, R of Ea, E4, Eb, E4, right? So this is one component, which is called alpha.
So again, it depends on two indices and it's very easy to see that it's symmetric because of the Ricci condition, this is symmetric and therefore it's not only symmetric, actually, it's also traceless. So if I look at delta Ab, alpha Ab, I get also zero, okay?
And again, this is because of the properties of the Riemann curvature tensor in the Ricci-flat case. Similarly, I can take alpha bar ab, where I replace e4 by e3, and then I can also do this: I can take R of Ea, E4, E3, E4,
and I take one half and I call this beta. Right, so you see it's another, so these are two components here, we have two components here, right, because this has two components because it's symmetric and traceless in two,
so it's a two by two matrix which is symmetric and traceless. This one is a vector, so it has two components. Then I can do the same thing, where I replace E4 by E3, so by symmetry, actually, this has a minus, but this doesn't matter,
minus beta bar a, right, so I put an underline here to indicate it. And I have two more components, which are R of E3, E4, E3, E4, right, so that's what is called rho; actually this is four rho, so rho is one quarter of this.
And then finally there is another one, where I do exactly the same thing with the Hodge dual. Right, so I take the Hodge dual, do the same thing, and call this rho star. Okay, so these are the 10 components,
as it should be. And these decompositions are crucial because every component behaves differently, right? Okay, and so it's extremely important to get familiar with these kind of decompositions. All right, so then you can write down main equations,
right, so I have curvature, I have connection coefficients, main equation, well, the equation I'm just going to write symbolically, maybe I'll say more later. But normally you are going to have some, the typical equations look like this, d gamma plus gamma times gamma is equal to curvature.
And then you have Bianchi: DR is equal to zero, these are the Bianchi identities. So in fact, I should be careful, I should write it like this. If I look at components, take the covariant derivative D delta of R, and take cyclic permutations of this,
I get zero, right? This is the Bianchi identity, so this is what I write here. Not the contracted one. Not the contracted one, these are the full ones. And this one of course is the usual relation, the Cartan relation between gamma, the Ricci coefficients, and the curvature, okay?
So again, because we are decomposing these components, I have to be careful to also decompose this relative to components and I get a lot of equations this way, okay? And to work in this business, you really have to understand very well these kind of equations.
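Collected, the null decomposition and the schematic system just described read (normalizations as in the lecture, up to standard conventions):

```latex
% The ten null curvature components, per the definitions above:
\[ \begin{aligned}
\alpha_{ab} &= R(e_a, e_4, e_b, e_4), &
\underline{\alpha}_{ab} &= R(e_a, e_3, e_b, e_3), \\
\beta_a &= \tfrac12\, R(e_a, e_4, e_3, e_4), &
\underline{\beta}_a &= -\tfrac12\, R(e_a, e_3, e_3, e_4), \\
\rho &= \tfrac14\, R(e_3, e_4, e_3, e_4), &
{}^{\star}\rho &= \tfrac14\, {}^{\star}R(e_3, e_4, e_3, e_4).
\end{aligned} \]
% The main equations, written schematically: the Cartan structure
% equations relating Ricci coefficients to curvature,
\[ \partial \Gamma + \Gamma \cdot \Gamma = R, \]
% and the full (uncontracted) Bianchi identities,
\[ D_{[\sigma} R_{\gamma\delta]\alpha\beta} = 0 . \]
```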
All right, now I think I can turn off the light. Is it here? This one? It's fine? Okay. All right, so now I go to
the Kerr family again, just to remind you, but the new thing here that I want you to remember, somewhat at least, is that in these coordinates, so this is the explicit formulation of the Kerr metric, you find these vectors e3 and e4,
which are null, so it's easy to check that they are null, and they are called principal null directions, for the reason that you'll see in a second, okay? So these are the kind of e3, e4 that I want to pick, okay?
So in Kerr, this is the pair that I'm interested in. Now, observe that if I were in Minkowski space, if I were exactly in Minkowski space, in other words, if a and m were zero, then e3 would be precisely dt minus dr, and e4 would be dt plus dr, right?
So these are very simple null directions that obviously are important to understand the radiations of, say, linear equations in Minkowski space. This plays a fundamental role from that point of view.
Otherwise, they are much more complicated, but they are still null. Okay, so now, again, I'm just repeating what I said before. I have e3 and e4, I have the span of e1, e2, which are perpendicular to e3, e4. I define the connection coefficients, and you see some of them, okay?
So they all play an important role. And then you have curvature components, which I mentioned, which are these ones. All right, so now here is an important thing, a crucial fact, is that if I look at the principal null frame,
in other words, the frame that I wrote down in Boyer-Lindquist coordinates for the Kerr solution, if I look exactly in that frame, I find that all the components of the curvature are zero, with the exception of the so-called middle components, rho and rho star, which can be complexified like this.
If I put them together, I get this very simple expression, minus 2m divided by r plus i a cosine theta, to the power 3. So here there is some miraculous thing happening if you complexify. And then these other components, xi, xi bar, chi-hat and chi-bar-hat, are zero.
So these are Ricci coefficients. I didn't tell you what chi-hat is. Maybe I should say it now. So chi-hat ab is chi where I subtract the trace: chi ab minus one half delta ab times the trace, where the trace of chi is delta ab chi ab.
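In formulas, the miracle in Kerr reads (with theta the Boyer-Lindquist latitude):

```latex
% In the principal null frame of Kerr, only the middle curvature
% components survive, and they combine into one complex scalar:
\[ \rho + i\,{}^{\star}\rho \;=\; -\,\frac{2m}{\left(r + i\,a\cos\theta\right)^{3}} \, , \]
% while the following curvature and connection components vanish:
\[ \alpha = \underline{\alpha} = \beta = \underline{\beta} = 0, \qquad
   \xi = \underline{\xi} = \hat{\chi} = \hat{\underline{\chi}} = 0 . \]
```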
And this plays also a very important role. Okay, so if I am exactly in Kerr, then you get a lot of cancellations.
And it's because of this that the principal null frame is so important. These are the components of what? Of the curvature? So these are components of the curvature, right? So these are all the curvature components that we discussed. These are the ones that we discussed here. And in Kerr, they are exactly all zero, with the exception of rho and rho star,
these two here. And in addition, you get a lot of Ricci coefficients to be zero. Now, it's interesting, however, and this is important to point out, that in Kerr, the perpendicular to E3, E4, in other words, this distribution, is not integrable.
It's not an integrable distribution, right? So in other words, you don't get two surfaces. No, E1, E2 are not, right? The perpendicular is exactly the span of E1, E2. So they are not integrable, which is quite a remarkable fact about the Kerr solution.
And again, another reason why it's so complicated. This is not integrable. So it doesn't fit into sort of normal geometric patterns somehow, right? Clearly, the principal null directions are extremely important, and yet they are not integrable, which is kind of bad. So in particular, this chi 1,2 and chi 2,1
are different, okay? So they are not the same, right? So you don't have the symmetry that you usually get for a second fundamental form, I should say. So for the second fundamental form, remember, for any surface, not necessarily a two surface,
any surface, maybe three dimensional and so on, you can define the normal, you take the normal, and you take the induced second fundamental form. The second fundamental form is always symmetric if this is a true hypersurface. If it's not, if it's just a distribution, it's not going to be symmetric. All right, so that's unfortunate, but that's the way it is.
Actually, I should say it's fortunate, because it makes us do interesting things. So in Schwarzschild we have, in addition, so if I am in Schwarzschild now, and I look at the same null pair, but in Schwarzschild, so I take a to be zero, in other words, you get that actually this is integrable. So in that case, you get integrability.
So you get spheres, yes, right. In addition, you get that rho star is equal to zero. In other words, this last component that comes from the Hodge dual is actually zero. And in addition to this, you also get these other components, which are zero.
And in fact, the only non-vanishing components among the gammas are trace chi, trace chi bar, omega, and omega bar. So trace chi is what I just defined; then trace chi bar, omega, and omega bar. I should say, omega is defined like this: it is g of D e4 e4, e3, times, I think, one quarter.
And omega bar is defined by symmetry. Okay, so that's what it is. Now, if you are in Minkowski, in addition to all that, so in addition to Schwarzschild, you also have that omega, omega bar, and rho are zero.
So all curvature components are zero. Obviously, in Minkowski space, the curvature is zero, right? It's a flat space. And you also have these components omega, omega bar equal to zero. So therefore, the only things which are not zero are these, and they have a very simple geometric meaning: these are the two expansions, they are called expansions, actually. And they played a role in what I talked about last time,
I mean on, what was it, on Friday. Because they were connected with the definition of a trapped surface. The trapped surface was defined in terms of these two quantities, which are called expansions. Okay, so now you want to talk about perturbations.
So I want to take a Kerr solution and perturb it a little bit. And the expectation is that somehow you are going to get something which, at least in some sense, will stay close
to the original Kerr solution you started with, right? So from that point of view, it makes sense to start, at the simplest level, by talking about O(epsilon) perturbations of Kerr. So what is an O(epsilon) perturbation of Kerr? Well, everything that vanished in Kerr,
I now assume is O(epsilon). In other words, I assume that there exists a frame, E3, E4, E1, E2, which is close to the frame of Kerr in some way. And I assume that relative to this frame,
all components which were zero in Kerr, exactly zero, are now O(epsilon). It's reasonable, right? You expect that things are not going to deviate too much. And epsilon is a parameter which I control. Now, what is the problem with this definition? The problem is that there are lots of frames which achieve this, right?
So if I have a frame of that type, in which these components are O(epsilon), then I have a lot of other frames which are also like this, because I can take any frame transformation
which takes a null frame into another null frame. So I start from E3, E4, E1, E2, and I get to the primed frame, E1 prime, E2 prime, E3 prime, E4 prime. And I can write down all the possible frame transformations.
I write them here up to O(epsilon squared) terms, because actually the formulas are more complicated; there are many other terms. But in any case, I just want to concentrate on the O(epsilon) terms. So in other words, if f, f bar and log lambda are O(epsilon), then if I transform the original frame into this new frame,
then these conditions will be preserved, right? So it means I have infinitely many such frames which are all O(epsilon). So what frame do I choose? And this is of fundamental importance, because of what I said: the frame is going to play a fundamental role. Otherwise, if I don't understand what is the correct frame
in which I do my calculations and prove my stability result, I'm not going to be able to do anything. So this is the first important fact: there are lots of frame transformations which preserve these conditions. And lambda, f and f bar are scalars? Uh, well, lambda is a scalar.
And f has indices, f a, where a is one and two. And f bar a, with a equal to one and two. So there are five parameters in some sense. All right, so now the one thing which is important to remark,
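For orientation, here is a sketch of such a frame transformation, displayed to linear order only, the quadratic corrections being omitted as in the lecture (the precise factors are a standard convention and should be taken as an assumption):

```latex
% General null frame transformation to first order in epsilon;
% f, \underline{f} and \log\lambda are the five O(\epsilon) parameters.
\[ \begin{aligned}
e_4' &= \lambda\,\bigl(e_4 + f^a e_a\bigr) + O(\epsilon^2), \\
e_3' &= \lambda^{-1}\bigl(e_3 + \underline{f}^a e_a\bigr) + O(\epsilon^2), \\
e_a' &= e_a + \tfrac12\, \underline{f}_a\, e_4 + \tfrac12\, f_a\, e_3 + O(\epsilon^2).
\end{aligned} \]
```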
this is the first important remark in connection with this stuff, which is quite remarkable actually: the curvature components alpha and alpha bar, so these are the components which I defined here, if you remember, the components obtained by taking two e4s, respectively two e3s.
These components are invariant up to O(epsilon squared). This is a huge observation in some sense. I mean, it's trivial to prove, but it's huge, because it tells you that at least some of the components of the curvature are actually invariant up to O(epsilon squared).
So their change is not just O(epsilon), it is O(epsilon squared). So if I take any frame and apply a transformation like this, the difference between alpha and alpha prime is going to be O(epsilon squared), and the same for alpha bar. So that's a huge fact, because it tells you that at least I can put my hands on some quantities which are almost invariant.
Being invariant up to O(epsilon squared) means that at least at the linear level they are totally invariant. In other words, the choice that I make for my frame is not going to affect alpha and alpha bar at the linear level. So this is clearly a huge observation, even though it's trivial.
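In symbols, the observation is:

```latex
% Under any O(\epsilon) null frame transformation, the extreme
% curvature components change only quadratically:
\[ \alpha' = \alpha + O(\epsilon^2), \qquad
   \underline{\alpha}' = \underline{\alpha} + O(\epsilon^2), \]
% so alpha and alpha bar are invariant at the linearized level.
```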
You can really write down, it's not a big deal, it's just a simple calculation to show that. But it's clearly very important. Okay, so this is what it is. Now, sorry, there was another observation here. Another observation is that if in addition
I'm dealing with perturbations of Minkowski space, so I'm not just... So this was a general perturbation of Kerr. If I look at perturbations of Minkowski space, then in fact all curvature components, so everything here, including rho and rho star,
are invariant quantities in that case. So clearly the stability of Minkowski space is simpler because of this: the decomposition of curvature into components does not depend, up to terms quadratic in epsilon, on the particular choice of frame I make.
So these are invariants. From the point of view of the nonlinear theory, I can view them as invariants. So in other words, the full curvature tensor is invariant. If I do perturbations of Minkowski space, all of it; if I do perturbations of Kerr, I only have alpha and alpha bar invariant. By the way, this is true about curvature.
If I look at the other components like these ones, then of course they are totally non-invariant. I mean, they are far from being invariant. So all these other connection coefficients are far from being invariant. They definitely change in a major way whenever I make a transformation.
And this will play an important role when I make my final choices of gauges. To first order in epsilon? It's invariant up to first order terms in epsilon; the change is second order, epsilon squared. Second, right? Up to these terms. No, no, but those ones? Oh, these ones, yeah.
So in other words, they are not invariant, because they change even at the linear level. All right, so what I want to do now is to actually spend a little bit more time on the stability of Minkowski space, and then I'll come back and talk about the Kerr solution.
Okay, so what time is it? All right, so stability of Minkowski space. All right, so for this I have to, in order to discuss it, I want to start with very simple things.
Thank you. So I want to start with a very simple discussion. So first of all, if you remember, when we talked about the Einstein equations, Ricci of g equal to zero, remember that I said at some point that if you pick coordinates which verify the wave equation,
the wave equation relative to g, in other words, if you look at... So these are harmonic coordinates, or wave coordinates, as we call them. If you use this kind of coordinates, then the equation takes this form: g alpha beta, d alpha d beta of g,
and here it's any component of g, and this is equal to an F mu nu, a non-linear term, let's say N of g and first derivatives of g, and it's quadratic, so this is quadratic in first derivatives of g.
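Schematically, the reduced equations in wave coordinates are:

```latex
% Reduced Einstein vacuum equations in wave coordinates:
% a quasilinear system of wave equations for the metric components,
\[ g^{\alpha\beta}\,\partial_\alpha \partial_\beta\, g_{\mu\nu}
   \;=\; N_{\mu\nu}\!\left(g,\, \partial g\right), \]
% with N quadratic in the first derivatives of g.
```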
So this is the kind of equation you get. So you see, it's a non-linear system of wave equations. So obviously, you have to understand this if you are to do anything that has to do with stability,
you have to understand the long-time behavior of solutions of this equation. So let me simplify a little bit and look at something simpler, but which symbolically is very similar. So suppose I look at Lorentzian metric g alpha beta of phi, d alpha, d beta of phi,
and here I write just a function of phi, and first derivative of phi. So you see, I replace this g, I replace it by a phi, so that I get a scalar equation, just to simplify things a little bit. So this is the kind of equation I get. I can simplify it even further.
So let's say this is type one, this is type two, this is type three. I can assume that actually this is just the Minkowski metric, in which case I'm getting the d'Alembertian: box of phi in Minkowski space is equal to N of phi and first derivatives of phi.
So that's a reasonable model problem. If I want to understand the first, I have to understand the second; and to understand the second, I first have to understand the third, because it's much simpler. So this is the kind of equation that you should be able to control
if you want to prove stability. So for example, if I'm to prove the stability of Minkowski space, then I have to start with initial data which are close to the Minkowski metric, where m mu nu is the Minkowski metric, right? So in particular, relative to this model problem,
I'm interested in discussing the initial value problem, say the simplest initial value problem, which is around phi exactly equal to zero. Phi equal to zero is of course a solution of this, because the right-hand side is quadratic, quadratic in d phi. So this is clearly a solution: phi equal to zero is a solution.
So this solution corresponds in some sense to the Minkowski metric, in terms of the analogy with the equation here. So there I want to perturb the Minkowski metric, whose components are minus one, one, one, one on the diagonal and zero everywhere else.
And here I want to perturb phi equal to zero, right? So it's a very reasonable approximation, a very reasonable model problem. So I want to prescribe now, at t equals zero, phi to be epsilon times f of x, with epsilon some small parameter, and dt of phi
to be epsilon times g of x. Let's say f and g are smooth functions with compact support, for simplicity. So anything, very reasonable functions, test functions. All right, and I would be interested to prove stability. I would like to show that if epsilon is sufficiently small,
in other words, if I perturb around the solution phi is equal to zero, let's call it phi zero, if I perturb around this, I want to get a global solution which converges back to this one, for example, right? Okay, so this is the kind of question that you want to ask.
Is it true that this problem is stable under small perturbation, in other words, for a small epsilon? All right, so this is the kind of thing that I want to talk about it.
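Written out, the model initial value problem is:

```latex
% Model initial value problem around the trivial solution \phi_0 = 0:
\[ \Box \phi = N(\phi, \partial\phi), \qquad
   \phi|_{t=0} = \epsilon f(x), \qquad
   \partial_t \phi|_{t=0} = \epsilon g(x), \]
% with f, g smooth and compactly supported, N quadratic in \partial\phi;
% the question is global existence and decay for small \epsilon.
```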
All right, so how do you deal with this kind of issue? So let me actually simplify even more and look at: box of phi equals dt of phi squared, right?
So this is simplification number four, which is the one I have there. And again, you start with initial conditions like these, small, so for small epsilon, I want to understand what happens. So how do you deal with this problem? Well, you see, if I didn't have this term here,
then the only thing that allows me to understand the solutions of the wave equation are energy norms, energy, right? So the simplest kind of energy identity is that if I take e0 of phi to be one half integral of dt phi squared
plus d1 phi squared plus dn phi squared, I assume that I'm in Minkowski space of n plus one dimensions. So if I take e0 of phi evaluated at time t, so I'm integrated on t equal constant, so I'll call it sigma t.
Sigma t means t equal constant, and integrated with respect to dx. Sorry? There are n coordinates, so it's dx1 dxn, right? dxn, right? So I'm integrating with no time coordinates, so the time is fixed. So I'm integrating, in other words,
the picture is that this is t equals zero, this is t equals t zero, and this is the t-axis, and this is the rn-axis. So I'm integrating at fixed t, and the conservation law tells you that
e0 of phi at time t is the same as e0 of phi at time zero. So this is just conservation of energy, which is something very easy to deduce. Okay, but of course, my problem is that I have something on the right-hand side. So just this conservation law by itself is not enough,
and in fact, what one does is you look at higher derivatives. You commute these equations with derivatives, so you get, this is a flat wave operator I can commute here. I commute here, right, and then I apply the energy estimate for the new field here,
and it's what I call Es of phi, right? So this is the energy for s derivatives, right, which is the same thing: it's E0 of d alpha of phi,
summed over all multi-indices alpha with absolute value less than or equal to s. In other words, I take all derivatives up to order s, and this is my new norm, which is the generalized energy norm, which I call Es of phi. So Es of phi, again, if I were exactly in Minkowski space,
this would be true: Es of phi at time t equals Es of phi at time zero, right? So this would also be conserved. In other words, I have a lot of conserved quantities if I'm exactly in flat space. But if I'm not in flat space, I have to do something about this term, right? So you have to do, it's not a big deal.
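As a sanity check, the conservation of E0 for the flat wave equation can be observed numerically. This is a minimal sketch using an assumed leapfrog discretization of the 1+1 dimensional wave equation on a periodic grid; the grid sizes, initial data, and names are illustrative, not from the lecture.

```python
import math

# Illustrative check (not from the lecture): discretize phi_tt = phi_xx
# on a periodic grid and watch the energy E0 = (1/2) int (phi_t^2 + phi_x^2) dx.
N = 200                         # grid points (assumed parameters)
dx = 2 * math.pi / N
dt = 0.5 * dx                   # CFL-stable time step
xs = [i * dx for i in range(N)]

phi = [math.sin(xi) for xi in xs]        # initial data phi(0, x) = sin x
phi_prev = [math.sin(xi) for xi in xs]   # d_t phi(0, x) = 0 (start at rest)

def energy(cur, prev):
    """Discrete energy: (1/2) * sum of (phi_t^2 + phi_x^2) * dx."""
    e = 0.0
    for i in range(N):
        pt = (cur[i] - prev[i]) / dt                 # time derivative
        px = (cur[(i + 1) % N] - cur[i]) / dx        # space derivative
        e += 0.5 * (pt * pt + px * px) * dx
    return e

e0 = energy(phi, phi_prev)
for _ in range(2000):           # leapfrog time stepping
    nxt = [2 * phi[i] - phi_prev[i]
           + (dt / dx) ** 2 * (phi[(i + 1) % N] - 2 * phi[i] + phi[(i - 1) % N])
           for i in range(N)]
    phi_prev, phi = phi, nxt

e1 = energy(phi, phi_prev)
print(abs(e1 - e0) / e0)        # small: conserved up to discretization error
```

The discrete energy is conserved only up to discretization error, which is the numerical shadow of the exact identity E0(phi)(t) = E0(phi)(0).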
One can prove the following statement: the energy, the full energy for the entire nonlinear system, remains bounded by the same energy at time zero,
provided a certain condition is satisfied, which is very easy to see. When you do these energy estimates now with the right-hand side, it's easy to see that you can control all the energies if you can control this quantity: the integral from zero to t of dt phi in L infinity, bounded by, say, one, okay?
So as long as you control that, you are fine. Okay, but you see, this is highly non-trivial because you need, in particular, that the L infinity norm decays, right? So in order for this to be integrable, it has to decay at least like t to the minus one minus something, right?
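The statement he sketches is, roughly:

```latex
% Higher energy estimate for the nonlinear problem: the energies stay
% bounded as long as the first derivatives are integrable in time,
\[ E_s(\phi)(t) \;\lesssim\; E_s(\phi)(0)
   \quad \text{provided} \quad
   \int_0^t \|\partial_t \phi(\tau)\|_{L^\infty}\, d\tau \;\le\; 1, \]
% which forces the L^\infty norm to decay faster than 1/t.
```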
So the L infinity norm has to decay. So is it true, can you show, that the L infinity norm decays? And here is the major technical complication in all this business, even when you do perturbation expansions, a la Thibault. I was hoping Thibault would be here, but he's not. Even if you do those perturbative expansions, you have the same kind of difficulties, right?
So in order to prove anything, it's not enough to control the energy. You also have to control this quantity. All right, so how do you control that quantity? Well, traditionally, and this, I'm sure, is how Thibault does his calculation,
traditionally, this is done by looking again at the original linear equation and writing down the fundamental solution. So using the fundamental solution, combining the fundamental solution with the non-linear part, it's a mess. It's very complicated. In any case, in the linear case, if you have just this equation,
it's actually not too difficult to see from the fundamental solution. If you write down the fundamental solution, it's not so difficult to see that the solutions will decay like t to the minus n minus one over two, right? Independent of the parity of n? In every dimension, you get exactly
t to the minus n minus one over two. That's the optimal thing, the optimal decay. Of course, solutions are superpositions of waves; some waves decay faster, but you'll always get waves which decay only like t to the minus n minus one over two and nothing better, right? And then, of course, this is a problem,
and in particular, it's a problem because you see that if n is equal to three here, which is the interesting dimension, that integral will be divergent, so you will not be able to do anything. But even more complicated is how you ensure this decay for the nonlinear equation,
because it's extremely complicated now. If you use a fundamental solution, it will be a huge mess. It can be done, and people do it in asymptotic expansion, but there is another way of doing it, right? Which is, I think, much better, and so that's the one that I want to describe very fast. This has to do with what's called the vector field method.
So the vector field method is based on the idea that somehow you should not commute only with the ordinary derivatives. In Minkowski space you have, in addition, yeah, maybe I should keep this here. So you have, in addition to the usual derivatives
which commute, right? We have that dt, dx1, up to dxn, which I'm going to call d1 through dn. They all commute with the wave equation, right? So box of d alpha phi is zero,
and that's why you could form these higher energy estimates. But there are other vector fields which commute. So there are, in fact, a lot of vector fields. For example, there are the vector fields xi dj
minus xj di, or xi dt plus t di, and there is also t dt plus xi di. See, here I'm summing over i, right? So i runs from 1 to n. Okay, so this vector field I'm going to call S.
These are actually the Lorentz boosts, right? These are what we could call Li, from Lorentz, and these are the angular ones, right? So these are generated by rotations, these by boosts, and this one by scale transformations. It's very easy to see that all these vector fields commute.
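Collected, the commuting vector fields on Minkowski space are:

```latex
% Killing and conformal Killing fields of Minkowski space
% used in the vector field method:
\[ \partial_t,\ \partial_i \ \text{(translations)}, \qquad
   \Omega_{ij} = x_i \partial_j - x_j \partial_i \ \text{(rotations)}, \]
\[ L_i = x_i \partial_t + t\, \partial_i \ \text{(boosts)}, \qquad
   S = t\, \partial_t + x^i \partial_i \ \text{(scaling)}. \]
```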
So any of these vector fields, let's call it X, commutes with the d'Alembertian. Either the commutator is zero, or, in the case of the scaling vector field, you get minus two times the d'Alembertian, right? So in particular, it takes solutions to solutions, okay?
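These commutation facts are easy to verify symbolically. Here is a minimal sketch in 1+1 dimensions using sympy (an assumed dependency, not something from the lecture):

```python
import sympy as sp

t, x = sp.symbols('t x')
phi = sp.Function('phi')(t, x)

def box(u):
    """1+1 dimensional d'Alembertian: -d_t^2 u + d_x^2 u."""
    return -sp.diff(u, t, 2) + sp.diff(u, x, 2)

# Lorentz boost L = x d_t + t d_x and scaling S = t d_t + x d_x
L = lambda u: x * sp.diff(u, t) + t * sp.diff(u, x)
S = lambda u: t * sp.diff(u, t) + x * sp.diff(u, x)

# The boost commutes with the wave operator: [box, L] phi = 0
comm_L = sp.simplify(box(L(phi)) - L(box(phi)))

# The scaling commutes up to a multiple: box(S phi) - S(box phi) = 2 box phi
comm_S = sp.simplify(box(S(phi)) - S(box(phi)) - 2 * box(phi))

print(comm_L, comm_S)  # both vanish identically
```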
So because of the, sorry? These are symmetries of the d'Alembertian? Yeah, they are, in fact, symmetries, of course, of the spacetime, of Minkowski space. These vector fields are actually Killing, so these are all Killing in Minkowski space, and this one is conformal Killing, right?
So they are all very useful, all right? They are useful because they commute, and therefore I can put these vector fields here. And instead of looking at generalized energy norms of this type, I can take any vector fields here, any number of them, all right? So this allows me to define some kind of Sobolev space,
right, this would be some kind of Sobolev space, which is a generalization of the usual Sobolev space, because I'm allowing all sorts of vector fields. Okay, so the consequence of all this is that this generalized energy norm, which I have there,
this generalized energy norm is actually conserved for the wave equation. For just the linear wave equation, it's conserved. So if I take a solution of the wave equation, then this Es of phi is also conserved. So it's another conserved quantity. And the remarkable thing about this quantity
is that it allows you to show that the solution decays, in fact. So if you remember, I said that the solution should decay like t to the minus n minus one over two, but you see that here we get even more. By using this method, since Es of phi is conserved,
it means that if it starts by being bounded at time t equals zero, it's going to be bounded at later times. And therefore, I can assume that Es of phi is bounded, using s larger than n over two. The s stands for the number of vector fields I take here. So I take s derivatives, s an integer larger than n over two.
It's exactly the n over two that comes in the Sobolev inequality, and I'm able to control the L infinity norm in terms of the Sobolev norm of order larger than n over two. But instead of having just boundedness, I also get decay now. And the decay comes from these vector fields.
This is the whole point because it allows you to reduce decay to energy. So instead of doing decay using the fundamental solution, which is extremely complicated and almost never works, I can incorporate information about decay in my basic energy norms. And of course, energy is much more robust.
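The inequality behind this reduction of decay to energy is the global (Klainerman) Sobolev inequality; schematically:

```latex
% Global Sobolev inequality: pointwise decay extracted from the
% generalized energy norms built with the commuting vector fields,
\[ |\phi(t,x)| \;\lesssim\;
   \frac{E_s(\phi)^{1/2}}
        {\bigl(1 + t + |x|\bigr)^{\frac{n-1}{2}}\,
         \bigl(1 + \bigl|\,t - |x|\,\bigr|\bigr)^{\frac{1}{2}}},
   \qquad s > \tfrac{n}{2}. \]
% On the light cone t = |x| the last factor is O(1), giving t^{-(n-1)/2};
% well inside the cone it contributes an extra t^{-1/2}.
```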
Energy estimates are much more robust. We use it all the time in PDE. So anyway, this is what happens. Conclusion is that now you get this kind of decay rate. You see that in this picture, when t is exactly absolute value of x,
so if I look in Minkowski space, so I have t equals zero here, and this is t equal to r, t equal to absolute value of x, right? Absolute value of x equal to r. So if I am looking at solutions of the wave equation
along null directions, right? So if I look at the behavior along null directions, I see that there the extra factor is not useful, because t equals absolute value of x, so it is of order one, and I'm getting just t to the minus n minus one over two.
So I get exactly the decay which I have here. But this gives me additional information, because it tells me that if I'm inside the light cone, so in particular, if absolute value of x is much less than t, then I get another t to the minus one half from that factor, so I get better decay: I get t to the minus n over two, right? So as a consequence, this is a much better way
of understanding decay if I am to deal with nonlinear problems. And once I use this type of estimate, yeah, by the way, there is even more, which is extremely important in what I'm going to say. If I, see, phi decays only like t to the minus n minus one over two,
but if I take the frame, so remember that I have this frame, which is E3 is L bar, which is dt minus dr, and E4, which is L, which is dt plus dr. So if I take E4 of phi,
or if I take the other elements of the frame, E4 or Ea, so the E1, E2 here that complete the null frame, which are orthogonal to these two. If I take the derivative in these directions,
in fact, I get t to the minus n minus one over two, minus one. So I gain an order of decay for both of them. And the only one which does not improve is E3. So E3 of phi is only t to the minus n minus one over two. Well, in fact, even the E3 derivative improves,
but it improves in the other factor. So instead of one plus t minus absolute value of x to the minus one half, you get the power minus three over two. But it means that near the light cone, it still does not improve. Okay, so that's a remarkable amount of information
that you can get from these very simple functional analytic methods, which are based on symmetries. Once you have that, now you can go back and analyze the type of equations that we discussed: box of phi equals F of phi, first derivatives of phi, and second derivatives of phi. So I look at, in other words,
I look at very general perturbations of just the wave equation, box phi equal to zero. And I'm looking at the vacuum state phi equal to zero; I'm looking at the stability of the vacuum state phi equal to zero. It turns out that,
so, again, if you remember my discussion with Slava last time, when he couldn't believe it: the fact that you don't have exponentially growing modes is not enough for stability. So here, you see, if you look at the linearization around phi equal to zero,
this is the equation you get, which is, of course, stable. It doesn't have any exponentially growing modes. Not only that, but it also decays. If I'm in dimension three, it decays like t to the minus one. If I'm in dimension one, it decays like t to the power zero. So it's bounded in any case.
But if I perturb it, if I look at the non-linear problem with general perturbations, in fact, these are unstable. So, for example, if I look at the equation box phi equals (dt phi) squared, this blows up in finite time, and it forms, in fact, shock waves. So shock waves can be the result of a perturbation of a very simple state like phi equal to zero. And I mentioned also turbulence: in the case of the Euler equations, again, you start with u equal to zero, and in a very short time you can end up with solutions over which you have absolutely no control, which are extremely unstable. Nevertheless, for the state u equal to zero, the linearization gives you bounded solutions, so there is no issue of exponentially growing modes or anything like that. All right. So anyway, in dimension n equal to three, typically most equations are unstable, and that's the case exactly of this equation. In lower dimensions it's even worse; dimension one is terrible. If you are in dimension four or higher, it gets better, you can actually prove something. Dimension three is critical. But since we are interested in dimension three, where most equations are bad, you need equations which verify structural conditions on the non-linearity. In order to have existence, to have stability of this vacuum state, I need a structural condition on the non-linearity. This is called the null condition. If the null condition is satisfied,
then the state phi equal to zero is stable. What is the null condition? I'm not going to go into a formal definition; I'll just say something very simple. You see, relative to a null frame, if you look at the decomposition with respect to null frames, derivatives in the tangential directions E4, Ea improve, and only the E3 direction is very bad. Therefore, you expect a non-linearity of the type (E3 phi) squared to be bad, but any non-linearity where you have, say, E3 phi multiplied by E4 phi or Ea phi will be okay. The null condition is just a way of saying that the worst possible directions are not present when you do a decomposition of the non-linearity relative to the null frame. It's a very simple procedure: you take the non-linear equation, you look at the non-linear terms, and you do an expansion in terms of the null frame. If you see the presence of these bad terms, the null condition is not verified; if you don't see them, the null condition is verified. Now, of course, things are more complicated, but that's roughly the simplest way to say it.
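As a concrete illustration of this decomposition procedure (a standard example, spelled out here rather than taken from the slides): the two model nonlinearities behave very differently when expanded in the null frame.

```latex
% With e_3 = \partial_t - \partial_r and e_4 = \partial_t + \partial_r,
% so that \partial_t = \tfrac12(e_3 + e_4):
(\partial_t\phi)^2
  = \tfrac14(e_3\phi)^2 + \tfrac12(e_3\phi)(e_4\phi) + \tfrac14(e_4\phi)^2
  \quad\text{(contains the bad term } (e_3\phi)^2\text{: null condition fails),}
\\[4pt]
Q_0(\phi,\phi) = -(\partial_t\phi)^2 + |\nabla\phi|^2
  = -(e_3\phi)(e_4\phi) + |\nabla\!\!\!/\,\phi|^2
  \quad\text{(no } (e_3\phi)^2 \text{ term: null condition holds).}
```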
Okay, so dimension n equal to three is critical. The null condition is something extremely important, which will play a role in what I'm going to say. And I'll finish the first hour. What time is it?
Ten past three. So it's a good time to take a break. So I'll finish with this fact: geometric non-linear wave equations verify some gauge-dependent version of the null condition. This is a remarkable fact: many interesting equations in mathematical physics, which are derived from a geometric Lagrangian, verify the null condition. So, for example, the Einstein equations verify the null condition. But, and there is an important but: the null condition, you see, is a condition on the non-linearity. It's not about the linear equation; it's a structural condition on the non-linear equation.
And the complicated equations, like the Einstein equations, do verify the null condition, but only if you take into account the gauge. So the gauge condition is essential: only if I mod out the equations by the diffeomorphism group, in other words, if I look at the correct framework,
the correct gauge-dependent framework, I will see the null condition. So, for example, as I mentioned earlier, if I look at the Einstein equations in harmonic coordinates, they don't satisfy the null condition; the null condition is simply not true there. There is something else, called the weak null condition, which is still verified in that context, but the null condition itself is not. And, nevertheless, in the right framework the Einstein equations do verify the null condition, and that's the reason why the stability of Minkowski space is possible.
All right, let me finish with this vector field method before taking the break. So the vector field method is a general method of studying non-linear equations which, to some extent, you could say, does not depend on perturbation expansions. It's a robust method of deriving decay estimates by reducing them to energy-type estimates,
so, in other words, integral estimates, based on symmetries, approximate symmetries, or other geometric features. You derive generalized energy bounds; by generalized energy bounds I mean these kinds of norms built with those vector fields, though they could be even more complicated than that. From the energy bounds you derive robust, quantitative L-infinity decay, because, as we saw, if you have bounds on the energies, you also get quantitative L-infinity decay. And this method applies not just to the wave equation, as I showed here, but to tensor field equations like the Maxwell and Bianchi-type equations in Minkowski space. So, for example, for the Maxwell equations you can still use exactly the same techniques in order to get decay. In other words, you commute the equation with the Lie derivatives relative to the same vector fields; you get the same equations verified by the Lie derivatives, then you create norms based on these derivatives, and from them you read off the decay. And then you can treat non-linear problems as a consequence. All right, so I'll stop here. Okay, so the vector field method.
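The commutation with symmetry vector fields just described can be checked symbolically in the simplest setting; here is a sketch of my own (1+1 dimensions, with the Lorentz boost, which is a Killing field of Minkowski space and hence commutes with the d'Alembertian):

```python
import sympy as sp

t, x = sp.symbols('t x')
phi = sp.Function('phi')(t, x)

# d'Alembertian in 1+1 Minkowski space: box f = -f_tt + f_xx
box = lambda f: -sp.diff(f, t, 2) + sp.diff(f, x, 2)

# Lorentz boost vector field Omega = t d/dx + x d/dt, acting on functions
boost = lambda f: t * sp.diff(f, x) + x * sp.diff(f, t)

# The boost is Killing, so it commutes with the wave operator:
commutator = sp.simplify(box(boost(phi)) - boost(box(phi)))
print(commutator)  # 0
```

So if phi solves the wave equation, the boosted derivative does too, which is exactly what lets you apply the energy estimate to it.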
So, again, you could think of the vector field method as a non-perturbative tool to study classical field equations, something that no longer requires you to talk about expansions and fundamental solutions. It's non-perturbative and very general. It can be applied in many situations; in particular it applies to the Maxwell equations, as I said, but it also applies to the Einstein equations in the following sense. So if you look at a solution of the Einstein equations,
so Ricci of g equal to zero, then you have the Bianchi identities: the cyclic permutation of the derivatives of the curvature equal to zero. It's easy to see from here that, by taking a contraction, this also implies the divergence equation, the derivative with respect to alpha contracted with the curvature equal to zero, which I can write as delta of R equal to zero.
But at the same time, it's not so hard to see that something similar happens with R star. In other words, I take the Hodge dual, and remember the Hodge dual is defined by (R star)_{alpha beta gamma delta} = one half epsilon_{alpha beta mu nu} R^{mu nu}_{gamma delta}, where epsilon is the volume form. So there is a very simple way of defining the dual, and the Hodge dual verifies the same thing,
delta of R star is equal to zero. Yes, only with respect to alpha, exactly. So you see that, formally, you can write the Bianchi identities like this, dR equal to zero, which corresponds to this, but then you also have this other one, delta R equal to zero. And this is very similar to what you had in Maxwell theory, dF equal to zero and delta F equal to zero. Okay, so now you can start doing the same thing
that we did for the Maxwell equations, by commuting with various vector fields. So I take X1, ..., Xn, and I hope that the commutation goes through, and after that I'm going to treat these equations very much like the Maxwell equations in Minkowski space. Of course, there is a problem: in order to do this in Minkowski space, I need the symmetries of Minkowski space, right? I need the X's to be Killing or conformal Killing vector fields. And of course, if I take a general solution of the Einstein equations, there is no way I'm going to have Killing or conformal Killing vector fields. So the only thing I can hope, and this is what I'm going to talk about in a second, is that these X1, ..., Xn are approximate Killing vector fields. So in Minkowski space you have the Killing plus conformal Killing fields; in perturbations of Minkowski space, I have to take approximate Killing plus conformal Killing fields.
So what does that mean? Well, it's very simple to define, because in general a Killing vector field means that the Lie derivative of the metric with respect to the vector field is equal to zero, right? So in general I'm going to define the deformation tensor pi of X: (pi_X)_{alpha beta} is in fact D_alpha X_beta + D_beta X_alpha. So this is the Killing equation: this being equal to zero implies that X is Killing. But of course I cannot expect to have Killing vector fields if I do a general perturbation of Minkowski space. I won't have them, but I might hope to have this sufficiently small. So of course, now if I take, say, D of the Lie_X of R,
it's not going to be zero; it's going to be some complicated expression involving a product between the curvature R and pi_X, the deformation tensor of this vector field. And this now is of the same order of complication as what I had when we treated the equation box phi equals (dt phi) squared, right? Because now this is a complicated term that I have to control, and in order to control it, I need decay. So there is no way I can prove any stability result if I don't know how to control this term. But to control it, I need to control the decay of both factors, both the curvature and the pi's. That's the only way I would be able to actually control the curvature.
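To make the deformation tensor concrete, here is a small symbolic check of my own in flat space (in Minkowski space, in Cartesian coordinates, covariant derivatives reduce to partial derivatives and indices are lowered with the metric eta):

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = [t, x, y, z]
eta = sp.diag(-1, 1, 1, 1)  # Minkowski metric, signature (-,+,+,+)

def deformation_tensor(X):
    """(pi_X)_{ab} = D_a X_b + D_b X_a, computed in Cartesian coordinates
    where covariant derivatives are ordinary partial derivatives."""
    X_low = [sum(eta[a, b] * X[b] for b in range(4)) for a in range(4)]
    return sp.Matrix(4, 4, lambda a, b:
                     sp.diff(X_low[b], coords[a]) + sp.diff(X_low[a], coords[b]))

# The time translation d/dt is Killing: its deformation tensor vanishes.
print(deformation_tensor([1, 0, 0, 0]))   # zero matrix
# The scaling S = t d/dt + x^i d/dx^i is only conformal Killing: pi_S = 2 eta.
print(deformation_tensor([t, x, y, z]))   # 2*eta
```

In a perturbation of Minkowski space the analogous tensors are no longer exactly zero or pure trace; the whole game is to show they stay small.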
So the crucial thing when you do stability of Minkowski space is to control the curvature. You have to control the curvature without which you don't control anything, right? So let me now go through a discussion of that.
So let's talk about the stability of Minkowski space. I'm trying to solve the Ricci-flat equations. I start with an initial data set, which consists, as usual, of a three-manifold, a metric, which is now a Riemannian metric, right? And a second fundamental form. And they verify the constraint equations, which I'm not going to write because they are not relevant here; but the constraint equations, of course, are by themselves very interesting. Okay, so now I can also impose a gauge condition,
not only initially, actually: the condition trace k equal to zero. So you see, I can impose four coordinate conditions. Think about when I do wave coordinates: there I had four conditions, right, because alpha can be zero, one, two, three, so there are four possible coordinate conditions, and I prescribe all the coordinates. Here, I only prescribe one coordinate, which is, in fact, t, a time coordinate. So in other words, I'm starting with sigma zero, and I'm constructing a foliation by sigma t,
given by the level sets of a time function t. By a time function, I mean a function defined on the spacetime whose differential is non-vanishing and whose level surfaces are space-like, exactly. I can think of this t as being one coordinate condition.
I have four conditions to make, and I make one: I assume that the foliation is maximal. Maximal means that the trace of the second fundamental form relative to the induced metric is zero. So that's the maximality condition. This is what is done in the stability of Minkowski space. It turns out that it's not so important, but at the time we thought that such a time function was fundamental. So then you have the constraint equations plus trace k equal to zero,
and now you look at the initial data set for Minkowski space, which is exactly (R3, e, 0): the Euclidean metric e and vanishing second fundamental form. So that's how you start, and you assume asymptotic flatness. In the definition of asymptotic flatness, now, you have to be careful: asymptotic flatness is always taken relative to a system of coordinates outside a large compact set. So on sigma zero, say, I take a sufficiently large compact set, I look outside, and I look at the components of the metric relative to that system of coordinates, g_ij, and that's where you see the mass: g_ij equals (1 + 2M/r) delta_ij plus terms decaying faster than r to the minus one. So this is long range; for those who know the Coulomb potential, that's exactly this r to the minus one component, a very slowly decaying component, and in front of it there is the mass M. And the positive mass theorem tells you that for general initial data this M is positive. This is a famous theorem of Schoen and Yau.
So this positive mass theorem is only tied to the constraint equation. It has nothing to do with the evolution, it's just an issue about the constraints. So these are the assumptions, and in addition, I want a smallness condition.
In other words, I want a perturbation of the flat initial data, so I want initial data which are close to this one; you can make that precise, and there is no point in going through it. So I start with a small initial data set, close to the flat initial data set of Minkowski space, and then I look at the maximal globally hyperbolic development of this. Asymptotic flatness is a condition on the initial data, but it's carried by the evolution; the evolution propagates the asymptotic flatness. So the question is, what is the character of the maximal development? We know that there is a maximal development, some local existence result that tells me that I can go up to something, but it could be that my spacetime terminates in finite proper time for some observer, and that, of course, is unacceptable; stability should mean, at the least, that the spacetime we construct is complete. So this is a theorem of 1993 by Christodoulou and myself: any asymptotically flat initial data set close to the flat one has a complete maximal development, which converges back to Minkowski. So here, you are not converging to some other stationary solution. A priori you could have converged to a black hole, for example, gone from a perturbation of Minkowski to a Schwarzschild. This doesn't happen, fortunately.
A priori, it could also have developed a black hole later on and then converged to that, but no: it converges back to Minkowski. The statement is that you actually stay close to Minkowski space. This is very, very important, because remember the example box phi equals (dt phi) squared. This blows up in finite time, which means there is an instability there. In that case it's a shock wave, but in the case of the Einstein equations it could have been something that leads you to a black hole, or to some other singularity, who knows. Anyway, because of these kinds of examples,
it was not at all clear that such a statement is correct. The physicists, as always, have their own way of simplifying the problem, and they would say, yeah, well, of course it should be stable. But you still have to give a reason for that.
And the kinds of reasons they usually gave were not satisfactory. For example, one of the arguments was that since the mass is positive, Minkowski space has to be stable. But in reality, you can give lots of examples where you have positivity of mass, or of energy, and you don't have stability. The simplest one is, again, the Burgers equation. If I look at u_t + u u_x = 0, then the energy, the integral of u squared at t equal constant, is actually conserved, and it's positive. So you have positivity of the energy for any initial data, and yet, of course, u equal to zero is not stable: solutions form a shock wave in a very short time,
so perturbations of size epsilon form a shock wave by the time epsilon to the minus one; this is very, very far from being stable. And note also, since Slava is not here: the linear case is just u_t equal to zero, which obviously doesn't have exponentially growing modes; nevertheless, the equation is unstable, right? For u equal to zero, the linearization is exactly u_t equal to zero, for which, of course, all solutions are bounded. So you are very far away: the fact that you have some kind of linear stability doesn't tell you anything about the non-linear equation. Okay, so that is the theorem on Minkowski space. Now, how do you construct the spacetime? You construct it together with a gauge condition. So let me come back to something I said earlier.
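Before that, the Burgers blow-up mentioned above can be checked numerically; a minimal sketch of my own (grid size and initial data are illustrative choices). By the method of characteristics, data u0 develop a shock at time T* = -1 / min u0'.

```python
import numpy as np

def blowup_time(u0, xs):
    """Shock-formation time T* = -1 / min u0'(x) for u_t + u u_x = 0,
    estimated from samples of the initial data by finite differences."""
    slope = np.gradient(u0(xs), xs)
    m = slope.min()
    return np.inf if m >= 0 else -1.0 / m

eps = 0.01
xs = np.linspace(0.0, 2.0 * np.pi, 4001)
T = blowup_time(lambda x: eps * np.sin(x), xs)
# min of the slope eps*cos(x) is -eps, so T is approximately 1/eps = 100
print(round(T, 2))
```

This reproduces the statement in the lecture: data of size epsilon break down by a time of order epsilon to the minus one.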
So remember what I said about the level of the curvature. We have the frame, E3, E4, and the Ea (or E capital A, depending on how I write it), then we have the gammas, which are the Christoffel symbols, or Ricci coefficients, and then we have the curvature, right? Here we have components like chi, chi bar, and so on and so forth, and here we have the curvature components alpha, alpha bar, beta, beta bar, rho and rho star. And we think in terms of perturbations of all of this, right? These are the O(epsilon) quantities that I discussed earlier, and remember we said that, when I do the stability of Minkowski space, the curvature components are invariant to leading order, which is a very important fact. So the components of the curvature do not depend much on how I pick my frame. But on the other hand, the gammas are going to depend on it in a fundamental way. And I cannot control the curvature if I don't control the gammas, because in the Bianchi identities the covariant derivatives depend on the gammas,
so the control of the gamma is essential also in controlling the curvature. But somehow the good thing is that I don't care too much when I treat the curvature, I don't care too much about which gauge I choose, right? Okay, this is after the fact. At the time, that was not exactly the way we saw it.
All right, so you have to pick a gauge, right? And the gauge consists of two things: the time function that I already mentioned, which was maximal, and, more importantly, an optical function, which is properly initialized. This is much more important, in fact,
because in order to understand decay, you start out with initial conditions and construct a time function, so a foliation by its level sets. But remember that the decay properties of the curvature, of waves in general, depend on the null directions: most of the energy is transmitted along null directions, where you get the worst decay, and away from them you get much better decay. So it's extremely important to keep track of what the null directions are, and this is where you construct the optical function, which is defined like this.
So you construct the spacetime together with a foliation by a second function. So you have t, and now you have a second function, u, whose level sets u equal constant should be like light cones in Minkowski space.
The way to make sure of that is to solve the so-called eikonal equation: g^{alpha beta} d_alpha u d_beta u equal to zero, where g is the spacetime metric. If you remember from the very beginning of my lectures, that's a way to obtain null hypersurfaces: if I solve this equation, u equal constant is a null hypersurface. So that's how I'm going to construct u. But of course, this by itself is a non-linear problem, because the equation is quadratic in the derivatives of u, and it also depends on the metric. So when you actually solve the Einstein equations, you have to think about solving R_{alpha beta} equal to zero together with the eikonal equation. These two are the fundamental building blocks in the construction of the spacetime: you construct both. And from them, from u and t, you get the intersections.
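For orientation, in Minkowski space the eikonal equation can be solved explicitly (a standard check, spelled out here):

```latex
% Minkowski metric g = -dt^2 + dr^2 + r^2 d\omega^2; the eikonal equation
% g^{\alpha\beta}\partial_\alpha u\,\partial_\beta u = 0 is solved by u = t - r:
-(\partial_t u)^2 + (\partial_r u)^2 = -(1)^2 + (-1)^2 = 0.
```

Its level sets u = constant are the outgoing light cones t - r = constant, which is exactly the model for the null hypersurfaces in the curved construction.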
So the intersections that you see here, which I'm going to call S_{t,u}, are two-surfaces, and from here I can construct the null frame. Once I have t and u, I have S_{t,u}, and then I have the null pair e3 and e4, and e1, e2, which I construct very easily. I mean, e4 is generated by u: it's the geodesic null vector field associated to u. The other one, e3, I construct based on e4 and the fact that I already know the direction transversal to t equal constant, which is dt; by a kind of symmetry, from these two I can construct the second one. So I get both e3 and e4 now. And then, of course, e1, e2 are just tangent to the two-surfaces. So this gives me the frame. E3 plus e4 is dt? E3 plus e4 is essentially dt, correct. I have to normalize it also a little bit, but it's in the direction of dt, correct.
Okay, so this is my frame. You see, once I have these two functions, I can define a null frame; once I have the null frame, I can define the gammas; and then I can write down equations, schematically of the form derivative of gamma equals curvature. So the way to think about it now is that I have two types of equations: equations at the level of the curvature itself and equations at the level of the gammas, right? So if I know the curvature, I can determine the gammas by just integrating these equations.
So that's sort of the big thing that needs to be done. But the first and most important thing is to actually determine the curvature, and that's where the fact that the curvature components are invariant up to O(epsilon squared) is going to be very important. So how do you deal with the curvature?
Okay, so let me write it here. The curvature is the crucial thing, and there I have to think about the Bianchi identities, dR equal to zero, and these other equations, the divergence equal to zero, right? So I can think of these as some kind of Maxwell equations: more complicated, because there are more indices, but formally similar to the Maxwell equations. Are these always satisfied? Yes: if Ricci is zero, then this one is the Bianchi identity and this one is the divergence equation, which follows from it. So you get something which looks very much like the Maxwell equations, and the idea is actually to treat it like the Maxwell equations.
In other words, I have to start taking derivatives relative to vector fields. First of all, how do I define my vector fields? It turns out that the vector fields can be defined using t, u, and the frame. For example, if I want to define the analog of the scaling vector field t dt + x^i di, the analog of this would be u times e3 plus u bar times e4, okay? Where u bar is essentially 2t minus u. So if I have u and t, I can define this one: e3 I have, e4 I have, and u I have, and this is the exact analog of the scaling vector field S, for example. So this is the kind of construction that I'm going to make, and I can define other vector fields like this, right?
So, in other words, I construct the vector fields to be intimately tied to these two functions and to the connections associated with them. The important thing, you see, is that there are only two functions, not the four coordinates I had there; I only need to construct two functions. And it's a much more geometric construction, because I know exactly what I construct and why. So t is maybe not so important, but clearly u is fundamental. Okay, so once you've done that, you have the vector fields, and you can start commuting, right? So you are going to get d of Lie_X of R and delta of Lie_X of R. But of course, here you are going to get pi's, complicated expressions involving the pi's and the curvature, right? So this is going to be, again, the hard part. All right, so in a first approximation, we ignore these terms.
You assume that somehow you are in flat space. You get estimates for the curvature, which are going to be energy-type estimates, and from them you get the decay rates, using the kind of global Sobolev inequality that I mentioned earlier, something similar to that. So you get the decay rates for the curvature: the various components of the curvature decay at different rates. For example, alpha will decay like t to the minus seven halves, and beta like t to the minus seven halves, then rho, and so on. Alpha bar, which is the lowest component, behaves only like t to the minus one. You see, alpha bar, if you remember, contains E3 twice, and E3 is the bad direction. For this reason alpha bar is the worst component; it behaves only like t to the minus one, which is exactly the behavior of solutions of the wave equation, so you cannot do better than this. So alpha bar is what you observe when you do LIGO experiments. All the information is carried by it, because it decays very slowly; all the others decay too fast to see.
So this is what you're going to see in LIGO experiments; obviously, it's the curvature that carries the gravitational waves. Okay, so that's sort of the general philosophy. So now let me go through it a little more. We have the gauge condition, which I explained; robust decay based on the vector field method to get the decay for R; construct approximate Killing and conformal Killing vector fields adapted to the foliation. And there has to be a null condition. The null condition has to do with the structure of these error terms: to treat them, you are forced to decompose everything. You have to look at the components and hope that the worst possible decay rates, coming from the curvature and from the pi's, are not of the type I mentioned earlier, this (E3 phi) squared.
You have to avoid having components of this type. And if the construction is geometric, then you would avoid it. And indeed, my construction here is geometric. So that's sort of the general philosophy. Let me go a little bit more into this. OK, so this is a theorem.
You construct this maximal foliation, a null foliation by these null hypersurfaces, and an adapted null frame: E4, E3, and the Ea's, which are tangent to the intersections between t equal constant and u equal constant, so there is a foliation by compact two-surfaces. So it's integrable? In this case, it's integrable, right: because this is the stability of Minkowski space, the expectation is that it's integrable. If I do the stability of Kerr, I expect to get something which is non-integrable. All right, so you get the S_{t,u}, which are the intersections, and you define r to be the area radius of S_{t,u}.
Okay, so you define r in a geometric way as the area radius of these two-surfaces. And r has to be like t, right, because you expect u equal constant to be similar to t minus r equal constant in Minkowski space. Of course there is a deviation, but you hope the deviation is not too big; in any case, t and r have to be comparable. And here is what you get in terms of r: alpha behaves like r to the minus seven halves, beta like r to the minus seven halves, rho like r to the minus three, rho star like r to the minus seven halves, beta bar like r to the minus two, and alpha bar like r to the minus one. This last component, again, is the one that you see; it's the one seen by LIGO. Now, this is called incomplete peeling, because Penrose had sort of an ad hoc way
of deriving the decay estimates for the curvature, making certain assumptions. So based on certain assumptions, he was able to find much stronger decay. So he would find, for example, r to the minus five here, r to the minus four, r to the minus three, and so on. But in the stability of Minkowski space,
we proved much less, but it seems to be much more consistent with what is actually going on; in fact, strong peeling is not generic. Is this a null hypersurface? No, I mean, S_{t,u} is the wavefront at time t. So you have this C_u here, right? That is a null hypersurface, which corresponds to u equal constant. Then you have t equal constant, and then you have the S_{t,u}, which are the intersections. So these are, if you want, the wavefronts; the S_{t,u} are the wavefronts, right?
So okay, that's the kind of decay you get. This is again another picture: the initial data sigma zero; t equal constant is seen here; these are the null cones, which here I call H but which I called C before. And these are the intersections. You see, obviously, that these are not spherical anymore, because the gravitational waves distort the wavefronts. But this is how the spacetime you construct looks. So, the role of the curvature,
again, as I mentioned before: all null components of R relative to adapted null frames stay within O(epsilon) of their initial values. They do not depend much on the frame; they are effectively gauge independent, and that's why you can analyze them in the way I told you here. This method, unfortunately, will not work when you do the stability of black holes, and the reason is that not all components are O(epsilon) gauge invariant: you get certain quantities which stay there forever, and therefore you are not going to be able to get decay in this way; you have to do something much more drastic. But in any case, in the stability of Minkowski space, you have a uniform treatment of all components of the curvature by using a tensor associated to these equations that I just mentioned, these two here. This is a matter of notation, and not a very good one; I should have delta R here. In any case, you can associate to these equations something called the Bel-Robinson tensor, a remarkable four-tensor which plays the role of an energy-momentum tensor. You see, if you talk about the Maxwell equations,
dF equal to zero and dF star equal to zero, then associated to these equations there is an energy-momentum tensor, T_{alpha beta} = F_{alpha mu} F_{beta}^{mu} plus the same expression with the dual F star; you add the dual so that you get something completely symmetric, with maybe a one half in front. This is the energy-momentum tensor, and it has the remarkable property that we all know, connected with Noether's theorem: its divergence is equal to zero. So T is symmetric and divergence free. And once you have this, the way you get conservation laws is that you look at a vector field X and take the contraction T_{alpha beta} X^{beta},
you get a one form, right? Because you have just this one free index. You take the divergence of this one form, and you get: if D alpha falls on this, you get zero. If it falls on this, you get T alpha beta times, well, let me write it this way,
T alpha beta. I'll put the indices up. T alpha beta times D alpha X beta plus D beta X alpha. But because T is symmetric, exactly, so I put a one half, and that's the symmetry of T. And this is exactly the Lie derivative of the metric g, right? So if X is Killing, you get zero here,
and that gives you the conservation law by integration. You integrate, you get the conservation law. Otherwise, you have to take that term into account. This is exactly the pi X. So for the Einstein equations, there is something similar, but remarkable because it's not a two-tensor; it's a four-tensor. So there is the Bel-Robinson tensor,
T alpha beta gamma delta. So it's a four index tensor. And it looks like this. It's again, maybe a one half. There is an, I'm not going to put all the indices. There will be a combination of indices, R times R plus R star times R star in a correct way of putting the indices.
You have two contractions. Exactly. So this is four, this is four, two contractions, and you are left with four indices. So this will be a four-tensor. It's fully symmetric. And this is for Ricci equal to zero.
It's fully symmetric and it's traceless. So the trace relative to any two indices is zero. And it verifies the divergence. So D alpha T alpha beta gamma delta is exactly identically equal to zero. So in other words,
it plays the role of an energy momentum tensor, except that now instead of taking just T alpha beta gamma delta, you can play with three things here. You can take x1 alpha, x2 beta, x3 gamma. And then you do this.
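In formulas, the tensor and the currents just described look like this (a sketch in one standard index convention; the lecture leaves the indices schematic):

```latex
% Bel-Robinson tensor of a Ricci-flat space-time:
Q_{\alpha\beta\gamma\delta}
  = R_{\alpha\mu\gamma\nu}\,R_{\beta}{}^{\mu}{}_{\delta}{}^{\nu}
  + {}^{*}\!R_{\alpha\mu\gamma\nu}\,{}^{*}\!R_{\beta}{}^{\mu}{}_{\delta}{}^{\nu};
% fully symmetric, traceless in every pair of indices, and, for Ric = 0,
D^{\alpha}Q_{\alpha\beta\gamma\delta} = 0.
% Generalized current built from three vector fields X, Y, Z:
P_{\alpha} = Q_{\alpha\beta\gamma\delta}\,X^{\beta}Y^{\gamma}Z^{\delta},
% with divergence controlled by the deformation tensors {}^{(X)}\pi = \mathcal{L}_X g:
D^{\alpha}P_{\alpha}
  = \tfrac12\,Q^{\alpha\beta\gamma\delta}\Bigl(
      {}^{(X)}\pi_{\alpha\beta}\,Y_{\gamma}Z_{\delta}
    + {}^{(Y)}\pi_{\alpha\gamma}\,X_{\beta}Z_{\delta}
    + {}^{(Z)}\pi_{\alpha\delta}\,X_{\beta}Y_{\gamma}\Bigr),
% which vanishes when X, Y, Z are Killing, giving conservation laws.
```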
And again, you see the same thing. Because of the symmetries, you are going to see the pi, the deformation tensor of these vector fields, showing up on the right hand side. So in particular, if all of them are Killing or conformal Killing, you get a conservation law. Of course, for solutions of the Einstein equations,
you don't expect to have Killing fields. So you always have something on the right hand side. But this is what I wanted to say here: you can define generalized energy estimates, just like we did for the wave equation, and once you have those energy-type estimates, you can derive the decay each component of the curvature has. So this is sort of the effective, invariant way to treat the wave character of the Einstein vacuum equations. So again, this is remarkable because you don't need the fundamental solution. I mean, typically... Thibaut is not here,
but if we asked him, we would see that that's what he does: he uses a fundamental solution, of course. Here, you don't have to use a fundamental solution at all. It's sort of a very robust way of deriving the decay, and of course of deriving energy estimates also. OK, so finally, the proof is based on a huge bootstrap with three major steps.
So it has to be a bootstrap because otherwise you can't do anything. So you start by making certain assumptions on the curvature norms. These are invariant curvature norms involving these vector fields. You assume that they are bounded for all time, and from that, you get precise decay estimates
of the connection coefficients of the (t, u) foliation, based on the equations which I wrote here, which were this one: d gamma plus gamma times gamma equals R. So if I make a bootstrap assumption, if I assume something about R, then I can derive estimates for the gammas. Once I have the estimates for the gammas,
I can use them to derive estimates for the deformation tensors of the Killing and approximate Killing vector fields, because, as you remember, these Killing vector fields are defined based on the geometry of the foliation,
and therefore based on the frame and on the Ricci coefficients of the frame and so on. And then once I have this, I can go to this step, which is the most complicated one, and show that indeed I can get bounds for the curvature.
But of course, to do that, I have to control the error terms which are generated here, which requires lots of things, in particular decay for all components, but also, in addition, the null condition. If the null condition is not verified, I'm dead, right? Because I will not be able to control these error terms. Just like in the wave equation,
I recall in the wave equation, if I have dt phi squared, and I try to implement my strategy, I will not be able to because this blows up in fact, right? So the same thing here, in order to work, you have to have this null condition satisfied and fortunately it is. And therefore,
this is how you get bounds on the curvature and therefore you can go back here and close the whole loop, right? But of course, this unfortunately takes quite a while. The original proof, we had about 600 pages, I think now maybe less, 580. Nowadays, of course, it can be done much faster, right?
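Stepping back, the loop just described can be summarized schematically (my paraphrase; Gamma stands for the connection coefficients, R for the curvature, and the improved constant is illustrative):

```latex
% (1) Bootstrap assumption: invariant curvature norms bounded for all time,
\mathcal{R} \le \Delta .
% (2) From the structure equations
d\Gamma + \Gamma\cdot\Gamma = R
% derive decay for \Gamma, hence for the deformation tensors {}^{(X)}\pi
% of the approximate Killing fields.
% (3) Run the generalized energy estimates; the null condition keeps the
% error terms integrable and returns an improved bound, say
\mathcal{R} \le \Delta/2 ,
% which closes the loop and yields global existence together with decay.
```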
So there are other proofs which are much faster. Okay, there is another result which I think would have been of interest to Thibaut. Since he's not here, I'm not sure that it's worthwhile spending too much time on it. This is a question that, I remember, he was interested in from the very beginning, when I met him. The thing was that this peeling,
these components of the curvature, which decay like r to the minus seven half, r to the minus two, and r to the minus one, is not the peeling predicted by physicists before. They had this strong peeling, which was predicted by Penrose, or Bondi-Sachs also.
So he wanted to know whether you can do better, whether you can get the full peeling. Anyway, so here I will mention very quickly two results. So first of all, the first result, which was with Nicolò in 2001, was the following. It's the same setting as in the stability of Minkowski space.
You start with (sigma zero, g, k), the same initial condition, but I don't assume any smallness. I don't make any smallness assumption; instead I look at the domain of influence, the future set,
of a sufficiently large compact set. So I take a large compact set on sigma zero, and I'm only interested in what happens outside. So here, if I have large data, of course I'm in the regime of the final state, where lots of things can happen here.
I can have extremely complicated things. So this result tells you that if you forget about this part, which obviously is going to be difficult, and you are interested only in what happens sufficiently far from a compact set, then this behaves like in the stability of Minkowski space. In other words,
the data now is going to be sufficiently small because of asymptotic flatness. Asymptotic flatness means that things become small and small as you approach infinity. So things are going to be sufficiently small, and therefore I can construct my space time all the way to the null hypersurface generated here. In other words,
I cannot go beyond, but I can construct all the way to the null hypersurface. And this is based on a double null foliation. So the innovation here is that you don't use t equal constant anymore. Of course, you couldn't, because the maximality of t would make it impossible
to use it just in this region. Something maximal has to be global, so it would have to go inside the region which you don't control. So you construct only something outside, but instead of constructing with t equal constant, you construct with a double null foliation. So in other words, in addition to these null cones,
you take another family of light cones. So in other words, you replace t equal constant by u-bar equal constant, which is another family of null cones which are now incoming, so moving in this direction. So in other words, I can foliate this region like this.
So the intersections are still going to give me 2-surfaces, which are S(u, u-bar). So instead of having S(u, t) as I had before, now I have S(u, u-bar) as level surfaces.
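In formulas, the standard setup is the following (a sketch; the optical functions solve the eikonal equation):

```latex
% Double null foliation: two optical functions u, \bar{u} solving
g^{\alpha\beta}\,\partial_{\alpha}u\,\partial_{\beta}u = 0, \qquad
g^{\alpha\beta}\,\partial_{\alpha}\bar{u}\,\partial_{\beta}\bar{u} = 0,
% with u = const the outgoing null cones and \bar{u} = const the incoming ones.
% The 2-surfaces of the foliation are the intersections
S_{u,\bar{u}} = \{u = \mathrm{const}\} \cap \{\bar{u} = \mathrm{const}\}.
```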
And I can do the analysis more or less in the same way. So the only innovation here is that it's what is called the double null foliation. So double null foliation is given by two functions, u and u-bar, verifying these equations. And otherwise, it's very similar, and the result is very similar. And finally,
yeah, so those would be the same powers, and I make also the same assumptions in that result. But now, I can say more. So this is another result, with Nicolò in 2003; it's a matter of the assumptions on the initial data.
If you remember, the assumptions were here, these ones. But what does k plus one mean? Okay, so let me explain. So if you don't take any derivatives, it's just gij minus (1 plus 2m over r) delta ij is O(r to the minus two).
In other words, this is the Schwarzschild part. If I subtract the Schwarzschild part from the metric, what I'm left with are terms which decay like r to the minus two. k plus one means I'm also taking a certain number of derivatives. Every time I take a derivative of this,
relative to the coordinates, because I have a coordinate system in a neighborhood of infinity, it decays better by a power of r. It improves by a power of r. The same thing here, it improves by a power of r. So now, in this result, I make stronger assumptions.
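Schematically, the baseline decay assumptions just recalled read as follows (my paraphrase of the conditions on sigma zero):

```latex
% Asymptotic flatness assumption, in coordinates near infinity:
\partial^{\,l}\!\left( g_{ij} - \Bigl(1 + \tfrac{2m}{r}\Bigr)\delta_{ij}\right)
  = O\!\left(r^{-2-l}\right), \qquad 0 \le l \le k+1.
% Subtracting the Schwarzschild part leaves O(r^{-2}); each coordinate
% derivative improves the decay by one power of r.
% (Analogous conditions hold for the second fundamental form k.)
```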
So if I take away the Schwarzschild part, then I can go all the way to r to the minus three halves minus gamma. In other words, gamma is a parameter which allows me to make stronger and stronger assumptions at infinity.
For example, if gamma is exactly three halves, I get r to the minus three here. So I get much stronger decay than before. So I take away the Schwarzschild part, and I get much stronger decay. So if gamma is larger than three halves, in other words, if the initial data decays sufficiently fast, then I can get the Penrose peeling.
In other words, r to the minus five for alpha and r to the minus four for beta. But you see, this requires a lot more decay. So it's somewhat non-generic. This was postulated by Penrose.
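For reference, the assumption and the resulting rates, as I read them (the component labels alpha, beta follow the standard null decomposition of the curvature, which the lecture does not spell out here):

```latex
% Strengthened decay of the 2003 result, with parameter \gamma:
g_{ij} - \Bigl(1 + \tfrac{2m}{r}\Bigr)\delta_{ij} = O\!\left(r^{-3/2-\gamma}\right).
% For \gamma > 3/2 one recovers the Penrose (strong) peeling,
\alpha = O\!\left(r^{-5}\right), \qquad \beta = O\!\left(r^{-4}\right),
% instead of the weaker rates r^{-7/2}, r^{-2}, r^{-1} obtained in the
% stability-of-Minkowski result.
```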
So this came from the analysis of Penrose. Yes, yeah, right. So Penrose sort of assumed that this kind of space-time, which is really an asymptotically flat solution of the Ricci-flat equation, can be conformally compactified by adding a boundary at infinity. But this is an ad hoc assumption.
I mean, of course, it was never justified. In fact, nobody was able to justify this conformal compactification picture. But the estimate is still true. Right, exactly. Yeah, I mean, you can do it, provided that you take enough decay, which it's not, yeah. I mean, the exact amount of decay is important in applications.
But anyway, that's something that should have been discussed with Thibaut, as you say. What is g and gs? That's the Schwarzschild part. So you have g and k, right? And I take gs, the Schwarzschild part; in other words, it's 1 plus 2m over r.
And then what's left should decay faster. You cannot do better, you see, because of the mass: the positive mass theorem tells you that if m is equal to zero, then, in fact, g has to be exactly the Euclidean metric, right?
So if you are to have any non-trivial perturbations, they have to contain this 1 over r term. Otherwise, it's not right. So you always have to have this long range part. But if you are close to Schwarzschild... In fact, yeah, exactly, right. So in a certain sense,
it's also tied to the stability of Schwarzschild, actually. But Schwarzschild, of course, is much more difficult. Anyway, so I think this is a good place to stop. So next time, which is just the last two hours, I'll really talk about this new result on black hole stability with Jérémie. And that will be...
That will be to show that, at least... so black hole stability is much more difficult in the case of Schwarzschild and Kerr. The only thing which is known today is the linear theory, so there are many interesting results in linear theory, but the non-linear problem is much more difficult, and the result I'll
mention is the result on stability of Schwarzschild under restrictive perturbations. Those restrictive perturbations are such that they constrain the final state to still be Schwarzschild, because normally, if you perturb Schwarzschild, you will not stay in the
Schwarzschild class. You'll go into a Kerr with small a, with small rotation. You always generate some rotation unless you make some restrictions, so the restrictions we make are just so that the final state is still Schwarzschild. Nevertheless, you still have to work hard to adjust for the final mass, to track the
final mass, because the final mass is going to be different, and to track the gauge, to find dynamically the correct gauge in which you have decay; that's the hardest part, in fact. With this, I'll stop.