The h-principle and a conjecture of Onsager in fluid dynamics
00:15
Thank you for the opportunity to speak at this wonderful place. It's the first time for me and thank
00:23
you to the chairman, who comes from one of the best universities in the world. So, I'm going to talk about the Euler equations and the isometric embedding problem. You already saw a lot about the Euler equations in Vlad's talk. So, these are the incompressible Euler equations, and the question is whether the energy identity,
00:42
the energy conservation, which is this identity here, is valid or not. Onsager's conjecture is the following: so, yes, if you have the following condition; and no,
01:16
there are solutions which don't have the energy conservation if the condition is weaker.
01:30
Okay, and you see that actually, in his original conjecture, this constant here is a C which is independent of x, y, and t. So, in a sense, we could say,
01:41
here we have one-third of a derivative in space of v, which is in L^∞ in time and space, and here we have this guy, one-third minus epsilon, in L^∞.
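In symbols — the board is not captured in the transcript, so this is the standard formulation — the incompressible Euler equations and the energy identity are
\[
\partial_t v + \operatorname{div}(v\otimes v) + \nabla p = 0,\qquad \operatorname{div} v = 0,
\qquad\qquad
\int |v(x,t)|^2\,dx \;=\; \int |v(x,0)|^2\,dx,
\]
and Onsager's conjecture asserts: if $|v(x,t)-v(y,t)|\le C\,|x-y|^{\theta}$ with $\theta>1/3$ (the constant $C$ independent of $x$, $y$, and $t$), then every weak solution conserves energy; for every $\theta<1/3$ there exist weak solutions which do not.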
02:04
Okay, and then you heard in Vlad's talk all the results which are known so far about this conjecture. So, for the "yes" part: Constantin, E, and Titi were actually the first to
02:22
prove exactly the statement of Onsager, after work of Eyink, and then there have been refinements. Then for the "no" part, you have Scheffer in 1993 and Shnirelman in '98,
02:42
and this is v in L^2. Then you have László and myself in 2008, and there you have v in L^∞. Then again László and myself in 2011, and you have v in C^0,
03:02
then, slightly after, one-tenth of a derivative of v in L^∞; then Isett in 2013,
03:22
which has one-fifth of a derivative in L^∞ — and there is an alternative proof by Tristan, László, and myself around the same time — and then there is again Tristan, László, and myself,
03:43
and this time it's really one-third of a derivative, but it's going to be in L^1 in time, L^∞ in space; and then there is still Tristan,
04:01
Nadir, and Vlad, and they have one-third of a derivative which is in L^∞ in time and L^2 in space. So, these are the results so far,
04:22
and Vlad discussed that probably the correct conjecture is that this over here is in L^3 instead of L^∞, and the funny thing is that these two results formally interpolate to L^3, although we don't know how to actually interpolate them. Okay, so as you see, it's a hell of a lot of exponents.
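The interpolation alluded to here is just the bookkeeping of Lebesgue exponents: a bound on one-third of a derivative in L^1 in time, L^∞ in space, and one in L^∞ in time, L^2 in space, sit at the two endpoints of the segment through L^3 in time and space:
\[
\Big(\tfrac1p,\tfrac1q\Big) \;=\; \theta\,(1,0) \;+\; (1-\theta)\,\Big(0,\tfrac12\Big)
\;\overset{\theta=1/3}{=}\;\Big(\tfrac13,\tfrac13\Big),
\]
so both results formally point at $D^{1/3}v\in L^3_{t,x}$ — the conjecturally sharp space — even though the two constructions cannot actually be interpolated with each other.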
04:42
And I want to give you a feeling for these exponents today. But I want to start with the origin of the methods that we are applying — or at least of the ideas that we are using — in order to prove these results, and that goes back to the isometric embedding problem
05:00
and Nash. Okay, so here you have a similar situation: you have the isometric embedding problem. So, this is a map which isometrically embeds a Riemannian manifold in R^n. If you're not familiar with the problem: it is, for instance, what you can do with a piece of paper. This is the flat metric, and I can bend it,
05:22
but I cannot tear it up, and I cannot really stretch it, because it's not elastic. So, that's an isometric embedding. And, for instance, if you are isometrically embedding a flat manifold, you actually know that if the isometric embedding is C^2 into R^3, it has to be a ruled surface.
05:41
A ruled surface means that for every point there must be a straight line which passes through it and stays on the surface. So, you see that, for instance, you cannot do anything better than a cylinder, in a sense, if you want to put your manifold in a very small space. But Nash — and in fact, in the way I'm going to state the theorem here,
06:00
subsequently Kuiper, however it should properly be pronounced, as my PhD student told me — so, they proved in the 50s that for every epsilon bigger than zero and for every immersion v which is short, which means that it satisfies this inequality in the sense of quadratic forms —
06:33
so, nearby the short v there exists u, which goes from (M, g) into R^3 —
06:43
okay, into R^n in this case — which is an isometric embedding and which is uniformly close to your v. So, obviously this contradicts the rigidity theorem, because I could take, for instance, a piece of paper, shrink it by
07:02
decreasing lengths until it fits into a ball of radius epsilon, and then, by the theorem of Nash and Kuiper, approximate it again while staying in the epsilon neighborhood, and thus put my piece of paper isometrically in C^1 — with a C^1 isometry — into the ball of radius epsilon. Okay, and that is obviously contradicting somehow the rigidity, in a sense.
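Spelled out, the statement under discussion is the following (as in Nash's 1954 paper, extended by Kuiper to codimension one): if $v:(M^m,g)\to\mathbb{R}^n$ is a short immersion, i.e.
\[
\nabla v^{T}\,\nabla v \;<\; g \quad\text{in the sense of quadratic forms},
\]
then for every $\varepsilon>0$ there exists a $C^1$ isometric immersion $u$, $\nabla u^{T}\nabla u = g$, with $\|u-v\|_{C^0}\le\varepsilon$ (an embedding whenever $v$ is).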
07:21
So, actually what happens is that, if you're considering positively curved surfaces, so it is actually known, so in this case n is equal three and the dimension of the surface is equal two and
07:41
g has positive Gauss curvature. It was actually proved in the 50s by Borisov, that the embeddings are rigid,
08:01
so you cannot, for instance, have a theorem like Nash–Kuiper if they are in C^{1, 2/3+ε}. We have a second proof of this together with Sergio Conti,
08:22
also in 2011, which is much shorter than Borisov's proof; there is some connection with the proof by Constantin, E, and Titi of the "yes" part of the Onsager conjecture. Similarly, there was actually a claim by Borisov, in 1963 — so that's what he announced:
08:45
the Nash–Kuiper phenomenon — the Nash–Kuiper theorem — can be proved for C^{1,α} maps.
09:01
So, sorry, for C^{1,α} maps u, if the metric g is analytic. Okay, the exponent that he actually announced in the case of two-dimensional surfaces in R^3 — I can give you the general exponent later — is one over
09:22
seven; but what he in fact proved, a lot later — this was 2004 — was one over 13. So, one over seven actually really appeared in our paper together with Sergio and László, where we can also remove the analyticity of the metric, and I can maybe give you an idea of why
09:41
he's assuming g analytic and what the problem is. More recently, in 2015, together with a PhD student of mine, László and myself actually improved the exponent to one over five.
10:08
Okay, and this is, up to date, actually the best that you can do for two-dimensional surfaces in R^3. There is a general theorem that you can prove for general surfaces in (n+1)-dimensional Euclidean space,
10:23
but then the exponents are going to deteriorate. Very important: the Nash–Kuiper theorem can be proved for C^{1,α} maps — I mean, these exponents over here are valid — if your manifold is topologically trivial, so if it is a ball somehow. Okay, and the ambitious goal of
10:43
this talk would be to give you an idea of where these exponents come from. By the way, if you're interested in the isometric embedding problem, Gromov conjectures that the threshold — which for the Onsager conjecture is one over three — should here be one over two, and that seems to be also what Borisov believed;
11:01
but Borisov has actually died, so he can't answer this question. Anyhow, this really seems to be what he believed. Okay, so I'm going to focus on the exponents, but on the constructive side. So the first thing that I want to show you is: how actually could Nash prove that theorem over there?
11:31
And for the sake of this talk, actually let me assume that instead of going into any Euclidean space, you're going to a Euclidean space
11:41
which has two dimensions more. So, let's say big N is bigger or equal than m plus two. Obviously, this does not serve the purpose of our problem over here, which would be for two-dimensional surfaces in R^3 — going down to codimension one is actually the improvement of Kuiper — but then the computations are sort of nastier and more difficult to explain.
12:01
Okay, so I want to produce my approximation u as a limit of successive approximations. So, I want to pass from u_q to u_{q+1}, to u_{q+2}, and so on. Okay, and how do I want to do this? Well, first of all,
12:20
I will keep a short map along the way. So, I'm going to have this inequality all the time. Notice that, in matrix notation, this inequality is just the fact that the symmetric matrix Du_q^T Du_q is less than g.
12:43
Okay, and then I want to modify u_q to a new u_{q+1} in such a way that this thing over here — the metric error — is substantially decreased, so that it's substantially smaller than this guy over here.
13:17
Okay, and how am I going actually to do this? So, this is the interesting part.
13:22
So, I'm going to take h, which is my metric error. Okay, and this is a positive definite matrix. I'm going to decompose it as a sum of coefficients —
13:43
let's call them a_i squared — times rank-one matrices, which are positive semi-definite. Okay, so the fact that I can do this — the fact that I can decompose this with fixed vectors over here and, over here, coefficients which vary in x —
14:00
is a simple exercise in linear algebra. Okay, and now what I want to do is modify my u_q; I actually have to modify it in a finite number of steps. So, what I want to do is perturb u_q to u_{q+1} by adding a perturbation.
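To record the decomposition just described: the metric error $h$, being positive definite, can be written as
\[
h(x)\;=\;\sum_{i=1}^{n_*} a_i^2(x)\; e_i\otimes e_i ,
\]
with finitely many fixed unit vectors $e_i$ and coefficients $a_i$ which vary in $x$ — possible, locally, because the positive definite matrices form an open convex cone and the rank-one matrices $e\otimes e$ span the symmetric matrices.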
14:21
And each time that I add a perturbation, this product — I mean, this product computed for the perturbed map — is going to look like the previous one, but I'm adding one of these summands, one of these portions. And I do this a finite number of times, and then I have something which is u_{q+1},
14:41
which is going to have, so the aim is essentially to have something like this, up to a negligible error. Okay, very good.
15:01
So, how am I actually doing this? This is done by what are nowadays called Nash spirals. So, for instance, for the first perturbation, you take u_q of x, and you add the following precise formula. So, you have one over λ,
15:20
and then you have B of x — I'm going to tell you in a second what it is — then you have a_1 of x, times cosine of λ x·e_1. And then you have one over λ, and then N of x, and then a_1 of x, and then sine of λ x·e_1.
15:41
Okay, so now, what are B and N? B and N are two orthonormal vectors which are also normal to my surface u_q, okay? So, B is orthogonal to N, and B and N are orthogonal to the tangent space of my image manifold.
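Putting the dictated formula together, the first perturbation is
\[
u_{q+1}(x)\;=\;u_q(x)\;+\;\frac{a_1(x)}{\lambda}\,B(x)\,\cos\!\big(\lambda\,x\cdot e_1\big)\;+\;\frac{a_1(x)}{\lambda}\,N(x)\,\sin\!\big(\lambda\,x\cdot e_1\big),
\]
with $B\perp N$, $|B|=|N|=1$, and both orthogonal to the tangent space of the image of $u_q$.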
16:07
Okay, so now what I want to show you is why this perturbation is going to work.
16:21
And the reason why this perturbation is going to work is that when I compute the derivative of u_q plus my perturbation, what I get is the following. So, there is Du_q, okay? Then you see that when the derivative hits the cosine,
16:41
I get a λ outside, which cancels the one over λ. And I have something similar when the derivative hits the other term.
17:05
And then I have some error terms. These error terms are of order one over λ, because the derivative hits the other coefficients — it hits B, or it hits a_1. So, one typical example of an error which appears here is, for instance, something which looks like this:
17:21
the derivative of B, divided by λ, times a_1, times the cosine. So, this is a typical error. Okay, and now what happens is that, since these vectors are orthogonal to Du_q, when you compute the product of the transpose with itself,
17:47
what you're going to have is no mixed products, because they all cancel with each other. So, this is the orthogonality of B with N, and the orthogonality of N and B with Du_q.
18:02
Okay, and then what you get here is e_1 tensor e_1 times a_1 squared, once with a cosine squared and once with a sine squared, and cosine squared plus sine squared
18:22
sums to one. Okay, and you see what happens: voilà, I get my e_1 tensor e_1 times a_1 squared, and I've added one of the summands that I wanted to add. Now, I do this a finite number of times, and I'm happy.
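Written out, the computation just made is, up to the $O(1/\lambda)$ errors:
\[
Du_{q+1} \;=\; Du_q \;+\; a_1\big(N\cos(\lambda x\cdot e_1)-B\sin(\lambda x\cdot e_1)\big)\otimes e_1 \;+\; O(\lambda^{-1}),
\]
\[
Du_{q+1}^{T}Du_{q+1} \;=\; Du_q^{T}Du_q \;+\; a_1^2\big(\cos^2+\sin^2\big)\,e_1\otimes e_1 \;+\; O(\lambda^{-1})
\;=\; Du_q^{T}Du_q \;+\; a_1^2\,e_1\otimes e_1 \;+\; O(\lambda^{-1}),
\]
the mixed products vanishing precisely because $B$, $N$ are orthogonal to the columns of $Du_q$ and to each other.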
18:46
Okay, so this is the basic iteration, and you can actually put in the epsilons and deltas; once I tell you this trick, essentially in a couple of minutes you can rewrite Nash's paper from 1954. I mean, it's all here.
19:01
There's nothing deeper than this. Okay, so how do you get from this construction to a C^{1,α} construction? How do you get an exponent? Let me show you in this computation how, while we derive the exponent,
19:21
this magic one over three of the Onsager conjecture will appear in a second. Okay, so what is going to happen is the following. What you will prove is that this guy — the C^1 size of the increments — becomes a summable series,
19:42
so that u_q is converging in C^1. Okay, but this guy — the C^2 norm — most likely will blow up. Why most likely? You know it will blow up, because you have the rigidity theorem,
20:01
which is telling you that you can't possibly make this iteration mechanism work in C^2. Because, you see, the point is that since I have this one over λ, which I can shoot very high, my perturbation in C^0 can stay as arbitrarily close to your initial map as you want. Okay, so first of all, let me,
20:22
doing like Vlad actually did yesterday, call this guy which is going to be small — let me call it δ_{q+1}^{1/2}. Now, if you look at our computation, this guy over here is essentially a_1, right? The size of a_1. And a_1 squared is actually, in that sum upstairs,
20:46
related to the metric error. So what you actually get, if you make this ansatz, is that the metric error is essentially of size δ_{q+1}.
21:08
Okay, now, when I'm computing second derivatives, the most important part is when I hit my fast oscillating term again with the derivative. So I get a λ, okay? So let me just write this guy this way.
21:28
Okay, now what I'm expecting is that this is converging to zero and this is actually blowing up. So let us assume that the blow up is exponential and the convergence is exponential. So let me assume that delta Q is given by
21:44
λ_q to some power — sorry, to the power minus two α_0 — and this is going to be equal to λ_0 to the minus two α_0 q, okay?
22:02
So therefore λ_q is λ_0 to the power q. So this is the ansatz. Now, if you do a simple interpolation estimate between these two bounds, what you discover is that in C^{1,α} this is less or equal than δ_{q+1}^{1/2}
22:22
times λ_{q+1}^α. And of course now, if you look at what happens over here, you have λ_{q+1} to the α minus α_0. So this is going to converge if this α is less
22:41
than α_0. So this α_0 here is exactly the Hölder threshold that you can achieve with your iteration.
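In formulas: with $\|u_{q+1}-u_q\|_{C^1}\lesssim\delta_{q+1}^{1/2}$ and $\|u_{q+1}-u_q\|_{C^2}\lesssim\delta_{q+1}^{1/2}\lambda_{q+1}$, interpolation gives
\[
\|u_{q+1}-u_q\|_{C^{1,\alpha}}\;\le\;\|u_{q+1}-u_q\|_{C^{1}}^{1-\alpha}\,\|u_{q+1}-u_q\|_{C^{2}}^{\alpha}
\;\lesssim\;\delta_{q+1}^{1/2}\,\lambda_{q+1}^{\alpha}\;=\;\lambda_{q+1}^{\alpha-\alpha_0},
\]
using the ansatz $\delta_q=\lambda_q^{-2\alpha_0}$, $\lambda_q=\lambda_0^{\,q}$; the series is summable exactly when $\alpha<\alpha_0$.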
23:01
Okay, so if that is what you can do, now you have to understand how you're going to make this convergence of the δ_q as fast as possible, given what you're going to choose for the λ_q. And what is actually the point? The point is that I have to choose λ_q large to make a certain error small. So that's the error that I have over there. Right, so if I make that error small,
23:22
then I'm done. So how small do I have to make that error, actually? I have to make that error small compared to the new δ_{q+2}, right? Because that error is going to tell me how big the new guy is. So essentially, what I have in the computation upstairs is that Du_{q+1} transpose times
23:42
Du_{q+1}, minus g, is going to be as small as this big O of one over λ. Okay, and I want to quantify that. Now, if I quantify that — you see, I have an example of one error.
24:00
Actually, the story would be much more complicated, because I have many other errors. But let's see what happens with that error over there. Okay, so I have a λ — that is, a one over λ_{q+1}. Then I have the derivative of a vector that I've chosen which is normal to my surface. Now, the vector is going to be as regular
24:21
as my tangent space, if I choose it smartly, right? So the derivative of the vector is going to be like the second derivative of u. Now, the second derivative of u_q is blowing up like an exponential: the second derivative of u_q is the sum of all these increments, but when you're summing a geometric series, what you see is something which is comparable to the last term, okay?
24:41
So therefore, here I have δ_q^{1/2} λ_q. Okay, but then you have the a_1, and you remember the a_1 is small — as small as the metric error that you have to kill — and that is δ_{q+1}^{1/2}. Okay, now, if your iteration is consistent,
25:02
this guy had better be smaller than the next error that you want to achieve. So your condition is the following: δ_q^{1/2} times δ_{q+1}^{1/2} times λ_q times λ_{q+1}^{-1}
25:23
has to be less or equal than δ_{q+2}. And actually, making it less or equal means picking this λ even bigger, so essentially you can put it equal; that's the best that you can do. Excuse me, excuse me, just one thing: why is it δ_{q+1}^{1/2}?
25:42
Because it's the... It's a_1 — and a_1, is it not δ_q^{1/2}? Oh, that's this one. That's this one, okay? So the size of the previous error gives you the size of the next perturbation. So there's a mismatch of plus one.
26:02
It took us like a couple of years before understanding this, actually. We were always using a longer notation; I think Tristan is the first one who really pointed out the notation which is consistent with this. Anyway, so now you can insert your ansatz, right? And take the logarithm and compute
26:20
what α_0 is — I just insert it in, okay? So, over here I have λ_0 to the minus α_0 times (q+1). Then I have λ_0 to the minus α_0 q, okay? Then I have λ_0 to the q.
26:43
Then I have λ_0 to the minus (q+1). And this has to be equal to λ_0 to the minus two α_0 times (q+2). Okay, now I take the log, which makes the λ_0 disappear, okay?
27:03
And now you notice that the terms in q actually cancel, right? So this cancels with that: minus α_0 q minus α_0 q gives minus two α_0 q, and here I have q minus q equals zero, okay?
27:20
So let me collect what is left. I have minus α_0; here I don't have anything; here I don't have anything; then I have minus one. And then on the other side I have minus four α_0 q — sorry, minus four α_0. Okay, so put the four α_0 on one side
27:42
and the rest on the other side: four α_0 equals one plus α_0, then three α_0 equals one, and then α_0 equals one over three. Okay, so this basic computation we did in 2010. And from 2010 we just thought: okay, this might explain at an analytic level why you have the Onsager conjecture.
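The bookkeeping just performed, in one display:
\[
\delta_q^{1/2}\,\delta_{q+1}^{1/2}\,\lambda_q\,\lambda_{q+1}^{-1}\;=\;\delta_{q+2}
\quad\Longleftrightarrow\quad
\lambda_0^{-\alpha_0 q}\;\lambda_0^{-\alpha_0(q+1)}\;\lambda_0^{\,q}\;\lambda_0^{-(q+1)}\;=\;\lambda_0^{-2\alpha_0(q+2)} .
\]
Taking logarithms in base $\lambda_0$, the terms in $q$ cancel and what is left is
\[
-\alpha_0-1\;=\;-4\alpha_0\quad\Longrightarrow\quad 3\alpha_0=1\quad\Longrightarrow\quad \alpha_0=\tfrac13 .
\]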
28:03
So now, why do you degenerate here? Well, you degenerate because I pretended I only have to add one term to the metric in the perturbation, but actually I have to add many, right? So I just made the computation with one error, but then I have to add the next error.
28:21
And the next error is going to have a faster oscillation compared to before. So I have to make faster and faster oscillations, while the metric error is not improving. Okay, what actually happens, if you do the whole computation — which I'm not going to give you — is that essentially the α_0 that you get is one over one plus two times the number of steps
28:43
that you have to do, this n_*, okay? So why, for instance, do you then have one over seven over here? It's because the space of two-by-two symmetric matrices is three-dimensional, and so if I want to write my symmetric matrix as a sum of rank-one matrices —
29:01
I mean, the rank-one matrices linearly generate the space of symmetric matrices, but I need at least three of them, okay? And then I have one divided by one plus two times three, and that gives me the seven. And similar numbers you can crank out for every possible dimension. So why in the hell can I actually improve to one over five?
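For the record, the count just made is
\[
\alpha_0\;=\;\frac{1}{1+2\,n_*},\qquad n_*=\dim\mathrm{Sym}(2)=3\;\Longrightarrow\;\alpha_0=\tfrac17,
\]
so lowering $n_*$, the number of oscillation steps, is the only lever — which is exactly what the diagonalization trick described next achieves.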
29:20
And that's because I can use differential geometry and by making a change of variables which is conformal at each step of the iteration, I can actually diagonalize the metric. When I diagonalize the metric, I actually need only two rank one matrices to write a matrix in diagonal form.
29:40
So I only need two steps. And when I need two steps and I plug into this formula, I get the one over five. Let me ask you something: taking n larger, would it improve at this step, because you would have more space to play in the...? The n_*, you mean? Yes. No, no, no, no.
30:01
Instead of two taking... oh no, because n_*... No, no, no. The n_* is actually going to kill me, because I have to add many more oscillations — unless you understand how to plug them all in together, which we have not yet. n_* is uniformly bounded. Right, n_* is uniformly bounded, right, right.
30:22
So for instance, in the case of n-dimensional surfaces, if the surface is topologically trivial, we are able to make this n_* equal to the dimension of the space of symmetric matrices, which is n times (n+1) divided by two. And this gives you the one over seven for two-dimensional surfaces. But then my question was, if instead of two
30:43
you take a higher dimension. Of course. Yeah, because this is the formula, right? You increase n_* — this is the formula — and the less you put in there, the better it is. With this iteration. Okay, so now it looks actually kind of funny, right? Because you see that the world record, without going to sort of different spaces,
31:02
is still one over five over here. The funny thing is that that one over five does not have anything to do with the other one over five. But since I've used essentially 20 minutes — well, 30 minutes — of my talk, I will not be able to tell you about that one over five. But hopefully I will be able to tell you about
31:21
these two guys. Okay, so let me give you a fake proof of the Onsager conjecture. So let me try to set up the same iteration mechanism for — yeah? But over there you said that the conjecture is that one half is actually optimal. Yeah. So somehow there's something... This is only going to one over three.
31:41
So either you believe Gromov, and then this iteration is not really reaching the threshold; or you believe that there's a reason why one over three should be critical even in this case. So here we believe Gromov, right? At least, I guess. Sorry? There are some explanations for the one over two, yeah.
32:00
From the rigidity part, though. And it would take me — well, I would go into extra time if I had to explain to you why one over two is actually interesting. I mean, the two over three over there is also not casual; it's not just a random number. So two over three, one over two, and one over three: they all have their own kind of internal reason.
32:23
Okay, so how am I going to construct the solutions of the Euler equations? You heard Vlad: it's an iteration. And the iteration is going to look like this.
32:41
That was already shown by Vlad. So this now is a three-by-three symmetric matrix. So how the hell is this actually connected to the isometric embedding problem? How do we come up with an idea like this? You can think about it in the following way.
33:01
So what is a short map? A short map is something for which you have the inequality instead of the equality. So if I give you a sequence of isometric embeddings and you don't have any estimates on them, okay, what is going to happen? You have a sequence of maps which preserve the lengths of curves. But if you have a sequence of curves with the same length which is converging to another curve,
33:23
the curve in the limit has length which is less or equal. So you can interpret the short maps — this inequality for ∂_i u · ∂_j u — as a relaxation of the isometric embedding problem. And how can you interpret this side? You can interpret it by saying: put this R̊ equal to zero,
33:43
and assume I give you a sequence of solutions of the Euler equations which is converging weakly but not strongly, okay? If it is converging only weakly — I mean, if you only have a bound in L^2 and you don't know that you're converging strongly — you can take the limit of v_q ⊗ v_q, okay,
34:02
and you will see that the tensor product of the limit of v_q with itself will most likely drop below the limit of v_q ⊗ v_q. Okay, so it's a convexity inequality, as much as that one over there is a convexity inequality. So this is in some sense a relaxation of the problem you started with. Okay, and now what I want to do is play a similar game.
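The iteration written on the board is, in the notation used below, the Euler–Reynolds system:
\[
\partial_t v_q+\operatorname{div}(v_q\otimes v_q)+\nabla p_q=\operatorname{div}\mathring R_q,\qquad \operatorname{div} v_q=0,
\]
with $\mathring R_q$ a symmetric (traceless) $3\times3$ matrix field: the exact Euler equations correspond to $\mathring R_q=0$, and the triples $(v_q,p_q,\mathring R_q)$ play the role that the short maps played before.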
34:21
So I want to start from (v_q, p_q, R̊_q), and I want to generate (v_{q+1}, p_{q+1}, R̊_{q+1}). Okay, so how am I going to do that? First of all, I notice the following.
34:41
This is a problem which I can always solve. I mean, you give me a vector, and I want a symmetric matrix whose divergence is equal to that vector. That I can always solve; it's an elliptic problem — just as I can always solve "the divergence of a vector field equals a given function", right? I solve the Laplacian, for instance. So that I can solve,
35:00
though I can solve it in more than one way. So I'm going to denote by div^{-1} some operator which is inverting this guy. And if you look at this operator in Fourier space, typically it is going to have order minus one, right? So if I take, for instance,
35:20
div^{-1} of something like e^{iλ k·x}, okay, then I will have something like k over |k|², times e^{iλ k·x}, and here I divide by λ, right? So there is a one over λ which is coming out.
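Concretely — one admissible choice among the many inversions, solving through the Laplacian as just mentioned — one can take, on a fast Fourier mode,
\[
\operatorname{div}^{-1} f:=\nabla\Delta^{-1}f,\qquad
\operatorname{div}^{-1}\big(e^{i\lambda k\cdot x}\big)=-\frac{i}{\lambda}\,\frac{k}{|k|^{2}}\,e^{i\lambda k\cdot x},
\]
so that on functions oscillating at frequency $\lambda$ the operator gains a factor $1/\lambda$; the matrix-valued version used for the Reynolds stress behaves in the same way.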
35:42
So this is a Fourier operator of order minus one. Okay, so I can always invert this guy. So I actually now want to produce a new map.
36:07
So this is going to be v_{q+1}; this is the perturbation that I'm adding.
36:27
Okay, so the perturbation here I'm not going to give a name. And then I declare this to be the next guy, because I can solve with the operator div^{-1}.
36:41
So the whole point is that I want to choose w, and I want to choose the perturbation for the pressure, in such a way that this new guy is much smaller than before. Okay, so as before — you see the analogy — the C^0 norm of w, which is v_{q+1} minus v_q,
37:06
is going to be estimated by δ_{q+1}^{1/2}. And since I will actually add a fast oscillating term, the C^1 norm will also have an estimate of this type — of size δ_{q+1}^{1/2} λ_{q+1}.
37:26
Okay, so how am I actually going to construct this w? This w is going to look like this: I'm going to make an ansatz. I'm going to say that this w is going to look like a function big W of v_q, R̊_q —
37:42
these are slow variables — and then I have fast oscillations. Okay, so for instance, in the ansatz upstairs, a_1, which is a function depending on the metric error, would be somehow the dependence on these slow variables.
38:03
And the oscillation that I have — the cosine of λ x·e_1 — would be in these fast variables. Okay, and something similar I do for the pressure. So p_{q+1} minus p_q is going to be some big function P, and then here I have v_q, R̊_q, λ_{q+1} x,
38:23
and λ_{q+1} t. Okay, so now, what is the set of conditions which, in this context, would make the Nash scheme work? I'm just going to write them down.
38:43
And it took quite some time to understand that these are the right conditions, because it took us quite some time to set up something which would resemble Nash. But I will show you how this set of conditions is actually very natural if you want to set up the iteration in this way.
39:02
Okay, so the set of conditions is the following. First of all, my function W now is going to be a function of v, R, ξ, and τ. And the first thing that I want to solve is this guy.
39:40
So this is the PDE part.
39:44
And then I have a second part. Since I want to add oscillations, the function should actually be periodic in ξ. Okay, and now I'm going to use the bracket for the average in ξ of a function.
40:01
So the average of W should be zero, and the average of W ⊗ W, minus its trace part, should actually be the traceless part of R.
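The board is not captured in the transcript, but from the discussion that follows, the set of conditions on $W=W(v,R,\xi,\tau)$ should read, roughly,
\[
\partial_\tau W+v\cdot\nabla_\xi W+\operatorname{div}_\xi(W\otimes W)+\nabla_\xi P=0,\qquad
\operatorname{div}_\xi W=0,
\]
\[
\langle W\rangle=0,\qquad \langle W\otimes W\rangle=\mathring R\;(\text{plus a multiple of the identity, as corrected below}),
\]
where $\langle\,\cdot\,\rangle$ denotes the average in $\xi$ over a period.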
40:24
I didn't tell you, but the tensor over there I actually take traceless. So why are these the conditions which will ensure that I can run the iteration? Okay, so let us look at a first thing. First of all, my W is not necessarily divergence-free,
40:49
And I want to actually add a perturbation which is divergence-free. In fact, the real perturbation that I'm adding
41:04
is this one, which will not be divergence-free, and then I add a small correction which makes it divergence-free. Okay, and now what I claim is that if I take this condition and that condition together, okay —
41:23
then the divergence of W is going to be very small, and I have to correct it with something which is small. So w_c is small. Okay, and why is that? Well, because I can expand W in a Fourier series
41:40
in e^{ik·ξ}, okay? So this would look like a sum over k of coefficients c_k, which depend on v and R, okay, times e^{iλ_{q+1} k·x}. Okay, so the condition that the average of W is equal
42:01
to zero tells me that c_0 is equal to zero. And the condition that the ξ-divergence of W is equal to zero tells me that, when I am computing the divergence of this guy, I'm not hitting the fast oscillating exponentials; I'm hitting only the coefficients, so I get this, okay?
42:23
So now, if I want to invert the divergence from here — I want to find a new field which has this divergence — since it is fast oscillating, I gain a one over λ_{q+1} when I invert the operator, okay? And that will give me a correction w_c which is very small. Okay, so that's actually the easy part — I mean, this is the thing which is easy to figure out.
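In formulas, the mechanism is:
\[
W=\sum_{k\neq0}c_k(v,R)\,e^{ik\cdot\xi},\qquad \xi=\lambda_{q+1}x,
\]
with $c_0=0$ (zero average) and $k\cdot c_k=0$ ($\xi$-divergence free), so that
\[
\operatorname{div}_x w=\sum_{k\neq0}\big(\text{slow derivatives of }c_k\big)\,e^{i\lambda_{q+1}k\cdot x}
\]
has no zero mode and oscillates at frequency $\lambda_{q+1}$: inverting the divergence then costs a factor $\lambda_{q+1}^{-1}$, and the correction $w_c$ is small.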
42:43
So let us go and look at what happens with the most complicated part, which is the part upstairs. So from now on, I will actually forget that this correction exists. Okay, so you see, I know that ∂_t v_q plus the divergence of
43:02
v_q ⊗ v_q, plus the gradient of p_q, is equal to the divergence of R̊_q. So I can subtract the equation for v_q from the equation for v_{q+1}. And what I get is the following. I get ∂_t w plus v_q · ∇w, okay?
43:23
Then I get plus the divergence of w ⊗ w, plus the gradient of p_{q+1} minus p_q. So you see that somehow here I'm expanding the product over there.
43:43
So I've taken v_q · ∇w, but I also have w · ∇v_q — this is still missing, okay? And then I have minus what I'm subtracting, which is the thing upstairs, the divergence of R̊_q.
44:09
Okay, now what happens? Let me take this guy out — this guy is going to be called the Nash error, and I'm going to discuss it in a moment.
44:21
And let me actually couple all of these guys together.
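Schematically, the grouping just described is
\[
\operatorname{div}\mathring R_{q+1}\;=\;\underbrace{w\cdot\nabla v_q}_{\text{Nash error}}
\;+\;\underbrace{\partial_t w+v_q\cdot\nabla w+\operatorname{div}(w\otimes w)+\nabla(p_{q+1}-p_q)-\operatorname{div}\mathring R_q}_{\text{killed in the fast variables by the conditions on }W},
\]
with the new Reynolds stress obtained by applying $\operatorname{div}^{-1}$ to what survives, namely the slow derivatives.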
44:43
Okay, now when I'm plugging my ansatz, and my ansatz is that the perturbation has that form, okay, so when I apply this operator, I'm applying it to the fast variables, and I'm applying to the slow variables, okay? And the condition that I have over here
45:01
exactly tells you that when I apply it to the fast variables, the operator gives zero, okay? So that's exactly the condition that I have over here. So this will only involve slow derivatives of my functions big W and big P.
45:23
Slow derivatives — what does that mean? It means that I'm hitting with the derivatives not the λ_{q+1} x and λ_{q+1} t, but the big W, okay? So now, if I'm hitting only the slow derivatives, what I essentially can do is take my expressions over here,
45:40
all the expressions that appear, and expand them in a Fourier series in ξ, okay? And then I can plug in λ_{q+1} x for ξ. So I'm doing exactly the same trick that I've done over here. And the reason why I was able to gain the λ_{q+1} was that the zeroth coefficient of the Fourier series
46:04
was equal to zero. Okay? So now, let me make the computations.
46:23
So by this notation "slow", I mean that I'm computing the derivative only on the entries of big W where I have v_q and R̊_q, okay? So here I have the slow time derivative of big W,
46:43
plus v_q · ∇_slow of big W, and then I have the slow divergence of big W ⊗ W, minus R̊_q.
47:01
Okay, what does this big writing mean? It means that if I expand everything — so, for instance, if I expand this W here as a series — then I will have the e^{iλ_{q+1} k·x}, which is not touched, and then I have the time derivatives, which are hitting my coefficients c_k, okay?
47:23
And when I'm expanding this W in a Fourier series, I know that c_0 is equal to zero; that's my condition that the average of W in the ξ variable is equal to zero. So I'm in good business over here. Here I'm also in good business, because you see that big W enters linearly.
47:41
So again, if the average of W in the ξ variable is equal to zero, then I'm fine. But here I'm not in good business, and the reason is that I have a resonant term — I have a quadratic term, right? So if I have a quadratic term, what I have to do is expand big W ⊗ W as a Fourier series in ξ,
48:02
and then set up the condition that the zeroth coefficient vanishes. Now, here there is no fast variable, so what the condition actually says is that the average of W ⊗ W has to cancel this R̊_q, okay? And that is the condition that we had here.
48:23
And actually, it's a little bit fake, because here I have a trace-free matrix, and here my matrix is not trace-free. Well, what actually happens is that — since here R̊_q is,
48:43
I mean, since here somehow there is a divergence in front — what I can do is add a constant which is independent of x but depends on time. So in fact, the condition is not really this one; it's more something like: this is R̊ plus some constant function,
49:00
which I'm going to call e(t), times the identity. So this is the real condition, and this is kind of the increment of the energy, if you want. Okay, so assuming I can do this, I can invert my operator div^{-1} and gain a λ.
49:28
Okay, so if you were able to do that, of course you would still be left with an error over here, and this is the error which looks like the one in the Nash iteration. So let me therefore handle that term over there.
49:44
So first of all, you see from this ansatz that, since I'm asking that the average of W ⊗ W is equal to this R̊, what actually happens is the following. The C^0 norm of w — this was the C^0 norm of v_{q+1} minus v_q —
50:03
is, we said, of the order δ_{q+1}^{1/2}. So, to be compatible with this restriction, we find exactly what we had in the Nash scheme: this error R̊_q actually has to be of size δ_{q+1}.
50:24
Very good. Now, if I have this ansatz, what is going to happen is that my Nash error — which is computed from that term — gives you the following.
50:41
I have to invert the operator div^{-1}, and the div^{-1} gives me the one over λ_{q+1}. Then I have to compute the C^0 norm of this guy, and the C^0 norm of this guy is the δ_{q+1}^{1/2},
51:01
which comes in over here. And then I have the gradient of v_q, and if the v_q are converging with a speed which is δ_q^{1/2}, the gradient is just picking up the previous oscillation, δ_q^{1/2} λ_q. And now, if you remember what we had for Nash, this is exactly the same error that we had.
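In symbols, the estimate for the Nash error is
\[
\big\|\operatorname{div}^{-1}\big(w\cdot\nabla v_q\big)\big\|_{C^{0}}
\;\lesssim\;\frac{1}{\lambda_{q+1}}\,\|w\|_{C^{0}}\,\|\nabla v_q\|_{C^{0}}
\;\lesssim\;\frac{\delta_{q+1}^{1/2}\,\delta_q^{1/2}\,\lambda_q}{\lambda_{q+1}},
\]
and requiring this to be at most $\delta_{q+2}$ is exactly the relation that produced $\alpha_0=1/3$ in the Nash computation.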
51:21
So if this worked, it would actually give you the Onsager conjecture, this one over three. So what is the problem? Why can't we actually solve the Onsager conjecture by this method? The reason is essentially here. So, it is possible —
51:41
and this was in some sense hidden in Vlad's talk — it is possible to make this guy small, but not to make it exactly equal to zero. So if you look for a function big W which solves these conditions over here exactly, we are not able to say that it exists. We don't know — maybe it exists, but it seems very unlikely.
52:01
So what you can actually do is make this one small by adding an extra parameter, and then you can try to optimize this extra parameter against the other parameter λ_{q+1}, and then you will have an error term which unfortunately takes you away
52:21
from this one over three regime. If you're just crude, it takes you to this one over ten; if you are less crude, it takes you to this one over five, somehow. But okay, I guess I was just too ambitious: to give you some idea about this one over five or one over ten, I would have needed too many details.
52:41
Thank you very much. Yeah, okay, so formally it's unlikely
53:02
that you have a solution to this, because in some sense this guy appears over here quadratically and this guy does not. So it doesn't look... I mean, you could think: okay, what happens if R̊_q is very small but v is not, okay?
53:23
Then, if R̊_q is very small but v is not, you're converging to a regime in which the transport term actually seems to win, because it's linear and it's way bigger than the other guy. So you can try to make it small — because, after all, the W ⊗ W condition is not matched exactly
53:40
— this is what happens somehow in our iterations — but to solve it exactly seems impossible. You're looking for a periodic solution? Even that, of course, is not necessarily needed: a quasi-periodic solution would actually still be okay. So the point is that I want to stick in a λ_{q+1} and still stay bounded,
54:01
and not only stay bounded, but stay bounded with all derivatives. So if I had a solution which, instead of being periodic, were uniformly bounded with all the derivatives uniformly bounded — and a lot of the time, when we have these epsilons, we actually end up computing a large number of derivatives
54:20
of this W, which grows as we get closer to the threshold — of course that one would be good enough as well. So a quasi-periodic solution would still be perfectly decent. The title had the phrase h-principle, but I don't think... Okay, so here's the h-principle.
54:43
This theorem tells you that you can approximate any solution of the relaxation of the problem with an actual solution. Okay, so this is one form of the h-principle. The h-principle of Gromov also has a kind of path connecting the solution to the subsolution.
55:04
Okay, I don't know what the h-principle is at all. Right, so the h-principle is something like this — this is the point of view of an analyst. You have a system of PDEs, and for the system of PDEs the following principle holds:
55:21
if you have a solution of the relaxed problem, you can approximate it, up to any epsilon, with an actual solution. So that is a form of the h-principle, if you want. Okay, so it's kind of telling you that, although you have a system of PDEs
55:41
which should kind of give you some rigidity, it's actually behaving more like an inequality than a true PDE, right? So if you have a solution to the PDE inequality, nearby there is a true solution of the PDE constraint, of the PDE equality. Okay, so that is a form of the h-principle. Now, none of these papers
56:02
really proves an h-principle, but there is a recent paper by László and Sara Daneri which proves an h-principle for this guy as well.
56:21
Now, what would be an h-principle in this context? It would be something like this. So, I told you how I do the iteration, right? Well, I didn't tell you from which point I start. Here, essentially, the Nash–Kuiper theorem is telling you: start from any point in the relaxation, and you can run the iteration. In all these iterations,
56:43
we are starting from a trivial point — we are starting from zero, zero, zero — and then we run the iteration, and zero, zero, zero is in the relaxation somehow. The paper by Sara Daneri and László has a statement which characterizes the points from which you can start, okay? And it tells you essentially that these are all the points
57:00
which satisfy a certain inequality. So that's the h-principle. Further questions? Probably. To what kind of evolution equations can this be applied? When we first proved this,
57:22
I would have said: well, only to this one. But actually, that's not true. Phil Isett and Vlad have applied it, for instance, to all active scalar equations which satisfy a certain condition
57:41
on the singular integral kernel. And the condition is the condition which tells you that there is essentially no compactness, no compactness hidden inside, right? So you see that this relaxation problem, I mean, this relaxation is really working because you have some sort of lack of compactness.
58:01
Okay, so a posteriori, the theorem — the h-principle — is telling you there is no compactness for your sequence of solutions; otherwise, you wouldn't be able to approximate so well something which is in the relaxation, right? So a posteriori it tells you that. And a priori, you could sort of dream: okay, if I have a lack of compactness, then I can apply it.
58:21
But of course, the situation is very subtle. Because, you see, I actually do have compactness here in the rigidity regime: if I have a sequence of solutions which are smooth, and you're solving, for instance, the isometric embedding problem for a positively curved surface, then the Gauss curvature is positive,
58:41
then you have convexity, and then you have an a priori estimate, and so your space of solutions is actually compact. So the lack of compactness is only when you go below a certain threshold, and not above. So in these cases it's all very interesting: where is this threshold lying? And we don't have a good guess for where this threshold is lying.
59:01
I mean, for the Euler equations we have Onsager's conjecture — we have the theory of turbulence, which gives you a guess. Whereas here, for instance, it's much less clear, because there is no clear intuitive picture.