
Derivation of invariant Gibbs measures for nonlinear Schroedinger equations from many-body quantum states


Formal Metadata

Title
Derivation of invariant Gibbs measures for nonlinear Schroedinger equations from many-body quantum states
Part Number
12
Number of Parts
23
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
We prove that Gibbs measures of nonlinear Schroedinger equations of Hartree type arise as high-temperature limits of appropriately modified thermal states in many-body quantum mechanics. In dimensions d=2,3 these Gibbs measures are supported on singular distributions and Wick ordering of the interaction is necessary. Our proof is based on a perturbative expansion in the interaction, organised in a diagrammatic representation, and on Borel resummation of the resulting series.
Transcript: English (auto-generated)
Thank you very much for the invitation. And yes, I would like to say something about the derivation of these invariant Gibbs measures
for some nonlinear Schrodinger equations, in particular for Hartree-type equations, starting from many-body quantum mechanics. So I'm going to say something to start about Hartree theory. Then I'm going to say something about many-body quantum mechanics. And then I would like to try to explain the relationship between these two theories; in particular,
I would like to present the theorem that we can prove. Good. So Hartree theory is based on the energy functional, which I wrote down here. So we will consider this energy functional for functions phi in L2 of Rd. And we will be interested in dimensions 1, 2, and 3,
with values in C. And then you see that this is the kinetic part of the energy. This is an external potential. And this is the interacting part of the energy that we want to consider. So we will assume that the external potential, v of x, is confining. So it's trapping: it grows at infinity.
And in a second, I will be a little bit more precise about what kind of condition we need. And I will also assume that the interaction potential, w, is of positive type, meaning that its Fourier transform is positive. It's a positive potential. And that it is bounded, so it's an element of L-infinity. So this may be relaxed a little bit.
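The energy functional on the slide is not reproduced in the transcript. A hedged reconstruction from the description above (kinetic part, external potential, interaction; the factor 1/2 in front of the interaction is an assumption) would read:

```latex
\mathcal{E}_H(\varphi)
  \;=\; \int_{\mathbb{R}^d} \Big( |\nabla \varphi(x)|^2 + V(x)\,|\varphi(x)|^2 \Big)\,\mathrm{d}x
  \;+\; \frac{1}{2} \iint_{\mathbb{R}^d \times \mathbb{R}^d}
        |\varphi(x)|^2 \, w(x-y)\, |\varphi(y)|^2 \,\mathrm{d}x\,\mathrm{d}y,
```

with the stated assumptions that V(x) grows at infinity, that the Fourier transform of w is nonnegative, and that w is in L-infinity.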
But we didn't try to optimize this condition here. So this is the energy. If you take the variation of the energy with respect to phi or phi bar, you get a time-dependent equation, a time-dependent Hamiltonian equation, which is the time-dependent Hartree equation. And I wrote it down here. So on the left-hand side, you have the time derivative of phi at time t times
the complex number i. And then on the right, you have what you get if you take the variation of the functional with respect to phi bar, evaluated at phi t. So the Laplacian acting on phi, the potential, and then the interaction potential, which is given by this convolution here. So the time-dependent Hartree equation preserves the mass,
preserves the L2 norm, and preserves the energy. And it is interesting to construct the invariant measure, which is formally given by this expression here. So you take the energy, which is, as I said, preserved by the time evolution. You take the L2 norm, which is also
preserved by the time evolution. You multiply it with some number kappa, and you take it into the exponential. And this d phi here should be something like a Lebesgue measure, although this is, of course, very, very formal. And yesterday, we heard in the talk of Andrea that there has been a big effort in the PDE community
to construct, to make sense of this measure here, and to prove the invariance with respect to the time evolution. Because this is very helpful when you want to show the well-posedness, the almost sure well-posedness for this nonlinear Schrodinger equation with rough initial data, so with irregular initial data.
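Again the slide is not visible; a hedged reconstruction of the time-dependent Hartree equation and of the formal invariant measure just described (Z is a normalization constant; the quotation marks flag that d-phi is only a formal Lebesgue measure) is:

```latex
i\,\partial_t \varphi_t \;=\; (-\Delta + V)\,\varphi_t \;+\; \big( w * |\varphi_t|^2 \big)\,\varphi_t,
\qquad
\mu(\mathrm{d}\varphi) \;\text{``=''}\;
  \frac{1}{Z}\, e^{-\mathcal{E}_H(\varphi) \,-\, \kappa \|\varphi\|_{L^2}^2}\, \mathrm{d}\varphi.
```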
And I wrote down some of the names of the people who have been active here in the mathematical community. This problem has also been considered already in the physics community, in the construction of quantum field theory, and there was an important work going back to Glimm and Jaffe, which I didn't mention here.
Now, one thing I should say: most of these works here, maybe all of them, consider a much more difficult case in the mathematical sense, in the sense that here we are looking at a simple example. We're taking a Hartree-type nonlinearity
with a bounded and repulsive potential, for which the construction of this measure is straightforward, as I will show in a moment. But of course, if you take a more difficult example, then you have many difficulties to overcome here. So this is not the topic of my talk. I don't want to construct this measure for a very difficult example. I'm just sticking with this simple case,
and I want to see the relation with other objects coming out of other theories. Good. Now, although yesterday we had a very nice introduction and we saw how this can be constructed, I wanted to spend the next five minutes to explain once again, or to recall, the precise construction
of this measure here, because it is going to be useful for what I'm going to say later. So the starting point, the basic idea, is that you first construct the free measure. So the measure associated with the free energy, which I denote here by epsilon zero of phi, which can be written just
as the expectation of this linear operator h, minus the Laplacian plus the external potential plus a constant kappa, in the state with wave function phi. So to define the free measure, it is useful to diagonalize the operator h. So we assume that the potential V is confining,
so the spectrum is pure point. So we can find the eigenvalues lambda n of H, and eigenvectors U n of H. I'm using here this bracket notation just to indicate the orthogonal projection onto U n. And of course, if it is confining,
then it's always pure point spectrum. But we need some more quantitative type of estimate. So we'll assume that the potential is so confining that one of these two bounds holds. So the first one is what we assume for the one-dimensional case. We assume that the trace of the inverse of H is bounded, which is the sum of the inverse of eigenvalues
lambda n. And in dimension 2 and 3, we will assume the potential is so confining that the trace of 1 over H squared is bounded. So let me remark immediately. You probably already know, or you already saw it yesterday. In dimension 2 and 3, independently of the choice of external potential V, it is not reasonable to make the assumption
that the trace of H to the minus 1 is bounded. Because you don't have enough decay in momentum. You should think that the operator H in momentum behaves like p squared. The Laplacian is momentum squared. And 1 over H is like 1 over p squared. So in one dimension, it is integrable, so the trace is finite. In two and three dimensions, 1 over p squared
is not integrable for large momenta. So it is not reasonable to assume that the trace of H to the minus 1 is bounded. Instead, if you take the trace of H to the minus 2, 1 over p squared, squared, is 1 over p to the 4, which is integrable also in two and three dimensions. So this is a reasonable assumption.
It is, however, an assumption, because it will not be true for every external potential V. If you think, for example, about the harmonic oscillator in one dimension, you know that the eigenvalues of the harmonic oscillator are just the natural numbers, so it's n. And you know, of course, that the sum of 1 over n is infinite. So the harmonic oscillator is not confining enough.
You need to confine the particle with a little bit more than x squared. As soon as the external potential grows at infinity faster than x squared, you have enough decay to make sure that this is correct in one dimension and that this is true in two dimensions.
In three dimensions, you need a little bit more than, you need a lot more than x squared; I didn't do the computation. Very well. So we have this spectral decomposition of the Hamiltonian, and we are going to use it. To define the free measure we're interested in, we decompose the phi in this basis.
So we call the coefficients with respect to the basis un omega n divided by square root of lambda n. And we do it like that. Because then it's easy to see that the free energy of phi is just the sum of the omega n squared. So instead of defining the free measure on the phi,
we define it on the sequence of the omega n. Because we see that, because of this form here of the energy in the omega n, the measure we want to define is just a product measure. A product of independent and identically distributed measures, if you want. So to be more precise, we define mu zero
on the set of sequences with complex entries. Each omega n is a complex number, and we have one for every index n. This set is equipped with the sigma-algebra generated by all cylindrical sets, if you want. And we define the measure mu zero
as the product of Gaussian measures. So we have a Gaussian measure for every omega n, and this is the density of a Gaussian measure. I hope that I normalized it correctly. You want, of course, to get a probability measure, so we want the integral of this guy to be equal to one. Good. So this is not just a formal definition; this is a precise definition of the free measure mu zero
associated with the free part of the Hartree equation. And once you have this measure, you can start to make some computations, to ask questions about the properties of this measure. Here, for example, you can compute the expectation of the L2 norm of phi squared. And then you do the computation. This phi is here, so the norm of phi squared is this sum here.
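The construction just described can be recorded as follows; this is a hedged reconstruction of the slide, writing h for the one-particle operator and using that each coefficient omega_n is a standard complex Gaussian with E|omega_n|^2 = 1:

```latex
h = -\Delta + V + \kappa, \qquad h\,u_n = \lambda_n u_n, \qquad
\varphi = \sum_{n} \frac{\omega_n}{\sqrt{\lambda_n}}\, u_n
\;\Longrightarrow\;
\mathcal{E}_0(\varphi) = \langle \varphi,\, h\,\varphi\rangle = \sum_n |\omega_n|^2,
```
```latex
\mu_0 \;=\; \bigotimes_{n} \frac{1}{\pi}\, e^{-|\omega_n|^2}\, \mathrm{d}\omega_n,
\qquad
\mathbb{E}_{\mu_0} \|\varphi\|_{L^2}^2
  = \sum_n \frac{\mathbb{E}\,|\omega_n|^2}{\lambda_n}
  = \operatorname{Tr} h^{-1}.
```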
The expectation of each one of these omega n squared is of order one, maybe one or maybe something else; it doesn't matter if there is a two here. But this is proportional to the trace of 1 over h. It's the sum of the inverse eigenvalues of the Hamiltonian h, the linear operator h. So this is finite in one dimension because of the assumption that we made.
But it is infinite in two and three dimensions. Because again, you don't have enough decay momentum to make sure that this is finite in two and three dimensions. So the typical L2 norm in two and three dimensions is infinite. So it means this measure lives on a space which is below L2. The measure mu zero of L2 is equal to zero in this case.
You can do a slightly more general computation, and you can try to compute the Hs norm, for s a real number. And Hs, of course, with respect to the Hamiltonian small h. It's not the typical Hs norm, because you also have an external potential. And then you do the computation again. And you find that this expectation here
is the trace of h to the minus one plus s. So we assumed for d equal to two and three that the trace of h to the minus two is finite. So we see that the expectation of the H minus one norm of phi squared is finite in two and three dimensions. So it means that the measure mu zero in two and three dimensions
lives somewhere between L2 and h to the minus one. So this is a different way to say it. The measure of h to the minus one is the full measure. It's one. The measure of L2 is zero in two and three dimensions. In one dimension, the measure of L2 is one.
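The identity E‖phi‖² = Tr h⁻¹ from the computation above can be checked numerically by sampling the Gaussian coefficients. The spectrum lambda_n = n² below is a hypothetical choice standing in for a potential confining enough that the trace is finite (the one-dimensional assumption); it is not the spectrum of any specific operator from the talk:

```python
import numpy as np

# Hypothetical spectrum lambda_n = n^2, so that Tr h^{-1} = sum_n 1/n^2 < infinity.
rng = np.random.default_rng(0)
n_modes = 200
lam = np.arange(1, n_modes + 1) ** 2.0

# Sample the free measure: omega_n are i.i.d. standard complex Gaussians
# with E|omega_n|^2 = 1, and ||phi||_{L2}^2 = sum_n |omega_n|^2 / lambda_n.
n_samples = 20_000
omega = (rng.normal(size=(n_samples, n_modes))
         + 1j * rng.normal(size=(n_samples, n_modes))) / np.sqrt(2)
l2_sq = (np.abs(omega) ** 2 / lam).sum(axis=1)

print("Monte Carlo  E||phi||^2 :", l2_sq.mean())
print("Analytic  Tr h^{-1}     :", (1.0 / lam).sum())  # both approximately pi^2/6
```

With a spectrum growing only linearly (harmonic oscillator), the analytic sum would diverge as the cutoff grows, matching the remark about the harmonic oscillator not being confining enough.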
Very well. So you can be a little bit more precise if you make a simpler assumption about this external potential. For example, the simplest case you can think about: instead of taking an external potential confining your particles, you can just put your particles in a box and impose periodic boundary conditions. So the small h is now minus the Laplacian plus a constant on the torus,
on the d-dimensional torus. And in this case, we know exactly what are the eigenvalues and what are the eigenfunctions of this operator. These are plane waves. So if you take the plane wave e to the ipx and you act with h on it, then you get this number here. These are the eigenvalues. And p has this quantization condition coming from the periodicity condition
at the boundary of the torus. So in this case, you can compute exactly the expectation of the Hs norm squared with respect to the free measure mu zero. And you get this formula here. And you see that this is finite in dimension d if and only if s satisfies this bound here. So for example, you see that in two dimensions,
the measure mu zero lives just below L2. In three dimensions, it lives just below H to the minus one-half, just to make a more precise statement compared with the general statement that follows from our assumptions on the external potential. OK, so this is the free measure. Now we want to construct the full interacting Hartree measure. So how will we do it?
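Before moving on, the torus computation just described can be recorded; this reconstruction assumes the unit torus, so that the momenta are quantized as p in 2-pi-Z^d, and kappa > 0 makes h positive:

```latex
h\, e^{\mathrm{i} p\cdot x} = \big( |p|^2 + \kappa \big)\, e^{\mathrm{i} p\cdot x},
\quad p \in 2\pi \mathbb{Z}^d,
\qquad
\mathbb{E}_{\mu_0} \|\varphi\|_{H^s}^2
  = \sum_{p \in 2\pi\mathbb{Z}^d} \big( |p|^2 + \kappa \big)^{s-1}
  < \infty
\;\iff\; s < 1 - \frac{d}{2}.
```

Indeed, for d = 2 this gives s < 0 (just below L2) and for d = 3 it gives s < -1/2, as stated.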
Well, the idea is we would like to define mu h, the interacting invariant measure, as an absolutely continuous measure with respect to mu zero, with density given, up to a constant, by e to the minus the interaction. So the idea is that e to the minus the Hartree functional, you write it as e to the minus epsilon zero times
e to the minus w. And e to the minus epsilon zero, you absorb into the mu zero. So what is left is this density, e to the minus w of phi. So this is how you want to define the interacting Hartree measure. So, is it possible to do it like that?
Well, let's think for a second. If you are in one dimension, a typical phi has finite L2 norm. So if you assume that the w is in L-infinity, for example (but it is not needed), you see immediately that the interaction W is almost surely finite. So it is fine. You can really define the full Hartree measure by this formula
here, by taking the mu zero and taking the absolutely continuous measure with respect to it with density given by this function here. So this is well-defined; there is no problem. If you are in two and three dimensions, however, we just discussed that the L2 norm of phi is typically infinite.
So this interaction here, for a typical phi in the support of this measure mu zero, is going to be infinite. And now you can tell me, well, it's plus infinity, so e to the minus plus infinity is still finite, but it's equal to zero. So it's difficult to define a probability measure if the density is identically,
almost surely, equal to zero. So this is not going to work. This definition here is not going to work if you are in dimensions two and three. And the solution of this problem was already clear in the 70s from the works of Glimm and Jaffe and so on. And it is to replace the interaction
by the Wick-ordered interaction. So let me try to explain what this means. So we fix a positive cutoff K. You should think of it as being a cutoff in the frequency, but it's actually in the energy spectrum of this operator h, because the energy is not just p squared in this case. Which means that we take the field phi to the K
by summing not over all n, but only over n smaller than or equal to K. Good. So, you see that as K goes to infinity, the L2 norm of this guy here is going to diverge. So you want to compensate for this divergence. So you define rho K as the expectation of the square of this field here.
So it's given by this formula here. This is a deterministic quantity. And its integral, not its L2 norm, sorry, its integral is going to diverge as K goes to infinity, because it's just the sum of lambda n to the minus 1. So what you want to use is the fact
that you have a divergence in the phi, you have a divergence in the rho, and they should cancel. So you define a cutoff and Wick-ordered interaction, which we call WK. But you see that instead of taking phi squared times phi squared, we take phi squared at the point x minus this rho K at the point x.
And we take the same thing at the point y, minus the corresponding rho at the point y. And with this definition now, let me repeat it once again: if you don't have the rho K, this is going to diverge as K goes to infinity, and this is going to diverge as K goes to infinity. But if you subtract the rho K, you subtract exactly the right quantity in order for this guy to remain bounded as K goes to infinity.
And then you can prove that this WK defines a Cauchy sequence. You should think of WK as a function of phi, or if you prefer, as a function of the omega, right? Because in principle, we defined our measure on the set of all possible sequences omega. So WK, as a function of the omega, is a Cauchy sequence in Lp for all p. So it has a limit,
and the limit is independent of p. And we denote the limit by W w, which is maybe not the best notation, but I just wanted to recall that this is the Wick-ordered interaction. And using this new W w, we can define the new Wick-ordered Hartree measure.
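A hedged reconstruction of the Wick-ordering construction just described (the factor 1/2 and the superscript-w notation are assumptions inferred from the audio):

```latex
\varphi^{(K)} = \sum_{n \le K} \frac{\omega_n}{\sqrt{\lambda_n}}\, u_n,
\qquad
\varrho_K(x) = \mathbb{E}_{\mu_0} \big| \varphi^{(K)}(x) \big|^2
             = \sum_{n \le K} \frac{|u_n(x)|^2}{\lambda_n},
```
```latex
W_K = \frac{1}{2} \iint \Big( |\varphi^{(K)}(x)|^2 - \varrho_K(x) \Big)\, w(x-y)\,
      \Big( |\varphi^{(K)}(y)|^2 - \varrho_K(y) \Big)\, \mathrm{d}x\,\mathrm{d}y,
```
```latex
W_K \xrightarrow{\;K\to\infty\;} W^{w} \ \text{in } L^p(\mu_0) \text{ for all } p,
\qquad
\mu_H^{w}(\mathrm{d}\varphi) = \frac{1}{Z}\, e^{-W^{w}(\varphi)}\, \mu_0(\mathrm{d}\varphi).
```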
And we denote this measure by mu h w, and the corresponding expectation with this symbol here. And then you can prove that the measure is invariant with respect to the time evolution. So this is the construction at the Hartree level. Now let me switch to many-body quantum mechanics. Let me say something about this different theory here.
And at the end, I will try to establish a relationship between the two theories. Good. We are interested in a system of n particles. And at the quantum level, at the many-body quantum level, a system of n particles can be described by a wave function, which is an element of this Hilbert space here. The s at the bottom means that we only look at wave functions which are symmetric with respect
to permutations. So in the physics language, we are looking at a system of bosons. So we assume that the wave functions we're interested in are symmetric with respect to all possible permutations. And another important assumption: we are only interested in wave functions which are normalized. Because in quantum mechanics, the wave function has a probabilistic interpretation. If you take the absolute value squared of psi n,
it's the probability density for finding the particles in the different regions. So we always assume that the wave function is normalized. Good. We are interested in this quantum system in the so-called mean-field regime. And we characterize the mean-field regime by taking a Hamiltonian, and I wrote the Hamiltonian here.
The Hamiltonian is a self-adjoint operator on this Hilbert space here. And the important remark, or the important property, is here this coupling constant proportional to 1 over n. This is what characterizes the mean-field regime. Mean-field regime means that you have many particles, n particles. Each particle interacts with almost all other particles.
So we have n interactions, if you want. But each one of these interactions is very weak. It is of the order 1 over n. So the total effect of these many weak interactions can be approximated by an average mean-field potential. This is why we take this 1 over n here.
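The mean-field Hamiltonian being described, reconstructed from the transcript (the slide itself is not visible, so the precise notation is an assumption), is:

```latex
H_N \;=\; \sum_{i=1}^{N} \big( -\Delta_{x_i} + V(x_i) \big)
\;+\; \frac{1}{N} \sum_{1 \le i < j \le N} w(x_i - x_j)
\qquad \text{on } L^2_s\big( \mathbb{R}^{dN} \big).
```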
If you prefer to think in different terms, you want to put the 1 over n here to make sure that the free part of the Hamiltonian, this operator here, which you see is the sum over n particles, so it's an object of order n, is comparable with this guy, which is the sum over all pairs, so it's an object of order n squared.
But if you divide it by n, they are of the same order. So they can give rise to some non-trivial limit as n goes to infinity. OK, so this is the Hamiltonian we want to look at. So what are the properties of this system? And let's start to see some relation with the Hartree theory from the previous part of the talk.
So the first remark that you can make to see a relationship with Hartree is by looking at the ground state, the ground state energy and the ground state vector. The ground state vector is the eigenvector of Hn with the smallest possible eigenvalue. And the ground state energy is the corresponding eigenvalue. So for this type of potential, so this type
of repulsive potential, it's quite easy to see, to check, that the ground state exhibits condensation, meaning that you can approximate the ground state by a state where all particles are in the same one-particle state. So you can approximate the ground state vector by a product of n copies of the same one-particle wave
function phi. And when I say one-particle, it means that phi is in L2 of Rd, not of RdN. And you take the product of n of them, and then you get something which is in the right space. Now, if you believe for a second that you have condensation, that this is correct in the ground state, then you can compute the ground state energy just
by taking the expectation of the Hamiltonian in this state here. So you have to take this operator here, and you have to take the expectation with respect to this state here. And it's a very simple computation: you get exactly the Hartree functional for the wave function square root of n times phi. Or if you want, you can take out the n and, well,
one of these two formulas. I think this formula here is not completely right. But look at the last formula here: it's n times the Hartree energy of phi. OK, so what is the ground state energy? Well, to get the ground state energy, which I call En, the smallest eigenvalue of the Hamiltonian, divided by n, you just have
to take the phi which minimizes the Hartree functional. So the ground state energy divided by n is given in the limit by the minimum of the Hartree functional. And the ground state vector, the corresponding psi n, exhibits condensation in the minimizer of this Hartree
energy. So this is, in some sense, a first relationship between Hartree theory and many-body quantum mechanics. It's the first way in which Hartree theory gives an approximation to many-body quantum mechanics in this mean-field regime. Another example is if you look at the dynamics. So look at the dynamics in many-body quantum mechanics. The time evolution is governed by the Schrodinger equation,
which I wrote down here. This is a many-particle, many-body Schrodinger equation. So on the left, you have the time derivative of the wave function at time t times i. And on the right, we have the action of the Hamiltonian on the same wave function, psi n of t. So this is a linear equation, but if n is very large, you can imagine it's not so easy to solve it.
I mean, you always have a solution, because you can just solve it by applying the unitary group e to the minus i ht on the initial data, and you always get a solution. But to say something more about the solution is very difficult. And Hartree helps you in this respect, in the sense that you can prove the following convergence to the Hartree equation.
And this is the following statement. So you assume that, as initial data, you start with an n-particle wave function which exhibits approximate condensation. And now we're not assuming here that phi is the minimizer of Hartree. It can be any phi, because if psi n0 is the ground state, then it doesn't move; it's not so interesting. But the point is that here, phi
can be any one-particle wave function. You just have to assume that at time 0, the many-body state is approximately the product of this phi. You let it evolve with respect to the full many-body dynamics. Then at time t, you still have approximate condensation. You can still approximate the solution psi with a product of n copies of a one-particle wave
function phi t. Of course, the one-particle wave function is not the same as at time t equal to 0. You have to evolve it. And the evolution is described by the time-dependent Hartree equation. So again, we have a relationship between many-body quantum mechanics and Hartree theory.
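The two relationships just described can be summarized as follows; this is a hedged reconstruction, and the precise sense of the approximations (convergence of reduced density matrices, for normalized phi) is an assumption about the standard formulation, not stated explicitly in the transcript:

```latex
\big\langle \varphi^{\otimes N},\, H_N\, \varphi^{\otimes N} \big\rangle
\;\approx\; N\, \mathcal{E}_H(\varphi),
\qquad
\lim_{N \to \infty} \frac{E_N}{N} \;=\; \min_{\|\varphi\|_{L^2}=1} \mathcal{E}_H(\varphi),
```
```latex
i\,\partial_t \psi_{N,t} = H_N\, \psi_{N,t},
\qquad
\psi_{N,0} \approx \varphi^{\otimes N}
\;\Longrightarrow\;
\psi_{N,t} \approx \varphi_t^{\otimes N},
\quad
i\,\partial_t \varphi_t = (-\Delta + V)\varphi_t + \big( w * |\varphi_t|^2 \big)\varphi_t.
```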
And also here, I wrote some of the names of the people who have been involved in this work. So the first work back in the 70s was by Ginibre and Velo, who are here. And then there was work by Spohn and then many other people. And I also have some work in this direction. But maybe let's move on.
And I would like to switch now to the main question of this talk. Now, we have seen that there is a relation between many-body QM at the level of ground state energy and at the level of dynamics. So the natural question that you can ask is, is there a way to understand this invariant measure that you have in Hartree theory,
this invariant measure that we constructed in the first part of the talk, as a measure emerging from many-body quantum mechanics? And what corresponds to this invariant measure at the level of many-body quantum mechanics? For the case of a one-dimensional system,
a quite precise answer to this question has been given by Mathieu Lewin, Phan Thành Nam, and Nicolas Rougerie, who proved, I think it was about two years ago now, that the many-body Gibbs state in this mean-field regime can be approximated by the Hartree invariant measure
that I constructed in the first part of the talk. So there is a relationship with this invariant measure. So if you look at these many-body Gibbs states, so let me try to explain what are these many-body Gibbs states. So many-body Gibbs states describe thermal equilibrium at positive temperature.
And in order to get this correspondence, you have to take high enough temperatures, as we will see in a moment. Now, in order to explain what these thermal equilibrium states are, I have to tell you first what mixed states in quantum mechanics are, because this is something that I didn't mention yet. And maybe not everybody is familiar with it.
So when I told you that you can describe a quantum system by a wave function, it was correct. But it is not the most general description. If you are at positive temperature, it's also important to know that you can describe your state by a mixed state. A mixed state means you don't know in which state you are. You only know that you have a certain probability to be in several possible states.
And then the state you are in is described by a density matrix. A density matrix is a trace-class, non-negative operator on the Hilbert space where we work, with trace equal to 1. Because of this assumption, the density matrix can always be written as a linear combination
of orthogonal projections with some weights Pj that you can interpret as probabilities, because all the Pj are between 0 and 1 and their sum is equal to 1. So if your rho is given by this formula here, the interpretation is that your system is in the state psi j with probability Pj.
This is very different. Of course, quantum mechanics is a linear theory, so if you have several psi j, you can also take a new wave function by taking a linear combination of the psi j. But this is not the same thing as taking a linear combination of the projections. Because if you take a linear combination of the projections, you don't have interferences between the different wave functions.
That's called an incoherent superposition of states, instead of a coherent superposition of states. Anyway, if you have your rho, then with this interpretation, it's also clear how you can compute the expectation of an observable in such a state. Because if you have an observable A,
and you have your state rho, well, what is the expectation? Well, we are in the state psi j with probability Pj. So we better take the expectation of A in the state psi j, and then we take the weighted average according to these weights Pj. And if you think for a second, you will find that this sum here is just the trace of the product between A and rho.
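As a quick numerical sanity check of this trace formula (a toy illustration, not from the talk), one can verify that tr(A rho) equals the weighted average of the expectations in the pure states psi j:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# Orthonormal states psi_j: columns of a random unitary matrix.
Z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
U, _ = np.linalg.qr(Z)

# Weights p_j: probabilities between 0 and 1, summing to 1.
p = rng.random(d)
p /= p.sum()

# Density matrix: incoherent mixture of rank-one projectors |psi_j><psi_j|.
rho = sum(p[j] * np.outer(U[:, j], U[:, j].conj()) for j in range(d))

# A random Hermitian observable A.
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A = (M + M.conj().T) / 2

# Expectation two ways: weighted average of <psi_j, A psi_j> vs tr(A rho).
weighted = sum(p[j] * (U[:, j].conj() @ A @ U[:, j]).real for j in range(d))
trace_form = np.trace(A @ rho).real

assert abs(np.trace(rho).real - 1.0) < 1e-12  # rho has trace one
assert abs(weighted - trace_form) < 1e-10     # the two formulas agree
```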
So this formula here tells you how you take expectations with respect to mixed states. Okay, so why did I introduce mixed states? Because equilibrium states at positive temperature are mixed states in quantum mechanics. And in particular, if you are at a temperature which is denoted as one over beta,
the equilibrium state is given by a normalizing constant times the exponential of minus beta times Hn. Okay, so if you have a Hamiltonian Hn with eigenvectors psi j and eigenvalues Ej, it means that you have this formula here. It means that at equilibrium, you are in the state psi j,
with probability proportional to e to the minus beta Ej. Okay, this is what you need to know. When you are at positive temperature, you don't know exactly which state you are in. You only know that you are in one of many possible states with some probabilities. This is the only information that you have. Well, if you look at this formula in particular,
you see that if beta tends to infinity, so the temperature tends to zero, then the probabilities are going to collapse onto the one with the smallest energy, right? So at zero temperature, the equilibrium state is just the ground state. And in the other limit, when beta goes to zero, so when the temperature goes to infinity,
well, this goes to one, so it means that all states have the same probability. Okay, good. So these are the types of states which will lead, in the limit as n goes to infinity, to the nonlinear Gibbs state, to the Gibbs state associated with the theory that we defined at the beginning of the talk.
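A toy numerical illustration of these two limits; the four-level spectrum here is made up, just to show the behavior of the Gibbs weights:

```python
import numpy as np

energies = np.array([0.0, 1.0, 2.0, 5.0])  # toy spectrum E_j (an assumption)

def gibbs_weights(beta):
    # p_j proportional to exp(-beta * E_j), normalized to sum to 1.
    w = np.exp(-beta * energies)
    return w / w.sum()

# beta -> infinity (temperature -> 0): weight collapses onto the ground state.
cold = gibbs_weights(50.0)
assert cold[0] > 0.999

# beta -> 0 (temperature -> infinity): all states become equally likely.
hot = gibbs_weights(1e-6)
assert np.allclose(hot, 0.25, atol=1e-5)
```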
Okay, and this is what we want to understand. But if you think for a second about it, you will see immediately that there are two very simple objections to why this cannot be true for this rho beta here. So the first remark is that if you fix beta positive, so it means if you are at a fixed positive temperature,
then it is easy to see that the state rho beta still exhibits the same type of condensation as we had for the ground state. Okay, remember, in the ground state all particles are described by the same one-particle wave function. And it's not difficult to understand that if you fix beta, also at positive temperature this is the same, because you see you have a system
where the ground state energy is of order n. Okay, now take excitations of this order-n energy by something of order one, which is what you see if you take a fixed temperature. If you fix the temperature, it means you're not only in the ground state; you have a combination of states with energy which is a little bit above the ground state, but just by order one.
Because if you are very far, then this weight here is still going to be very small and you don't see it, okay. But if you excite the ground state by order one, it means that you can only move a finite number of particles out of the ground state, okay. And if you only excite a finite number of particles, most of the particles, we have n particles in the system,
the bulk of the particles is still in the same one-particle state, okay. So because of this argument, it is clear that if you take beta fixed, the measure that you will see at the end, at the Hartree level, is going to be a delta function exactly on the minimizer of the Hartree functional. Okay, the bulk of the particles are all in the same one-particle state,
which is a trivial measure. It's certainly very different from this invariant measure that we wanted to understand, okay. So we have to do something else. And the thing that you have to do, of course, is to take the temperature to grow together with n. As n goes to infinity, you want not only the number of particles but also the temperature to go to infinity. And the right choice turns out to be given by beta equal to one over n,
which means temperature is equal to n. Okay, so this is the first remark. Now there is a second remark. And if you think for a second back to what I was saying about the ground state, right, we saw that there is a relation between the number of particles in many body quantum mechanics, this n, and the L2 norm of the corresponding Hartree state,
of the phi, okay. So it means if we consider a system with a fixed number of particles, n particles, exactly n particles, then in the limit, we will have a measure at the Hartree level where the L2 norm is fixed, okay. We fixed L2 norm.
And this is not what we had in the previous part of the talk. So we have to switch to a different representation of this quantum system, where the number of particles is allowed to fluctuate. You don't want to fix the number of particles. You want to take combination of states with possibly a different number of particles. So this means, yes.
So your j there goes up to n, or are you summing over all energies? This is over all energies. So as beta varies, the measures don't become mutually singular as you change beta? No, no, no, no. The largest weight is always on the ground state. That's where you have the most weight. But then you have larger tails,
so it will move up in energy, in a sense, if you increase the temperature. Okay, so in the physical language, it means that you want to switch to a canonical ensemble. Canonical ensemble means the number of particles is fixed. To a grand canonical ensemble, where you allow the number of particles to fluctuate.
Okay, so you will take averages not only over states with different energies, but also over states with different numbers of particles. Okay, and to switch to a grand canonical setting, the right way to do it in quantum mechanics is to switch to a Fock space representation of the system. So let me try to go through this formalism,
the second quantization formalism. Okay, so the Fock space, the bosonic Fock space: it means that instead of looking at the fixed L2 space, we take the direct sum of all possible L2 spaces, for all possible numbers of particles. We are summing over all M, and we take the symmetric product of M copies of L2.
If you fix the M, this space here, which is the same as this space here, describes states of the system with exactly M particles. And now we are summing over all possible M because we want to allow the number of particles to fluctuate; we don't want to fix it.
Okay, so you have the Fock space. So vectors in the Fock space are sequences of wave functions, psi j. And for different values of j, you have different numbers of particles in your system. Okay, so on the Fock space, it is very useful to introduce creation and annihilation operators; this is the formula. It's not so important how they are defined,
but the interpretation is somewhat important. So for A star of F, you see, F is an L2 function on Rd, so it's a one-particle wave function. And acting with A star of F means that you are creating a new particle with that wave function. You are increasing the number of particles by one.
And if you act with A of F, which is the adjoint of A star, you are decreasing the number of particles by one. Okay, good, and then simple algebra tells you that these creation and annihilation operators satisfy the canonical commutation relations, which I wrote down here. So if you take the commutator between A and A star, it's just the scalar product in L2 between F and G,
and all other commutators are going to vanish. It is also useful to introduce operator-valued distributions, which I call A of X and A star of X, which annihilate or create a particle at the point X. Okay, these are only distributions because you have to smear them out,
integrating against a function F; otherwise they don't quite make sense. And the definition is such that this relation holds true. And in terms of these creation and annihilation distributions, you can rewrite the canonical commutation relations in this form here. But more importantly, you can define other operators,
and the two operators that I'm going to need are the number of particles operator and the Hamiltonian. So now remember that in our system, the number of particles is not fixed. It's a variable, right? So you can measure how many particles a state has. And the right way to measure is to apply this operator N. If you think for a second about it, A star of X A of X is the density of particles at the point X.
So it's clear that the total number of particles is given by integrating this thing here over all possible values of X. Okay, so this is the number of particles operator, and the other one is the Hamiltonian of the system. And to define the Hamiltonian, we have this part here. We integrate over all X. This is the second quantization of this operator here.
This is just notation. If you prefer, you can check that the Hamiltonian operator is defined in such a way that it commutes with the number of particles. It does not change the number of particles, right? The commutator is equal to zero. And this you can see because in each term, the number of creation operators is the same as the number of annihilation operators.
Okay, so the number of particles is not changing. And if you restrict it to a fixed number of particles, you get exactly the same operator as we had before. This thing here, which may look scary, is just the sum of the operator minus Laplacian plus external potential acting on all possible particles from one to n. And in the same way, the interaction
is equivalent to this thing here. Okay, so now we have a Hamiltonian operator, so we can construct the grand canonical equilibrium state. So this is a mixed state, so it's a density matrix. It's a positive, it's a non-negative trace class operator with trace equal to one, and the trace equal to one
is guaranteed by this normalization constant here. And then you see that it is proportional to the exponential of minus one over n, and then you have, again, a combination of energy and number of particles, right? These are the two things which are preserved by the dynamics, the energy and the number of particles, so you put it into the exponent here. And kappa, the constant kappa,
is called the chemical potential, right? When you switch from canonical to grand canonical, the number of particles is not fixed, so you have a new variable. In a sense, it's the Legendre transform of this number of particles. And the idea is that you have to fix the kappa so that the expectation of the number of particles is the right one. Right, you don't fix the number of particles,
but you can fix the expectation of the number of particles. Okay, so this is the state that we are interested in. And to write it, you see that we divide by one over n because we want the temperature to be equal to n, right? The beta is one over n here, and that's why you have this extra one over n. Otherwise, it's just e to the minus beta times h plus number of particles.
So to rewrite it, since we divide by one over n, it is useful to rescale the creations and the relations operator by dividing it by one over square root of n, right? Because you remember that we have the hm, maybe I go back one slide. It's a sum of a quadratic and a quartic part. But in front of a quartic part,
there is already one over n. And then we multiply everything with one over n in front. So the first guy is carrying a one over n factor. The second guy is one over n square factor. So if you put one over square root of n in each a and a star, all this n get absorbed into the definition. Okay, so we define these new operators with this extra variable here.
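In symbols, this is my reconstruction of the rescaling; the notation a_n is an assumption on my part:

```latex
a_n(x) := \frac{a(x)}{\sqrt{n}}, \qquad a_n^*(x) := \frac{a^*(x)}{\sqrt{n}},
% so that the canonical commutation relations become
[\,a_n(x),\, a_n^*(y)\,] = \frac{1}{n}\,\delta(x-y), \qquad
[\,a_n(x),\, a_n(y)\,] = 0 .
% As n \to \infty the fields almost commute, which is what will make them
% converge to the classical field \phi(x) of Hartree theory.
```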
And then the same state rho n, the constant is the same as before, and this is just a way to rewrite it, right? Now I have absorbed all the n's inside these a star and a factors. Okay, so this is the state that we want to look at, at positive temperature. This is a state which describes thermal equilibrium
at temperature n. Good, now what can we say about this state? Well, let's start by looking at the free part. So we turn off the interaction by setting w equal to zero, right? We try to do the same steps as we were doing for the Hartree theory. First look at the free measure. So here, first look at the free state
by removing this interacting part here. Okay, so if you do so, then you got this quadratic expression in a and a star. But you can diagonalize by switching to the, remember, this is exactly the operator h that we had in the first part. And you remember that I called the eigenvalues lambda j and eigenvectors uj, right?
Maybe in this way it's easy to understand, because a star of uj a of uj is measuring the number of particles in the state uj. And lambda j is the energy of that mode, right? So if you sum over all j the number of particles in the state uj times the energy, you get the total energy of this many-particle system.
Okay, so we use this notation here, and this is our free state, with a free partition function, a free normalization constant that we need here. Okay, so what can we say about this state here? Let's try to measure the number of particles in this state. Let's try to measure, for example, the number of particles in the state ui, right? ui is a one-particle state.
You have n particles; you want to know how many of these n particles are in the state ui. Good, so this is the expectation of this guy here. Expectation means that you have to take the trace of this operator against this density matrix here. So I put the density matrix here and in the bottom. And you see that the density matrix, actually,
is a sum over several j, right? But all the terms with j different from i simplify; they cancel between the numerator and the denominator, because the modes are independent, since there is no interaction in this setting here. So you cancel everything and you end up just with this expression here. And then you have to know that this operator here,
measuring the number of particles, has eigenvalues which are just the non-negative integers. And then the expectation that you get is exactly this thing here, okay? So now let's try to compare with what we had in the Hartree case. In the Hartree case, you remember that we had one over lambda i. Well, it's not so different, because if you expand
the exponential, it's one plus lambda i over n plus something else. So at least for lambda i much smaller than n, this looks exactly like one over lambda i. So you start to see some similarity with the Hartree measure that we had at the beginning. Okay, so if you want to know the total number of particles, then you have to sum over all possible modes.
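One can check this expansion numerically. This is a small illustration with an arbitrary eigenvalue; the formula (1/n)·(e^{λ/n} − 1)^{-1} for the rescaled mode occupation is my reading of the expression on the slide, including the one over n carried by the rescaled operators:

```python
import math

lam = 3.7  # a fixed one-particle eigenvalue lambda_i (value is arbitrary)

def rescaled_occupation(lam, n):
    # Expectation of the rescaled mode occupation in the free state at
    # temperature n: (1/n) * 1/(exp(lambda/n) - 1).
    return (1.0 / n) / math.expm1(lam / n)

# As n grows, this approaches the classical (Hartree) value 1/lambda,
# since exp(lambda/n) - 1 = lambda/n + higher-order terms.
errors = [abs(rescaled_occupation(lam, n) - 1 / lam) for n in (10, 100, 10_000)]
assert errors[0] > errors[1] > errors[2]
assert errors[2] < 1e-3
```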
So far, I just measured the number of particles in the mode ui; now I'm summing over all i, and I'm dividing by n because I want to look at the rescaled quantity. Actually, I should not have divided by n, because we're already looking at this operator a n, which carries the one over square root of n. So forget about this one over n. Anyway, we can do the computation and you find this sum here, right?
And then, since in one dimension we assume that the sum of one over lambda i is finite, expanding this guy here you easily see that you get something of order one. In two and three dimensions, on the other hand, the sum is still finite, right? Because even if lambda is very large,
you get the cutoff at n, right? As soon as lambda is much bigger than n, the denominator is going to be very large; it's going to make everything converge. But it does not converge on the right scale in n. So what you get here is a number which is finite for every n, but it diverges as n goes to infinity, okay? So again, we get a similarity with the Hartree case. The only difference with respect to Hartree
is that in Hartree, the number of particles was just infinite; the L2 norm was just infinite, right? Here, you have a natural cutoff given by this n. So you don't see the divergence for finite n; you only see the divergence as n goes to infinity. Okay, so these are the properties of the free measure.
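A toy computation illustrating this behavior: take made-up eigenvalues λ_j = j, for which the sum of 1/λ_j diverges, mimicking the situation in d = 2, 3. The total rescaled particle number is finite for each fixed n but grows without bound (roughly like log n) as n goes to infinity:

```python
import math

def total_density(n):
    # Sum of rescaled mode occupations with toy eigenvalues lambda_j = j.
    # Terms with j >> n are exponentially small; truncating at j = 50 n
    # changes nothing visible.
    return sum((1.0 / n) / math.expm1(j / n) for j in range(1, 50 * n))

s10, s100, s1000 = total_density(10), total_density(100), total_density(1000)

# Finite for every fixed n, but growing without bound, roughly like log(n):
assert s10 < s100 < s1000
assert s1000 - s100 > 1.5  # about log(10) ~ 2.3 per decade of n
```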
Now, if you want to pass to the interacting measure, then you can do the same type of thing as we were doing in the Hartree case. I look at the interaction, this is the interaction, right? Now, the expectation of the interaction, or of powers of the interaction, is going to be finite as long as n is finite, because we have this natural cutoff. But if you look at what happens as n goes to infinity, again, you get a divergence, okay?
You get something which is finite in one dimension, but this is going to be infinite in dimensions two and three. So also here, it is natural to do this Wick ordering. Okay, Wick ordering means that we replace the interaction, and now we don't need to introduce a cutoff, because there is this natural cutoff given by n, right? So we just take the interaction and subtract
from these two guys and these two guys this quantity here, which is just the free expectation of the product of a and a star. This is something whose integral, the L1 norm of this guy here, diverges as n goes to infinity in two and three dimensions. Okay, now, using this Wick-ordered interaction,
we can define a Wick-ordered grand canonical state, which is what you would expect. And this is, again, a state on this Fock space, on this many-body Fock space. Okay, so now we want to compare this state here
with the Hartree measure that we constructed at the beginning, right? And the theorem should be that there is some relation between this state and the Hartree measure. Okay, now, well, maybe this is just a comment. I believe we can skip the comment, let's go. So how do you compare this state, this many-body quantum state, with the measure?
You look at the correlation functions and you look at the moments of the Hartree measure, and you want to prove that they are the same. So what are the correlation functions? You take expectations, with respect to this Wick-ordered many-particle state, of products of a and a star. All observables of this many-particle system can be written like that.
And of course, here we only look at the case where the number of creation operators is the same as the number of annihilation operators, because otherwise the expectations are equal to zero. And then we want to compare this gamma n k, this correlation function, with the one constructed with the help of the invariant measure. So we take expectations with respect to the Hartree invariant measure
of products of this field phi and phi bar, okay? Okay, and so the conjecture is that, as n goes to infinity, for every fixed k, this limit here, this norm here, measured for example in the Hilbert-Schmidt topology, should be equal to zero. So they should be the same in the limit. You can approximate the many-body quantum Gibbs state
with the invariant Hartree measure here, okay? At the level of the expectations, of these correlation functions. And this is exactly what was proven by Lewin, Nam, and Rougerie for the one-dimensional case, okay? We also have some results for two and three dimensions, but for a different type of potential, so I'm not mentioning it here.
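Schematically, and with notation that is my reconstruction (gamma n k for the quantum correlation functions, mu for the Hartree Gibbs measure), the conjectured convergence reads:

```latex
\gamma_{n,k}(x_1,\dots,x_k;\,y_1,\dots,y_k)
  = \operatorname{Tr}\!\bigl[\,a_n^*(y_1)\cdots a_n^*(y_k)\,
      a_n(x_1)\cdots a_n(x_k)\,\rho_n\,\bigr]
\;\xrightarrow[n\to\infty]{}\;
\int \overline{\phi(y_1)}\cdots\overline{\phi(y_k)}\,
     \phi(x_1)\cdots\phi(x_k)\,d\mu(\phi),
% with the difference measured, e.g., in the Hilbert-Schmidt norm.
```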
So what we wanted to do is to prove the same thing in dimensions two and three, right, where you need the Wick ordering. In one dimension, the simplification is, of course, that you don't need this Wick ordering technology. Okay, unfortunately, we cannot prove this conjecture here yet; we have to modify this many-body quantum state a little bit.
And instead of looking at the state e to the minus the full Hamiltonian, we have to look at this modified state with this parameter eta, which is anything bigger than zero, where we take out two eta terms from here and we put them back. There is a mistake here:
the one minus two eta should only multiply this H zero, not the interaction; the interaction carries a one in front, okay? So in particular, you see, if this operator commuted with the full operator, you could put everything together, and you would get exactly the same state as we had in the previous slide. The fact that this state here is not the same as the Gibbs state that we had before
is a consequence of the fact that the exponential of a times the exponential of b is not the exponential of a plus b for non-commuting operators. Okay, so this is the state that we can look at for any fixed eta, and for this state, we can prove what we expected: that the k-particle correlation functions
will converge to the ones corresponding to the Hartree measure, right? So you see, as n goes to infinity, this thing should converge to the same limit. Maybe this is a remark that I didn't make before, but why do we expect this convergence to hold? The point is that, in terms of these a and a star, you see that the commutation relations are not zero;
they're still non-commuting operators, but they have this one over n factor, so they are almost commuting fields. And as n goes to infinity, they become commuting fields, and of course, then they are nothing else than the phi that you have at the Hartree level, okay? But this remark about commutativity, in particular, tells us that as n goes to infinity,
it doesn't matter whether you write it like that or you put everything together, because in the limit everything commutes, and so you have only one limit. So there is no modification in the limit. Okay, so maybe, in the minute left today, a couple of words about the proof. The proof is based on a perturbative expansion. So you have this big
many-body quantum state, where you have a free part and an interaction, and then you also have the Hartree measure, which has a free part and an interaction. And then you expand both of them with respect to the interaction. Okay, I'll skip this part here; it's not the essential part, maybe. You do the Duhamel expansion,
so you expand everything in terms of the free Hamiltonian, and then the interactions are here in the bottom, right? These are all the interactions that you have. So then you consider this strange, modified state, where you subtract this guy and you add it back on the other side, and you get something like that. You see that the effect of this modification
is that the integrals in the Duhamel series do not start from zero and go to one, but start from eta and go to one minus eta. Okay, so this is the expansion we look at, and then we look at all these terms here, and we prove that each one of these terms converges to the corresponding term in the expansion
for the Hartree measure, okay? So, well, how do you do it? Well, forget about this part here. You see, when you are here, each one of these interactions is a quartic operator: it is a star, a star, a, and a. So you have a lot of a star and a operators,
and you have to take their expectation to compute this correlation function. And to do so, you use the Wick theorem, which tells you that, with respect to a free measure, the expectation of any product of a and a star is the sum, over all possible pairings, of products of pairs of a and a star.
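A toy bookkeeping of these pairings, anticipating the counting obstacle discussed at the end of the talk (this is illustrative combinatorics, not the actual estimate): at order m of the expansion there are 2m creation and 2m annihilation operators, hence (2m)! complete pairings, while the Duhamel time integration only contributes a 1/m!:

```python
from math import factorial

def wick_pairings(m):
    # Each of the 2m creation operators pairs with one of the 2m
    # annihilation operators: (2m)! complete pairings at order m.
    return factorial(2 * m)

# Size of the order-m term ~ (number of pairings) x (simplex volume 1/m!).
ratios = [wick_pairings(m) / factorial(m) for m in range(1, 8)]

# Successive ratios r_{m+1}/r_m = 2(2m+1) keep growing, so (2m)!/m!
# grows factorially and the bare perturbation series diverges.
growth = [ratios[i + 1] / ratios[i] for i in range(len(ratios) - 1)]
assert all(g2 > g1 for g1, g2 in zip(growth, growth[1:]))
assert growth[0] == 6.0  # 2*(2*1+1) at m = 1
```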
It's the same kind of algebra that you have for Gaussian measures. And it is exactly the reason why this guy in the end converges to the series for the Gaussian measure, for the free Gaussian measure that you get in Hartree theory, okay? So you know what these expectations are for the quadratic part. You see, you have this thing here
that we already had before, which in the limit converges to the classical one, to the Hartree measure. If you have it in the opposite order, remember, the operators don't commute, because we are in quantum mechanics. So a a star is different from a star a. But the difference is not so big; it's only of order one over n, because of this almost-commuting property. So as n goes to infinity, you can get rid of all these terms.
You can compare each one of these terms with the one over h that you have in the Hartree expansion, and you can prove that the two expansions are the same. And the way to do so is this technique with diagrams. So my collaborators are very skillful at drawing these graphs. So each one of these wiggly lines is a potential,
w of x minus y, right? And then the lines describe the pairings. You have to sum over all possible pairings. So you can pair this guy here with this guy here, this one with this one, this one with this one, and so on. The only thing that you cannot do, because of the Wick ordering, is to pair one of these guys with its neighbor,
with the same x, right? And this is what you don't want to do, because if you pair this guy with the a with the same x, if you go back for a second, you see, it means that you have to take an expectation of this, but at the point x, x. And then you integrate over x. But if you take x, x and you integrate,
you get the trace of this operator here, which is essentially the trace of one over h, which you know is infinite in two and three dimensions, okay? So that's what you want to avoid. You don't want to take the trace of only one operator. You have to take at least two, and this is what you have because of this structure of the pairings here. Okay, so using these diagrams,
you can first prove bounds. You can prove that the expectation of each of these pairings is bounded. And then you can prove that the expectation of each one of these pairings converges, as n goes to infinity, to the corresponding term in the expansion for the Hartree measure. There is one last thing that I wanted to say, one last obstacle,
which is, if you think for a second, what is the number of pairings, right? You have your 2n operators and you have to pair them. And the number of possible pairings is of the order of 2n factorial, okay? To be more precise, you have 4n operators, because for each term in the expansion you have one more interaction, and the interaction is quartic. So you have 4n, but it's only 2n creation
and 2n annihilation operators. So it's going to be proportional to 2n factorial. Now you have something which helps you, which is this integration over the simplex, which gives a one over n factorial, and that's fine. But still, 2n factorial divided by n factorial behaves like n factorial. So the series is not converging, okay? So we can prove that each term in the series
converges to the corresponding term in the classical series, the series for the Hartree measure, but we don't have convergence of the series itself. So it's not so trivial to get the results that we have. And what saves us is this idea of Borel summation, which uses some additional information. You have to prove some additional analyticity of this function.
You have to take the coupling constant in front of the interaction to be complex, and you have to prove analyticity of this function in some appropriate region. And then this idea of Borel summation allows you to eat up this additional n factorial that you had in this expansion
and to get the convergence that you want. Okay, so thank you very much for your attention. In the convergence of the reduced density matrices, do you have an estimate in terms of n of the difference? No, it's only because of this Borel technique,
this Borel summability technique, it gives you convergence, but with no rate at all. To get that, you would need the series to be convergent, and then you could estimate the error, but here you cannot. You just get convergence, pointwise convergence, nothing more.
Can we expect a next-order correction, like a Bogoliubov theory for Gibbs states? Does it make sense? I don't know. I would have to think about it. You see it at the level of the ground state, when you have finite excitations.
I think that, because you take this temperature to be very large, you're very much up in the energy, so I'm not sure that over there you can still have something like Bogoliubov theory. I think not, but maybe I'm wrong.
Thank you. Thank you. Thank you.