
Nonlinear fluctuations of interacting particle systems


Formal Metadata

Title
Nonlinear fluctuations of interacting particle systems
Number of Parts
3
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
We explain how nonlinear stochastic evolution equations may emerge from interacting particle systems. In particular, we explain how the KPZ equation appears as the scaling limit of current fluctuations around its stationary state of one-dimensional, nonequilibrium conservative systems.
Transcript: English (auto-generated)
Actually, I decided to change the subject of my talk a little bit, to better match what Thierry was explaining this morning, and also because what I'm going to talk about now is, I think, very exciting in this field of non-equilibrium statistical physics. It's also simpler than the super-technical KPZ stuff, so for the students who may have come here to see the talk, I hope it's going to be a better choice. So what I'm going to talk about today is non-equilibrium fluctuations of one-dimensional particle systems. What I'm going to present today is part of the PhD thesis of my PhD student, Otávio Menezes, from IMPA. And I'm going to present
a very simple model on which we can already see these features of non-equilibrium fluctuations. It's a system which is very simple, but it doesn't have explicit invariant measures in the sense of Markov chains, and it's a non-reversible system on which these questions about how to deal with thermodynamics and fluctuations are also present. So maybe it's not the most natural model, but it's the simplest one on which we can already see what is going on. So let me repeat a little bit the notation of Thierry and maybe
fix some other notation. So let us start with n, a natural number, which is going to be the scaling parameter of my system. I'm going to consider a family of Markov chains indexed by this parameter n. Let me call Λ_n the discrete circle with n points, which we can identify with the set {1, ..., n}, and let me call Ω_n the set of binary sequences of length n. From the notation it's more or less clear that I'm going to consider periodic boundary conditions on my system, so I'm thinking about the circle of n points. And now, on this finite state space, I'm going to define a Markov chain,
a continuous-time Markov chain, which by now should be more or less familiar to you. Let me first draw these intervals. So I have particles moving around this discrete lattice, and these particles will follow what we call the simple exclusion dynamics: they can jump left and right. The model will be in continuous time, and the rate of each jump is going to be n². So I'm already introducing the diffusive scaling Thierry was mentioning this morning. What happens here is that a particle tries to jump to the left with an exponential rate n², which means that its typical waiting time is of order 1/n², and the same to the other side, independently for each particle. And there is the exclusion rule that tells me that a jump onto an occupied site is forbidden. So this is what we call the symmetric simple exclusion process, periodic. This is maybe some part of the interval,
and we consider periodic boundary conditions. On top of these dynamics,
I'm going to do something else because, as Thierry was mentioning this morning, this system has nice product invariant measures, and it's reversible with respect to them. So where is the non-reversible, non-equilibrium feature of the system? We have to do something to it. One thing we can do is to attach reservoirs with different densities. But we can also do it in a translation-invariant way, by adding creation and annihilation of particles. So to this part of the dynamics,
I'm going to add an extra feature, which is the following. If a site x is empty at some time t, then with rate 1 we create a particle there, so the model is no longer conservative. And if the site x is occupied, then the particle there is destroyed with a rate which is 1 + b·η(x−1)·η(x+1). The exact form of this factor is not very important, as long as it is different from 0 and depends on the neighbors of my site x in some way. As Thierry did this morning, I'm going to call η(x) the occupation number at site x of my Markov chain: it is equal to 1 if there is a particle at site x, and equal to 0 if there is a hole there. And you can check that this extra part destroys the invariance of the product measures. Maybe as an Easter egg, for people who might be interested: you can check that these rates actually leave invariant the Gibbs measure of the Ising model that Thierry defined this morning. That's not very important; what is important is that the product measure that was invariant for the exclusion process is no longer invariant because of this factor. So this is going to be my sequence of Markov chains, parameterized by the scaling parameter n.
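To make the dynamics concrete, here is a minimal continuous-time (Gillespie-style) simulation sketch of the chain just described. The function name, the random initial configuration, and the event bookkeeping are illustrative choices, not from the talk; only the rates follow the description above.

```python
# Sketch of the model: symmetric simple exclusion on the discrete circle
# {0, ..., n-1} with jump rate n^2 per direction, plus unaccelerated
# creation (rate 1 at empty sites) and annihilation
# (rate 1 + b * eta[x-1] * eta[x+1] at occupied sites).
import random

def simulate(n, b, t_max, seed=0):
    rng = random.Random(seed)
    eta = [rng.randint(0, 1) for _ in range(n)]  # illustrative initial state
    t = 0.0
    while True:
        # Collect every possible transition together with its rate.
        events = []
        for x in range(n):
            r = (x + 1) % n
            l = (x - 1) % n
            if eta[x] == 1 and eta[r] == 0:   # exclusion jump x -> x+1
                events.append((float(n * n), ('jump', x, r)))
            if eta[x] == 1 and eta[l] == 0:   # exclusion jump x -> x-1
                events.append((float(n * n), ('jump', x, l)))
            if eta[x] == 0:                   # creation at rate 1
                events.append((1.0, ('create', x, x)))
            else:                             # annihilation, neighbor-dependent rate
                events.append((1.0 + b * eta[l] * eta[r], ('kill', x, x)))
        total = sum(rate for rate, _ in events)
        t += rng.expovariate(total)           # exponential waiting time
        if t > t_max:
            return eta
        # Pick one event with probability proportional to its rate.
        u = rng.uniform(0.0, total)
        acc = 0.0
        for rate, (kind, x, y) in events:
            acc += rate
            if u <= acc:
                if kind == 'jump':
                    eta[x], eta[y] = 0, 1
                elif kind == 'create':
                    eta[x] = 1
                else:
                    eta[x] = 0
                break
```

Note that the jump events carry the factor n², while creation and annihilation do not; this is exactly the diffusive acceleration of the exclusion part discussed below.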
As you can see, the creation and annihilation part is not accelerated in time: there is no factor n² or n here. So this happens at a rate which is, in principle, slower than the rate at which particles jump from site to site. In a sense you can think about it as a perturbation, but it's not really a perturbation: it turns out that these two parts of the dynamics are comparable in size. In any meaningful comparison, say of eigenvalues or of some other statistics you want to look at, you will see that both parts of the dynamics have a non-negligible effect on any observable of the system. So this is the model. I'm going to call the configuration η_t^n(x), and in a moment I will start to drop the index n from the notation, because otherwise it would be present everywhere: everything depends on this parameter n. So this is the Markov chain that I want to study. It is irreducible on this finite state space, because the number of particles is no longer preserved, so it has a unique invariant measure. But this invariant measure is very complicated; as far as I understand, nobody really knows anything about it.
We don't even have a Bethe ansatz framework or a mapping to... well, there might be some mapping to a quantum spin system, but in this case it's not really helpful, because it's not an integrable quantum spin system. So actually I'm coming from a point of view which is exactly the opposite of Malik's: I try to derive methods which are somehow robust to the particular details of the dynamics. We want to do something which does not depend on the particularities of each model. Of course, the results we can get from that point of view are much weaker than the results you can get for integrable systems. But this is a complementary approach: you have this phenomenon that you want to characterize, so from one side you want to say as much as you can about it, and from the other side you want to show that the phenomenon is as universal as you can. So we are working on the universality part, and with integrable probability you work on the fine description of the phenomenon. So this is my setup. We have this Markov chain, we want to study it, and
we hopefully want to prove something about its non-equilibrium fluctuations. OK, so let me make some definitions. So c_x⁺ is going to be this quantity here. What are these quantities? These are the rates at which particles are created and annihilated by the dynamics. So c_x⁺ is the rate at which a particle is created at x: the rate is 1, but you also need to have a hole at position x, so you have this extra factor. And c_x⁻ is the same thing, except that you need to have a particle at x in order to destroy it. So this is the definition. Let me also define μ_ρ: it's going to be the Bernoulli product measure of density ρ; if you want an explicit formula, it's something of this sort. And as I already remarked, this measure μ_ρ is not invariant under these dynamics. But in some sense it should be close to invariant, maybe.
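In symbols, the rates and the reference measure just described can be written as follows (a reconstruction from the verbal description; the transcript itself records a correction, a moment later, about which rate carries which factor):

```latex
c_x^{+}(\eta) \;=\; 1-\eta(x), \qquad
c_x^{-}(\eta) \;=\; \eta(x)\,\bigl(1 + b\,\eta(x-1)\,\eta(x+1)\bigr),
\qquad
\mu_\rho(\eta) \;=\; \prod_{x\in\Lambda_n} \rho^{\eta(x)}\,(1-\rho)^{1-\eta(x)}.
```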
Yeah, you're right, I switched them, so I should correct one of them. In my notes I wrote them like this, so probably it's better to correct them that way. Thank you. So another quantity which is meaningful for this model is the function F(ρ), which is going to be the average reaction rate of the system: the number of particles that on average are created, minus the number of particles that on average are destroyed, when your system is distributed according to the product measure μ_ρ. So F(ρ) is just the expectation of c_x⁺ minus the expectation of c_x⁻ with respect to μ_ρ. A quick observation: I wrote down an explicit formula, which is actually totally irrelevant, but sometimes people like formulas; it gives you the particular value of ρ at which F(ρ) = 0. What is actually important is that there is some non-trivial density at which F vanishes. This is always true, because no matter what factors I put here and here, F is equal to 1 at ρ = 0 and is negative at ρ = 1. So it's positive at 0, negative at 1, and it has to vanish somewhere in the middle. It might even vanish multiple times.
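The intermediate-value argument can be made concrete. Taking the rates literally as written above (this particular polynomial is an assumption; the formula on the board may differ), the mean creation rate under μ_ρ is 1 − ρ and the mean annihilation rate is ρ(1 + bρ²), and bisection locates the root:

```python
# F(rho) = E[creation] - E[annihilation] under the product measure mu_rho,
# assuming the rates c+ = 1 - eta(x), c- = eta(x)(1 + b eta(x-1) eta(x+1)).
# F(0) = 1 > 0 and F(1) = -(1 + b) < 0, so F has a root in (0, 1).
def f(rho, b):
    return (1.0 - rho) - rho * (1.0 + b * rho ** 2)

def stationary_density(b, tol=1e-12):
    # F is strictly decreasing on [0, 1], so bisection converges to the root.
    lo, hi = 0.0, 1.0   # F(lo) > 0 > F(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid, b) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For b = 0 this recovers ρ = 1/2, consistent with the remark later in the talk that μ_{1/2} is invariant when b vanishes.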
Actually, multiple zeros might be interesting for stability questions and things like that, but for the moment all we need to know is that such a density exists. So let me keep going with the definitions. Let me call f_t the density of the law of the process η_t of my Markov chain with respect to μ_ρ. From now on I'm going to fix ρ, and ρ is always going to be the density for which F(ρ) = 0. That particular point is interesting from the point of view of non-equilibrium statistical mechanics, because there the average rate of creation and annihilation is 0, so at that particular density the effect of the creation and annihilation should be smallest. So f_t is the density with respect to μ_ρ.
I'm going to assume that f_0 is identically equal to 1; that is, I start my system with the distribution μ_ρ, and I want to see whether the system stays there at later times or not. So I'm going to define a number H_n(t), which is just the relative entropy of the law of my Markov chain at time t with respect to the product measure μ_ρ. This talk is going to be a little more mathematical than the previous one, so in particular I'm going to state a theorem. (Recall that f_t is a function of η, a function on Ω_n.) So the theorem is the following: for any time horizon T > 0 there exists a constant C, depending on T, such that H_n(t) is smaller than C for any time t up to T and for any n. So this is the theorem, and it is actually very useful.
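Written out in symbols (a reconstruction; the board is not captured in the transcript), with f_t the density of the law of η_t with respect to μ_ρ, the statement reads:

```latex
H_n(t) \;:=\; \int_{\Omega_n} f_t \log f_t \,\mathrm{d}\mu_\rho ,
\qquad
\forall\, T>0 \;\; \exists\, C(T)<\infty :\quad
H_n(t) \,\le\, C(T) \;\; \text{for all } t \le T \text{ and all } n .
```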
It's telling you a lot about your system, and it is also very surprising, because Ω_n is a huge space, with 2^n elements, and for the entropy to stay finite in such a huge space means that you are really close to this product invariant measure we were talking about at the beginning. So although the product measure is not invariant, it is very close to invariant; this is what the theorem is telling you. Moreover, take any statistic of your configuration for which, under the product measure, you can prove a convergence theorem: a law of large numbers, maybe a central limit theorem. This bound is not telling you that you can transport that convergence result to our system, but it is telling you at least that those statistics have limit points. The limit is not necessarily the same, and actually it is not going to be the same, but they are tight, so it's some sort of relative compactness statement. [Question from the audience.] Whether the limits coincide depends on whether your statistic depends on time, and on whether it grows fast enough with the size of the system; at finite size they certainly won't be the same. Yes, because it's just a finite entropy: you know that the limiting distribution is going to be absolutely continuous with respect to the one you get under μ_ρ. In some cases they are going to be the same, because of translation invariance or other arguments.
But the point is that this is a very strong statement. OK, so now I will try to convince you why such a statement should at least be reasonable. Let me just mention some observations. I don't want to enter into the details, because it's very technical and complicated and there is no easy way to formulate it, but this family of processes has what is called a hydrodynamic limit, given by the following equation.
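The equation on the board is not captured in the transcript; for a reaction-diffusion model of this type it should read (reconstructed from the surrounding discussion):

```latex
\partial_t u \;=\; \partial_x^2 u \;+\; F(u),
\qquad
F(\rho) \;=\; \mathbb{E}_{\mu_\rho}\!\bigl[c_x^{+}\bigr]
         \;-\; \mathbb{E}_{\mu_\rho}\!\bigl[c_x^{-}\bigr].
```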
The diffusion part is not surprising, because it's exactly the same as in the simple exclusion process that Thierry described before. And now you have a reaction term F(u), which is also very reasonable, because it's just the average rate of creation minus annihilation. What is going on in these hydrodynamic limits is that the system stays close to a product measure in any finite but large box: when n goes to infinity, if you fix a box of size 100, or maybe log n, inside it things look like a product measure. Therefore, when you look at the density as a global object, there is averaging, and the density evolves according to this PDE, where F is the same function as before. This result can be understood as a law of large numbers. And of course there is one particular solution of this equation which is interesting for us: the constant profile u(x,t) ≡ ρ is a stationary solution of the hydrodynamic equation. This fact hints that, at least on the time scale n² on which the hydrodynamic equation appears, this product measure shouldn't evolve too much.
So for this reason you may expect a result of this sort. Actually, this is something you can check. Say you want to change the global density of your product measure. Large deviations theory tells you how to do it: you have this exponential of minus n times a rate function, and so on. What is important is that the n in the large deviation principle tells you exactly that the entropy cost to change the density in a box of, let's say, εn sites, something that is observable at the macroscopic level, is of order n. So you need at least δn entropy to change the density of your system. Therefore, the fact that ρ is stationary tells you that the entropy should be little o of n, and this is something that can be proved. This is what is called Yau's relative entropy method
for hydrodynamic limits, and it's a well-developed part of the theory of interacting particle systems; usually that's what you can prove. Exactly; it's more general, because you don't need to start with a constant product measure, you can make it evolve in time. The only thing I want to stress is that if you are satisfied with a law of large numbers, what you need to prove is that the entropy is little o of n; at the level of laws of large numbers for macroscopic observables, that is all you need. On the other hand, once you have a law of large numbers, the natural questions are about large deviations and central limit theorems. Large deviations, it turns out, is a simpler problem than central limit theorems, for reasons I'll come to. And you can check that with a finite amount of entropy I can actually change the CLT. For example, something I can do with this product measure is to change the density from ρ to ρ + 1/√n. That is something that at each site produces entropy of order 1/n, so if you sum over the n sites, you get entropy of order 1.
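This entropy bookkeeping for product Bernoulli measures is a short computation; the helper names below are illustrative. Tilting every site from density p to q costs n times the per-site relative entropy d(q‖p), so a fixed macroscopic shift costs order n, while a shift of 1/√n costs order 1 (since d(q‖p) ≈ (q−p)²/(2p(1−p)) for small shifts):

```python
# Relative entropy between product Bernoulli measures on n sites:
# H(Bern(q)^n || Bern(p)^n) = n * d(q || p).
import math

def bernoulli_kl(q, p):
    # Per-site relative entropy d(q || p).
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def tilt_cost(n, p, shift):
    # Total entropy cost of changing the density from p to p + shift
    # at every one of the n sites.
    return n * bernoulli_kl(p + shift, p)

# A fixed shift gives a cost linear in n, while shift = 1/sqrt(n)
# gives a cost converging to the constant 1 / (2 p (1 - p)).
```

This is exactly the scaling invoked in the talk: order-n entropy to move the macroscopic density, but only order-1 entropy to move it by 1/√n.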
And this shift by 1/√n allows me to change the mean, and also the variance, of the Gaussian random variable in the limit. So if you want to prove something like a central limit theorem, a bound like this theorem is essentially the least you need to prove, because with finite entropy I can already change the variance in the CLT. Therefore, if you believe that some sort of central limit theorem is true for these kinds of systems, then you can start to believe that this theorem might be true. And since for these systems large deviation principles have been proved, you are tempted to believe that the CLT is also true. So from the heuristic point of view, these results should be reasonable. [Question:] That's kind of a naive question: I thought that you take a large deviation principle
and you just expand it near the minimum, and that gives you the central limit theorem; at least, I'm a physicist. No, actually, this is far from true for probabilists. What is true is that if you are able to prove both, you can recover the variance from the large deviation function, by expanding it around zero. So taking the large deviation principle for this model and expanding around the equilibrium, or around whatever point you are interested in, you will obtain the variance of the Gaussian process that you should obtain. But the implication does not hold, and it's actually much more difficult to prove central limit theorems in the context of interacting particle systems than large deviation principles. The reason is related to this KPZ business: the objects that appear, the space-time limits, are nonlinear stochastic partial differential equations, which are very delicate and difficult objects. So this is why the CLT is actually harder than large deviations. It's not a naive question, because when you learn probability you learn it the other way around: that large deviations are harder than the CLT.
And this is one of the things I like a lot about these results. There is a general fact about Markov chains that has been somewhat overlooked, although everybody probably knows the following inequality. So let me write it down; now I'm going to enter a little bit into the proof of this theorem. From here you can see that I'm really a mathematician, because who cares about proofs? But I think this particular theorem has a very interesting proof, and we can learn something from it; that's what I like about proofs, when you can learn something from the proof, not just the statement. So: if you have a Markov chain, and you take any reference measure, and you compare the entropy of the law of your Markov chain with respect to this reference measure, you have the following inequality. If you want to prove a bound like the theorem, how do you proceed? The usual way is: take the derivative, and prove that the derivative is bounded. This is what I'm going to do. So you take the derivative, and you start to bound it until you bound it by a constant. What you have, and this is totally general, is that the derivative of the entropy is bounded by this expression here.
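In formulas, this general entropy-production inequality (reconstructed from the description that follows; the board itself is not in the transcript) reads:

```latex
\frac{\mathrm{d}}{\mathrm{d}t} H_n(t)
\;\le\; -\,\mathcal{D}_n\!\bigl(\sqrt{f_t}\,\bigr)
\;+\; \int_{\Omega_n} L_n^{*}\mathbf{1}\; f_t \,\mathrm{d}\mu_\rho ,
```

where the two terms on the right are exactly the quadratic form and the adjoint term explained next.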
Let me explain what these terms are, starting with this one: this is what people call the Dirichlet form. In our case it's not really the Dirichlet form, because the measure μ_ρ is not the invariant measure, but it's still a non-negative quadratic form. So D of a function h is the following: it's n² times the sum over x of the integral of the square of a discrete gradient. I hope the notation is not too complicated; it shouldn't be. This gradient is the difference in the function h when I move a particle from x to x+1 or vice versa, so it's the change in h under one of the jumps of the exclusion part of the dynamics. And the second term is the same, but for the reaction part: now the gradient at x is how much the function h changes when I create a particle at x or destroy a particle at x. Those are nice quadratic forms, and this turns out to be exactly what people call the Dirichlet form in the case b = 0. And now I have to explain what L_n* is.
L_n* is very easy: it's just the adjoint of the generator in L²(μ_ρ). When the measure μ_ρ is invariant, the adjoint of the generator is again the generator of a Markov chain, so when you apply it to the constant function equal to 1, you get 0, because a Markov generator applied to a constant function vanishes. So in the case in which μ_ρ is actually invariant, this term is 0 and this term is non-negative, and you recover something very well known: the relative entropy with respect to the stationary state of a Markov chain decreases in time, and its rate of change is bounded by the Dirichlet form, or, if you prefer, the Fisher information; I like to call it the energy. But when μ_ρ is not invariant, you get something that might be increasing. And it actually has to be increasing at some point, because μ_ρ is not the invariant measure: as t goes to infinity you converge to the true stationary state, which is not μ_ρ, and the entropy with respect to μ_ρ is different from 0. Here, since we have a finite Markov chain, this adjoint is very easy to compute, because you just have a matrix and you take its adjoint, some sort of weighted transposition. But the inequality is completely general: it's true for any Markov chain and any reference measure, and it sometimes gives you useful information, sometimes not. The name of the game now is to choose, as the reference measure, something as close as possible to what we believe the stationary measure should be. If you succeed in that, then L_n*1 may turn out to be not very big.
In our case you can go and compute; it's not very difficult, because everything is explicit, just a few lines of computation. You can compute L_n*1: it's equal to the sum over x of φ(x). This is easy to understand: the dynamics is translation invariant, so you should get something translation invariant, and φ should be something local, because the dynamics is local. I have an expression for it; you can write it like that, or also like this, which is a little more natural because it's what actually pops out of the computation. It doesn't really matter. The point is that this object is what we call a quadratic function. What does that mean? Of course, the expectation of this function with respect to the measure μ_ρ is 0. But imagine that I didn't take the right density: I took another density ρ′, different from ρ, and I compute the expectation of this function with respect to the product measure of density ρ′. If you do that, you will get, up to an irrelevant constant, the difference (ρ′ − ρ) squared. So the deviation of the expectation of this function, when you plug in a density which is not the right one, is quadratic. And that's the key: since this deviation is quadratic, we can get a very nice bound on this expectation. If it weren't quadratic, it wouldn't work; you would only get something like little o of n as a bound. And the fact that this function is quadratic in this sense indicates that you have really chosen the right reference measure for your computation.
So now let me write a lemma; this is another thing which is very common among mathematicians. Of course, the expectation there I can write as an integral against f_t, because f_t is the density of the law of η_t, so I can write it as the integral of that function times f_t with respect to μ_ρ. About f_t itself I don't have much information; basically, if I knew what f_t were, I would know everything. So you say: OK, let's forget what f_t is, and let's see whether we can prove something which is true for any density f. Lemma: for any δ > 0 there exists a finite constant C such that, for any density f, the following holds: the integral of this sum against f is bounded by δ times the Dirichlet form of √f, plus C times the entropy H(f) of f dμ_ρ with respect to μ_ρ. So if you assume that the lemma is true, then the theorem is
proved. First, we choose δ small enough to be compensated by the negative Dirichlet term; then you get something of the form dH/dt bounded by a constant times (1 + H). Now you use Grönwall's inequality, or even just uniqueness of solutions of the corresponding differential inequality, and you get a constant bound on any finite time horizon. [Question: shouldn't the constant blow up in T?] Yes, it really should blow up, and be genuinely different from a uniform-in-time bound, because of the invariant measure. Notice that if b is equal to 0, then μ_{1/2} is invariant, and not only invariant but reversible. So you are comparing your non-reversible dynamics with a measure which is reversible with respect to some dynamics that looks very close to the real one, but is reversible.
So there will be some observable that takes this non-reversibility into account, that tells you: no, you are not really in the reversible situation. This observable should be the current. Here there are no currents in the spatial sense, but you can have another notion of current, maybe a heat flow, something like that, because as t goes to infinity it is this current, evolving in time, that creates the non-reversible features of the model. Actually, you can use this theorem to prove an actual fluctuation theorem, that is, to prove convergence to some stochastic PDE; and for that stochastic PDE you can ask what happens at t = ∞, and there you start to see the non-reversibility issues. So this constant should effectively blow up in T. It shouldn't blow up too fast: from Grönwall you just get an exponential, but we expect it to be polynomial in T; in any case it must blow up, because of the non-reversibility. And if you interpret this system as a particle system in contact with a chemical reservoir, you can talk about chemical currents, and then everything makes sense: that current is the one that detects the non-reversibility of the model. So, how much time do I have? Well, if you accept that this lemma is true,
then the theorem is proven. I think it's a very nice result, because it's really telling you that, at least on the scales we are interested in, this non-reversibility of the system is a smooth phenomenon: you develop these long-range correlations, these characteristic features of non-reversible systems, in a smooth way when you start from something close to, let's say, an equilibrium setting. Here we are trying to show how systems go from this well-understood phase to the more complicated non-reversible one. So let me see if I can actually tell you a little bit about the proof of this lemma. Actually, there is something that I just want to quote.
What's really going on here is a one-dimensional phenomenon: each time you have a local observable of a nice interacting particle system which is quadratic, in the sense I described before, you can show that this local observable is very well approximated by the square of the density of particles in a box of macroscopic size. This is something we call the second-order Boltzmann-Gibbs principle, which we introduced with Patrícia Gonçalves in the context of the KPZ equation. What does it tell you? It's not literally the same statement here, but it's close in spirit, and the proof is very similar, at least in spirit. [Adjusting normalizations on the board:] there should be a square root of L here, and since the variance is ρ(1−ρ), there is probably a factor ρ(1−ρ) there and a square root of ρ(1−ρ) here. OK, so if you do this, then this is what is bounded.
So the main idea, and this is something that happens very generally in one-dimensional systems,
is that when you have a quadratic function, in this sense here for example, then you can approximate it very well by the square of the density of particles,
properly normalized, and the cost you pay to do that is the Dirichlet form. This is what we call the energy estimate in our work with Patrícia. And from here you see
that we are almost at the end of the lemma, because once you are here, you just need to prove that this part here can be bounded by the entropy, and this is just the entropy inequality,
with the extra ingredient that this quantity is very close to a Gaussian distribution. OK, so once you have understood that these kinds of local functions can be approximated by the square of the density of particles, then these kinds of results are very reasonable.
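To fix ideas, let me write a schematic form of this second-order Boltzmann-Gibbs principle. The notation here is mine and the constants are only indicative, not necessarily the ones on the board: h is a local quadratic observable, \tilde h(\rho) its expectation at density \rho, \overline{\eta}^{\,\ell}_s(x) the centered average density in a box of size \ell around x, and \chi(\rho)=\rho(1-\rho).

```latex
% Schematic second-order Boltzmann-Gibbs principle (my notation, indicative constants):
% the quadratic observable is replaced by the square of the box density, and the
% error is controlled through the Dirichlet form (the "energy estimate").
\mathbb{E}\Big[\Big(\int_0^t \sum_{x} v\big(\tfrac{x}{n}\big)
  \Big\{\tau_x h(\eta_s)-\tilde h(\rho)
        -\tilde h'(\rho)\,\overline{\eta}^{\,\ell}_s(x)
        -\tfrac{1}{2}\,\tilde h''(\rho)\big[(\overline{\eta}^{\,\ell}_s(x))^2
            -\tfrac{\chi(\rho)}{\ell}\big]\Big\}\,ds\Big)^2\Big]
\;\le\; C\Big(\tfrac{t\,\ell}{n}+\tfrac{t^2\,n}{\ell^2}\Big)\,\|v\|^2 .
```

The point is exactly the one just made: replacing the quadratic observable by the square of the box density costs a Dirichlet-form term, which is what produces the box-size-dependent error on the right.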
OK, so this is more or less how you can prove such a theorem,
but it also hints at how you can obtain more refined results. Because what this inequality here is actually telling you is that, at the
level of the macroscopic evolution, you can approximate any local function of your system by a combination of a linear and a quadratic function of the density of particles.
Because let's imagine that here it is not purely quadratic, that we also have a linear term. Then what happens is that the linear term is OK, because it is the density of particles itself.
So if you now go back to the beginning and you say, OK, let's try to prove some sort of fluctuation result associated to the hydrodynamic limit, what you need to understand is how the evolution of the density of particles behaves as the scale of your system
goes to infinity. In principle, since we are talking about a huge Markov chain with 2 to the n states, the density of particles, which is, roughly speaking,
of order n variables (you can imagine that we are taking blocks of size 100 to compute some average density and we just keep track of these numbers),
will not describe everything; it will not describe every observable of your system. But once you understand that you can express any local observable of your system in terms of the density, you can try to obtain a closed equation for the density of
particles. And this closed equation, at the level of fluctuations in one dimension, will involve two elements, a linear part and a quadratic part, because of this lemma here.
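Schematically, with a linear and a quadratic term in the closed equation, the density fluctuation field X_t can only pick up two types of limit (the notation is mine; A, B, D stand for model-dependent constants and \dot W for a space-time white noise):

```latex
% Linear case: infinite-dimensional Ornstein-Uhlenbeck process,
% i.e. a linear stochastic heat equation.
\partial_t X_t \;=\; A\,\partial_u^2 X_t \;+\; \sqrt{D}\,\partial_u \dot{W}_t ,

% Quadratic case: stochastic Burgers equation, the spatial derivative
% of the KPZ equation.
\partial_t X_t \;=\; A\,\partial_u^2 X_t \;+\; B\,\partial_u\big(X_t^2\big)
  \;+\; \sqrt{D}\,\partial_u \dot{W}_t .
```

Writing X_t = \partial_u h_t formally turns the second equation into the KPZ equation for the height function h_t.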
When you have a linear and a quadratic part, there is a finite set of equations that you can obtain in the limit. Actually, you have basically two possibilities: either the linear stochastic heat equation or the KPZ equation. So once you have proved this theorem
about the entropy, complemented with the method of proof, you see that you can actually try to tackle the problem of what happens with the observables of your Markov chain
in the limit when the size of the system goes to infinity. Well, I think I will stop here. Thank you very much for your attention.
Any questions? Can you extract some results on the fluctuation dynamics, now that you control it quite well? So, a priori, that depends on the level of rigor you want, because that depends on
the kind of control you have on tightness. So let's say that you can prove tightness in some topology that is nice enough for your process. Then, in that case,
what you will prove for this system, for example, is that when you do the natural scaling of the density of particles, you define some field X_t^n. You have to use test functions, because
at the level of fluctuations this object has the bad taste of being a distribution. Well, this is what it is, so you have to do it like that. You use some test function, and
you will get that, in the limit, this field here will be a solution of this equation here. So, in general, let me write it for the case that I am
discussing here, and then there will be some noise. So this is a white noise.
This part of the noise comes from the exclusion part, so here there will be some square root of rho(1 - rho), and then there is a second noise, coming from the creation and annihilation, which has in front of it g(rho),
where this g(rho) is, well, some function, some constant that is related to the rates of creation and annihilation.
So this is what you can prove for this particular system: the fluctuations evolve in this way. Notice that here you have this number f'(rho), and f'(rho) can be either positive or negative.
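In symbols, the equation being described for this exclusion process with creation and annihilation might be sketched as follows. The notation is mine and the coefficients are only as read off from the talk, so the constants are indicative:

```latex
% Fluctuation field tested against a smooth test function \varphi:
%   X_t^n(\varphi) = n^{-1/2} \sum_x \varphi(x/n) (\eta_t(x) - \rho).
% In the limit, with \chi(\rho)=\rho(1-\rho) and two independent
% space-time white noises W (exclusion part) and \widetilde{W}
% (creation-annihilation part):
dX_t \;=\; \big[\,\partial_u^2 X_t \;+\; f'(\rho)\,X_t\,\big]\,dt
  \;+\; \sqrt{2\chi(\rho)}\;\partial_u\,dW_t
  \;+\; g(\rho)\,d\widetilde{W}_t .
```

The gradient noise with the factor involving chi(rho) comes from the exclusion dynamics, and the g(rho) noise from the creation and annihilation; the sign of f'(rho) decides whether the linear drift is a damping or an exponential instability.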
Whether it is positive or negative, at this scale it doesn't matter, because this equation is well posed for any time t. But when you send t to infinity, this equation will converge to an equilibrium measure if and only if f'(rho) is
non-positive; if f'(rho) is positive, this will start to blow up exponentially in time. In that sense, you can see that you cannot do better than that. On the other hand, when f'(rho) is negative, you expect to be able to prove that
the constant does not blow up too fast. So it depends on the situation, and somehow the entropy bound has to be sensitive to what happens when t is big,
because it is something that depends on the whole distribution of the Markov chain.