
Integrability and non-equilibrium statistical physics


Formal Metadata

Title
Integrability and non-equilibrium statistical physics
Number of Parts
3
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
During the last twenty years, a large number of exact solutions have been derived for some non-equilibrium interacting systems, such as the exclusion process, leading us to a better understanding of non-equilibrium behaviour. Integrability has played an important role in these developments. In this talk, we shall review some of the techniques involved and present a few representative results obtained in the field.
Transcript: English (auto-generated)
Hello, good afternoon. Thank you for staying. I would like to thank Nicolas and Stéphane for their invitation.
It's a big honor to be here. French, English. Anybody doesn't speak French? OK, so English, or Hindi maybe. OK, so I'm going to talk about some of the concepts that we learned about this morning from Thierry, how to use them in physics.
I'm a physicist. That's why I'm using PDFs. And to make an asymptotic and smooth matching with what Thierry said this morning, I will just scan very quickly through the first transparencies, which in fact just say again the same things that we heard this morning, but in less detail. We know that when we do equilibrium statistical
mechanics with a reservoir at a given temperature, T, or beta is 1 over T, we have a prescription to study such systems. And the prescription is given by the canonical Gibbs-Boltzmann law, Maxwell law, whatever you like, which tells you that underlying a system
in statistical mechanics, there is a measure, the Gibbs measure, which has the temperature as an important parameter. And that this measure is normalized with a partition function. And the partition function is, in fact, nothing but an avatar of the free energy of the system, the thermodynamic free energy up to a log and a kT. And once we know the free energy,
we can study the phase diagram, understand how things evolve when you change the temperature or some other parameters, such as the pressure or the particle density. And of course, statistical mechanics is much stronger than thermodynamics because it predicts fluctuations. You have not only average values, but you have variances, fluctuations,
a real probability distribution. And indeed, Brownian motion is the paradigm of fluctuations, pure fluctuations. And Brownian motion goes beyond classical thermodynamics. You cannot predict it using the first and the second principle. But of course, you can understand it very well
using equilibrium statistical mechanics. This is what Einstein did in 1905. OK, now if we go out of equilibrium and consider the simplest picture that Thierry already drew this morning, and I think everybody in the community draws the same picture, the big difference can be that you can put circles or squares here. So if you take two reservoirs at different temperatures,
chemical, electrical potential, and you make a contact between them through a rod of metal, and you wait long enough, there will be a persistent stationary current going from the high temperature to the low temperature. And just to describe this very simple everyday situation,
no microscopic theory is yet available. So in fact, we don't really know what are the relevant parameters, P, V, T in thermodynamics here. We don't really know what are the relevant parameters to describe such a system. Certainly, we should put the length of the rod, the two temperatures on the boundaries, and so on.
But what else? Not only do we not know the parameters, we also don't know which functions we have to study. Are there analogues of entropy, free energy, enthalpy, and so on? So we don't know which functions to use and which parameters to put into the functions. Nothing is really known about universality. This is very important in equilibrium statistical
mechanics. Here, we don't really know how to separate universality classes. And in fact, there is no Gibbs measure or something, a general form for a microscopic measure that would underlie such a simple model. So if the two temperatures are equal, we know everything. We know there is a Gibbs measure.
If you take them different, we just don't know how to write anything. And we don't know the fluctuations. So the important thing is that when you wait long enough, there is a macroscopic current flowing from high temperature to low temperature. And the system is out of equilibrium because this flow of current breaks time reversal invariance.
If you take a movie of this phenomenon and you project it backwards, you will be able to say that something is wrong in your movie because heat would be flowing from low temperature to high temperature and you know that it's not possible in real life. So let's talk a bit about non-equilibrium fluctuations.
Again, we heard about it this morning. So we know that, so here I put the low temperature here and the high one here. We know that the density profile or the temperature profile in the steady state given by Fick's or Fourier's law will be a straight line on average. But there can be fluctuations in this profile, fluctuations drawn in red.
We don't know how to compute what is the likelihood of any non-typical profile in this framework. Of course, again, at thermal equilibrium, if both temperatures or potentials are equal, let's think about the gas in a closed room at a given temperature.
Then the typical profile is flat and the fluctuations can be computed. In fact, Thierry did it this morning. And we can say that the likelihood, the probability of seeing any density profile, rho of x, stationary profile, takes a large deviation form.
So these probabilities take the form exponential of minus beta times the volume (here, the length of the system) times some functional of this density, rho of x. And this functional at equilibrium is nothing but something very closely related, up to an integral, to the free energy of the system.
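This equilibrium statement can be checked numerically. The following sketch (not part of the talk; the density and system size are arbitrary choices of mine) takes a lattice gas of independent Bernoulli occupancies at mean density rho_bar and verifies that the probability of an atypical empirical density rho decays like exp(-L I(rho)), with the rate function I given by the free-energy (relative-entropy) functional.

```python
import math

# Equilibrium large deviations for independent Bernoulli(rho_bar) occupancies:
# P(empirical density = rho) ~ exp(-L * I(rho)),
# where I(rho) is the relative entropy, i.e. the free-energy difference.
rho_bar = 0.3      # typical density (illustrative value)
rho = 0.5          # atypical density we ask about
L = 2000           # number of sites
n = int(rho * L)   # number of occupied sites

# exact log-probability of seeing n occupied sites among L
logp = (math.lgamma(L + 1) - math.lgamma(n + 1) - math.lgamma(L - n + 1)
        + n * math.log(rho_bar) + (L - n) * math.log(1 - rho_bar))

rate_numeric = -logp / L
rate_exact = (rho * math.log(rho / rho_bar)
              + (1 - rho) * math.log((1 - rho) / (1 - rho_bar)))
print(rate_numeric, rate_exact)  # agree up to O(log L / L) corrections
```

The two numbers agree up to Stirling corrections of order log L over L, illustrating that at equilibrium the large deviation function is nothing but the free energy.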
So the free energy is in fact nothing but something that quantifies large deviations at equilibrium. Free energy can be viewed as a large deviation function at equilibrium. But again, out of equilibrium, we don't know what's going to happen. So what is the probability of observing
the red profile in the steady state? What is the corresponding non-equilibrium free energy, the calligraphic F of rho of x? We don't know how to compute that. We don't have a principle for that. Similarly, there is a similar question for fluctuations of the current.
If we count the total number of particles that have gone from left to right in a given time, let's call that y of t. Thierry called it q of t this morning, if I remember correctly. And we take y of t divided by t in the long time limit. This will give us the typical current that flows through the system.
This typical current is given by Ohm's law, if you want: U equals R times I. But we want to understand the fluctuations around this typical current. Or more precisely, what is the likelihood, the probability, of seeing an empirical current (so I measure how much charge flowed from left to right, and I divide by the total time) equal to a small j, which is not the typical current? And this takes, again, a large deviation form: exponential of minus t times phi of j. And we want to compute this large deviation function. So the general question would be,
and again, we saw this at the end of Thierry's lecture, this morning, what would be the probability of seeing a local current, j x t, and a density profile, rho x t, during a certain range of time between zero and capital T, with the correct scaling, the diffusive scaling,
we heard about this morning again. And this probability will take a large deviation form, with a rate or large deviation function, capital I of j and rho. And what people are looking for is a kind of principle to compute this rate, this large deviation function,
capital I. So there is no general principle yet, but if we had one, if we could compute this capital I, of course, by contraction, by taking marginals, we could compute the two important physical quantities, capital F and phi, that I just defined before.
So one path toward the answer to this very general question is the macroscopic fluctuation theory, which works for driven diffusive systems, but which is not totally general. Let me just recall the variational principle we saw.
So this rate function can be written as a solution of a variational problem. So there is an Euler-Lagrange theory behind it. And the only thing you need to know from the microscopic dynamics, again, this is what we learned this morning, is this conductivity sigma and this diffusion constant d.
This morning, d was equal to one half, and sigma was equal to rho times one minus rho. But if you take a general lattice gas, they can have much, much more complicated expressions. In general, we don't know how to compute them, and we have to compute them really starting from the microscopic dynamics.
So this framework, maybe that's not the name used in mathematics, but I will use the acronym MFT, macroscopic fluctuation theory, to describe it when I allude to it during the talk. And it is due to many people, including among them the Rome group: Bertini, De Sole, Gabrielli, Jona-Lasinio, and Landim. So that's the end of the introduction and the matching with Thierry's lecture this morning. I'm not going to go in this direction now. I'm going to go in a different direction, because what happens is that if you really want to solve this problem,
you have to solve some nonlinear PDEs, and I just don't know how to do that. So what I will be telling you about is how to get some exact solutions for some very simple discrete models, the exclusion process typically, using integrability.
Integrability is a concept which comes from quantum mechanics. So I'm going to spell out examples and to give you some formulas that were obtained for some specific models for this kind of general questions. So let's start again with the general picture, and this general picture can be modeled
by the asymmetric simple exclusion process. This is a lattice gas: a discrete-space, continuous-time Markov process where particles hop from a given site to a neighboring site, respecting the condition that you have at most one particle per site. This is the exclusion principle.
So this hop is forbidden. That's a hardcore interaction. And the reservoirs on both ends are just here to put in particles or to extract particles with different rates, alpha, beta, gamma, and delta. So you can adjust the values of these four rates
to mimic any boundary density you wish. Okay, no questions up to here? So let's start to look at this model more precisely. This model is a kind of minimal model for non-equilibrium statistical mechanics. It plays a role analogous to the Ising model in equilibrium stat mech. So many people are studying it. Thousands of papers have been devoted to it in the last 20 years. And just to emphasize again, this exclusion brings in some interaction. So it's a non-trivial n-body problem. It's not a one-body problem. The asymmetry drives a current in the bulk.
So together with the reservoirs, these are the two features that keep the system out of equilibrium. And the fact that it's a process, a Markov process, so it's genuinely stochastic, prevents you from using any Hamiltonian and trying to adapt any kind of Gibbs measure.
There is no Hamiltonian, no energy. So how can you even try to write exponential of minus H? There's no H anyway. So everything is encoded in the Markov generator, in this evolution equation for the probability distribution, and that in fact plays the role of the dynamics.
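To make this concrete, here is a minimal Monte Carlo sketch of such a stochastic dynamics: an exclusion process with open boundaries, simulated directly from its jump rules. This is my own illustration, not from the talk; the rates alpha and beta, the system size, and the random-sequential update scheme are arbitrary choices (here fully asymmetric: particles only hop to the right).

```python
import random

random.seed(1)
L, alpha, beta = 50, 0.6, 0.4   # illustrative size and boundary rates
sites = [0] * L                  # at most one particle per site (exclusion)

def sweep():
    """One sweep of random-sequential updates over the L+1 bonds."""
    for _ in range(L + 1):
        i = random.randint(0, L)          # pick a bond; 0 and L touch reservoirs
        if i == 0:
            if sites[0] == 0 and random.random() < alpha:
                sites[0] = 1              # injection from the left reservoir
        elif i == L:
            if sites[L - 1] == 1 and random.random() < beta:
                sites[L - 1] = 0          # extraction by the right reservoir
        elif sites[i - 1] == 1 and sites[i] == 0:
            sites[i - 1], sites[i] = 0, 1  # hop only into an empty site

# relax, then measure the stationary density
acc = 0.0
for t in range(20000):
    sweep()
    if t >= 10000:
        acc += sum(sites)
density = acc / (10000 * L)
print(density)   # bulk density in the steady state
```

With these particular rates (beta below one half and below alpha) the system sits in the high-density phase, and the measured density comes out close to 1 - beta.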
There's the dynamics of the system and all the information is encoded in the generator. So just to remind you, this very nice mathematical model was not invented by mathematicians. In fact, in the mathematical literature, it appeared in Spitzer's papers in the 70s.
But two years earlier than that, in 68, 48 years ago, people working in biophysics invented this model to understand how ribosomes read messenger RNA and, reading it, build up proteins from the genetic code.
So this is really the origin of this model. I'm not a biologist, so I'm not going to talk about that at all; I would say wrong things. But this is a true photo of these ribosomes proceeding along the RNA strand and building up proteins.
So this has something to do with reality. It's not purely abstract. Anyway, it's a minimal model, so it appeared in many, many different contexts: reptation of polymers, hopping conductivity, driven diffusive systems, the Kardar-Parisi-Zhang equation. And it's still very much used.
For example, the nice example that people like to quote, though I know nothing about it, is that the traffic in Geneva, or in Duisburg nowadays, is regulated in real time using an avatar of the exclusion process. So it's useful. One important connection between the exclusion process
and a very famous stochastic differential equation is the fact that the exclusion process is a discrete version of the Kardar-Parisi-Zhang (KPZ) equation. So the KPZ equation describes how a height, h,
evolves in space and time because of diffusion, aggregation, or evaporation of matter, particles, and random noise. So this describes the evolution of a random interface through various processes. I will give you some examples by the end of the talk.
So this is a continuous space-time stochastic partial differential equation, which was given meaning, for example, by Martin Hairer's work in the last few years. But if you discretize it, so if you consider that your interface is just this black line with slopes plus or minus one,
and you suppose that this interface evolves either by adding lozenges or by removing them, then, I will not go into the details, but you can very precisely map the evolution of this interface onto an exclusion process.
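A tiny sketch of this dictionary (my own illustration, not from the talk): read a particle as a slope minus-one step and a hole as a slope plus-one step, and a forward hop then deposits exactly one rhombus on the interface.

```python
def height_profile(config):
    """Heights of the interface coded by an exclusion configuration:
    a particle (1) is a slope -1 step, a hole (0) a slope +1 step."""
    h = [0]
    for occupied in config:
        h.append(h[-1] + (-1 if occupied else +1))
    return h

print(height_profile([1, 0]))  # [0, -1, 0]: particle then hole = local valley
print(height_profile([0, 1]))  # [0, 1, 0]:  hole then particle = local peak
# A forward hop (1, 0) -> (0, 1) raises the middle height by 2:
# it deposits exactly one small rhombus on the interface.
```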
The idea is that slope minus one corresponds to a particle, slope plus one corresponds to a hole, and when a particle hops, the interface evolves exactly by adding a small rhombus to it. So it's just perfectly equivalent. So many of the results that are derived
from the exclusion process can be pushed into the KPZ world and tell you things about KPZ. So now let's go more into the details and to the exclusion process. If we want to study it, we have bulk dynamics, but we have also boundary conditions. And the mathematics and the physics
is very different in the different cases according to the boundary conditions. The simplest case is the periodic case. Thierry told us this morning that at least the stationary measure is trivial. It's factorized. It's flat. Maybe the more realistic case is the one with open boundaries,
a finite lattice with open reservoirs. This is really the rod connecting a battery to the earth, for example. Another very nice, and mathematically maybe the most fascinating, case is the infinite line, where there are plenty of beautiful things. I hope to reach the third part of my talk
and tell you a bit about this case. So in each subcase, the techniques and the results are a bit different. So I'm going to start with the simplest one, the ring case, then go to the open boundary case and hopefully have time to tell you about the infinite line problem
and its relation with random matrices. So let's start with the exclusion process on a ring. And I want to tell you a little, but unfortunately not much, about the Bethe Ansatz. So if we write the dynamics of the exclusion process on the ring and we spell out very precisely what the Markov generator is,
well, we just say that the configurations are given by the positions of the particles on the ring. And the Markov generator is a kind of discrete Laplacian. So there's an asymmetry, p and q; here I just rescale things. Particles can jump in the trigonometric (counterclockwise) direction with rate one and in the anti-trigonometric one with rate x, where x is less than one. So I have a kind of discrete Laplacian operator, which plays the role of the generator of the dynamics. But I have to be careful that some configurations are forbidden.
So of course I cannot put two particles on the same site. So that's one way of writing the evolution operator of my exclusion process on a ring. So what I want to know first is: what is the stationary solution of this equation?
What is the stationary probability distribution? And this is a simple exercise. The number of configurations is just given by the binomial coefficient, L sites choose N particles. And in fact, the stationary distribution is just that any configuration has the same probability as all the others. It's a flat measure. And the stationary distribution
is just given by the inverse of the binomial. This is just another way of saying that there's a factorization in the grand canonical ensemble. But this is not enough. If we want to understand the dynamics of this system, we need to know about the eigenstates of this operator. It's a linear equation. So the best we can think about
is to be able to diagonalize fully this operator. So at least what we know is that the density profile is flat on average. And there are Gaussian fluctuations because everything is completely flat at the level of the stationary measure. However, the current is non-trivial.
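This flatness of the stationary measure can be verified by brute force on a tiny ring (my own sketch, not from the talk; the sizes and the backward rate x are arbitrary): build the Markov generator explicitly and apply it to the uniform distribution.

```python
import itertools

L, N, x = 5, 2, 0.3   # 5 sites, 2 particles, backward hop rate x < 1
configs = [c for c in itertools.product((0, 1), repeat=L) if sum(c) == N]
index = {c: k for k, c in enumerate(configs)}
n = len(configs)      # binomial(L, N) = 10 configurations
M = [[0.0] * n for _ in range(n)]   # master equation dP/dt = M P

for c in configs:
    j = index[c]
    for i in range(L):
        k = (i + 1) % L
        if c[i] == 1 and c[k] == 0:          # forward hop, rate 1
            d = list(c); d[i], d[k] = 0, 1
            M[index[tuple(d)]][j] += 1.0
            M[j][j] -= 1.0
        if c[i] == 0 and c[k] == 1:          # backward hop, rate x
            d = list(c); d[i], d[k] = 1, 0
            M[index[tuple(d)]][j] += x
            M[j][j] -= x

flat = [1.0 / n] * n
residual = max(abs(sum(M[r][s] * flat[s] for s in range(n))) for r in range(n))
print(n, residual)    # 10 configurations, residual numerically zero
```

The residual of M applied to the flat vector vanishes to machine precision: on the ring, every configuration has as many in-flows as out-flows, so the uniform measure is stationary.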
And I will tell you how to investigate the properties of the current. So as I told you, we are interested in diagonalizing the generator of this Markov process, which I call M. And one first very beautiful observation, which I think was due to Deepak Dhar at the end of the 80s
and then spelled out by Gwa and Spohn, is that this generator M is something which is very familiar to physicists and maybe much less to mathematicians. This generator is nothing but a spin chain. So you can really write this problem
as a problem of a quantum spin chain with Pauli matrices. So I imagine that for people who are classical mathematicians, maybe it's not completely clear what this is. But for a solid state physicist, it's a kind of grail. When he sees a spin chain, he's happy
and he knows that he has almost 100 years of knowledge about it, 80 years, and can use plenty of methods to investigate this problem. So the exclusion process is a quantum spin chain in disguise, which means that it's a quantum magnetism problem
in one dimension in disguise. And there are, again, many techniques to try to understand it. These kinds of quantum spin chains were invented by Heisenberg in the late 20s.
And the first solution to this quantum problem was obtained by Hans Bethe in 1931. He invented the so-called Bethe Ansatz method to solve this kind of quantum spin chain. So what is the Bethe Ansatz about? The Bethe Ansatz is a way of diagonalizing this matrix.
And if there were no interactions, if there was only one particle, you would have one particle on a circle; well, you could use Fourier. You could find plane waves to diagonalize your system. If the particles were independent, you would just say that the eigenvectors of your system would be just products of plane waves. Okay, again, independence. Well, what Bethe tells us is that for some classes of very special systems, which are in fact the systems that one can integrate, can solve (so in fact all the systems that we know how to compute with, the integrable systems), they have the property that even though they are interacting non-trivially, their eigenvectors can be written as linear combinations of plane waves. That's the heart of the Bethe Ansatz.
So of course, most systems are not integrable. You cannot solve them using this method. But all the systems one knows how to solve are in fact somehow, in disguise, solvable using the Bethe Ansatz. The Ising model, the six-vertex model, or even classical models: they are in fact avatars of the Bethe Ansatz. So the idea is to look for plane-wave solutions for the eigenvectors of your evolution matrix. And this works here. So here, instead of writing exponential of i k x_i, I call the factor exponential of i k a z. So you have to use linear combinations of plane waves with these wave vectors, the z's. And plugging this Ansatz back into the eigenvalue equations,
the z's have to solve a system of algebraic equations. So it may look a bit rapid. The thing is that the matrix you want to diagonalize is a huge operator.
The size of the operator is the size of the configuration space, which is two to the power L, typically; more precisely, the binomial L choose N. And the Bethe Ansatz tells you that for each eigenvector you have to look for L fugacities, the z_i's.
So you go from two to the power L down to L variables that satisfy some non-linear algebraic equations. So it's a huge simplification in complexity. Just to give you an example, if you want to simulate a system of size 10, your Markov matrix is 1,024 by 1,024. It's big. You may try to diagonalize it exactly; you may go up to size 20 at most, which is of order a million. But these equations involve L variables,
where L is the size of the system. So you can very well solve them up to a system of size 150. But two to the power 150, you will never be able to diagonalize. So you go from exponential complexity to linear or quadratic complexity, and that's the key to the solvability of the model.
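The counting behind this complexity gap is easy to make explicit (a trivial sketch of the numbers quoted): the configuration space grows exponentially with the system size, while the Bethe equations involve only of order L unknowns.

```python
import math

# State-space sizes versus number of Bethe unknowns.
print(2 ** 10)             # 1024: the Markov matrix dimension at size 10
print(2 ** 20)             # 1048576: of order a million at size 20
print(math.comb(150, 75))  # half filling at size 150: astronomically large...
print(150)                 # ...yet the Bethe equations have only ~150 unknowns
```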
So by solving these equations, and the beautiful thing is that in the case of the exclusion process, these equations, which look not so nice, can even be solved explicitly in certain cases, because the roots of these equations lie on some nice curves, which are called the Cassini ovals and so on.
So you can really extract purely analytically some results, and you can compute the tower of the eigenvalues of your Markov operator, at least the most interesting ones, which are close to the stationary state. So the ones which decay with a longer time,
which decay the slowest. And you can classify all the excitations of this operator, and make a complete spectral analysis of your Markov operator in this case. With the full spectrum, for many interesting initial data (not completely arbitrary ones), you can even decompose them into eigenvectors and do the full time evolution, thanks to this exact diagonalization of your operator. So, in some sense, you can fully solve the problem. This allows you in particular
to compute the relaxation time and to see how it scales with the size of the system. So these exclusion processes are typically non-diffusive, as long as they are not symmetric, and the relaxation time scales like the size of the system to the power three over two, and not L squared,
which would be the case if they were diffusive. And you can even predict some oscillations, some waves. I mean, again, for the physicist, there are plenty of things that you can compute, compare with numerical simulations, and there's a lot of phenomenology underlying this simple model that you can really go for analytically too.
Okay, now I want to tell you how to calculate, and what are the results, for the statistics of the current. So here, we don't have reservoirs, so we cannot calculate how many particles went from left to right. It's not a problem. We can just sit somewhere on the lattice,
and count how many particles jumped from i to i plus one during time t, minus the number of particles that jumped from i plus one to i. So there's a local current, which I call yt. Sorry, again: this was qt this morning, and it is yt this afternoon.
Probably, we'll use another notation even. And we want to know the statistics of this yt. So ideally, we would compute the distribution of yt, but we know how to compute its Laplace transform. We know how to compute the characteristic function, exponential mu yt, in the long time limit.
So what we want to compute is the average of exponential mu yt. If we formally take an expansion with respect to mu, we'll have all the moments of y. And what you can prove rigorously, even for a physicist, is that in the long time limit,
this exponential average behaves like exponential of e of mu times t. So e of mu is nothing but one over t times the log of the average of the exponential. So e of mu is nothing but the cumulant generating function of your random variable yt, okay?
So we know how to compute this e of mu. This is what we want to compute. And how do we compute it? Well, the idea is that, you see, this is a purely probabilistic problem. e of mu is a cumulant generating function of a random variable. We want to compute it. And the beautiful trick,
which goes back to Donsker and Varadhan, is that you can trade this purely probabilistic or statistical question for an eigenvalue problem. And the idea is the following: there exists a deformation of the generator of your dynamics that I will call M of mu. I'm going to explain slightly later what M of mu is. But there's a deformation of your generator such that this function e of mu is the dominating eigenvalue of the operator M of mu. So somehow, the quantity you want to compute
is nothing but an eigenvalue of an operator. So you have traded off a probabilistic problem into an eigenvalue problem. And there are plenty of tricks to compute eigenvalues. So, to be more precise, the way that you deform your generator by this factor mu,
is that you want to count the particles that are hopping between site i and i plus one, and you decide to put an enhancement factor, exponential of mu, on all the jumps that occur from i to i plus one. So I call M plus the part of the generator that makes a particle jump from i to i plus one. And you put a factor exponential of minus mu on the jumps of particles between i plus one and i. So you deform your dynamics by putting these two fugacities, exponential mu and exponential minus mu,
locally where you want to measure the current. And you construct this new operator from the generator itself. And this new operator is such that its dominating eigenvalue is nothing but the cumulant generating function.
So now you want to compute an eigenvalue. And the nice feature is that even after deformation, the new model that you obtain is still integrable by Bethe Ansatz. So you still remain in the class of solvable models. And you can still use this technique of Bethe,
which was invented for spin chains, to solve this operator, which has now less to do with spin chains. But anyway, the trick still works and allows you to compute the full spectrum of this matrix m of mu. But you don't care about the full spectrum. You just want the dominating eigenvalue. So this can be done.
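On a tiny ring, this whole construction can be carried out by brute force, with no Bethe Ansatz needed at that scale (my own numerical sketch, with arbitrary small sizes): deform the generator across one marked bond with fugacities exp(+mu) and exp(-mu), extract the dominating eigenvalue E(mu) by power iteration, and check that its derivative at mu = 0 reproduces the mean current.

```python
import itertools
import math

# Deformed generator M(mu): count jumps across one marked bond with a
# fugacity exp(+-mu). Its largest eigenvalue E(mu) is the cumulant
# generating function of the integrated current yt across that bond.
L, N, x = 5, 2, 0.3
configs = [c for c in itertools.product((0, 1), repeat=L) if sum(c) == N]
index = {c: k for k, c in enumerate(configs)}
n = len(configs)

def deformed_generator(mu):
    M = [[0.0] * n for _ in range(n)]
    for c in configs:
        j = index[c]
        for i in range(L):
            k = (i + 1) % L
            w_fwd = math.exp(mu) if i == 0 else 1.0   # marked bond: 0 -> 1
            w_bwd = math.exp(-mu) if i == 0 else 1.0
            if c[i] == 1 and c[k] == 0:               # forward hop, rate 1
                d = list(c); d[i], d[k] = 0, 1
                M[index[tuple(d)]][j] += 1.0 * w_fwd
                M[j][j] -= 1.0
            if c[i] == 0 and c[k] == 1:               # backward hop, rate x
                d = list(c); d[i], d[k] = 1, 0
                M[index[tuple(d)]][j] += x * w_bwd
                M[j][j] -= x
    return M

def top_eigenvalue(M, shift=10.0, iters=5000):
    """Dominating eigenvalue via power iteration on the shifted
    (non-negative, hence Perron-Frobenius) matrix M + shift*Id."""
    v = [1.0] * n
    lam = shift
    for _ in range(iters):
        w = [shift * v[r] + sum(M[r][s] * v[s] for s in range(n))
             for r in range(n)]
        lam = max(abs(t) for t in w)
        v = [t / lam for t in w]
    return lam - shift

mu = 1e-4
e_plus = top_eigenvalue(deformed_generator(mu))
e_minus = top_eigenvalue(deformed_generator(-mu))
J_numeric = (e_plus - e_minus) / (2 * mu)          # E'(0) = mean current
J_exact = (1 - x) * N * (L - N) / (L * (L - 1))    # from the flat measure
print(J_numeric, J_exact)
```

With the flat stationary measure on the ring, the exact current per bond is (1 - x) N (L - N) / (L (L - 1)), here 0.21, and the numerical derivative of E(mu) at zero reproduces it.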
And I'll just tell you, in a very sketchy way, what the solution looks like. So we want to compute this function e of mu. But, and this is usually the case in all these kinds of problems, we never get e of mu directly. We get a parametric representation.
We get e as a function of a parameter b, mu as a function of a parameter b. And somehow, at least formally, one has to eliminate b between the two equations. So we get mu as a function of b, as a series in b, and e as a series in b. Okay, so this series contains
terms b to the k over k, with some coefficients, ck and dk. These two families of coefficients, indexed by k, are combinatorial numbers. They have some combinatorial interpretations in terms of trees and forests and things like that,
but it's not so important here. And the thing is that we can compute them as residues. So there exists a function phi k such that, if you take a small contour that encircles zero, ck is a residue of phi k, and dk is a residue of phi prime k at minus one.
So if you know phi k... yes, I learnt complex analysis from you a long time ago, so it is still useful for me; I hope I didn't say anything wrong. Okay, so you can compute c k and d k
from the function phi k. So if I tell you what phi k is, you know how to compute c k and d k; the information is in fact contained in phi k. We can wrap all the information together using a generating function, and say that the full information of the phi k
is in fact embodied in this function W of B. So the object which is important for computing the cumulant generating function is this function W of B. Now I will tell you how to compute this function W of B; then you can just unfold everything
and get the cumulant generating function. Well, this function W of B is the solution of a very nice equation. Again, just look at the structure, not the details. W of B is the solution of a self-consistent equation which contains a log, B itself,
a linear operator with a kernel, and a prefactor which is a simple rational function. So this is the general structure: W of B is the solution of a self-consistent equation with a kernel.
In fact, this kernel is not an arbitrary kernel. It appears in a lot of combinatorial work, especially in the calculation of partitions by Andrews and Ramanujan. So these are typical objects that you see in combinatorics. And the prefactor is a simple rational function.
I won't tell you how to solve this equation, but it can be done. And if you do it explicitly, you can, in some simple cases (when backward jumps are forbidden), obtain explicit formulas. That is important: up to now you could have thought that this was just hand-waving and abstraction,
but in the end you get explicit results. And you see that in this simple case the complicated-looking coefficients c k and d k are nothing but binomial coefficients, so it's not a big deal. Now, using these functions and eliminating B between them,
you can compute E as a function of mu and obtain the average value of the current, its variance, and so on. You can even reconstruct the full large deviation function of the current.
And as Thierry drew this morning, the large deviation function of the current typically has a kind of well shape, but it has some nice physical features: it is asymmetric. The important features for physicists are, first, the way it vanishes around zero,
quadratically or not. Here it is quadratic, and this allows you to compute the fluctuations, the variance of the current. The other two important features that we look for in physics are the tails, the left and the right tail,
that is, the asymptotic behaviour of this large deviation function. And as you see, it is highly asymmetric. It's not so easy to predict the five-halves and the three-halves exponents, but at least one thing is clear: it grows much faster to the right than to the left. And this tells you, in fact,
that in these simple systems it is much easier to reduce the total current than to increase it. The reason is simple. If you want to reduce the current, this is trivial: you just need one lazy particle. If one particle suddenly decides not to jump any more, it is going to block everybody else,
and the current will drop. So just one particle can prevent everyone from progressing. But if you want to increase the current, then all the particles have to be very active and start jumping very fast, and this is much less likely. And this is, at least qualitatively, the reason for the very strong asymmetry between the left and the right tails of the distribution.
Another calculation which can be done explicitly is the weakly asymmetric limit. Weakly asymmetric means that the two jump rates are almost equal, and the difference between the left and right rates
is of order one over L: the rates are one and one minus nu over L. So it is almost symmetric. In the weakly asymmetric case it is possible to solve the equation, resum the series, and draw pictures.
So these are pictures of the large deviation function for different values of the asymmetry. And what you see is that if the asymmetry is not too big, the curve is smooth. But when the asymmetry goes beyond a critical value, which is eight pi here,
you have a kink that appears in your large deviation function. And the fact that there is a kink here is reminiscent of a phase transition. Remember the Ising case this morning; Thierry drew, what did you draw?
F, the free energy, as a function of H. And at low temperature there was a kink, while at high temperature it was smooth. So this is the perfect analogue here: a kink appears, not for large enough temperature but for large enough asymmetry, and the system undergoes a phase transition.
So this is one more piece of evidence (not proof; belief, or faith rather) that large deviation functions play a role analogous to thermodynamic
potentials out of equilibrium. At equilibrium, thermodynamic potentials have breaks of analyticity at phase transitions. And here, large deviation functions have an analyticity problem, a kink, at phase transitions. So here, the phase transition that occurs
is again very simple to explain in physical terms. For small asymmetries, close to the symmetric case, the optimal profile is flat. But when the asymmetry becomes strong enough and you want to produce an atypical current,
your density profile has to become non-flat. So you have a travelling wave, a kind of soliton, which develops in your model and turns around your system. So again, qualitatively it is easy to understand; doing the precise calculation is less easy.
And by the way, for this calculation, which was done by Bethe ansatz, there was a prediction using macroscopic fluctuation theory, by Thierry and by Bernard Derrida, that the kink of the large deviation function should exist, at least from the macroscopic point of view.
And here the microscopic calculation matches this prediction perfectly. Okay, so let's go now to the second part. How much time do we still have? Twenty minutes, okay. So the second part is the open exclusion process,
a system which is closer to reality, quote unquote. Here you see the problem is that even the stationary measure is not trivial. The system has two to the power L configurations, and we have no Gibbs measure underlying it. And if we just take this very general system,
we do not know, in the stationary state, the likelihood of seeing, for example, this precise configuration (zero one, zero one one, and so on). So already the invariant measure is a difficult problem, which was solved more than twenty years ago
by Derrida, Evans, Hakim, and Pasquier. And they had this very nice idea of using, again, a quantum mechanical point of view, although the system is purely classical. The idea is the following; let me just come back to the picture. So a configuration here is a string of zeros and ones.
And so you can represent it as a binary word. The idea is that there exist, in fact, two operators; let's call them D and E. And instead of writing a binary string, I will write this configuration as a word in these two letters, D and E:
E for empty, D for occupied. For example, this configuration will be just E, D, E, D squared, E, D, E, D, E, D, okay? So this is a configuration written in a two-letter alphabet. And their idea was that if D and E
are well-chosen operators, satisfying a well-chosen algebra, and if we take a trace over this algebra, then the stationary weights will be given by this trace. Okay, it looks strange, but it works. And don't forget one thing: this is a finite-size Markov process.
Perron-Frobenius tells us that there is a unique stationary state, so any trick to compute it is fair game, okay? So don't ask why it works; it works. So the idea is to choose these two operators, D and E,
satisfying a simple quadratic algebra, okay? And the trace is in fact a matrix element: you have a covector on the left and a vector on the right, a bra on the left and a ket on the right, as Dirac would say. And the bra and the ket are eigenvectors
of linear combinations of these two operators D and E. So again, you just choose this algebra, compute the bracket of any word E, D, D squared, and so on, using this algebra, and this spits out the steady state of your Markov process. And it works.
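To make the trick concrete, here is a minimal sketch in Python of the quadratic algebra for the open TASEP: the DEHP relations DE = D + E, with the bra an eigenvector of E (eigenvalue one over alpha) and the ket an eigenvector of D (eigenvalue one over beta). The relations are the published ones of Derrida, Evans, Hakim, and Pasquier, but the code itself is just my own illustration: every word in D and E reduces to normal-ordered words E^a D^b, whose bracket is (1/alpha)^a (1/beta)^b.

```python
from fractions import Fraction

# DEHP algebra for the open TASEP (Derrida, Evans, Hakim, Pasquier):
#   D E = D + E,   <W| E = (1/alpha) <W|,   D |V> = (1/beta) |V>

def weight(word, alpha=1, beta=1):
    """Unnormalized stationary weight <W| word |V> of a configuration,
    written as a string of 'D' (occupied) and 'E' (empty)."""
    i = word.find('DE')
    if i < 0:  # normal-ordered: all E's before all D's
        return (Fraction(1, alpha)**word.count('E')
                * Fraction(1, beta)**word.count('D'))
    # apply the reduction rule DE -> D + E and recurse
    return (weight(word[:i] + 'D' + word[i + 2:], alpha, beta)
            + weight(word[:i] + 'E' + word[i + 2:], alpha, beta))

def stationary(L, alpha=1, beta=1):
    """Stationary distribution of the open TASEP with L sites."""
    configs = [format(n, '0%db' % L) for n in range(2**L)]
    w = [weight(c.replace('1', 'D').replace('0', 'E'), alpha, beta)
         for c in configs]
    Z = sum(w)
    return {c: wi / Z for c, wi in zip(configs, w)}

print(stationary(2))  # weights 1, 1, 2, 1, so Z_2 = 5
```

For two sites with alpha = beta = 1 the weights come out as 1, 1, 2, 1, so the normalization is Z_2 = 5; the successive Z_L are Catalan numbers (2, 5, 14, ...).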
So this can be made more explicit, and you can use it to compute the phase diagram of your model, the density profiles, the correlations, anything in the stationary state. So in one word, this algebra plays the role
of the Gibbs measure. This is what replaces the Gibbs measure for this kind of model, and many others. It has been used in many variants. It is not totally universal, but there seem to be many one-dimensional non-equilibrium models where the algebra trick works well.
For the connoisseurs, this algebra is related to the algebraic Bethe ansatz, so it has to do with integrability. Originally it was guessed out of the blue, but after twenty years people understand much better how to construct it using integrability. But this is worth a full seminar.
You can find representations of the algebra; if you want to know more, infinitely more, I recommend reading the review by Blythe and Evans on the matrix-product ansatz. So now let me recall the analogous calculation in the equilibrium, stationary case.
The thermodynamic free energy tells you about the fluctuations of the density profile. Well, that is a calculation based on partition functions and on the Gibbs measure. One can redo this feat using the algebra, because now we have an analogue of the Gibbs measure
for this exclusion process. It is a very hard calculation, but at least for this system out of equilibrium, using the matrices, one can compute the probability of seeing any density profile between two reservoirs at densities rho a and rho b. And this was done by Derrida, Lebowitz, and Speer
twelve years ago. Just to show you a bit what it looks like: if we were at equilibrium, we would just have (1-x) log(1-x) plus x log x, which is nothing but Stirling's formula, basic equilibrium statistical mechanics.
Well, this very simple formula is replaced by something much more complicated, which is non-local, and which involves the solution of a non-linear ODE with boundary conditions. And indeed, this basic function,
log(1-x), appears there, but in a very, very indirect way. So the solution in the non-equilibrium case is non-local and much, much more complicated than in the equilibrium case. And one important feature: you could think
that you can just take the equilibrium result and replace the density by the local density. This is wrong, just completely wrong, okay? There is no way of extrapolating the equilibrium formula into the non-equilibrium one.
Okay, so that was the first question, the density profile. The second question is about the current; I'm going to rush through it, as it has a structure very similar to the previous one. We want to compute how many particles went from the left reservoir to the right reservoir during time t. It was called y of t before;
now I call it N of t, and it was called Q of t this morning: the total number of particles that went from left to right during time t. So again, what we can compute is the exponential generating function, and this exponential generating function
brings in the cumulant generating function, which is the dominant eigenvalue of a deformed operator. Each time a particle enters the system, you put a factor exponential mu; each time a particle leaves the system, you put an exponential minus mu. So again, you have traded your statistical problem for an eigenvalue problem.
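As a toy illustration of this deformation trick, here is a minimal numerical sketch (my own construction, not from the talk): the open TASEP with L = 2 sites and alpha = beta = 1, where injection events from the left reservoir carry a factor exponential mu. The dominant eigenvalue of the deformed matrix gives the cumulant generating function, and its derivative at mu = 0 is the mean current, which for these parameters is 2/5.

```python
import numpy as np

def deformed_generator(mu, alpha=1.0, beta=1.0):
    """Deformed Markov matrix of the open TASEP with L = 2 sites;
    injection events from the left reservoir carry a factor e^mu.
    States ordered 00, 01, 10, 11; columns are the 'from' states."""
    e = np.exp(mu)
    return np.array([
        [-alpha,    beta,            0.0,  0.0],
        [0.0,       -(alpha + beta), 1.0,  0.0],
        [alpha * e, 0.0,             -1.0, beta],
        [0.0,       alpha * e,       0.0,  -beta],
    ])

def cgf(mu):
    """Cumulant generating function: dominant eigenvalue of M(mu)."""
    return np.linalg.eigvals(deformed_generator(mu)).real.max()

h = 1e-6
mean_current = (cgf(h) - cgf(-h)) / (2 * h)  # derivative at mu = 0
print(cgf(0.0), mean_current)  # ~0 and ~0.4 (= Z_1/Z_2 = 2/5)
```

At mu = 0 the matrix is a true Markov generator, so its dominant eigenvalue vanishes; the numerical derivative reproduces the stationary current obtained from the matrix ansatz.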
And this deformed matrix can be diagonalized using a generalized matrix ansatz. The structure of the solution is very similar: again, parametric functions
with combinatorial coefficients, but which now depend on the boundary rates and on the asymmetry parameter q. Again, these coefficients are residues. The contour is much more complicated, but they are, again, residues. So again, everything is in a function phi k that you can put together into a function W of B.
And as before, W of B is the solution of a self-consistent equation which looks exactly like the one before, with the same kernel. But, I don't know if you remember, the simple rational function, which was one plus z to the power N divided by z to the power L, or the opposite;
well, now it is replaced by a much, much more complicated object. But if there are some connoisseurs in the room: this complicated object is, in fact, again something which appears in the calculation of partition functions. It is called the Askey-Wilson generating function
for these partition problems. So it is not an unknown object; it is a kind of natural object that appears in this game. And this allows you to compute, for finite-size systems, the generating function of the cumulants,
and to go to the large deviation function, by Legendre transform, in the large-system limit. So you see, all this horrible calculation to get a rather simple formula in the end. But you have all the finite-size corrections, and there is a phase diagram; you can study much more than just the limit of infinite systems.
But this infinite-size limit was again obtained by Bodineau and Derrida using macroscopic fluctuation theory. So on one hand you have the exact solution, using combinatorics and integrability, and you can take the infinite-size limit and match it to the variational technique
of Jona-Lasinio and collaborators, solving equations of Euler-Lagrange type to get this formula. So things match well. And these kinds of calculations were important when people were not completely sure about the relevance and the correctness
of these variational answers. And in some special cases, just to flash through rapidly, everything can be made fully explicit. So again, it is not purely abstract: you can get numbers. In particular, you can compute the skewness,
the third cumulant of the current. This is an exact combinatorial formula valid for any system size. And if you go to the infinite-size limit, you see that the skewness goes to a finite number, which means that even in the infinite-size limit the current has non-Gaussian fluctuations, because the third cumulant is non-zero.
Okay, it's small, less than a percent, but it is non-zero, so the fluctuations are really non-Gaussian. During the last seven or eight minutes, I want to tell you about the infinite-line case. And this infinite line involves a totally different type of mathematics,
but it is again based on integrability, which is the leitmotif of this talk, and on Bethe ansatz. So now we consider a finite number of particles on the infinite line, hopping with rates p and q, with exclusion.
Well, the basic quantity to compute will be the probability of finding the particles at y1, y2, ..., yn, knowing that they were at x1, x2, ..., xn at time t equal to zero. This is the Green function of the problem, the propagator.
And thanks to Bethe ansatz, that is, by using linear combinations of plane waves, there exists an exact formula for this propagator. Of course, I went very fast, but here you recognize the fugacities and a sum over permutations,
which is typical of the Bethe ansatz. And if you have a very good memory, and you have memorized the Bethe ansatz equations that I wrote twenty transparencies ago, maybe fifty transparencies ago, they were of this form. So this is, in fact, very closely related to the Bethe ansatz on a periodic ring,
but here on the open, infinite system. This formula was initiated by Gunter Schütz, and then really developed by Tracy and Widom in the last four or five years; they wrote a series of ten papers around it. So this is an exact formula. The problem is to be able to do something with it; it looks horrible.
I mean, it is a sum over all permutations, n factorial of them, where n is the number of particles. It is a big formula. But there is some combinatorics hidden in it, and you can reduce it, at least in some cases, to a much nicer-looking formula. So here comes the importance of the initial condition, because the initial condition appears here
as an ingredient in your Green function. And if you start with the simple initial condition that all particles are lined up on the negative side at t equal to zero, at sites zero, minus one, and so on, and we take the special case
where particles can only jump to the right with rate one, no backward jumps, then you can take the previous formula and massage it into a nice result. So we want to compute the total current, the total number of particles that flowed through the bond between sites zero and one. Let's call it Q of t, at last:
Q of t, the same Q of t as this morning. And we want to compute the probability that Q of t is larger than an integer m; that is, the probability of having more than m particles on the right side of your picture after time t. Well, this is in fact the probability
that the m-th particle has jumped through the zero-one bond; the same thing, because of exclusion. Well, starting from this formula and using a quite elementary, not too difficult, two-page calculation with determinants, you can show that this probability
is given by this integral. So this integral involves the square of a Vandermonde determinant, some exponentials, and an integration over the cube, zero to t to the power n. So it is very simple and compact. There was a sum over factorially many terms before,
and now it is just an n-fold, very compact integral. So if you have seen some talks on random matrix theory, this should really strike you, because this kind of integral, Selberg integrals and related ones, appears all the time in random matrix theory.
And indeed, this integral has a very precise interpretation in a random matrix ensemble: it is the distribution of the largest eigenvalue in the Laguerre ensemble. I'm not going into the details, but this is an ensemble in the same family as the GUE,
and so on and so on. And then it is possible to use all the knowledge developed in the last twenty years on the distribution of largest eigenvalues. In particular, Johansson solved this TASEP case in 2000, and he was able to show that Q of t
behaves like t over four plus t to the power one-third times a random variable. And this random variable is precisely distributed like the largest eigenvalue of a random matrix ensemble: it follows the so-called Tracy-Widom distribution.
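Since the Tracy-Widom distribution may be unfamiliar, here is a quick way to see it numerically (my own illustration, not part of the talk): sample GUE matrices and look at the centred, rescaled largest eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(0)
N, samples = 100, 500

scaled = []
for _ in range(samples):
    # GUE matrix: Hermitian, with Gaussian entries
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    H = (A + A.conj().T) / 2
    lam_max = np.linalg.eigvalsh(H).max()
    # spectral edge at 2*sqrt(N), fluctuations of order N**(-1/6)
    scaled.append((lam_max - 2 * np.sqrt(N)) * N**(1 / 6))

print(np.mean(scaled))  # near the GUE Tracy-Widom mean, about -1.77
```

With this normalization the semicircle edge sits at twice the square root of N, and the rescaled fluctuations follow the GUE Tracy-Widom law; the empirical mean should come out near its known value of about minus 1.77, up to finite-size effects.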
So that's the connection between the exclusion process and random matrices, through this Bethe ansatz formula that you can transform into a random matrix integral. It's one way of seeing this connection. So this is how the Tracy-Widom distributions of dominant eigenvalues
of random matrices enter the game. So that's one interesting feature. There is another nice relation, between TASEP and corner growth. As I told you, the exclusion process is related to the Kardar-Parisi-Zhang equation.
And a configuration of particles can be drawn as a partition, or as a one-dimensional interface: each particle corresponds to a slope minus one, each hole to a slope plus one. So this configuration of particles is nothing but this interface.
If you look more precisely at what is happening, drawing all the squares, you will see that the position of the rightmost particle corresponds to the length of the first line of this Young diagram, because you can interpret this part of the picture as a Young diagram.
So a configuration of particles in the exclusion process is an interface. If you fill in the missing squares, you get a Young diagram, and you can interpret the position of the first particle as the length of the first line of the Young diagram, the position of the second particle as the length of the second line, and so on.
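The particle-to-interface mapping is easy to code; here is a sketch where a particle is a down step and a hole an up step (the overall sign convention is a choice):

```python
def interface(config):
    """Height profile of the interface associated with a 0/1 particle
    configuration: a particle (1) is a down step, a hole (0) an up step."""
    h, heights = 0, [0]
    for tau in config:
        h += 1 - 2 * tau
        heights.append(h)
    return heights

# the step initial condition 1 1 1 0 0 0 gives the wedge
print(interface([1, 1, 1, 0, 0, 0]))  # [0, -1, -2, -3, -2, -1, 0]
```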
So there is a perfect mapping between the two. But the statistics of Young tableaux is an old subject, which has been studied a lot. And it is known that the first line of a Young tableau is related to the length of the largest increasing subsequence
in a randomly chosen permutation. What do I mean by that? Take the numbers from 1 to 7, take a random permutation of them, this one, and suppose I want to extract an increasing subsequence. For example, 1, 3, 4, 6 is an increasing subsequence.
Of course, 1, 6 or 1, 7 is also an increasing subsequence. And I'm interested in the largest one I can extract from a randomly chosen permutation. There can be several of maximal length, but let's call L of sigma the length of the largest
increasing subsequence in a given permutation. Well, this length is nothing but the length of the first line of a randomly chosen Young tableau; these two things are the same. And Ulam asked about the statistics of this L of sigma,
the length of the largest increasing subsequence. So now we are convinced that everything in the world is embodied in TASEP, the exclusion process. And indeed, by doing these mappings backwards, you can relate this to the position of the rightmost particle, or to the current in the exclusion process. And indeed,
the statistics of the length of the largest increasing subsequence in a random permutation, through these mappings, grows like twice the square root of the total number of elements (n, which was 7 before), plus n to the power
one-sixth, which is an avatar of the one-third from before, just by rescaling, times a Tracy-Widom variable. So again: a random permutation corresponds, by RSK, the Robinson-Schensted-Knuth correspondence, to a pair of Young tableaux; a Young tableau is a corner growth model; and the corner growth model is the exclusion process.
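The length of the longest increasing subsequence can be computed in O(n log n) by the classic patience-sorting algorithm, which is a convenient way to play with Ulam's problem numerically:

```python
import bisect
import random

def lis_length(seq):
    """Length of the longest strictly increasing subsequence in
    O(n log n): tails[k] is the smallest possible last element of an
    increasing subsequence of length k + 1 (patience sorting)."""
    tails = []
    for x in seq:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

print(lis_length([2, 7, 1, 3, 5, 4, 6]))  # 4, e.g. 1, 3, 4, 6

# Ulam's problem: for a random permutation of n elements the length
# is about 2*sqrt(n), with fluctuations of order n**(1/6)
random.seed(0)
n = 2500
perm = random.sample(range(n), n)
print(lis_length(perm), 2 * n**0.5)
```

For a random permutation of n elements the result hovers around twice the square root of n, with Tracy-Widom fluctuations of order n to the one-sixth.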
And you know plenty of things about the exclusion process because it is solvable by Bethe ansatz. So that's how integrability can be used in a very indirect way. Now, the last transparency but one, just to remind you that this is supposed to be a physics talk: there are some
experimental results. The exclusion process is a discretization of the Kardar-Parisi-Zhang equation, which describes the growth dynamics of interfaces. Using similar but much more elaborate technologies, a few groups, five or six years ago,
Sasamoto and Spohn; Amir, Corwin, and Quastel; and, in Paris, Dotsenko, Le Doussal, and Calabrese, were able to solve this one-dimensional Kardar-Parisi-Zhang equation. By solving, I mean
that the statistics of the height above a given point of this random interface was fully understood and investigated: not only the average or the variance, but the full distribution.
And of course, this is related to the Tracy-Widom law, which appears in all this game. Okay. The Tracy-Widom law, and I didn't go into the details, is something quite abstract: it involves Painlevé transcendents, non-linear equations, and so on. And it is in fact quite
complicated and not so easy to evaluate, even numerically. Well, now it has been implemented in Mathematica. That is a claim to glory for Tracy and Widom: they are a function in Mathematica. You just type it and draw it. But the beautiful thing is that a group in Japan,
the group of Takeuchi and Sano, conducted real experiments in liquid crystals. There are many phases in liquid crystals, and in one of these phases you have two types of local order, and one is growing into the other.
Just think of a piece of ice growing in water; it is not the same thing, but one phase of the liquid crystal, which appears darker on camera, grows into another one. And they were able to monitor in real time the growth of this particular phase, and to investigate very precisely
the statistical properties of the interface, after subtracting the average. And they were able to obtain histograms for the distribution of this interface, and they showed that it indeed coincides with the Tracy-Widom distribution.
There is even something more elaborate. There are different Tracy-Widom distributions, corresponding to different ensembles, GUE and GOE, and these different ensembles correspond to different initial conditions. I told you only about the initial condition where everybody is on the left and nobody on the right.
Translated into the language of liquid crystals or interfaces, the different initial conditions correspond to growth from a circular interface or from a flat interface. This gives you two types of Tracy-Widom laws. They were able to conduct experiments in both cases and to show that the histograms indeed correspond respectively
to the GUE and GOE Tracy-Widom distributions in each case. So these are very, very precise, landmark experiments. Okay, so I hope I have convinced you, though Thierry already did it this morning,
that the exclusion process is the alpha and the omega of human knowledge. So are large deviation functions; at least, they seem to be important for non-equilibrium statistical mechanics. I didn't tell you about the Gallavotti-Cohen symmetry that Thierry alluded to this morning,
but this is another important feature that you can check in these models. A nice feature of the exclusion process, at least in the mathematical world, is that it is related to growth models. And whoever says growth models also says Young tableaux,
corner growth, and ultimately random matrix theory. So it is a kind of central point where plenty of different theories converge. But this is not at all the end of the story.
It is in fact only the tip of the iceberg. There is a whole field which has been developed recently, the field of integrable probability, in particular by Borodin, Gorin, Corwin, Sasamoto, and many other people. And the exclusion process is one special case of a whole class of integrable stochastic models,
known as Macdonald processes, or even some higher-spin vertex models. And in all these models, which are strongly non-Gaussian, it is possible, thanks to integrability and to the Bethe ansatz, to get explicit formulas for the probabilities of some observables, to analyze their asymptotics,
and to derive new universal laws, such as the Tracy-Widom distribution, which are believed to play a role akin to that of the Gaussian law at equilibrium. Thank you.
Any questions? You said at one point that you can map a given configuration of empty and occupied sites to a string of two operators, and then you choose an algebra to solve.
How do you choose the algebra? Are there some constraints? That is part of the secret; ask Bernard Derrida. As I told you, the truth is that at the beginning they guessed it. Really, they just guessed it. Then other people started guessing in other models,
and the body of knowledge on these models grew. So there were more and more algebras floating around, and people tried things, and so on. More recently, there is a series of works where you try to construct the algebra using a variant,
a chapter of the Bethe ansatz which is called the algebraic Bethe ansatz, in which the integrability technique naturally makes some operators appear
that satisfy algebraic relations. This has been standard since the eighties: the algebraic Bethe ansatz was developed by the Russian school, Faddeev and other people, in the seventies and eighties. There are books; it is a well-known body of knowledge.
And there is an indirect way, I would say not yet fully understood, though there has been a lot of progress recently, of extracting these quadratic algebras starting from the constructive method of the algebraic Bethe ansatz. So you have to learn the algebraic Bethe ansatz first, and then try to use it to extract these algebras.
Or you can try to guess it.