Universality for mathematical and physical systems
Formal Metadata

Title: Universality for mathematical and physical systems
Number of Parts: 33
License: CC Attribution 3.0 Germany: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/15948 (DOI)
Transcript: English(auto-generated)
00:05
I think we should start. So the organizers of the Congress asked me to remind you that there is an electronic version of the program, which is updated continuously.
00:21
Now, in particular, there is a schedule for the Fields Medalists' prize talks and the Nevanlinna Prize talk. Now, also, they mentioned that some
00:40
of the short communication talks are also updated there. And the printed version of the program will appear probably after this lecture, somewhere around here. So you can find it, even the printed version. But please look at the electronic version. Now, I'm very honored to announce the next talk
01:03
of a professor of the Courant Institute of Mathematical Sciences, Percy Deift. Percy is an amazing mathematician. He keeps getting surprising results and very diverse
01:22
results in mathematics. Now, he received an MS as a chemical engineer. And then, very soon, he realized that it's not his cup of tea. So he decided to switch to mathematics and started working in mathematics and mathematical physics.
01:42
And his thesis was devoted to scattering theory. Then he switched to the study of asymptotic solutions of nonlinear equations. He studied inverse problems. He then made links with combinatorics
02:03
and random matrices. And then, lately, he studied problems of universality. Now, this is something which is behind many of his results. And in particular, there is a method
02:21
coming from Riemann-Hilbert problem, which he applies in an amazing and very diverse way in his studies. Now, Percy is a very warm personality
02:42
and very excited about mathematics in general. And this excitement attracts a lot of young people. And he has many, many students who became, eventually, his colleagues. And what else? What more can you desire from this kind of thing?
03:02
Now, today, the title of his talk is Universality for Mathematical and Physical Systems. You are very much welcome.
03:24
OK, thank you, Harry. First of all, I would like to thank the program committee for the opportunity to speak here. I'm very appreciative. And also, I'd like to compliment the Spanish Mathematical Society for the wonderful job
03:41
they have done in organizing this congress. So the title of my talk is Universality
04:00
for Mathematical and Physical Systems. So I'll try and get this in order. Here is the outline of my talk.
04:22
So first of all, I'm going to be giving a very general description of universality, some ideas there from physics mainly. And then I want to propose or speak about a mathematical model.
04:40
And the particular model which I want to focus on is random matrix theory. After that, I want to speak about some physical and mathematical systems, which will illustrate the ideas behind this talk. Then I want to show how to relate the problems which occur in part C, in particular, to part B,
05:06
which is random matrix theory. Then I want to say a little bit about what the mathematical methods are behind these results and how they relate to possible future developments.
05:32
So to start off, all physical systems in equilibrium
05:40
are believed or do obey the laws of thermodynamics. And the first law of thermodynamics, everybody knows, is the conservation of energy. The second law has many different formulations. And the one I want to mention here works in the following way.
06:01
Suppose that we have a heat reservoir at some temperature T1. And suppose that we have a heat sink at a lower temperature T2. And we have some heat engine here in the middle.
06:20
And you take an amount of heat, Q1, from the reservoir. You exhaust an amount of heat, Q2, into the sink. And the amount of work which is done by the heat engine is Q1 minus Q2.
06:45
Now, what we're interested in is the efficiency of the conversion of heat into work. So the efficiency is given by W over Q1, which is Q1 minus Q2, divided by Q1. Now, the second law makes a statement
07:01
about the maximum possible value of this efficiency. So the maximum efficiency, which you could obtain presumably by doing the process very slowly, there's no friction, issues like that, the maximum efficiency is given by T1 minus T2 upon T1.
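Stated as a tiny code sketch (temperatures on an absolute scale; the function name is my own):

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum efficiency (T1 - T2) / T1 of a heat engine operating
    between absolute temperatures t_hot = T1 and t_cold = T2."""
    if t_cold < 0 or t_cold > t_hot:
        raise ValueError("need 0 <= t_cold <= t_hot")
    return (t_hot - t_cold) / t_hot
```

For example, a reservoir at 400 K and a sink at 300 K give a maximum efficiency of one quarter, no matter what the engine is made of.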
07:22
And nature is so set up that you just can't do any better than this. Now, that's on the one hand. And on the other hand, there is a very old idea going back to the Greeks that matter is made up out of constituent elements.
07:43
We call them atoms. And each of these atoms has its own different set of laws of interaction. So it's the juxtaposition of these two points of view,
08:03
the macroscopic world, say, of this table here, and the microscopic world, which we imagine lies underneath it, that presents this long ongoing challenge which involves so many people to try to understand the emergence of a macroscopic world
08:22
out of this microscopic world. So how does one derive these macroscopic laws? Remembering that each of the different constituent elements may have different microscopic laws of interaction. So the salient feature of this challenge
08:43
to deduce the macroscopic world from the microscopic world is that exactly the same laws of thermodynamics emerge independent of the detailed atomic interaction. The same laws emerge.
09:04
Now, in the world of physics, this is known broadly speaking as universality, although there is some caveat here because physicists often mean by universality some statements about different critical phenomena, scaling laws, but nevertheless,
09:22
I think this is a good way to describe things. Now, of course, let me just say along the way that there are certain sub-universality classes, which I'll mention again later. For example, liquids like water and vinegar, you expect them to obey the Navier-Stokes equation,
09:41
but if you're looking at some heavy oils, you'd expect them not to do that. There'll be some other laws like various lubrication equations. So there are subclasses which satisfy what we could call sub-universality laws.
10:09
Now, until recently, these ways of thinking, common among physicists, have not been common amongst mathematicians.
10:20
Mathematicians tend to think of problems as being different until proved equal. Mathematicians think of each problem as sui generis, on its own, unless you can prove some explicit or implicit isomorphism between the two kinds of problems.
10:44
The idea that broad classes of problems on some scale should look the same without producing some explicit mechanism or isomorphism between them has not been a common idea within mathematics. Nevertheless, what I want to speak about today
11:03
and report today is that this type of universality, some sort of emergence of a macroscopic mathematics, for want of a word, seems to be becoming more common. And I want to illustrate this with a variety of examples which I will get to in a moment.
11:21
There are mathematical precedents, of course, for what I am speaking about. We all know the central limit theorem going back to the 18th century. We take variables xi, which are independent and identically distributed, mean zero, variance one. We add them up, x1 up to xn.
11:41
We scale them by root n. We ask what the probability is that the scaled sum is less than or equal to t, and that will converge to the normal distribution. So one sees here that each of these variables xi could be completely unrelated to each other. X1 could be the temperature in Madrid.
12:01
X2 could be the temperature, say, in Barcelona. X3, the pressure in Milan, and so on. But they have no physical relationship. There's no mechanistic relationship. Nevertheless, this broad theorem makes an assertion of universality for these systems.
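As a quick numerical illustration of the theorem just stated, here is a sketch in Python (the function names are my own, and uniform variables scaled to mean zero and variance one stand in for the arbitrary distribution):

```python
import math
import random

def clt_probability(t, n=100, trials=5000, seed=0):
    """Monte Carlo estimate of P((x_1 + ... + x_n) / sqrt(n) <= t) for
    iid variables with mean zero and variance one -- here uniforms on
    (-sqrt(3), sqrt(3)), which have variance exactly one."""
    rng = random.Random(seed)
    h = math.sqrt(3.0)
    hits = 0
    for _ in range(trials):
        s = sum(rng.uniform(-h, h) for _ in range(n))
        if s / math.sqrt(n) <= t:
            hits += 1
    return hits / trials

def normal_cdf(t):
    """CDF of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
```

Regardless of the underlying distribution, clt_probability(1.0) lands near normal_cdf(1.0), which is about 0.841, exactly as the theorem asserts.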
12:24
Now, of course, within probability theory, this is just the first amongst many such universality results. So that is the context in which I'm now going to present the rest of the talk.
12:42
So the question is whether these kinds of phenomena, which are well known within physics, and if you think about it for a moment, if there were not these universality laws within physics, there really couldn't be any physical laws at all. So let me begin now with this mathematical model
13:04
of random matrix theory. Now, at this point, there are many, many different random matrix models which are of interest. Of course, a random matrix is just an n by n matrix, and the entries have some randomness attached to them.
13:23
There are different models which you can place on them. And we will be interested primarily here in this talk just in two different ensembles. So the first ensemble is the Gaussian unitary ensemble, which is GUE. Now, the elements here in the ensemble
13:41
are the n by n Hermitian matrices, M equals M star, with entries M k j, and the probability distribution you put on these matrices is just some kind of renormalized Lebesgue measure. So dM is Lebesgue measure on the diagonal entries,
14:04
Lebesgue measure on the real part of the off-diagonal elements in the upper part of the matrix, and Lebesgue measure on the imaginary parts of those elements. E to the minus trace M squared
14:20
is just a way of normalizing the Lebesgue distribution, and one upon zn is just a normalization constant.
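In code, one can sample from this ensemble directly. A minimal sketch, assuming the convention that the density is proportional to e to the minus trace M squared, so that diagonal entries are N(0, 1/2) and the real and imaginary parts of each off-diagonal entry are N(0, 1/4):

```python
import numpy as np

def sample_gue(n, seed=None):
    """Sample an n x n GUE matrix with density proportional to
    exp(-tr M^2) dM: diagonal entries are real N(0, 1/2); real and
    imaginary parts of each off-diagonal entry are N(0, 1/4)."""
    rng = np.random.default_rng(seed)
    m = np.zeros((n, n), dtype=complex)
    m[np.diag_indices(n)] = rng.normal(0.0, np.sqrt(0.5), n)
    i, j = np.triu_indices(n, k=1)
    re = rng.normal(0.0, 0.5, i.size)
    im = rng.normal(0.0, 0.5, i.size)
    m[i, j] = re + 1j * im
    m[j, i] = re - 1j * im   # Hermitian symmetry M = M*
    return m

def eigenvalues(m):
    """Real eigenvalues of a Hermitian matrix, sorted decreasingly
    (lambda_1 >= lambda_2 >= ... >= lambda_n, as in the talk)."""
    return np.linalg.eigvalsh(m)[::-1]
```

Conventions for the variance differ between texts; rescaling tr M squared only rescales the eigenvalues, and the statistics discussed below are unaffected.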
14:43
Now, as it were, a little bit to get the ball rolling here is that if we replace trace of m squared by trace of v of m, for example, v of m could be m to the fourth. We could replace trace of m squared with trace of m to the fourth. Then you get a general example of a unitary ensemble,
15:07
and there is, sitting in the whole structure, a universality within this choice. In other words, what is true, irrespective of which v you choose, the statistical properties of the matrices
15:22
are going to be independent of that choice. That is a theorem, and it's sort of a sub-universality result moving along. So, the unitary part, if people think about it, refers to the fact that such a distribution is not on unitary matrices. It's a distribution on Hermitian matrices,
15:42
but the distributions are invariant under unitary conjugation. Now, just as the matrices are random and have this distribution here, their eigenvalues, which we write lambda one greater than or equal to lambda two, down to lambda n, will become random variables,
16:02
and in particular, that's true under GUE. A second ensemble is the Gaussian orthogonal ensemble,
16:22
or GOE. Here, it's an ensemble whose elements are the n by n real symmetric matrices, M equals M transpose, with entries M k j. The probability distribution is very similar to the GUE case, except now everything's real, so Lebesgue measure is just on the M k j, where k is less than or equal to j.
16:44
Again, you can replace trace of m squared by trace of, say, m to the fourth, or m to the sixth, or any such polynomial. Again, there will be universality results along that way, which will tell you that the interesting statistical quantities
17:01
are independent of the choice of v. Of course, the eigenvalues lambda one to lambda n will also become random variables under GOE. So, just summarizing a little bit of what I'm up to at this point, although I'm presenting GUE and GOE as models,
17:22
I could have looked at a much wider class of ensembles and obtained exactly the same results. Now, here comes an important point: what do we mean when we say
17:41
that the system is modeled by random matrix theory? Well, we say it's modeled by random matrix theory if it behaves statistically like the eigenvalues of some large GUE or GOE random matrix. So, I have to make it a little more precise.
18:00
Along the way, there's something which is known as the standard procedure. So, what you should have in mind is a situation a little bit like the following. A scientist is trying to investigate some phenomenon, and the scientist puts this phenomenon on some slide,
18:24
which he or she then puts into a microscope, and then can do two things. The one thing that can be done is one can center the slide. The other thing that you can do is alter the focus. But once you've done that, you're set,
18:40
and you have to look and see what you get. The analog of that is what one means by the standard procedure. So, what you have is a set of quantities, a little a k, in the neighborhood of some point a. And you want to see if these quantities a k
19:00
look like the eigenvalues of a matrix. So, you now imagine you have eigenvalues lambda k of some matrix in the neighborhood of some energy e. Then what we always do is center, so you move the slide into the middle of the microscope. So, you move a k to a k minus capital A.
19:20
You move the eigenvalues to lambda k minus e. You then scale both of them. And the agreement, what is meant by the standard procedure, is you ensure that the expected number of a k tildes, the scaled a ks, per unit interval, is the same as the expected number
19:42
of scaled eigenvalues per unit interval. And in the bulk, that's usually taken to be one. So, this is the way things operate. Whenever we want to compare one phenomenon, mathematical or physical, with the eigenvalues of a random matrix, we always understand
20:02
that we've prepared the discussion by following the standard procedure. Now, we are interested in two particular statistics
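A minimal code sketch of this standard procedure (the names are my own): center the quantities and rescale so that the mean spacing, and hence the expected number per unit interval, is one:

```python
import numpy as np

def standard_procedure(values, center):
    """Center the values at `center` and rescale so that the mean spacing
    between consecutive sorted values is one -- one expected value per
    unit interval, the bulk normalization described above."""
    v = np.sort(np.asarray(values, dtype=float)) - center
    spacing = np.mean(np.diff(v))
    return v / spacing
```

For instance, the values 10.0, 10.2, 10.4, 10.6 centered at 10.3 become -1.5, -0.5, 0.5, 1.5, with mean spacing exactly one.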
20:24
for the GUE, and there are similar statistics and formulae for the GOE, but I'm not going to write them down. I'm just going to ask you to imagine that they are there. So, let theta be some positive number, and define the gap probability, Pn of theta,
20:42
which is the probability that a GUE matrix has no eigenvalues in the gap minus theta to theta. So, let gamma n be the appropriate scaling for the standard procedure. Then, it's a wonderful result from the 60s of Gaudin and Mehta, which showed that for any positive number y, if you ask
21:04
what is the probability that there are no eigenvalues in the scaled interval, then it's given by an explicit formula, which is the determinant of one minus Ky, where Ky is a trace class operator with the so-called sine kernel, acting on L2 from minus y to y.
21:23
And what I would ask you to do is perhaps not remember the details of this formula, but that there is such an explicit formula, and it's part of the, as it were, the charm and the effectiveness of this whole subject, that there are these beautiful formulae which can be evaluated and give you
21:42
very precise information on the statistical quantities you're looking at. The second statistic that I want to bring to your attention is the statistics
22:01
of the largest eigenvalue, lambda one. And what we do, again, it's a similar business: you look at lambda one, and you center it. Here, the centering must be done by taking away the square root of two n, and you scale it in some appropriate way, by n to the minus one-sixth here,
22:22
and it's a theorem of Tracy and Widom that this distribution, when the size n of the matrices gets large, is given by an explicit formula called the Tracy-Widom distribution, and has this absolutely wonderful form, which is an exponential, basically of a square,
22:42
of a solution, the unique global solution called the Hastings-McLeod solution of the Painlevé II equation, which if you think of canceling the nonlinear piece, you see, looks like the Airy equation, and you choose your solution, u, to be the one which looks like the Airy function,
23:01
the classical Airy function, s goes to plus infinity. Again, I don't ask you to remember the exact form, but just that there are explicit formulae for these two basic statistics, the first being the gap probability, the probability that there are no eigenvalues in the gap, scale gap, and also the probability distribution
23:22
for the largest eigenvalue of a random matrix. Now, one of the most important features or characteristic features of GUE or GOE
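The determinant formula for the gap probability quoted above can in fact be evaluated numerically. Here is a sketch (the discretization choices are mine) using a Nystrom method with Gauss-Legendre quadrature; conveniently, numpy's sinc function is exactly the sine kernel sin(pi x)/(pi x):

```python
import numpy as np

def gap_probability(y, nodes=40):
    """Approximate det(I - K_y), the probability of no scaled eigenvalues
    in (-y, y), where K_y has kernel sin(pi(x - x')) / (pi(x - x')) on
    L^2(-y, y).  Nystrom discretization: quadrature nodes and weights
    give a finite matrix whose determinant approximates the Fredholm
    determinant."""
    x, w = np.polynomial.legendre.leggauss(nodes)
    x, w = y * x, y * w                    # map from [-1, 1] to [-y, y]
    k = np.sinc(np.subtract.outer(x, x))   # np.sinc(t) = sin(pi t)/(pi t)
    sw = np.sqrt(w)
    return float(np.linalg.det(np.eye(nodes) - sw[:, None] * k * sw[None, :]))
```

As a sanity check: since the kernel equals one on the diagonal, the trace of Ky over (-y, y) is 2y, so for small y the determinant is close to 1 minus 2y, and it decreases toward zero as the interval grows.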
23:41
or any of the orthogonal or unitary ensembles is the notion of repulsion, which I will come back to quite a bit later on. As we said, you have these random matrices, you have their eigenvalues, so the eigenvalues are themselves random variables, and you can compute exactly the distribution function
24:03
for the eigenvalues, and the feature it has is this Vandermonde raised to the power beta. If we're dealing with GOE, then beta is one. If we're dealing with GUE, then beta is two, and one of these other distributions,
24:21
I'm just noting that if beta is four, it's something known as the Gaussian symplectic ensemble; it's just a remark. Now, you see what that is telling you. It's telling you that if two eigenvalues are close, the probability of that event is very small. So what that means is that, naturally speaking, when you're looking at the eigenvalues of a matrix
24:42
displayed out on a line, they have a natural repulsion, which is built in. The probability of them being close together is small. And this is a key feature of random matrix theory, this notion of repulsion. So now, I'm up to the point, part C of my talk,
25:03
where I want to speak about some examples. Now, the first example comes from physics, and that's where random matrix theory was first introduced into the theoretical physics world, then after that, came into the mathematical world.
25:21
It was introduced by Wigner, and so it's appropriate to begin at this point. So for my first example, you should imagine that you're scattering neutrons at some energy, E, onto some very large nucleus, which could be uranium-238 or thorium-232.
25:43
Now, the picture you're looking at, the first one, is the scattering diagram for thorium. The second one is for uranium. Along the X axis is the energy, and on the Y axis is, loosely speaking,
26:02
the amount of scattering. It's a scattering cross-section. The feature which I want you to focus on is that there are many, many, many lines, and if I was to expand my X axis, you would see there would be hundreds of these so-called scattering resonances.
26:21
The meaning of the scattering resonance, if I pick an energy which is, say, at this peak, then that neutron at that energy coming in and hitting this thorium nucleus would be mostly reflected. But if I pick an energy which is between two peaks, this neutron will, as it were, go through. The details of this are, of course, not important here.
26:44
The question is, how do you proceed to model such a physical situation? The a priori possibility of writing down some Schrodinger type equation and then solving that numerically was clearly beyond the computers
27:03
of that particular period, this is 1972, and it's beyond us even now; it's inconceivable that one would actually be able to really put that on a computer and actually find these scattering resonances.
27:22
Some other way had to be found of making scientific sense of a diagram like this. So the first question is, how does one model these resonance peaks? In the format of my talk, I'm just going to be posing, for a while, a variety of questions: first this one from physics, and then some questions from mathematics.
27:48
So the first question, how does one model these resonance peaks? The next question is the subject which has caught the imagination of many people, and it goes back to the work of Montgomery
28:03
in the early 70s. He was interested in the zeros of the Riemann zeta function, zeta of s, and assuming the Riemann hypothesis, Montgomery looked at the non-trivial zeros on the critical line, real part one half, and he writes them
28:21
in the usual way, one half plus i gamma j. Then he rescaled. Again, he had what we would now call the standard procedure in the back of his mind. He scaled them to have mean spacing one in the sense that the number of zeros,
28:41
scaled zeros which are less than T, divided by T, goes to one as T gets large. Then for any a less than b, he computed the two-point correlation function for the gamma j tildes. One doesn't have to look at the details of the correlation function, but loosely speaking it's telling you
29:01
when gamma j one tilde and gamma j two tilde are close together. Okay, he then showed the following, modulo certain technical restrictions.
29:23
If you took this correlation function for the zeros of the Riemann zeta function, rescaled on the critical line, and you divided by n and took the limit, this limit would exist and it was given by a certain explicit formula. The question is, my second question is,
29:40
what formula did Montgomery obtain for RAB? Now, the third problem I want to speak about comes from combinatorics.
30:00
And it's a particular card game and you play the game in the following way. You have a deck of n cards, which for convenience you number from one up to n. You shuffle the deck and then you take the top card and you put the card face up on the table to my left.
30:22
Take the next card. If that card is less than the card on the table, I put it on top. If it's bigger, I make a second pile. Take the third card. If it's less than either of these two cards, I put it on top and I have the agreement that if it's less than both of them,
30:42
I put it as far to the left as I can. If it's bigger than both, I make a third pile and so on until I've dealt out the whole pack. And the question which one asks is how many piles do you get? So mathematically, of course,
31:01
a shuffle is just a choice of a permutation pi. Qn of pi is the number of piles you get after you have played this game. Perhaps a more interesting version: you're in a bar late at night. You've got your deck of cards, and the question you're betting on is
31:20
how big a table do you need to play this game? So let me give an example of how it works. Suppose we have six cards. We shuffle them and obtain the permutation pi: three, four, one, five, six, two. So three is my top card.
31:40
Four is underneath it. One, five, six, two. So I start the game. My top card is three. I put it down. My next card is four. It's bigger than three, so I put it on my right. Then I get a one, and one is less than both three and four, and my rule is to put it as far to the left as I can.
32:02
I then come to five. Five is now bigger than the top card one and the top card four, so I put it up here. Similarly, six goes down. Finally, I have two. Two is less than four, five, and six, bigger than one, so it goes on top of the four because of my rule of going as far to the left as I can.
32:21
So the number of piles I get, Q six of pi, is equal to four. One then equips Sn with uniform measure, and our third question is how does Qn of pi vary statistically as n gets large?
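The card game is straightforward to put in code. Here is a sketch (this procedure is known as patience sorting; the "as far to the left as possible" rule means each card goes on the leftmost pile whose top card is larger):

```python
import bisect

def number_of_piles(perm):
    """Play the game: deal cards left to right; each card goes on top of
    the leftmost pile whose top card is larger, otherwise it starts a new
    pile to the right.  Only the top card of each pile matters, and the
    tops always increase from left to right, so binary search finds the
    correct pile."""
    tops = []
    for card in perm:
        i = bisect.bisect_left(tops, card)
        if i == len(tops):
            tops.append(card)   # bigger than every top: new pile
        else:
            tops[i] = card      # leftmost pile with a larger top card
    return len(tops)
```

number_of_piles([3, 4, 1, 5, 6, 2]) reproduces the four piles of the example; the pile count also equals the length of the longest increasing subsequence of the shuffle.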
32:47
Okay, so here is a problem from transportation theory. So it's a problem about the buses in the city, Cuernavaca in Mexico.
33:01
Now, the city is about half a million people. They certainly have a bus system, but they don't have a central transportation authority. The end result is that there is no bus schedule. So what happens is that you get
33:20
this typical Poisson-like phenomenon: you're going to be standing at a bus stop and there will be long waits between one bus and the next, or a lot of buses could come and there could be bunching. Now, the buses are typically owned by individual operators, so they were facing a situation where they would come to a bus stop
33:42
and the bus was already there loading up and they had missed their chance of any customers and then they would then have to go on to the next stop. So they were losing a lot of money, so they asked whether they could do anything about this and they came up with a very ingenious scheme which I've since learned is rather common
34:00
in many places in Latin America. So what they did is they hired observers. So you imagine there are these bus routes going through Cuernavaca
34:20
And what these observers would do is they would take note of when buses passed them by and then when the next guy came along, they would sell this information to that bus driver and say, look, a bus just came by, you should slow down a bit, or a bus hasn't been in a while, should speed up a bit. And there are some marvellous pictures
34:42
you can see on the web of these guys signalling with three fingers up or two fingers, it's very nice to see. The end result of this is that they have a pretty steady and reliable bus service and I've spoken to people from Cuernavaca, it's a well-known thing and they're very happy with it.
35:01
Our interest in it is that recently, two Czech physicists, Krbálek and Šeba, went down to Mexico and began to investigate this phenomenon. They took data on one of the bus routes, route number four, for a period of about a month,
35:21
they collected a large amount of data, and our fourth question is what did they find? The next question concerns a statistical-mechanical model
35:42
due to Michael Fisher; it's one of many walker models. So suppose we have walkers located on the lattice Z, initially at positions 0, 1, 2, and so on, and they walk according to the following rule. At each integer time K,
36:01
precisely one walker makes a step to the left. I'll illustrate this with an example shortly. No two walkers can occupy the same site; this is why Michael Fisher called these vicious walkers. And thirdly, the walker that moves at time K
36:21
is chosen randomly. So how does that work out in an example? Here, we imagine at time 0, we have the walkers at 0, 1, 2, 3, 4, and so on.
36:41
At time 1, the move is forced. The person at 0 makes a move to my left. Then at time 2, there are two people who could possibly move, this one or that one, the one at 1 or the one at minus 1. Let's just suppose that the one at 1 takes a step to the left.
37:02
Then at time 2, there are again two people that could move, the one which is at 2 and the one which is at minus 1. Let's suppose the one at minus 1 moves. Now there are three possible people who could move. Let's suppose the one at 2 moves and so on. The question we're interested in is let Dn be the distance which is moved
37:22
by the zeroth particle. Here for this particular example, D4 would just be 2. So our fifth question is how does Dn behave statistically as n becomes large?
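The walker dynamics can be sketched as follows (the function and argument names are mine; passing an explicit move sequence replays the example above, while passing a seed gives a random history):

```python
import random

def walker_distance(n_steps, seed=None, moves=None):
    """Fisher's vicious walkers on the integers, starting at 0, 1, 2, ...
    At each integer time exactly one walker steps one site to the left;
    no two walkers may share a site, so only walkers whose left-hand site
    is free may move, and the mover is chosen uniformly among them.
    Returns D_n, the distance travelled by the walker that started at 0.
    `moves` (a list of walker start-labels) replays a fixed history."""
    rng = random.Random(seed)
    # Walkers further right than n_steps cannot possibly move in n steps.
    pos = list(range(n_steps + 1))
    for t in range(n_steps):
        sites = set(pos)
        movable = [k for k in range(len(pos)) if pos[k] - 1 not in sites]
        k = moves[t] if moves is not None else rng.choice(movable)
        assert k in movable, "illegal move: left-hand site is occupied"
        pos[k] -= 1
    return -pos[0]
```

Replaying the example's history, walker 0, then 1, then 0, then 2, gives D_4 = 2 as in the talk; note the very first move is always forced, since only walker 0 has a free site on its left.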
37:48
Now, this problem is a tiling problem with connections to statistical mechanics. It's a domino tiling problem. So we imagine we'd be looking at a tilted square
38:04
turned up 45 degrees and we're tiling with dominoes which are of size 1 by 2. Here is a tiling, a particular choice of tiling in a tilted square of size n plus 1 equals 4.
38:24
The way one counts is: here is the origin. You count 1, 2, 3, that would be n, and n plus 1 gives you 4. So this is one particular tiling.
38:40
The rule is that the tiles must stay completely within the square. It's a non-trivial theorem of Propp and his collaborators that the number of such tilings is 2 to the power n(n+1)/2. Now, what we assume is that all such tilings
39:04
are equally likely and our sixth question is what does a typical tiling look like as n gets large? This problem is called the Aztec diamond because if you just focus on the upper part here and you look at the shape of the tiling
39:22
it looks like one of those Mexican pyramids. The final problem is a problem which is familiar to I think most of us here.
39:43
It's the airline boarding problem. The question here is, how long does it take to board an airplane? This is a problem of great interest to the airlines because every extra minute they spend on the ground is lost money.
40:01
Now, I'm going to describe this model which is due to Eytan Bachmat and his collaborators. He has now a much more sophisticated model which makes contact with Lorentzian geometry. It's a very interesting analysis but I'm just going to give his very simplest model
40:21
and it contains the main features of his analysis but let me say again this model can be made much more realistic. I'm not going to go into that. So the model is that you're looking at a very small plane and there is one seat per row.
40:42
Secondly, the passengers are very thin, for reasons that will become clear, and thirdly, the passengers move very quickly. The main unit of time, the one that blocks us as we board, is the time it takes for somebody to come in with their baggage, turn around, open up the bin,
41:03
put their luggage in, close the bin and sit down. That is one unit of time; compared to that time, all other actions are very fast. So how would such a boarding look? Let me give an example.
41:26
Okay, so imagine that there are six passengers and these passengers are in the waiting room and then the steward says okay, we are ready for boarding and people line up at the gate
41:41
and suppose they line up in the order 3, 4, 1, 5, 6, 2. Now these numbers refer to the ticket the person has, so the person with ticket number four sits in seat number four, and so on. So they line up at the gate in this order: three is closest to the gate, four is right behind, and so on.
42:02
So they now file into the airplane. How do they file in? Well, three can go to his seat, but then four is blocked and must wait until three puts up the bags. The person in seat number one can go to that seat,
42:21
but then five, six and two must wait behind and they are blocked. So after one unit of time, one and three sit down and now four, five, six and two are free to move on. Four goes to the seat, five and six are blocked but two can go to his seat.
42:43
Then four and two put their bags up after one unit of time, they sit down, then five can go, five takes one unit of time, finally six can get to seat number six and we see that this process,
43:01
this model process takes four units of time and the question which we are asking here is assuming that passengers line up randomly,
43:21
how long does it take to board such an aircraft? So those are the seven questions. Now I want to start off, so let me keep this over here.
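Since the rules of the toy model just described are completely explicit, it is easy to simulate. Here is a minimal sketch (the function name is my own); one pass of the loop is one unit of stowing time, and a passenger reaches their row in a given round only if every passenger ahead of them in the queue who is stowing that round stands at a strictly higher-numbered row:

```python
def boarding_time(queue):
    """Simulate the simplest boarding model: one seat per row, thin,
    fast passengers.  In each round, scan the remaining queue from the
    front; a passenger reaches and stows at their row iff every
    passenger ahead who is stowing this round blocks a higher row.
    Returns the number of rounds (units of time) until all are seated."""
    rounds = 0
    while queue:
        rounds += 1
        blocked = []
        lowest_stowing = float("inf")  # lowest row blocked by a stower ahead
        for seat in queue:
            if seat < lowest_stowing:
                lowest_stowing = seat  # this passenger stows, blocking row `seat`
            else:
                blocked.append(seat)   # stuck behind a stowing passenger
        queue = blocked
    return rounds

print(boarding_time([3, 4, 1, 5, 6, 2]))  # the lecture's example: 4 units
```

Note that each round seats exactly the successive left-to-right minima of the remaining queue, so the boarding time works out to be the length of the longest increasing subsequence of the queue, which is one way to see how the same statistics as in the card game can enter.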
43:42
Now, the remarkable fact is that although all these problems come from extremely different areas of science, mathematics, physics, applied mathematics, all these systems are modelled statistically by random matrix theory. So I recall that to say something
44:01
is modelled by random matrix theory, we have to go through the standard procedure and compare the statistics of these different random quantities with random eigenvalues. The first problem, I remind you, is the scattering problem: neutrons scattering off these heavy nuclei.
44:25
For the scattering resonances, after the standard procedure, the probability that there are no resonances in an interval (−y, y) is given either by this formula, which I ask you to recall, the determinant det(1 − K_y),
44:41
which is the asymptotic gap probability for GUE introduced above, or by its GOE analogue, and you get GUE or GOE depending on some underlying symmetry conditions. So in some very remarkable way,
45:00
the neutrons are behaving like the eigenvalues of a random matrix. The next result was about the rescaled zeroes of the Riemann zeta function. What did Montgomery find? After some technicalities, he found that the limiting two-point function R(a, b)
45:21
for the zeroes has this explicit formula: the integral from a to b of 1 − (sin(πr)/(πr))² dr. Now, as noted by Dyson in the famous story, which I will not repeat, this is precisely the limiting two-point correlation function
45:40
for the eigenvalues of a random GUE matrix. Now this basic idea has been taken up by many people working in number theory, Rudnick, Sarnak, Katz, Keating, many, many people. But these two examples
46:00
now sort of set out the bookends of what we are talking about. On the one hand we're speaking about a very explicit physical experiment. On the other hand we're speaking about a very pure mathematical object, the zeroes of the Riemann zeta function.
46:20
And somehow there's a commonality of description between them. So now what lies between these two extremes? So the third example, let me put it over here,
46:41
is the game of cards, patience sorting. P_N(π), the number of piles that you obtain, turns out to behave like the largest eigenvalue of a GUE matrix. In other words, if P_N(π) is the number of piles you get, again you've got to do some centering and scaling, but once you've done that,
47:01
you compute this, and it goes to F(t), which is exactly the Tracy-Widom distribution for the largest eigenvalue of a GUE matrix. You may remember that that is something which involves the Painlevé II equation. So somehow, in this very strange way, just playing this game of cards
47:21
is bringing in this esoteric function theory. This is a theorem of Jinho Baik, myself, and Kurt Johansson, and has been developed further by many different people, I mention Okounkov, Borodin, Tracy and Widom, many, many people.
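For concreteness, the pile count P_N(π) can be computed with the standard greedy rule: each card goes on the leftmost pile whose top card is larger, or starts a new pile on the right. A minimal sketch (the function name is my own):

```python
import bisect

def patience_piles(deck):
    """Number of piles produced by greedy patience sorting.  Each card
    is placed on the leftmost pile whose top card exceeds it, or starts
    a new pile on the right.  Only the pile tops need to be tracked;
    they stay in increasing order, so binary search finds the pile."""
    tops = []
    for card in deck:
        i = bisect.bisect_left(tops, card)  # leftmost pile whose top beats card
        if i == len(tops):
            tops.append(card)  # no such pile: start a new one
        else:
            tops[i] = card     # card becomes the new, smaller top
    return len(tops)

print(patience_piles([3, 4, 1, 5, 6, 2]))  # 4 piles
```

The pile count equals the length of the longest increasing subsequence of the deck, and it is the fluctuations of this quantity, centered and scaled, that converge to the Tracy-Widom distribution.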
47:43
The fourth problem is the buses in Cuernavaca, and the question was, what did Krbálek and Šeba find? Well, they found, quite remarkably, that the spacings between the buses
48:01
after the intervention of these observers behaved exactly like the eigenvalues of a random GUE matrix. So the formula is, again, you get this familiar determinant of 1 − K; you take a second derivative with respect to the length of the interval, you integrate from 0 to s, and that is what you get.
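To get a feel for what "eigenvalues of a random GUE matrix" means concretely, one can sample the smallest nontrivial case. A sketch, under the standard convention for a 2 × 2 GUE matrix (real diagonal entries N(0, 1), off-diagonal entry b + ic with b, c ~ N(0, 1/2)); with that convention the eigenvalue gap has the closed form sqrt((a − d)² + 4(b² + c²)), whose exact mean is 4/√π:

```python
import math
import random

random.seed(0)  # fixed seed so the experiment is reproducible

def gue2_gap():
    """Eigenvalue spacing of a random 2x2 GUE matrix [[a, b+ic], [b-ic, d]]
    with a, d ~ N(0, 1) and b, c ~ N(0, 1/2).  The gap between the two
    (real) eigenvalues is sqrt((a - d)**2 + 4*(b*b + c*c))."""
    a, d = random.gauss(0, 1), random.gauss(0, 1)
    b = random.gauss(0, math.sqrt(0.5))
    c = random.gauss(0, math.sqrt(0.5))
    return math.sqrt((a - d) ** 2 + 4 * (b * b + c * c))

samples = [gue2_gap() for _ in range(20000)]
mean = sum(samples) / len(samples)
print(mean, 4 / math.sqrt(math.pi))  # empirical vs exact mean, about 2.257
```

A histogram of these gaps vanishes quadratically at zero: the level repulsion that also shows up in the bus spacings, where two buses almost never arrive bumper to bumper.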
48:23
Now, the thing that I want to get across, it's not as if these are very approximate models. The accuracy of the models is really quite astounding, as I'm going to show you in a moment.
48:42
It's quite remarkable to me, and I think to everybody who's thought about it, just how good this random matrix model is. So let me show you now what they actually found. This is taken from a paper of Krbálek and Šeba
49:04
in Journal of Math Physics. Now, what you're looking at here: the heavy line is exactly this formula, the second derivative of this determinant. The crosses are what they actually observed.
49:24
Now, if one is an applied mathematician and you get this kind of fit, you're quite astounded, right? But the situation is even better than it looks at first glance. There's an inset here, which is a blow-up
49:40
of this left-hand corner here. And you'll see, if you look at the inset, there is the heavy line, there are the crosses, and there are these dotted lines. What these dotted lines do is take into account that the observers are not recording all the information.
50:02
Some of the information is being thrown away; there's what I believe is called a binning problem. What they do to overcome that problem is sample the statistics of the eigenvalues by leaving some of them out. When they leave some of them out, to model the way the actual observers operate,
50:23
they get this dotted curve, which goes through the crosses even better than the original curve. So the fit is really quite extraordinary. Now, the fifth problem is the walkers problem.
50:45
It was analyzed by Peter Forrester. The question was: these random walkers, how far does this guy on the left get? Well, the statistics of that guy's motion, D_N, is exactly described, when N gets large,
51:01
by the largest eigenvalue of a GOE matrix. GOE matrices are not Hermitian; they are the real symmetric matrices. And again, it's given by some explicit Tracy-Widom distribution, which is very similar to the F(t) which I wrote down before. The sixth problem is the Aztec diamond.
51:25
You take this square and you tile it. Now, it was a wonderful result of Elkies, Propp and many other people that there is something called an arctic circle phenomenon. So when N gets large, you scale x by x over N
51:44
and you see this circle emerging; it is called the arctic circle. In the top region here, or the left here, or the bottom here, or the right here, which are called the polar regions, you find that the tiling is completely regular.
52:00
Here it goes east-west, here it will go north-south, and inside, it's, as it were, intuitively random. And this inside region is called the temperate zone. So that's a result of a variety of people.
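As an aside, the tiling count 2^(n(n+1)/2) quoted earlier can be checked by brute force for small n. A minimal sketch, assuming the standard convention that the order-n Aztec diamond is the set of unit squares whose centers (i + 1/2, j + 1/2) satisfy |i + 1/2| + |j + 1/2| ≤ n (the function names are my own):

```python
def aztec_cells(n):
    """Cells of the order-n Aztec diamond, indexed by their lower-left
    corner (i, j); the center (i + 0.5, j + 0.5) must satisfy
    |i + 0.5| + |j + 0.5| <= n."""
    return {(i, j)
            for i in range(-n, n)
            for j in range(-n, n)
            if abs(i + 0.5) + abs(j + 0.5) <= n}

def count_tilings(cells):
    """Count domino tilings by backtracking.  The lexicographically
    smallest uncovered cell must be paired with its right or upper
    neighbour (its left/lower neighbours, if uncovered, would be
    smaller), so only two branches are ever needed."""
    if not cells:
        return 1
    x, y = min(cells)
    total = 0
    for dx, dy in ((1, 0), (0, 1)):  # horizontal, then vertical domino
        nb = (x + dx, y + dy)
        if nb in cells:
            total += count_tilings(cells - {(x, y), nb})
    return total

print(count_tilings(aztec_cells(3)))  # 2**(3*4/2) = 64
```

This confirms the formula for n = 1, 2, 3 (2, 8, 64 tilings); of course, the exponential growth makes brute force hopeless well before the arctic circle becomes visible.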
52:24
So inside the polar regions, things are frozen. Inside the temperate region, things are disordered. Now, Kurt Johansson proved an absolutely wonderful result. You draw a line, as I've drawn this red line here.
52:40
And here is the edge of the circle; it crosses this line in two places. For any finite N, the boundary is only approximately described by the circle; in general, for finite N, there will be fluctuations. Now, the result of Johansson is that those fluctuations
53:00
about the circle are exactly described by the Tracy-Widom distribution. So they behave like the eigenvalues of a random matrix. Now, the airline boarding problem. Again, we find that, under this model,
53:21
that the time it takes to board, as people line up at the gate randomly, is again given by the Tracy-Widom distribution, the distribution of the largest eigenvalue of a random matrix. So I just picked these examples to try and spread out within mathematics
53:41
where these phenomena are occurring. There are many, many other problems: hexagonal tilings, condensation problems, percolation problems. There's a whole theory developed by Peter Sarnak and his collaborators, and Keating's work connected to L-functions. There are many, many different things.
54:00
I've just picked these to give you some sense of how things work. At the mathematical level, the status of the problems is as follows. The neutron scattering problem is, of course, experimental and numerical. For the zeta function, it is an actual mathematical theorem that the two-point function
54:22
of the zeros behaves like the two-point function of a random matrix, modulo certain Fourier-type restrictions. The patience sorting problem is a theorem. For the buses in Cuernavaca, there is now a model, developed by Jinho Baik, Alexei Borodin,
54:42
Toufic Suidan, and myself, where we are able to show the origin of the random matrix theory statistics. The walkers problem is a theorem. The Aztec diamond problem is a theorem, and the airline boarding problem is a theorem.
55:01
Okay, now, what is the kind of mathematics which is involved here? Integrable systems is a key player. Coming into the analysis are ideas from inverse scattering theory, Riemann-Hilbert methods,
55:22
as I already mentioned, Painlevé theory, the theory of determinants, the classical and Riemann-Hilbert steepest descent methods, and also many, many different combinatorial ideas, such as ideas of Gessel and ideas going back to Schur.
55:41
It's a kind of mathematical arena. It's, of course, many lectures on its own to really bring that up, but that is the kind of mathematics which comes up here. So, on my last slide, I want to just raise a number of issues.
56:01
The question is, maybe you're asking yourself, how do I recognize that the system I'm interested in behaves like random matrix theory? A more scientific statement of it would be: in intrinsic probabilistic terms,
56:22
how do I state a theorem which would be the analogue of the central limit theorem? The central limit theorem says: I've got independent, identically distributed variables; I do a specific thing to them, I add them and scale them, and then I get a normal distribution. The question one wants to ask in purely probabilistic
56:43
terms is: I've got some independent, identically distributed variables; I do some operation X on them, and when I do, random matrix theory comes out. That's the kind of intrinsic question which is being raised. And work in this direction has been done
57:03
by Baik and Suidan, and also independently by Bodineau and Martin. A question which is posed in more analytical terms: what people believe is that
57:24
the natural arena for thinking about these things is the space of probability distributions. So that's this space I have here. Initially it's something without any structure, without any topography, but we do know that there's something special here.
57:41
There's a Gaussian point here. It's like a little valley, and we know that as you get near to it, you can be sucked into it. But now we understand there isn't just this Gaussian point. There are also things like the Tracy-Widom distribution, and one wants to somehow put some kind of metric down here to understand how you flow
58:01
in the space of probability distributions. So this is a different direction. Finally, the question is: to what extent are we seeing the emergence here of what one might want to call macroscopic mathematics? I mean, one has microscopic physics and macroscopic physics, which satisfies thermodynamics.
58:24
And to end off, I just want to present a picture, which I'd like to give to you. And one should think, as it were, that one is in a valley, and you walk around in this valley, and you see this thing,
58:42
and it's different from that thing, but it's like this thing, but it's different from that thing, and so on. Then you begin to step out from this valley, and you begin to walk away to some distance. The remarkable thing is that what happens is that the situation, as you look back on it,
59:00
does not just blur into some indistinguishable picture. What happens is that a very clear picture begins to emerge with something, a very clear structure begins to emerge, which is very robust and contains a great amount of detail.
59:22
And it is this distant picture that is so well described by random matrix theory. Thank you.