
Statistical Mechanics Lecture 8


Formal Metadata

Title: Statistical Mechanics Lecture 8
Part Number: 8
Number of Parts: 10
Author: Leonard Susskind
License: CC Attribution 3.0 Germany: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract:
Leonard Susskind continues the discussion of reversibility by calculating the small but finite probability that all molecules of a gas collect in one half of a room. He then introduces the statistical mechanics of magnetism.
Transcript: English (auto-generated)
Stanford University. All right, let's come back to the second law for a minute, a little bit, and talk about Poincare recurrences. Poincare recurrences are, I am not
sure whether Poincare was really the first one to think about it. I have a feeling that Boltzmann, again, probably understood this. Boltzmann was before Poincare? I think so. I think so. So here's a question.
Here's a question you might ask. Let's suppose you start out with the air in this room all on one side of the room. This is not an impossible thing to do. You put a wall in. You evacuate the air out of one side of the room, stick it into the other side of the room,
and start with an initial condition with all the air on the left side of the room. And now you let it go. OK, what happens? Comes to thermal equilibrium, fills the room pretty much uniformly. Entropy increases. But what if you sit there, and you wait, and you wait, and you wait, and you wait?
Sooner or later, sooner or later, the unlikely will happen. The unlikely event will happen where by accident or by just waiting long enough in time, all of the air will reappear in the left side of the room.
Or in the top half of the room or whatever. But let's say the left half of the room. That's called a Poincare recurrence. And it's really no different than saying that if I flip a coin enough times, I will get a million heads in a row. Very unlikely, but if I do it enough,
some fluctuation will happen. OK, so the question is, roughly speaking, how long do you have to wait for the air to all appear in one side of the room? Is it a year? Is it 10 years? Is it 100 years? Is it the age of the universe? So let's see if we can get a handle on that.
You start out with the idea of phase space. And phase space is the space of the coordinates of the molecules and the momenta of the molecules. And of course, it's very high dimensional. 6n dimensional.
6n because each particle has three coordinates. Each particle has three momentum components. If there are n particles, there are 6n coordinates. So this is a very high dimensional space. OK, now as far as the momentum space goes, that's kind of bounded.
It's bounded because if any particle has an enormous momentum, it will have a very large energy. And there's a certain amount of energy that you put in the box. No more than that. So pretty much we can say the momentum dimension in this box is bounded.
And let's just bound it by saying the momentum is definitely within some range here. And it doesn't matter how many particles we have. It doesn't matter how big the system is. It doesn't matter what the temperature is.
The higher the temperature, the more uncertain the momentum is. But if the temperature is reasonably low, then the momentum direction here is pretty much bounded. And the x-axis, that runs from the left part of the room to the right part of the room, sort of.
And to say that we started out in one half of the room meant that the phase point was in here. In other words, the system started in phase space somewhere in there. Let's not be detailed about where it is. It's somewhere in there.
So the probability distribution is spread out over here. And now we wait for a while. And what happens? The phase point starts to move. Now, it moves chaotically. Chaotically, for our purposes, just means pretty unpredictably. And not unpredictably, because the laws of physics
are unpredictable in principle, but because trajectories, like the example of the billiard balls, errors or slight differences tend to magnify themselves after a while. And so even if we started out two very, very similar
trajectories, they would very quickly depart. And you can pretty much imagine that this means that this phase point moves around in here in very, very complicated ways and pretty much fills up
the phase space. Fills up the phase space in the sense that if you coarse-grain it and fuzz your eyes a little bit and wear somebody else's glasses, it will look like it's pretty much filled up the phase space. So what percentage of the time would you expect that the phase point resides such
that the particles are all in one half the room, if this were the picture? Looks like half the time. That's crazy. We don't expect half the time the air molecules to be in the left half of the room or the right half of the room. And the mistake we're making is we're drawing a picture
in just two dimensions. In two dimensions, if we divide the x space in half and say we're to the right, we're talking about half the volume of the phase space.
What happens, though, if we have n coordinates? And we're not talking about the part of the phase space where one particle is on the right-hand side, but where all of them are on the right-hand side. So let's just say there are two particles. Let's forget the momentum for a minute
and just draw the two coordinates, x1 and x2, or x and y. x and y are the first x and the second x. And to say that both of them are on one half of the room is to say that the phase point is somewhere in the quarter of the square.
So suppose there were only two particles moving in one dimension. And we start somewhere, and they move around, and the system moves around randomly. Two particles scatter off each other. They do very random things. What percentage of the time are both particles
in the left-hand side of the room? A quarter. What if there are three particles? And what if there are n particles? 1 over 2 to the n. 1 over 2 to the n is right.
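That 1 over 2 to the n counting can be checked with a quick Monte Carlo sketch (not part of the lecture; drawing each particle's position uniformly at random stands in for the chaotic dynamics, and the particle counts are toy numbers):

```python
import random

def fraction_all_left(n_particles, n_samples=200_000, seed=0):
    """Estimate how often a random configuration has every particle's
    x-coordinate in the left half of the box."""
    rng = random.Random(seed)
    hits = sum(
        all(rng.random() < 0.5 for _ in range(n_particles))
        for _ in range(n_samples)
    )
    return hits / n_samples

for n in (1, 2, 3, 10):
    # The estimate should track the exact answer, 1 / 2**n.
    print(n, fraction_all_left(n), 0.5 ** n)
```

Even at n = 10 the fraction is already below a tenth of a percent; for n of order 10 to the 30th it is unimaginably small.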
1 over 2 to the n. That depends on how we're constraining it. If we just said all the particles are to the left, and left it at that.
Didn't care where they were, up or down. Then I think it would be 1 over 2 to the n. There's another way to think about it. And that's to say, let's take the phase space and identify a subregion of it
as the subregion which we're interested in. Interested means an interesting configuration that is very unlikely. So here's a little region of the phase space.
The phase space is much bigger. And the volume, let's say the whole volume of the phase space, let's forget momentum. Momentum is not important in this. What is the volume of the phase space if there are n particles?
It's the volume of the box raised to the nth, or the 3nth, power, depending on how you count, right. So the volume of the whole phase space, because it's an n-dimensional phase space, or 3n or whatever, let's forget the 3.
The volume of the whole phase space contains a volume to the nth. That's the same volume to the nth that we discovered when we calculated the partition function and we integrated over position. Volume to the nth. Let's suppose this region of phase space over here has a much smaller size.
All the particles have to be in there. Then the volume of this region of phase space with all the particles in here will be some little v to the nth.
This is the volume of the box, incidentally. This is not the volume of the phase space. The volume of the phase space is v to the n, the volume of the box. And if we're asking about all the particles being in some small volume, could be half, I mean this could be half of the big volume. Okay, this is some smaller region and the volume of this region of phase space
is little v to the n. What percentage of the time would you expect the phase point to be in here? Little v to the n over big v to the n, all right?
Now, little v to the n could be some small number, whatever it is. But what about big v to the n? Big v to the n is the exponential of the entropy of the whole system. The entropy of a region of phase space is the logarithm of the volume of that region. If all you knew is that the system was in some region of phase space, then the logarithm of the volume of that region is the entropy.
Is it the volume of configuration space? Yeah, we're not worrying about the momentum now. Yeah. Why, the momentum is the momentum, roughly speaking, of whatever the temperature is. So let's not worry about that.
Okay, so roughly speaking then, this here, little v to the n, is the exponential of the entropy that a gas would have if all the particles were in this little region. Let's take the little region to be pretty small. So that's just some number.
But what about big v to the n? That's the exponential of the entropy of the whole gas in thermal equilibrium. The logarithm of v to the n is the entropy of the gas in this region, and v to the n is the exponential of the entropy. So what this is telling us is that the likelihood of finding a system in a tiny volume of phase space here is always proportional, because it's in the denominator, to e to the minus the entropy of the thermal equilibrium state.
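The equivalence between the volume-ratio form and the entropy form can be checked numerically (a sketch with toy numbers: n = 100 particles rather than the roughly 10 to the 30th in a real room, and the sub-region taken to be half the box):

```python
import math

n = 100   # toy particle count (assumption; a real room has ~1e30)
V = 1.0   # volume of the box
v = 0.5   # the sub-region: half the box

p_direct = (v / V) ** n            # probability as a volume ratio, (v/V)**n
delta_S = n * math.log(V / v)      # entropy lost by squeezing into the sub-region
p_entropy = math.exp(-delta_S)     # the same probability as e**(-delta_S)

print(p_direct, p_entropy)
```

The two numbers agree because (v/V)**n = exp(-n log(V/v)) identically; the probability of the odd configuration is e to the minus the entropy difference.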
The meaning of that is that it's very improbable. E to the minus the entropy is a very, very small number. What's the entropy of the molecules in this room? Roughly speaking, it's proportional to the number of molecules. It's pretty close to just being the number of molecules: ten to the thirtieth, I'm not sure exactly, something like that. So the entropy is about ten to the thirtieth. The probability of finding yourself in a tiny little volume of phase space like this is e to the minus the entropy.
E to the minus the entropy is the probability of finding yourself in a small region. It's the same as this one over two to the n here. In the case that we studied over here, little v over big V was a half. And a half is not a terribly small number.
But when it's raised to the power ten to the thirtieth, that is a very small number. Okay, so the likelihood, at a random draw of a point from the phase space, that you find yourself in this tiny volume of phase space here is, in this case, one over two raised not to the thirtieth power, but to the ten to the thirtieth power. That's a pretty small number. How long do you have to wait on the average to find yourself in that region?
Well, about a time of order two to the ten to the thirtieth. If the fraction of time that you spend in that odd region is one over two to the ten to the thirtieth, then how long do you have to wait till you find yourself in that region?
Two to the ten to the thirtieth. In what units? Years? Seconds? It doesn't matter. What is the objective of doing this? Oh, it's, what is the objective? Why is this interesting? Why are we doing this exercise?
We do this exercise just to understand it. Just to understand it. Understand in what sense systems are reversible. The answer is, if you wait long enough, they will reverse themselves. And if you really have a sealed room here, and you let it evolve,
let's say starting from the odd state, it would come to thermal equilibrium, or what looks like thermal equilibrium, and it would spend a long, long time there. But every so often, every two to the ten to the thirtieth years or whatever, you would find the molecules in half the room.
You wait long enough again, it equilibrates again, it looks conventional. And then, all of a sudden, you find the molecules in the other half of the room, or that corner of the room. And if you study it over sufficiently long times, you will discover that the entropy goes down, or that the oddness goes up and down,
and up and down, and up and down, in a completely time-symmetric way. In a completely time-symmetric way. What's not time-symmetric is if you knowingly start in a very odd configuration.
In other words, you knowingly start in a tiny volume of phase space. Most likely, the next thing is to find yourself out of that volume. So if you start in an odd situation with all the molecules in the corner of the room,
you expect the next thing to find is the molecules spread out. In fact, you'll find the next thing, and the next thing, and the next thing is pretty much to spread out uniformly. And that sounds like it violates the reversibility of the physical laws. But in fact, if you were to have waited long enough, you'll find it reversing itself
and doing everything imaginable for a closed system. I have a question, please. You said the time unit didn't matter: seconds, who cares? But let's say it's 10 to the minus 30, or 10 to the minus 40, of a second. What happens? I mean, it seems to matter some. No, no, it doesn't. Well, some, yes, of course.
But let's just see. OK, so we have a number 2 to the 10 to the 30th, OK? Now, I'm going to change units. This unit is the units of seconds.
Supposing I change the units to hours. What does this number become? 2 to the 10 to the 30th divided by 3,600 or something. Let's call it 10 to the 3, all right? OK, and 10 to the 3 is about 2 to the 10, so that's the same as 2 to the quantity 10 to the 30th minus 10. It doesn't matter, right? 10 to the 30th minus 10 is still 10 to the 30th.
Does it mean that if we were to invent a short time equivalent to 10 to minus 3, you know, the opposite, will it for that infinitesimal time actually come back to that volume? Yeah, yeah, yeah. Infinitesimal fraction of the time.
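The claim that the mean recurrence time is the inverse of the fraction of time spent in the odd region can be tested in a toy model; here each particle independently re-picks a side of the box every step, a crude stand-in for the chaotic dynamics (an illustrative sketch, not the lecture's actual dynamics):

```python
import random

def mean_recurrence_time(n_particles, n_steps=200_000, seed=1):
    """Each step, every particle independently re-picks a side of the box.
    Return the average number of steps between visits to the
    'all particles on the left' configuration."""
    rng = random.Random(seed)
    gaps, last_hit = [], None
    for t in range(n_steps):
        if all(rng.random() < 0.5 for _ in range(n_particles)):
            if last_hit is not None:
                gaps.append(t - last_hit)
            last_hit = t
    return sum(gaps) / len(gaps)

# With 3 particles the odd region occupies 1/8 of configuration space,
# so the mean gap between visits should come out near 2**3 = 8 steps.
print(mean_recurrence_time(3))
```

The fraction of time in the region is 1 over 2 to the n, and the average wait between visits comes out as 2 to the n steps, exactly the inverse relation used for the 2 to the 10 to the 30th estimate.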
How long it actually spends in the corner, that depends on how fast the molecules are moving and so forth. The quantum limitations, I assume? There is a quantum version of the Poincare recurrence theorem, but we won't get into it now. All right, so that's more than an interesting point.
It's a deep conceptual point that resolved and helped Boltzmann resolve the puzzle of the one-wayness, the apparent one-wayness of time and the two-wayness of the laws of motion.
Of course, what it required to make sense out of it is that we would eventually have to understand why the universe started in a little corner of phase space. That's a separate issue. Boltzmann knew that. He knew that and he, and he said so. He said. Well, I was thinking, could this have application that cause problems?
Absolutely. Absolutely, and it's still an open question that's constantly being addressed. Why did the universe start in a small corner of phase space? I was thinking, after the universe spreads out, have you waited long enough? It's a good question, and it is one which we're working on. Question? I think the answer is the universe must not be a closed system. If it's a closed system, it will just recur and recur and recur, and that does not,
that doesn't make for a good statistical explanation of the world. I, I didn't understand when you said, I think you said the time was symmetric. If you watch, if you watch the closed system long enough, you would find that let's call it some measure
of the localization of the particles would decrease if you started off in a corner and sit there pretty delocalized for a long time and then pop up and then go back down and pop up and then go back down. But the time scale to discover this reversibility that what goes up must come down,
so to speak, is this 2 to the 10 to the 30th. You're not saying that, you're not saying that it takes as long for it to move out as it does to come back to one corner? Oh, it does. Yeah, everything is, everything is, everything would be symmetric.
Every now and then, you would find the molecules in exactly the right configuration to swoosh into the corner, pretty much the same way they swooshed out of the corner. Well, I see, you're comparing two exact same states. Okay, I got you. So, you'll see everything happen in both directions.
Excuse me. Yeah. So, this doesn't really have anything to do with the fact that they started in a small area. You just, you can start with room just like this, right? And then eventually, all the air will be in a small area. Yes, yes, yes, but if we're trying to understand why the world looks the way it is,
you see, the problem, and I think Boltzmann knew this, it's a more recurrent problem in recent years, is if the universe really were a closed box, and you were to ask, what's the most likely configuration to find a planet
with people on it, what's the most likely possibility, you would discover that the most likely possibility is to have uniform gas everywhere, except the smallest possible amount of gas necessary to make up a planet
having condensed into the planet. The chances that you would see two planets, the chances that you see one planet are very, very, very small. The chances that you see two planets are vastly more negligible than that. So, if you were asking the question, what should astronomers expect to see
in a world, given that there are astronomers, given that there are astronomers, a conditional probability, the conditional probability that there is a planet, and on those planet, on that planet, there are astronomers, and on that, they do astronomical observations, what's the most likely thing they will see? The most likely thing they will see is they will look out and see nothing,
or they may see some gas out there, but they will not see it condensed into another planet. By far, the most likely thing to see is one planet, if you, if you know that you have one, and not two, what's the probability that you'll see the universe filled with stars? Absolutely negligible, unless you know that in the fairly recent past,
in the fairly recent past that you started with some very exceptional and unusual starting point, and then the flow out from that starting point
is likely to have certain kinds of structure that a random fluctuation would not have. Anyway, that's called the problem of Boltzmann brains. It's called the problem of Boltzmann brains because people went a little bit excessive on it and said the most probable astronomer would be a single astronomer's head
disconnected from anything else, but it is a problem, it's a problem of using statistics to understand the world the way it is, and it's always, there's always a conditional question, a conditional question is always given that we're here, that etc., all the various things,
what's the probability that we see x? And the probability in a closed universe that we see x would be, x would be much higher to see only x and not y, y meaning some other
planets and things like that. So it's not a good theory to think we are just a result of a random fluctuation. If we were the result of a random fluctuation where things just assembled themselves sort of accidentally or what is apparently accidentally into a planet with people on it,
we would have no explanation of the coherence of history, why history looks like it had a coherent past, and why, yes, why there's a consistency to historical evidence
if the universe is materialized in a, by random fluctuation, not the universe but the planet, just materialized by random fluctuation with us sitting here today. Yeah. It seems like if you're starting out with a white slate so to speak, then you can talk about the probability of finding one planet like this, it's very small, but then the probability of
finding one planet is much bigger than finding two. Right, but you can talk about the conditional probability that given that there is one planet, what's the probability of finding another one, that seems like it would be the same. No, the same as just one finding one? No. Given that you know what exists, some kind of conditional probability,
what's the probability of the second one? Extremely small. But I mean, it seems like the same as when you don't know it exists, it's the same as the probability. No, if what we're relying on is random fluctuations, all right, and here we have a situation where randomly in this room
a collection of molecules randomly materialized and formed the Boltzmann's head, okay, formed Boltzmann's head. That is a very small probability. Wait, wait, wait, wait. Maybe I didn't, but let's just explain it anyway. The probability that we discover a
Boltzmann's head is going to be a very, very small number. Given Boltzmann's head has formed, Boltzmann looks around and he says, I wonder what the probability is that my wife is here also.
Very, very, very tiny. Much, much tinier than this, just Boltzmann. Okay, now that's not what you were asking me. Well, this is the way I'm thinking of it. Imagine talking about playing the lottery, and you've got two people who play the lottery the same number of times, and one of them has won once and the other has won zero times. Now, what are the two probabilities the next time those two play? They're equal. You see what I'm saying? It doesn't matter that the other guy won once. That's right, that's right, because they're equal. So the probability that there's a discovery of Boltzmann's wife in the room is very,
very tiny. It's equally tiny if Boltzmann happens to have been discovered in the room. Right, equally probable, equally improbable whether, that's right, but very improbable. So, in other words, Boltzmann ought to be very, very surprised that his wife is there.
He, first of all, is surprised that he's there. Well, maybe he doesn't know the theory very well. He doesn't know too much about the, he knows a little about the theory. And he discovers he's there. He says, oh, what a wonderful accident. How happy I am. I wonder if my wife is here. Nah, that's, that would be too much of a good thing.
He looks around. He finds her. He says, what is his conclusion? His conclusion is, you know, I think my world is probably not a world of random statistics. It's probably not a world of a closed volume of molecules which just randomly assembled me.
He couldn't say that before. He couldn't say that before. What he would say before is, look, I'm here. I know that if we wait long enough, I will be here. When will I be here? Is, is it a fantastic piece of luck that I happen to be here right now?
No. At some point in time, I'm going to be here. This happens to be the time I'm here. He says, he says, if that's the right theory, that I'm here just because of the, the, the gas in the room eventually assembled into me, and when else could I be here except when I'm here?
It's when I'm here. But I think the best prediction I can make is that if I look around, I will find the rest of the room pretty much in thermal equilibrium with no fancy structures in it. In particular, my wife won't be here. If he discovers his wife, he will say, that's extraordinarily unlikely, even much more unlikely, so my theory is probably wrong
that I'm the result of a random fluctuation. That's, yeah, okay. All right, this is good. You mentioned the word historic. Can you define it? It's the thing we couldn't otherwise explain: one textbook says George Washington chopped down a cherry tree,
and then you go and look at another textbook, it says the same thing. But if the world was just a fluctuation, it was just made by fluctuation, and one textbook happened to say George Washington chopped down the cherry tree. There'd be no reason to expect the other textbook to say the same thing. He chopped the tree. Why would it, why did they not?
No, no, he didn't chop the tree. The world just materialized accidentally in a configuration in which the textbook said he chopped the tree. Don't worry about it. This is not the right theory of nature. Yeah, doesn't this just say that we live in a, oh, we could live in a world that's closed system that's extremely long, time constant to reach equilibrium?
Yes, but it would still be true. Yeah, you could say that, but then you would never, nevertheless say if you wait long enough, there will be many, many, many replicas of you in the future, and almost all of them will not see a coherent, uh, history.
So if you make your best guess, you find yourself here today, and you ask how did I get here, the overwhelming majority of people who wake up in the morning and ask that question will be ones who came out of one of these random fluctuations.
Now, is this a serious concern? Should we worry about it? I can tell you cosmologists do worry about it, serious theoretical cosmologists, but I'm not going to try to sell you anything. It is something that is of concern: if we want to use statistics and ask what's the most probable thing we should see, given that we're here, we have to take into account all the ways we could have gotten here. Most of the ways we could have gotten here would be by random fluctuation, and history would not be coherent for them. Okay, that's the problem with Boltzmann brains.
Question? Is there anything that can be said about the fact that life seems to decrease entropy? No, of course it doesn't. Life does not decrease the total entropy. Yes, it decreases its own, but always at the cost of something else increasing its entropy. The second law does not say that some subsystem of the world can't decrease its entropy, but it will always be at the cost of some other subsystem increasing its entropy even more.
But it seems like it sticks and it goes for a very long time. It's not just something that comes together and goes away. It seems to keep maintaining itself. It seems kind of a little strange. It's the flow of energy from the sun. The Earth is not in equilibrium. It's in a stationary configuration. Stationary means it stays the same, but there's a flow.
If you have a system which has a flow moving through it, it can create interesting structures. For example, a flow of water through a pipe can create vortices that spin off and spin off. Those vortices have a structure, you know, little eddy currents and so forth. Eddy currents have a
structure. The water flowing through the pipe creates them, and you can imagine that little eddy currents could have enough structure to have some, uh, some interesting properties. But if you stop the flow by, you know, sealing off the ends of the pipe, then what happens? The eddy currents disappear and it just returns to a quiescent, dull, boring,
equilibrium. So what is the flow in the case of life that allows this kind of apparent violation of the principle? Oh, and certainly, of course, even in that flow situation, the total entropy of everything is increasing: there's a pump and a sink, with the water coming in one end and going out the other end, and it comes out warmer at the other end than it came in this end. So altogether, the second law is not being violated. The same thing is true on Earth.
The flow is the flow of energy from the sun. If we sealed up the Earth, didn't let sunlight in, didn't let sunlight out, everything would eventually come to thermal equilibrium. It would be dull, there would be no life, and we would just have featureless thermal equilibrium. Yeah, so life is a kind of eddy current, little vortices that appear in a moving fluid, the fluid being energy from the sun.
Okay, well, we spent an hour talking about interesting things. Now we can get back to some dull things. Magnets. When we talk about magnets, incidentally, in statistical mechanics, we're usually not talking about pieces of iron. We're usually talking about mathematical models of a certain kind of system that exhibits magnetism. All right, so first of all, what is a magnet? Whatever a magnet is, an ordinary magnet,
it's made up of lots of little magnets. The little magnets could even be as small as a single atom, or they could be little crystal grains, but whatever a magnet is, it's made up of little magnets. And typically at ordinary temperature, room temperature being rather high in this context, but certainly at very high temperature, a thousand degrees or whatever, those little magnets are randomly oriented in such a way that the sample doesn't have a net orientation. The orientation is random, and not just is the orientation of the whole thing random, but the relative orientations of the parts of it are random, and so there's no net magnetization. You don't see a macroscopic magnetic field from it. If you cool it down, and if the energy stored in pairs of these little magnets
is such that the magnets like to line up in the same direction, as you start to cool it down, you find out that lumps, groups of magnets, groups of little magnets, tend to be in alignment, but other little groups of magnets will also tend to be
in alignment, but in other directions, and you'll find sort of domains, domains which are magnetized, which means they tend to point in the same direction, but these domains are still fairly small, if you cool it down, now these, these are experimental facts, okay, and not,
not completely hard to understand. But as you cool it down more and more, the energy consideration takes over: things like to be in the same direction, where like means that the energy is lower if the magnets are parallel. If the energy is lower if the magnets are parallel,
then as you suck energy out of the system, more and more of them will want to come into alignment, and these domains will start to grow, and eventually you may or may not hit a point at finite temperature, not at zero temperature, you may or may not hit a point at which
all of a sudden these domains become infinitely big, so that the magnets tend to be somewhat lined up, everywhere in the same direction. That's called a ferromagnetic transition,
and it's a phase transition, basically the simplest kind of phase transition. Certainly at zero temperature, you expect them all to be lined up. Why is that? Because at zero temperature, the only state of importance in the Boltzmann distribution is the lowest energy state, and in the lowest energy state, all the microscopic magnets line up. Yeah? Question: if you have two, I'm thinking about macroscopic magnets,
wouldn't they prefer to anti-align? It depends on the details. In a piece of iron, they like to align. I know what you're thinking: the north pole wants to grab the south pole. It's just a little more complicated, and that's partly why there aren't that many magnetic materials. There's a tendency for them to want to anti-align, but there are also competing things going on. And do they have to have an external magnetic field when they cool, to line up? No, no, but which direction they line up in may be random,
right, and that in itself is called spontaneous symmetry breaking. We're going to be talking about simple magnetic systems and the tendency toward order as you cool them; order means parallelness in this case. Okay, so let me make a remark about what you just asked. They could all line up this way, or this way, or that way. Which way do they wind up lining up?
and that itself might be defined or determined by the tiniest little stray magnetic field, just one molecule, just one little elementary atom being in a magnetic field which tends to
line it up a little bit, may govern the whole thing about the way the whole system lines up, there's a symmetry, the symmetry is which way things point, if they wind up pointing in a direction that symmetry is broken, that's called breaking the symmetry, there is no more symmetry, or at least it looks like there's
no more symmetry, but it's spontaneous, there's no magnetic field pushing everything in that direction, it just had to pick a direction, it picked a direction, it may be because of a tiny tiny tiny little stray magnetic field, but uh, we're going to talk about it, these are the things we're going to talk about, and the point at which the symmetry is broken,
the point at which the magnets tend to line themselves up in some direction, that's a phase transition, and that phase transition is called the magnetic phase transition. All right, so first of all, don't think about literal magnets, because the model systems that are studied are often quite unrealistic as theories of ferromagnetic chunks of iron. What makes them interesting is, of course, that they resemble a lot of other things in nature, and that they're mathematically simple enough to study, yet interesting enough to exhibit features like phase transitions. Okay, so let's start with the very simplest magnet, and as I say, don't think of it as a real magnet. This kind of magnet either points up or it points down; it doesn't get to point in random directions. You could think of it as heads and tails if you like, but this very simple mathematical magnet either points up or it points down. Okay. It doesn't matter how they're laid
out on the blackboard, but let's lay them out in a line, some of them are up, some of them are down, and we want to make a statistical mechanics of this, and ask such questions of
what's the relative percentage of ups and downs, what's the energy of the magnet, and so forth. All right, so before we begin, if we're going to be talking about statistical mechanics in the Boltzmann distribution, we have to have an energy function, remember e to the minus beta times the energy, we have to know what the energy is,
So yeah, when you say them, what should we think of? You can think of these as particles, or as atoms in a crystal, for example. The atoms have electrical currents, or maybe they have spins; electrical currents make little electromagnets, and so each atom is a magnet with a north pole and a south pole. But for the simplest model that's ever studied, which we're going to begin with, the atoms point up or point down, and they can't point any other way. And again, the purpose of this is to be simple.
Okay, so there's a lot of them. How many of them? Capital N. And what is the energy? We're going to start with a very, very simple version. In the very simple version, there's no interaction between the magnets at all, but there is a magnetic field, either pointing up or pointing down; I'm not sure which way my notes actually correspond to. Each atom has a magnetic moment, called mu; that's just a little number attached to it which tells you how strongly it interacts with the magnetic field. It has to do with the strength of the magnet, basically. The magnetic field is either pointing up or down, and I can't remember which way I chose it, so let's not worry about it. But the energy of one of these magnets is different if it's up or
if it's down, in particular, if the magnet is up, I think we give it a plus energy, and if the magnet is down, we give it a minus energy, so let's invent a variable for each magnet,
let's give it a name, let's call it sigma, this is the sigma for the first atom, then there's a sigma for the second atom, blah blah blah blah blah, and sigma is either plus or minus one, it's just a label or a variable which is plus or minus one,
so if the first spin is up, that means sigma one is plus; if the second spin is down, it means sigma two is minus, and so forth. Okay, what is the energy of this system? If the spin is up,
then the energy is positive, and it's just equal to mu times h. What if the spin is down? What's the energy then? Minus mu times h. Okay, supposing there are little n spins up and little m spins down, what's the energy? The energy is equal to little n minus little m, times mu times h. Mu times h is the energy of one spin if it's up, and minus mu times h is the energy if it's down. Little n equals the number of ups, and little m is the number of downs. And what's little n plus little m? Big N,
so little n plus little m is equal to capital N. Okay, we're good to go now. We can write down the Boltzmann distribution and calculate anything we want using statistical mechanics, so let's do that. What are the dimensions, please, of h and e? The dimensions? I mean, is h a number or a magnetic field? For us it's a number: it's the strength of the magnetic field, an external magnetic field imposed on the magnet from outside. So for our purposes it's a number, and mu is also a number, and we might as well put mu and h together and just call the whole thing a number; that's often done, and sometimes it's called little h. But I thought I would expose the various pieces of it. Would you prefer I called big H times mu little h and never saw mu and big H again? We could do that; it doesn't matter. Okay, all right. Now, excuse me, what is E? Oh, sorry, E equals, this is the energy, and it equals, good, thank you,
energy equals that. Okay, now how many configurations are there with little n ups and little m downs, without asking which ones are which? We have capital N things and we want to group them into two groups, one group of little n and one group of little m. How many such arrangements are there? That's a combinatoric problem. All right, let's write it down. The number of configurations with this value of the energy, that is, the number of states for a given n minus m, is capital N factorial over little n factorial times little m factorial, remembering
that little n and little m add up to big N, that's the number of such configurations, let's take one of those configurations and ask what the Boltzmann weight is for that,
the Boltzmann weight means e to the minus beta times the energy. All right, in fact what we're going to be doing is working out the partition function. The partition function is the sum over all configurations; that means it's the sum over n
and m such that little n plus little m is equal to big N, I won't bother writing that, but keep that in mind, times e to the minus beta times the energy, which is mu h times n minus m, so we just take this thing and we add them all up,
now for each n minus m, there's going to be a certain number of configurations and that number of configurations is this combinatoric coefficient here, so we can write this,
Yeah, and are n and m two variables? Yeah, they are: the number of ups and the number of downs. Well, I'm just saying they're not two variables that are independent of one another; they have to add up to capital N, that's all. So you can't use both of them as independent indices. No, no, they're not; that's what I'm saying. You sum over n and m, making sure that n plus m, yeah, okay, we can write it another way. Well, m is equal to big N minus n, so it's just... Yeah, let's just leave it this way to keep the notation simple. Don't you need the combinatorial factor inside the sum? No, each individual configuration gives this, and the number of configurations with a given energy is this. All right, you'll understand when I write the formula.
All right, and so it's a sum over just little n of capital N factorial over little n factorial times, capital N minus little n, factorial; that is, capital N factorial over little n factorial little m factorial. All right, times e to the minus beta, and let's leave it this way for a minute. In fact, let's not leave it this way.
Let's write the following: e to the minus beta mu h, let's call that x, and let's call e to the plus beta mu h, I could call it one over x, but I'm going to call it y for a minute. Take these two numbers here: call e to the minus beta mu h x, and the other one y. Okay, so what's e to the minus beta mu h times n? That's x to the power n, do you see that? Can you see that? And what about the other factor here? That's y to the power m.
I'm just using x and y because maybe it'll stir some memories from high school. X plus y. X plus y, this is the binomial expansion. This is the binomial expansion, and this whole thing is just equal to x plus y to the capital
n, all right? That's, that's the binomial expansion, and so we've solved it. We've figured out what z is. Let's write it down. Z is just x, this is z is equal to x, which is e to the minus beta mu h, plus y, which
is e to the plus beta mu h, all raised to the capital n power. That's it. That's z.
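As a sanity check on that step, here is a minimal numerical sketch; the values of N and beta mu h are assumed purely for illustration. Summing the combinatoric count times x to the n times y to the m over all n does reproduce (x + y) to the N.

```python
import math

# Assumed illustrative values: beta*mu*h = 0.3, N = 10 spins.
beta_mu_h = 0.3
N = 10
x = math.exp(-beta_mu_h)   # Boltzmann weight of one up spin
y = math.exp(+beta_mu_h)   # Boltzmann weight of one down spin

# Sum over n = number of ups; m = N - n is the number of downs.
Z_sum = sum(math.comb(N, n) * x**n * y**(N - n) for n in range(N + 1))

# Binomial theorem: the sum collapses to (x + y)^N.
Z_closed = (x + y)**N

print(Z_sum, Z_closed)  # the two agree to floating-point precision
```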
That was easy. This function, does that function have a name? Well, let's multiply it and divide it by two. How about this function? Does it have a name? It's the hyperbolic cosine, right? So let's, let's call it that.
We might as well. So the answer then is two to the capital N. Now, two to the N is not going to be interesting. It's a number, and a multiplicative factor in the partition function usually doesn't do anything, but we'll leave it there. And then hyperbolic cosine of mu h, to the power capital N.
That's the partition function. Oh, sorry, beta mu h. My mistake. Beta is awfully important. It's the inverse temperature. Without it, we can't differentiate with respect to it.
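That closed form can also be checked against a brute-force sum over every configuration, which is feasible for a small assumed N:

```python
import itertools
import math

# Assumed illustrative values; N is kept small so all 2^N
# configurations can be enumerated directly.
beta_mu_h = 0.7
N = 8

# Each configuration is a tuple of sigmas, each +1 (up) or -1 (down),
# with energy (n - m)*mu*h = mu*h * sum(sigmas).
Z_brute = 0.0
for sigmas in itertools.product((+1, -1), repeat=N):
    energy_over_mu_h = sum(sigmas)          # this is (n - m)
    Z_brute += math.exp(-beta_mu_h * energy_over_mu_h)

# Closed form from the lecture: Z = [2 cosh(beta mu h)]^N.
Z_closed = (2.0 * math.cosh(beta_mu_h))**N

print(Z_brute, Z_closed)
```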
All right. So that's our partition function. Now, supposing we're interested in the question, what is the relative percentage of upspins and downspins?
What's the relative percentage? That quantity has a name. It's called the magnetization. The magnetization is zero if there are as many upspins as downspins. The magnetization is plus if there are more upspins than downspins, and magnetization
is minus in the opposite situation. So let's define, first of all, let's define the magnetization. N minus m is the difference between upspins and downspins. It's sort of the magnetization, but it's usual to divide it by capital N so that
it becomes the magnetization per magnet, if you know what I mean. So let's call the magnetization m is equal to n minus m times mu h divided by capital
N. That's the definition. The magnetization, and what is it? It's the bias for each particle, whether it's up or down. If the magnetization is positive, it's sort of the average upness or downness of each.
Oh, I take that back. The magnetization is just this; it doesn't have the mu h there. That's the definition. The magnetization is clearly related to the energy, so let's just write a few equations here, and then we'll be able to use the partition function. The energy is equal to capital N times the magnetization times mu h. All I've done here is say that n minus m is the magnetization times capital N, that's this, times mu h. All right, so we'll use this; we'll come back to it. Too many definitions, but magnetization is an important one. It is, roughly speaking, the probability of being up minus the probability of being down for a given spin. OK, how can we calculate the magnetization?
Well, one easy way is to calculate the energy of the system. If we know the energy of the system, and we know the number of particles, and we know mu, and we know h, we can calculate the magnetization.
So the first thing we will calculate using the partition function is the average energy. From that, we can read off the magnetization. But the magnetization is for a particular configuration.
Are you talking about some average magnets? No, we're talking about the average magnetization. We're talking about the average. Absolutely right. Absolutely right. This is a particular configuration, and we should say that the magnetization is the average of that. You're absolutely right. It's the average of it. It's the average over the statistical distribution
of the Boltzmann distribution. Right, and of course, this is also the average energy. Yeah, right. OK, hell, what do I do? There it is. All right, so what do we do to calculate the average energy?
We calculate. Is it obvious that that's true? Which is true? That the average energy equals that expression with the average magnetization? This is the energy. Right, but that's a particular case of it, right? No, that's the energy, given n and given m. This is the energy for a given configuration. The average energy is the average value of n minus m, so the probability is taken into account. The probability is taken into account? Yeah, yeah. And all I'm asking is, is it obvious that that average energy is equal to that equation
involving the average magnetization? It's not, I mean, it's probably true, but it's not obvious to me. Call that E, say E average. Here's an equation that holds configuration by configuration. All right, it is obvious if you think about it. For every configuration, the energy is proportional to n minus m. If you average both sides, the average energy will be proportional to the average of n minus m. So we can put averages around all of these: if something is equal to something else configuration by configuration, then it will also be equal in the average. So the average magnetization also involves the probabilities?
Yeah, absolutely. Everything in statistical mechanics is an average. If you write that last equation with all the brackets you need, you only need brackets on the right-hand side around M. Around capital N? No, no, the bottom equation. Around E, and then around big M. Big M, not big N; big N is just a number. That's right. All right, so let's write it the way you want it. The average energy is equal to N mu H,
all of which are fixed numbers, times the average magnetization, let's call it. Now, strictly speaking, with the usual definitions, you don't have to put an average here, because the magnetization is defined as an average. Average over what? Over the probability distribution, the same exact thing we did with the ideal gas: we have a probability distribution and we calculate averages from it.
How do you get them all along the north-south line without biasing up or down? Sorry, what? You get them not to be east-west, but... No, no, this is a model in which, by definition, these things can only point up or down. This is a mathematical model. And they're not biased up or down? They may be biased up versus down, but there's no such thing as east and west. Well, that partition function is totally unbiased. Oh, it's very biased. The energy prefers the molecules to be down. Remember, the energy is plus if they're up, minus if they're down. Systems like to have lower energy, meaning the Boltzmann distribution favors lower energy. This is most definitely biased by the presence of the magnetic field. So there's no symmetry here. This is a problem that has no symmetry. It's biased for the atoms to point down, and it costs energy to tip them up. Which way are they likely to be? Okay, let's see if we can make some guesses.
Which way will they be at zero temperature? Down. Right, so at zero temperature, what do we expect the magnetization to be? We expect everybody to be down, and that means the magnetization will be minus one. What do we expect at infinite temperature? Well, I mean, they're all 50-50. 50-50. At infinite temperature, everything is just maximally random: all states are equally probable. And so, at infinite temperature, we expect the magnetization to be zero. So it goes from minus one at zero temperature to zero at infinite temperature. This is what we expect, and it's correct.
And at no point will the average magnetization be positive, because of the bias toward down. It's just that infinite temperature defeats the bias: infinite temperature is so random that a little bit of magnetic energy is unimportant, and so everything will be random. But at no point will the average magnetization be up; it won't be positive. Okay, I hope I'm right. What would you have to do to make it point up? Switch the magnetic field the other way. Yeah.
Make the magnetic field negative. Right, right, right. Switch the magnetic field. Okay, so where are we? Instead of calculating the magnetization, I'm gonna calculate the average energy. We know how to calculate the average energy from a partition function. Remember, the average energy,
and I'm just gonna write E, no averages, is equal to minus the derivative of the logarithm of Z with respect to beta. So, there's a little bit of algebra to do here. We might as well do it. I know it tends to put people asleep to watch me do algebra. The logarithm of Z has a constant from here.
That's gonna go away when we differentiate, so let's not even bother writing it. Is equal to N log of the hyperbolic cosine, this is a terrible function, of beta times mu H.
N times the logarithm. Notice, first of all, that it's proportional to N. All right, that's a good thing, because typically energies, when we differentiate, will be proportional to N, and that's natural.
Let's differentiate this with respect to Z. Oh, sorry, with respect to beta: the derivative of log Z with respect to beta. First of all, we'll have an N. Now, the derivative of the logarithm of a thing is one over that thing, times the derivative of the thing,
so that will give us, in the denominator, hyperbolic cosine of beta mu H. And then in the numerator, we have to differentiate cosh of beta mu H with respect to beta. So what happens when you differentiate cosh? What is the derivative of cosh? Sinh. So that's sinh beta mu H. And then you have to differentiate the argument with respect to beta, so that gives you another mu H outside.
Now, is that the energy? Not quite, minus sign. This is the energy, minus. Okay, so we have the energy,
and we want the magnetization. So what we want to do is divide by mu H and divide by N. So the magnetization is equal to minus, and as I said in the first place, it comes out minus. We're dividing by N and we're dividing by mu H, and so it's just exactly sinh beta mu H over cosh beta mu H. That function also has a name: it's the tanh of beta mu H.
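The result just derived can be checked numerically: differentiate ln Z by a finite difference, divide by N mu H, and compare with minus tanh. The parameter values below are assumed purely for illustration.

```python
import math

mu_h = 1.0   # assumed magnetic moment times field strength
N = 100      # assumed number of spins
beta = 0.5   # assumed inverse temperature

def log_Z(b):
    # ln Z = N ln 2 + N ln cosh(b * mu * h), from the partition function above
    return N * math.log(2.0) + N * math.log(math.cosh(b * mu_h))

# Average energy: E = -d(ln Z)/d(beta), by central difference.
eps = 1e-6
E = -(log_Z(beta + eps) - log_Z(beta - eps)) / (2 * eps)

# Magnetization: divide the energy by N and by mu*h.
M_numeric = E / (N * mu_h)
M_closed = -math.tanh(beta * mu_h)

print(M_numeric, M_closed)  # both negative: the field biases spins down
```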
With a minus sign, yes, with a minus sign. Okay, so now all we have to do to understand this system is understand what the tanh function looks like. Incidentally, beta is one over the temperature. Okay, so we just wanna plot this function as a function of the temperature.
Mu times H, that's just a number. It's not so interesting. We could absorb it into beta here. We could plot the thing as a function of beta mu H. They come in together. Okay, so the question is what does a tanh function look like?
You can work out what the tanh function looks like by yourself, but I will show you. First of all, sinh and cosh, for very large values of the argument, become equal to each other; they're basically both exponentials of beta mu H. Let's write them down. Cosh of X is equal to e to the X plus e to the minus X, over two. Sinh of X equals e to the X minus e to the minus X, over two. When X gets large, what happens to e to the minus X? It just goes away. So for large X, they both are equal to e to the X over two, and their ratio is one. So very far away, sinh over cosh, I'm not including the minus sign now, just sinh over cosh, the tanh function, goes to one.
Incidentally, for negative X, it goes to minus one. But let's not worry about that. X is gonna be positive in this problem. Now what does it do near the origin? Near the origin, cosh is equal to one.
At X equals zero, e to the X is one and e to the minus X is one; one plus one is two, divided by two is one. What about sinh? At X equals zero it's zero, but what about the correction if we expand e to the X as one plus X? Then e to the minus X is one minus X, so sinh of X is, one plus X, minus, one minus X, over two, and the answer is just X. The first derivative here is one.
In other words, it starts out just looking like X and it very quickly just bends over. It's a very boring function. It starts out linear and then it gets tired quickly and it just flattens out. That's the tanh function. And this horizontal axis is beta times mu H.
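Those two limits of the tanh function, linear near the origin and flat at one far away, are easy to confirm numerically:

```python
import math

# Small argument: tanh(x) ~ x, with an error of order x^3.
for x in (1e-3, 1e-2):
    assert abs(math.tanh(x) - x) < x**3

# Large argument: tanh(x) -> 1 as the e^(-x) pieces die away.
assert abs(math.tanh(10.0) - 1.0) < 1e-8

# Negative argument: tanh is odd, so it goes to -1 the other way.
assert math.tanh(-10.0) < -0.999999

print("tanh behaves as described")
```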
Now keeping in mind that beta is one over the temperature, what is the magnetization when the temperature is small? That's when beta is large. When beta is large, we're way out here
and the tanh function is one. So the magnetization is minus one. Zero temperature, all spins align themselves down. What about infinite temperature? Infinite temperature is beta equals zero.
Beta equals zero: the magnetization is zero, as expected. And this just fills in the exact details for this problem of how the magnetization goes from minus one at low temperatures to zero at high temperatures. You asked me, how can you get the magnetization to go in the opposite direction? Well, the answer is to allow H to go negative. If H goes negative, then it looks like that.
So if the magnetic field flips sign, everything just reverses. May I ask a question, please? This looks continuous, but you said there is a temperature where... No, no, not in this system. This system does not have a phase transition; it's too simple. The first interesting system that has a phase transition is the two-dimensional Ising model. But first we're gonna do the one-dimensional Ising model. Ising was not a very good student. He was a student of Lenz, L-E-N-Z, who was famous for a number of things. One of the things he was famous for, or rather was not famous for, was inventing the Ising model. He gave his student one problem: to determine whether there was a phase transition in the one-dimensional Ising model, and his student got the wrong answer. He said there was a phase transition; there was not. That's all, as far as I know, that Ising ever did, and yet Ising is the most famous name in all of statistical mechanics. So there you go. You win some, you lose some. Okay, what is the Ising model?
Now, the interesting thing about the Ising model is it is symmetric between up and down. So, therefore, if there is any magnetization, it's because somehow the system has spontaneously broken the symmetry.
In the one-dimensional Ising model, that does not happen. In the two-dimensional Ising model, it does happen. So, I will define all of these Ising models for you right now. They work the following way. The energy is not stored particle by particle.
There's no external field now. All right, no external field. So if the little magnets didn't interact with each other, there would be no energy, and if there's no energy, all configurations are equally likely. Okay, in this case, the magnetic field that each spin sees is due to its neighbors. If both neighbors are up, it feels a magnetic field up. If both neighbors are down, it feels a magnetic field down. And if one is up and one is down, it feels no magnetic field. So what we're saying is that the energy
is associated with pairs, with pairs of neighboring spins. And if the pair is in the same direction, let's take that to be the lower energy. We have to make a choice now: do we want the interactions to favor alignment or anti-alignment?
That's anti-alignment, this is alignment. This is alignment and this is also alignment. The energy is gonna be equal for this configuration as it is for that configuration, and unequal to this configuration or that configuration.
You get it? All right, good, great. So, we come back to these variables sigma. And we say if sigma is aligned, if the two neighboring sigmas are aligned, just con, just focus on two spins. If they're aligned, then the energy is lower.
If they're unaligned, the energy is larger. So let's take the energy to be some number which is usually called j. I don't know what j stands for; it's just a number.
It has an energy scale. It's an energy scale for the problem. J times sigma of particle one times sigma of particle two. Only two particles for a moment. These are two neighboring particles on a lattice.
Now, later on, we'll allow the lattice. Now, the lattice is just a line. Later on, we can have the lattice be a two-dimensional lattice or a three-dimensional lattice. One and two are neighboring sites on the lattice. And this is the energy of the one-two pair. Now, this energy is gonna be lower if they're anti-aligned.
Because if they're anti-aligned, sigma one times sigma two is negative. I want the energy to be lower if they're aligned. All right, so I'm gonna put a minus sign here. With this energy, the energy is low if the spins are aligned with each other.
And it's higher if they're anti-aligned. Now, supposing we have a line of them, and we can write that the energy is equal to a sum minus j of sigma of n times sigma of n plus one.
Does everybody understand what that means? Product of the spin, spin, I call it spin, product of the magnetic moment at each site times its neighboring site.
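The energy just written down can be sketched in a few lines; the coupling J = 1 and the spin lists below are assumed purely for illustration.

```python
# A minimal sketch of the one-dimensional Ising energy.
J = 1.0

def ising_energy(sigmas):
    # E = -J * sum over neighboring pairs, each pair counted once
    return -J * sum(s1 * s2 for s1, s2 in zip(sigmas, sigmas[1:]))

aligned      = [+1, +1, +1, +1, +1]   # all parallel: lowest energy
anti_aligned = [+1, -1, +1, -1, +1]   # alternating: highest energy

print(ising_energy(aligned))        # -J times 4: four bonds, all satisfied
print(ising_energy(anti_aligned))   # +J times 4: four bonds, all frustrated
```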
Each one, each pair, each neighboring pair counted once. Okay, so this is our expression for the energy. Now, let's think about it for a moment. What do you expect to happen at infinite temperature?
The general rule is that infinite temperature is just random chaos: everything is equally likely. So in particular, zero magnetization, and we'll worry about the energy in a moment. Every product will be zero too, on the average, because plus one and minus one are equally likely. That's right. Why does that bother you? Well, don't forget: the energy, if they're all parallel, is negative. You're starting with the ground state having negative energy, so the zero of energy, so to speak, sits well above the ground state; having zero energy is effectively having a lot of energy relative to the ground state. Why are the products on average zero? I thought sigma was either plus one or minus one. It is. But on the average, if the neighbors are randomly distributed,
Yeah, the average. If you have random chaos, a pair is as likely to be found parallel as anti-parallel, so the average energy will be zero, which is a lot of energy relative to the ground state. But what about zero temperature?
What would you guess for zero temperature? They'll be aligned, right? They want to be aligned. But which way are they going to be? Everybody aligned this way, or everybody aligned that way? You can't tell offhand. There are two ground states; ground states mean states of minimum energy.
And they will both come in with equal probability. They'll both come in with equal probability. But now let me add one more thing. Let me suppose that there's a magnetic field, external magnetic field, but it's only acting on one particle.
One out of ten to the 23rd has a little stray magnetic field. And let's say that magnetic field is along one axis. Then what is the ground state? The ground state has a definite orientation.
Even if that magnetic field is very small, the ground state still has a definite orientation. And at zero temperature, at strictly zero temperature, the Boltzmann distribution always favors infinitely strongly the lowest energy state. So that means that even the tiniest little magnetic field,
stray magnetic field, will, the Boltzmann distribution will favor all of the spins pointing along one axis. If you were to apply that tiny magnetic field, let the system come to equilibrium at zero temperature and then remove the magnetic field.
The system will remember it. Everybody's holding everybody else in place. And the possibility of them all simultaneously jumping to the opposite state is remote if there are enough of them. So that's called spontaneous symmetry breaking. That is what spontaneous symmetry breaking is.
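How a tiny field tips the balance can be made quantitative with a sketch. A field h coupled to a single spin splits the two ground states by energy 2h, so the ratio of Boltzmann weights between the two orientations is e^(2h/T), which blows up as T goes to zero. The values of h and T below are illustrative, not from the lecture:

```python
import math

# A field h acting on one spin splits the two otherwise-degenerate ground
# states by energy 2h; the Boltzmann weight ratio P(up)/P(down) = e^(2h/T).
h = 1e-3                       # tiny stray field on a single spin (illustrative)
for T in [1.0, 0.01, 1e-4]:
    ratio = math.exp(2 * h / T)
    print(T, ratio)            # ratio -> infinity as T -> 0
```

At T = 1, the tiny field is irrelevant (ratio barely above 1); at T = 10⁻⁴ it already favors one orientation by a factor of hundreds of millions. At strictly zero temperature the preference is absolute.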
In this case, it's very simple. And this example has a symmetry. We can actually say quantitatively or mathematically what that symmetry is. Symmetry is usually represented by a mathematical operation on the degrees of freedom. What mathematical operation would you do on sigma to change from up to down?
Multiply it by minus one, right? Okay, let's go back to the earlier case over here, where the energy, in this case we could say the energy was proportional to just sigma itself.
Not sigma times a neighboring sigma, but just sigma by itself. Does that have a symmetry? No, the energy itself changes sign when you change sigma, and that's not a symmetry. Symmetries are actions that you can do that don't change the energy. Okay?
What about this system? Supposing you change the sign of one spin, sigma one and not sigma two, is that a symmetry? No, the energy changes. But what if you change them both? If you change them both, then the energy doesn't change. So going from two up to two down, that's a symmetry.
Now we have this whole vast array of them. And what if we change all of the sigmas simultaneously? Write formally the equation sigma of i for all i goes to minus sigma of i.
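This global flip can be checked numerically: flipping every spin leaves the nearest-neighbor energy unchanged for any state, while flipping a single spin generally does not. A minimal sketch, with an illustrative chain length and J = 1:

```python
import random

# Nearest-neighbor Ising energy for a 1D chain with free ends (J = 1, illustrative).
def energy(spins, J=1.0):
    return -J * sum(spins[i] * spins[i + 1] for i in range(len(spins) - 1))

random.seed(1)
for _ in range(100):
    s = [random.choice([-1, 1]) for _ in range(12)]
    flipped = [-x for x in s]                 # the global flip: sigma_i -> -sigma_i
    assert energy(flipped) == energy(s)       # holds for every sampled state

print("global flip leaves the energy unchanged for every sampled state")
```

Each product sigma_i times sigma_(i+1) picks up two minus signs under the global flip, so every term in the energy, and hence the energy itself, is unchanged.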
We replace every sigma by minus its value. Then the energy doesn't change, and that's what it means to have a symmetry: an operation that you can do on the coordinates of a system that doesn't change the energy, no matter what the state is. For every state, there is another corresponding state which has the same energy, but in which the spins are all reoriented, opposite to what you started with. And that ensures
that in some sense there's no bias toward up or down. If the system is going to flop itself simultaneously into all up at zero temperature, that means it could equally have flopped itself into all down. There's no way to predict in advance, unless you know that tiny stray magnetic field, which way it's going to go, but it's going to go one way or the other, because the Boltzmann distribution says you've got to be in the ground state at zero temperature. And as I said, the tiny stray magnetic field will determine which one it is, but it will be one of them. All right, so clearly the next thing we want to do is to solve the one-dimensional Ising model. What do we want to do? We want to calculate the partition function for the system.
All right, so we'll do that next time. I think we'll quit for tonight. We'll work it out, and we will see that it does not have a phase transition at finite temperature. Nothing funny happens at finite temperature, contrary to what Ising thought.
It took a few more years for a couple of physicists named Kramers and Wannier to prove that if it's a two-dimensional lattice, there is a phase transition, and that's a beautiful story and we'll try to do it.
For more, please visit us at stanford.edu.