Statistical Mechanics Lecture 4
Formal Metadata
Part Number: 4
Number of Parts: 10
License: CC Attribution 3.0 Germany: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
DOI: 10.5446/14939
Transcript: English (auto-generated)
00:05
Stanford University. Tonight we want to get to the real heart of statistical mechanics, the Boltzmann distribution, and then I was hoping to work out an example, the ideal gas. So let's just very quickly review.
00:34
We worked out last time and the time before a basic principle or a set of rules
00:43
for calculating the probabilities P sub i. I won't repeat them. I was going to repeat them, but we're 15 minutes late. What did I call them? P of i or P sub i? I can't remember.
01:03
The probability of the i-th state in any one of the large number of replicated identical systems, just to remind you very quickly, there were things called occupation numbers, namely the number of these subsystems here
01:20
in the i-th state. And if you divide that by total n, the total number of them, that defines the probability that any one of them is in the i-th state. And that gives us a probability distribution over the states of the system. From that, we can compute an entropy,
01:42
which is equal to minus summation over i of P sub i logarithm of P sub i. That was our definition of entropy that we discussed a while ago.
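This definition of entropy can be sketched in a few lines of Python (the function name and the example distributions are my own, not from the lecture):

```python
import math

def entropy(p):
    """Gibbs entropy S = -sum_i p_i log p_i (natural log, in nats)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# A uniform distribution over 4 states maximizes S: S = log 4.
print(entropy([0.25, 0.25, 0.25, 0.25]))
# A sharply peaked distribution has much lower entropy.
print(entropy([0.97, 0.01, 0.01, 0.01]))
```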
02:01
And in fact, what we found, it came from just that complicated binomial or multinomial distribution, the capital N factorial over little n factorial, product of little n factorials, that if we wrote that out, writing that little n sub i over big N is P sub i,
02:22
we found out that what we wanted to do to find out what set of occupation numbers was the most likely set of occupation numbers turned out to be to maximize the entropy.
02:41
Maximize the entropy, but subject to a constraint. Incidentally, let's think of this entropy here as a function. What is it a function of? It's a function of all of the P sub i's. Think of it as a function. But it's a function of all the P sub i's.
03:01
Fortunately, it's a rather simple function. It's a function which is a sum of terms, one term for each P sub i. So it's a simple thing, relatively simple, which is just a sum, over all the variables in the problem, the probabilities P sub i, of a particularly simple function
03:20
of P sub i. Well, it's not that simple. It has a logarithm in it, but not that bad either. Okay, but it was important to remember that there are constraints. Constraints. Two constraints. The first constraint, you want to maximize S,
03:44
subject to two constraints. Constraint one is that the summation of P sub i is equal to one. The next constraint is we started with a certain total amount of energy.
04:02
Let's call it capital E times N, capital N. Why times capital N? Because I'm imagining there's a certain amount of energy on the average in each one of these boxes. And if I double the number of boxes, I will want to double the amount of energy that I have in the whole system. So we assume the total energy is proportional to N,
04:24
and the average energy in each subsystem then, constraint two, is summation P sub i, E sub i, the probability that a system is in the i-th state times the energy of the i-th state.
04:42
And what is that equal to? It's equal to the energy per box, the average energy per box. So we'll just call that E. That's fixed. It's fixed. We've determined that once and for all, by saying there was a total amount of energy
05:01
in the set of boxes, that can't vary, and so it's necessary that the average energy in each box is fixed equal to E, where E is the total energy divided by the number of boxes.
05:21
Okay, so these are the two constraints. Let me just write minus the entropy, and write that minus the entropy is plus summation over i of P sub i log P sub i, just for a particular linguistic reason. I want to talk about minimizing something
05:40
rather than maximizing it. It's just a habit, it's a linguistic habit for me to talk about minimizing something. So I will talk about minimizing the function minus S, subject to these constraints. The constraints also involve the same set of variables.
06:03
So what are the rules for finding the minimum of a function given some constraints? And there, we talked about it last time, the method is the method of Lagrange multipliers. Now it's a highly formal, formalized,
06:23
mathematical construction, but it's extremely powerful. Anybody who does, first of all, anybody who does statistical mechanics will use it over and over and over, but in all sorts of contexts where you want to find the minimum of something where you know some other things, this is the way to do it.
06:42
So what you do is you take your function, which in this case, let's call this F of P sub i, F of all the Ps. I'm calling it F because it's just a function that we want to minimize.
07:02
We want to minimize F subject to these constraints. So let me remind you what you do. You take F and you add to it the constraint equation.
07:21
By the constraint equation, let me write the constraint equation in this way. The left-hand side of the constraint equation, the thing which is equal to zero, you multiply each constraint by a Lagrange multiplier.
07:41
That's just a number. Later on, we will figure out what the numbers are. Let's call the first Lagrange multiplier alpha. I called the general Lagrange multiplier last time lambda, but now I'm going to have two Lagrange multipliers, one for one constraint,
08:02
the other for the other constraint, and so I'm going to call it alpha and beta. So the lambdas in this problem are alpha and beta, and I want to distinguish them because they're a little different. They play a little different role. So here's the first constraint equation. Alpha times summation over i p sub i minus one,
08:26
and then the second Lagrange multiplier is called beta, and that's summation e sub i p sub i minus the constant e.
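Collecting the pieces just written on the board, the function to be minimized is (same content, just gathered into one line):

```latex
F'(p_1, p_2, \ldots) \;=\; \sum_i p_i \ln p_i
\;+\; \alpha \Big( \sum_i p_i - 1 \Big)
\;+\; \beta \Big( \sum_i p_i E_i - E \Big)
```

Minimizing F' over the p sub i's and then imposing the two constraints is what fixes alpha and beta.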
08:42
Here e is a fixed constant that's been determined once and for all by the energy supply for the whole collection of identical systems. So e is also a constant. And here we have it. Now, what is the rule?
09:02
The rule is, and this is called now f prime. I'm using the same notation I did last time where you take the thing you want to measure, sorry, that you want to minimize, you add Lagrange multipliers to the constraints, and then you minimize f prime.
09:23
That will leave you with unknown values of the alphas and betas, but then you figure them out by imposing the constraints. We're also considering that f prime is a function of alpha and beta? Yes, it is a function of alpha and beta, but for the moment I want to focus on the fact, yeah, it is a function of alpha and beta.
09:42
Right, it is. But I want to focus on the fact mostly that it's a function of p. Okay, now we're very lucky in this case. Each term here, oh, well, first of all, what we're going to do to minimize it is differentiate the f's with respect to the variables
10:02
and set it equal to zero. That's the way you find a minimum. That means we're going to take this f of p and differentiate it with respect to the p's. When I differentiate it with respect to the p's, the one here won't give us anything, right? It's just a constant. They weren't differentiated with respect to p.
10:22
It contains alpha, but we're not differentiating with respect to alpha. So for practical purposes, I'm going to throw away this one here. Likewise, when I differentiate this thing with respect to p, the e won't contribute. So let's just get rid of it and forget it.
10:40
It's not going to be important because the equation that we're going to write, namely that the derivative of this thing with respect to each of the p's is equal to zero, it's not sensitive to that. So there we are. Here's what we have to minimize. And what is f? F is also a sum,
11:00
and it's a sum p sub i log p sub i. This is very fortunate that each term is a sum, each term of which contains only one of the p's, each term. So supposing I want to differentiate this with respect to some particular p,
11:25
just to focus attention on what we're doing, let's pick p7. Okay, p7. We want to differentiate this with respect to p7 and set it equal to zero. All right, so here's p7. Here's one term, which is p7 log p7.
11:43
What's the derivative of that with respect to p7? Okay. We want to differentiate this with respect, okay, let's, yeah, what we're writing now is we're writing derivative of f prime with respect to p7.
12:02
Why seven? No reason. Just, I, uh, one is too special. Seven is, uh, a randomly picked number. Hmm? Yeah, the lucky number. All right, so f prime, the derivative of f prime with respect to p7
12:21
is the derivative with respect to p7 of p7 log p7. And that is, okay, so we have p log p. There are two terms in the derivative. In the first term, we differentiate p, keeping the log fixed.
12:40
So that gives us just log p7. Differentiating p gives us one. What's the other term where you differentiate log? What's the derivative of log? One over p times p? One. So this is pretty easy. That's the first term.
13:01
What's the derivative of this thing here with respect to p7? Really easy, even easier than this one. Plus alpha. Same for every one of the p's, incidentally. And how about the derivative of this one?
13:22
Plus beta e7. That's pretty simple. We have the same thing for p8 and p4 and p9 and p, uh, p152. So, let's just write p i here.
13:41
Log p sub i, plus one, plus alpha, plus beta times E sub i. Now, what is this thing supposed to be equal to? Zero. Well, this is kind of amazing.
14:00
At this point, you know, statistical mechanics is full of things like this. Sort of magic. Magical identities which all of a sudden, as you're playing with them, you look at it and say, wow, I just discovered something important. I just discovered something unexpected. Unexpected how? Look, this is an equation for p i.
14:20
It's gonna tell us what p i is. Let's write it as log p i equals minus the quantity one plus alpha. That's a number now. We haven't figured out what alpha is yet, but we'll figure it out soon enough. Minus beta e i.
14:40
And what does that tell us that p i is? We exponentiate, right? So, it's first of all e to the minus the quantity one plus alpha. Well, that's a number. That's a number. I'm gonna call it by its standard name.
15:00
Its standard name is just one over z. e to the minus, or e to the one plus alpha, is called in statistical mechanics z. It has a name. For the moment, I'm not going to write the name, but this is just, I didn't know what alpha was,
15:22
and I don't know what z is. So, I haven't lost or gained any information by replacing e to the minus the quantity one plus alpha and calling it one over z. Now, I'm exponentiating this thing. The first factor is e to the minus the quantity one plus alpha. That's my one over z.
15:42
But then the second factor is e to the minus beta e sub i. This piece here doesn't distinguish the different states. It's just a number. It is important. It's not that it's not important,
16:00
but the relative probabilities of different states are all contained in here. And what have we found? We found that the relative probabilities of the different states are simply exponentials. Exponentials of e to the minus beta e sub i. Beta must mean something.
16:22
Beta must have some physical significance. It's the thing, it's the only variable in here that tells us what kind of, well, that, ah. It's the only variable in here that has any chance of telling us what the average energy is.
16:41
Notice what this looks like. Here's the probability. Here's the energy going horizontally. And it's just an exponential function which falls off. The bigger beta is, the faster it falls off.
17:02
The smaller beta is, the slower it falls off. What are these different curves going to be parametrized by? Well, they're parametrized by beta, but they also can be parametrized by the average energy.
17:21
It's quite clear that the further out this curve is like this, the larger the average energy. The bigger beta, the smaller the average energy. The smaller beta, the bigger the average energy. So beta has something to do with energy.
17:43
It's the thing that you tune to change the average energy. We're going to be more specific about that. It's the thing that you tune to change the average energy. But for now, it's just called beta, and its significance is a certain Lagrange multiplier.
18:01
What is its real significance? Inverse temperature. Beta is the inverse temperature. But how the hell do we prove that? We already have a definition of temperature. I'll remind you when we get to it what the definition of temperature is. But here's our probabilities in terms of two parameters.
18:24
One is z and one is beta. The two Lagrange multipliers. All right, now we have to come back. How do we fix the Lagrange multipliers? We fix the Lagrange multipliers by making sure the constraints are satisfied. So the first constraint is that the sum of all the probabilities has to add up to one.
18:45
That's the equation. Let's write it out. One over z times the summation over i of e to the minus beta e sub i is equal to one. Incidentally, where do the e sub i's come from? What are they?
19:01
Well, they're, yeah, you throw up your hands and you say some laws of physics that had to do with whatever is inside the box tells you what the possible energy levels are. So those from our point of view now are simply numbers that are known. Somebody has been smart enough to compute them and tells us what the energy levels are.
19:23
What this tells us is what the average, what the probability to be in the ith level is as a function of beta. And beta is a thing that has to do somehow with the average energy because as we vary beta, we shift the curve. Excuse me, question?
19:41
Yeah. I'm trying to understand your graph over there. On the vertical axis, that's a particular p i, is that correct? No, that's, yeah, no, no, no, no. The vertical axis? This is e i, yeah. So there's a whole bunch of i's here. This is i equals one, this is i equals two, right?
20:02
Okay. Right. But you can also think of it as a function of the energy of the level by just saying every level has a certain energy, right? So the bigger beta is, the more rapidly this function falls to zero, if beta is huge, then it's dominated by the lowest energy levels, right?
20:23
Just make the energy even a little bit bigger and you'll lose a whole bunch of probability. On the other hand, if beta is near zero, then this curve is practically flat and it's going to contain important contributions from large energy.
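The two limits just described can be checked numerically (a sketch; the four evenly spaced energy levels are an invented example, not from the lecture):

```python
import math

def boltzmann(energies, beta):
    """p_i = exp(-beta * E_i) / Z, with Z = sum_j exp(-beta * E_j)."""
    weights = [math.exp(-beta * e) for e in energies]
    z = sum(weights)
    return [w / z for w in weights]

levels = [0.0, 1.0, 2.0, 3.0]
# Huge beta: the distribution is dominated by the lowest energy level.
print(boltzmann(levels, beta=10.0))
# Beta near zero: the curve is practically flat over all levels.
print(boltzmann(levels, beta=0.01))
```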
20:41
So by tuning beta, you tune the energy or you tune the average energy. We'll come back to that in a moment. Okay. From this, from here now, we see what z is. Multiply by z and now you have a formula for z for the unknown Lagrange multiplier.
21:04
And incidentally, in general, Lagrange multipliers depend on, well, they are numbers but they may depend on each other. But here, the first Lagrange multiplier, that's the alpha or strictly speaking, it's e to the one plus alpha that we see is a function of beta.
21:27
In general, it is a function of beta but it doesn't depend. It's a sum over all the states of the system. So this is z of beta and it has a name. Here, you can forget where it came from now. Z of beta, for our purposes from here on in, is by definition the sum over all of the states of the system of e to the minus beta e sub i.
21:55
Where does it fit into the probability distribution? It fits in as a kind of normalization constant that makes sure the total probabilities add up to one.
22:04
But in its own right, it is a function of beta. Let's keep, let's remember, almost everything in statistical mechanics is buried in this function of beta. We're going to find this as an extremely powerful thing. It is called the partition function.
22:20
Now what it has to do with partitions, I don't know. Partitions I felt were walls in a room. This is the partition function and it's a function of beta.
22:41
As we'll see, beta is essentially the inverse temperature but we don't know that yet. Next equation. Summation of p sub i e sub i is the average energy. Write that out. That's going to determine the other Lagrange multiplier.
23:00
That's going to determine beta basically. So let's turn, let's work it out. We have summation over i. p sub i is one over z e to the minus beta e sub i.
23:20
That's p sub i. And now we multiply it by e sub i. That's equal to e. Supposing we've calculated z, we know z. Supposing we've calculated it.
23:41
The rest of this equation determines beta. The rest of the equation, having figured out what z is, this now becomes an equation for beta. In terms of what? In terms of the average energy. If we know the average energy and we're lucky, this equation might have a simple enough
24:00
form that we can find out what the average, what beta is in terms of the energy. But we can also read it the other way. We can say whatever beta is, it determines the average energy. Both are legitimate ways to look at it. For the moment, let's just stare at the equation.
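Reading the equation as determining beta from a given average energy can be sketched numerically, here by bisection (the levels and target energy are invented; bisection works because the average energy decreases monotonically in beta):

```python
import math

def avg_energy(energies, beta):
    """<E> = (1/Z) * sum_i E_i * exp(-beta * E_i)."""
    weights = [math.exp(-beta * e) for e in energies]
    return sum(e * w for e, w in zip(energies, weights)) / sum(weights)

def solve_beta(energies, target_e, lo=1e-9, hi=50.0):
    """Bisect for the beta whose Boltzmann average energy equals target_e."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if avg_energy(energies, mid) > target_e:
            lo = mid      # average still too high: need a larger beta
        else:
            hi = mid
    return 0.5 * (lo + hi)

levels = [0.0, 1.0, 2.0]
beta = solve_beta(levels, target_e=0.5)
print(beta, avg_energy(levels, beta))
```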
24:21
I'm going to rewrite it. A trick, a very famous trick. And the famous trick goes as follows. Let's differentiate, no, let's begin, let's take e to the minus beta e sub i.
24:54
That's of course the partition function. This is the partition function. And let's differentiate it with respect to beta.
25:02
This is z. And let's differentiate both sides with respect to the partition, with respect to beta. Again, another trick. dz by d beta. What do we get on this side?
25:22
How do you differentiate this with respect to beta? You just pull down a factor of minus e sub i. Derivative of an exponential with respect to the argument in the exponential just gives you a factor of what's in the exponential here. So it gives you a minus sign, an e sub i, and an e to the minus beta e sub i.
25:48
Does everybody recognize when I say that the derivative of respect to, of each of these terms just pulls down a factor of minus e sub i? Okay. But now, the equation for z, didn't you, uh, couldn't it be z1 over the equation just above z equals equation there?
26:19
Here.
26:20
Here's z. This one? No, no, I just took the definition of z, right. I took the definition of z, the top equation, and differentiated it. And notice, look what I get. I get something which looks like, which has a piece of the second equation here.
26:44
Okay, so let's look at what we have. Let's divide by z on both sides, put a z downstairs here, and now what is the sum?
27:00
The sum is exactly the thing that we identified as the average energy. So, we have that the average energy, e, is just minus 1 over z, dz by d beta. There's a minus sign here.
27:21
Minus plus. So, we have a remarkable formula for the energy in terms of the derivative of the partition function with respect to beta. Let's write it out. e, energy, what this is suggesting to us is that the thing that you want to compute is this partition function.
27:48
The partition function we're going to find has everything in it, but in particular, it has the average energy. And it's equal to minus 1 over z times the derivative of z with respect to beta.
28:03
So, if we knew the partition function, which we will calculate in various cases, we differentiate it with respect to beta, and that tells us the average energy. It's also, of course, a function of beta. Everything here is a function of beta. So, what this is telling us is the energy as a function of beta.
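The formula can be verified numerically: differentiate log Z with respect to beta (by finite difference) and compare with the direct average. The three levels and the value of beta are invented for illustration:

```python
import math

levels = [0.0, 1.0, 2.0]   # made-up energy levels
beta, h = 1.3, 1e-6

def log_z(b):
    """log of the partition function Z(b) = sum_i exp(-b * E_i)."""
    return math.log(sum(math.exp(-b * e) for e in levels))

# Direct average: <E> = sum_i E_i p_i, with p_i = exp(-beta*E_i)/Z.
z = sum(math.exp(-beta * e) for e in levels)
e_direct = sum(e * math.exp(-beta * e) for e in levels) / z

# The lecture's formula: <E> = -(1/Z) dZ/d beta = -d log Z / d beta,
# approximated here by a central finite difference.
e_formula = -(log_z(beta + h) - log_z(beta - h)) / (2 * h)

print(e_direct, e_formula)   # the two agree to high precision
```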
28:24
Yeah, there's one step first. Before I answer the question, let me just note that this is minus the derivative of the logarithm of z with respect to beta. It's really the logarithm of z which is going to occupy us a lot.
28:44
1 over z times the derivative of z is the derivative of the logarithm of z. Okay, yeah, question? Yeah, I did this as homework because I thought it would be really interesting. And I got to this equation and it reminded me of an eigenvalue equation. I don't know if that's what it is.
29:01
Does it look like the differential with respect to beta transforms the function z into a constant times z? Is there any connection? Well, it should not be thought of, no.
29:23
E is not a constant. It's a function of beta. In this equation, for the purposes of this equation, this is e of beta. What does it mean? Since we're going to find that beta is closely connected to the temperature, this is going to tell us the energy as a function of the temperature is minus d log z by d beta.
29:55
This is the first of many sort of magical formulas.
30:04
In other words, statistical mechanics is really, really, it's extremely easy and it's extremely hard. It's subtle. It's full of surprises. It's full of the application of very simple formulas which then yield extremely surprising and powerful results.
30:28
What makes it hard is that half the time you can't guess or you can't look at the formulas and say, oh, of course this is obvious. E is equal to the derivative of the logarithm of the partition function.
30:44
The whole story here was complicated. You couldn't have guessed that, but it sure is a simple formula and it's a powerful formula. So when you talk about the E over here, the average energy, that's the total energy or the average energy per partition?
31:03
That's the average energy per box. Remember, it was the, yeah, per box. Well, actually, if you just look at the logarithm of z, it's the sum of the energies times the temperature.
31:20
What? The logarithm of z is the sum of the EIs times the temperature. The logarithm of the sum of exponentials is not the sum of the logarithms of each individual exponential. You can't do that.
31:40
It's the product of the logarithms. I mean, it's the product of the... You take the logarithm, you bring them down and do... You have z? Oh, right. It's sums, not products. Yeah. That's right. Sum, not products. Right. Question?
32:00
Yeah. Let's make sure I get this. We've got unknowns, pi. The ei are knowns, is that right? They're givens? Yeah. They're given. So when you have a Lagrange multiplier kind of problem, you've got these unknowns, p1 through pn or whatever, and then you introduce this alpha and beta.
32:20
And when you do the solutions we've done here, in the end, you solve for everything. There are numbers, like alpha and beta, they're numbers. Right. But here we're like, alpha and beta are like variables now. Well, it may turn out that the alphas and betas have some interesting physical significance.
32:41
Remember, there are other things in the problem besides just a piece of ice. For example, there's the average energy. Now, what we're finding is that there's a tradeoff between average energy and beta. You can either fix the average energy and then determine what beta is.
33:04
That would be the standard way of looking at Lagrange multipliers. But you can read it the other way. You can say the average energy is determined in terms of beta. And we're going to find, as I said, that beta has a significance of its own.
33:20
It has a life of its own as the temperature, as the inverse temperature. And that's the next thing we're going to do. Not quite. And alpha's gone from the picture now? Is what? Alpha is gone from the picture? Yeah, alpha went away a long time ago. We just redefined it. Yeah, that, yeah. Right.
33:41
But it's embedded in z, right? Hmm? It's embedded in z. Well, e to the one plus alpha is z. It's just a redefinition. This is predicated on the idea that the second equation, summation P sub i E sub i minus E equals zero,
34:01
and that presupposes a lot about the... The second equation here? Right there, yeah. That presupposes a lot of stuff that may or may not be true, given the system. It's the definition of the average energy. Yeah, but there may be other constraints.
34:21
There may be. So the energy is not the way, something makes the energy happen in one cell for some reason. Can you add a constraint to that and then refine it? There may be other things going on. For example, you may want to constrain the total momentum or some other conserved quantity.
34:41
For the moment, let's suppose there are no other conserved quantities. Then there's nothing left to constrain. Now, you may come in and you may say, yeah, okay. You may be interested in changing the volume of your box that your gas is in. For the moment, we've supposed that all the parameters which define the system,
35:04
such as the volume of the box, such as the shape of the box, all the other things that you could imagine varying are kept fixed. Those are called control parameters. Changing the volume of the box, changing the shape of the box, changing the magnetic field on the system,
35:27
those are control parameters and they're important. But at the moment, we're imagining they're absolutely fixed. What do they determine incidentally? The energy levels. Yeah.
35:41
So when you change the control parameters, you in general change the energy levels and in so doing, you change everything including the partition function. If we're writing the energy over there with the parameter beta, shouldn't alpha be a parameter also?
36:01
No. It's in z. It's in z. You don't need both z and alpha because z and alpha are the same thing. This was just a definition of z. It's a number.
36:21
It's a number. Well, it's a number that depends on beta. It depends on beta because when we worked out what it is, we found out, where is it? We found out that it has implicit dependence on beta. So it's a beta dependent number, it's a number. So that's how it captures the system. Yes, it depends on the EIs which we're imagining are a fixed set of numbers for the moment.
36:45
Now, when we vary other things in the system, volume of the box, electric or magnetic field, or other things that we might vary, other control parameters, voltages, whatever you have,
37:03
you may change the energy levels. And when you do, you change z. But for the moment, those are fixed. Excuse me. Not to beat a dead horse, but... So when we had to set up this problem, we've got unknowns, PIs.
37:24
The EIs are fixed. We then introduce the Lagrange multipliers. We solve those. Those are actual numbers that we went into the trouble of solving. They depend on beta. Yes. And beta is a number.
37:41
For a given EI, beta is an actual number. But now that we found this... No, beta is a number that depends on the average energy. If you fix the average energy, then beta is a number. We're going to now vary the EI and the E. No, we're not going to vary the EI. Not now, anyway. The EI are a fixed set of numbers, the energy levels of the system.
38:02
Well, then you're going to get a fixed... You're going to get a particular number of beta. Why? Because you solve the Lagrange multiplier, and it's n plus 2 equations or whatever, and... The EIs are just a fixed set of numbers of the energy levels.
38:21
They are not the energies of the system. The energy of the system depends on the PIs. The E sub i's are just a set of numbers, okay? The average energy really depends on the Ps. Okay, so we're going to vary the E, the E then, not the EI?
38:42
The E, not the EIs. I'm just trying to see that for a fixed E and a fixed EI, beta is a number. Right. Okay, but now we're going to vary the E and hence beta itself can vary. Right. So beta is a function of E or E is a function of beta.
39:00
Right. Either one of them can be used as the parameter that you might vary. Now, in a box of gas, it's easier, much easier to measure the temperature than it is to measure the energy in the box. How do you measure the energy of a box of gas? Well, you can weigh it and use E equals mc squared.
39:21
That's a lousy way. I mean, it's a great way, but not helpful. A little hard to measure the energy. It's much easier to measure the temperature and you stick a thermometer in, you measure the temperature, and if you know the connection between E and beta,
39:41
then you can determine the energy. But there's one hang up. We haven't shown yet that beta is connected with the temperature. So let's do that now. Well, one step first. Yeah. Before we do that, let's talk about entropy.
40:01
Remember, entropy comes before temperature. Entropy, uh, logically comes before temperature, so it behooves us to calculate the entropy. I don't think we need this anymore.
40:21
Let's see if we can get an expression for the entropy in terms of the partition function. Here's more magic. Entropy is equal to minus summation over i, p sub i log p sub i,
40:52
equals minus summation over i, 1 over z, e to the minus beta e sub i,
41:05
p sub i is 1 over z e to the minus beta e sub i. Okay, where is that? Uh, we lost it. But that's the probability. Times log p sub i. And what is logarithm of p sub i? Let's, uh, let's write that.
41:21
P sub i is equal to 1 over z e to the minus beta e sub i, right? Okay, logarithm of p is minus beta e sub i, minus log z.
41:49
Everybody see what I did? Log of p sub i, p sub i is a product. So it's log of 1 over z, that's minus log z, and log of e to the minus beta e i, that's minus beta e i.
42:07
Okay. What's the first term here with the minus sign, uh, let's see. This is s, so let's get rid of some minus signs. Minus sign here, plus, plus.
42:21
That's good because entropy ought to be positive. Okay, what's the first term? P sub i times beta e sub i. Beta is a number, right? Beta is a number, it doesn't depend on i, so let's take it outside for the first term. Beta times, this is essentially just the sum of p sub i, e sub i.
42:44
That's the first term, what is that? It's the average energy. So the first term here is beta times the average energy. Okay, that's good. We know how to compute the average energy from z.
43:05
So given z, we would know how to calculate beta times the average energy. That's the first term. Can everybody see? Anybody cannot see? Yell out right now if you can't see that the first term is beta times the average energy.
43:22
That's part of the definition of p sub i. Okay, good. Now, the second term, let's look at the second term. Plus one over z, log z, one over z from here, log z from here, and then sum over i, e to the minus beta e sub i.
43:54
What is the sum over i of e to the minus beta e sub i? That's z.
44:02
So this z just cancels this z. Now, there's a lot of fancy gymnastics here. At some point, you know, you go on to autopilot and you just do some mathematics until you find the formula that you like.
44:21
Here we have sum over i, e to the minus beta e sub i is equal to z. That cancels this z. And now we have a formula that we like, I like. What was this thing? Anybody remember what I calculated? The entropy.
44:42
So we have an interesting formula for the entropy. Beta times e, I could write e in terms of a certain derivative of the log z, but I don't need to. I'm just going to leave it as the average energy. Plus the logarithm of z. The logarithm of z is a thing which comes up over and over again.
45:06
I'm not going to give it a name at the moment. It does have a name. Actually, it's related to, by a factor of temperature, the Helmholtz free energy.
45:21
But for the moment, we don't need to know. This is a formula. If we calculate the partition function, and we can calculate it as a function of beta, because it is a function of beta, then we're capable of calculating the average energy. We're capable of calculating the entropy.
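That identity, S equals beta times the average energy plus log Z, can be checked directly against the definition S = -sum p_i log p_i. Here is a minimal Python sketch; the energy levels and the value of beta are made-up toy numbers, not from the lecture:

```python
import math

# Toy spectrum of energy levels and an arbitrary beta (illustrative numbers,
# not from the lecture)
energies = [0.0, 1.0, 2.0, 5.0]
beta = 0.7

# Partition function Z = sum_i exp(-beta * E_i)
Z = sum(math.exp(-beta * e) for e in energies)

# Boltzmann probabilities p_i = exp(-beta * E_i) / Z
p = [math.exp(-beta * e) / Z for e in energies]

# Entropy straight from the definition S = -sum_i p_i log p_i
S_def = -sum(pi * math.log(pi) for pi in p)

# Entropy from the formula derived on the board: S = beta * <E> + log Z
E_avg = sum(pi * e for pi, e in zip(p, energies))
S_formula = beta * E_avg + math.log(Z)

print(S_def, S_formula)  # the two agree to machine precision
```

The agreement is exact, since the formula is just the definition with log p_i expanded out.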
45:41
And in fact, if we calculate the entropy and the energy, we can calculate the temperature. So let's go to the next step, which is calculating the temperature. Should there be a minus sign in front of that S sign? I don't think so. I don't think so, no.
46:01
No minus sign. Let's go on to autopilot again and see if we can find out something about the temperature. Now, to find out something about the temperature, we have to remember something about the definition of temperature.
46:25
Not bad. Pretty good. Temperature. The definition is that the change in energy, or sorry, the change in energy when you change the entropy by one unit is called the temperature.
46:48
DE equals TDS. It's basically the derivative of the energy with respect to the entropy.
47:00
That's temperature, okay? That's the way we defined it. When we used the definition of it in this way, we found that heat always flows from hot to cold. That's a pretty good indication that it's got something to do with temperature. And it is the definition of the temperature of a system. The rate of change of energy with respect to entropy.
47:22
So, let's try to calculate. This of course, this of course is also equal to, or one over the temperature is equal to the rate of change of S with respect to energy.
47:40
The inverse temperature is the rate of change of S with respect to energy. Let's see if we can do that derivative. The derivative of S with respect to energy. So, let's differentiate this with respect to energy first.
48:01
Now keep in mind that it would be simple if beta didn't depend on the energy. But beta does depend on the energy. So, there's two terms from the first term. The dS by dE is equal to beta plus E times d beta by dE.
48:30
Does everybody see what I did? I differentiated this with respect to energy and I got beta. That's one term. And the other term is the energy times d beta by the energy.
48:46
Everybody with me? Next term. This one over here. Let's differentiate log Z. Let's see how I did this.
49:05
I'm doing this badly. Let me go back a step. Let me go back a step. Let's go back. I'm doing this badly. I want to start over. I want to start completely over again.
49:21
Bad first move. A better move would have just been to write dS. dS. I'm going to take a differential of S and that is two terms from the first term. Beta dE plus E d beta. This is easier.
49:40
And from the second term, what do I get? I get the derivative of the log of Z with respect to beta times d beta. Oh boy, this is much easier than I thought. It's also easier than I wrote in the notes. See what I did? I wrote dS is equal to a term from here.
50:01
That's this. And then derivative of log Z with respect to beta times d beta. Oh, this is so easy. How did I make it so complicated in the notes? Energy is equal to minus d log Z by d beta.
50:25
So we have energy times d beta minus energy times d beta, and what's left equals beta dE.
50:40
That's dS. What was the definition of temperature? Temperature was, let me remember, dE equals T dS, right? Ah, dE equals T dS. Remember that one? Okay, here we have dE equals one over beta dS.
51:07
Thus, it follows that the temperature is equal to one over beta. For years I've been teaching this through a complicated series of steps,
51:22
which is much more complicated than this. Okay, it's very easy. Beta dE plus E d beta plus d log Z by d beta times d beta. That's dS. These two cancel identically.
51:40
And the result is the temperature is equal to one over beta. That's wonderful because we've now found out what the significance of beta is, the physical significance. It started out as simply a Lagrange multiplier. We manipulated it, used a couple of identities, a little bit of calculus,
52:00
and we find out that beta is one divided by the temperature. Does that assume that the log Z doesn't depend on E? It does depend on E. It depends on either E or beta. Then don't I have to say d log Z dE? It depends on only one variable, either E or beta.
52:26
There's only one independent variable, and I could have differentiated it with respect to E, but that would have made it more complicated. Z only depends on one variable. It can either be taken to be E or beta.
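The claim that beta is one over the temperature can be tested numerically: vary beta a little, recompute E and S, and check that dS/dE comes out equal to beta. A sketch with a toy spectrum of my own choosing:

```python
import math

# Toy energy levels, arbitrary (illustrative only)
energies = [0.0, 1.0, 3.0]

def average_energy_and_entropy(beta):
    """Return (E, S) for the Boltzmann distribution at inverse temperature beta."""
    Z = sum(math.exp(-beta * e) for e in energies)
    p = [math.exp(-beta * e) / Z for e in energies]
    E = sum(pi * e for pi, e in zip(p, energies))
    S = beta * E + math.log(Z)
    return E, S

beta = 1.2
d = 1e-6
E1, S1 = average_energy_and_entropy(beta - d)
E2, S2 = average_energy_and_entropy(beta + d)

# dS/dE, taken along the one-parameter family, should equal beta (= 1/T)
print((S2 - S1) / (E2 - E1))  # ≈ 1.2
```

There really is only one independent variable here, just as the lecture says: both E and S are driven by beta, so the ratio of their changes is well defined.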
52:43
It's a good point. The units of temperature here are units of energy, right? Yeah, good, good, good. There would be a Boltzmann constant here if I wanted to keep track of centigrade or Fahrenheit and so forth.
53:02
It would actually be the Boltzmann constant times the temperature is one over beta. Beta is one over kT. All right, so in laboratory units, beta is one over kT, k Boltzmann T. But in the theorist units where temperature has units of energy,
53:25
then it's just one over beta. When I'm dealing with a volume of gas, that average energy is usually the average of a molecule.
53:50
No, well, not if your system is defined to be the box of gas. If the system is defined to be the box of gas,
54:00
then it's the energy of a collection of molecules. Now, you can go back and try to define the system as a single molecule, and you might be able to get away with it under certain circumstances. The reason you can't get away with it in all circumstances is because the molecules interact with each other,
54:22
and it was assumed that these boxes were extremely weakly interacting, only interacting enough to be able to exchange energy a little bit. Right, so in many cases, you can take the system to be one molecule.
54:43
But we'll do it, we'll do it by taking the system to be the box of molecules, and we'll see the relationship between them. Okay, so we have temperature is one over, and we, oh, good, good, so we, let's, um, let's, let's accumulate our equations here someplace of important equations.
55:08
Let's summarize them from over here. P sub i is one over z, e to the minus beta, uh, e sub i.
55:26
Z equals summation over i, e to the minus beta e sub i. Z is a function of beta. E is equal to minus the derivative of log z with respect to beta.
55:49
Temperature is one over beta. And we have one more equation here that's interesting. The entropy is equal to, where did I have it, beta times e plus log z.
56:18
Now we want to do an example, an example drawn from real physics,
56:23
approximate physics to be sure, meaning to say that we'll make an approximation, the ideal gas. The ideal gas is a gas of molecules in a box where each molecule is a point molecule,
56:44
and either you can say it in a number of ways, you can say that the molecules are so weakly interacting that we ignore the interaction between them, or we can say that the gas is so dilute that the probability of two molecules being close enough together to interact is negligible.
57:02
Uh, either way, the approximation is that the molecules do not interact at all. Another way of saying that is that the energy is a sum of just, in this case, let's say the kinetic energy of the molecules and nothing more inside a box.
57:20
So the box, here's the box. It has volume v. The number of molecules in the box is n. The density is n over v, the density of molecules, not the density of energy, n over v, and I'll call that rho when we get to it.
57:41
And we want to calculate the partition function. Oh, first of all, what's the energy? First of all, what are the states? What are the states? The states are the collection of values of position and momentum for each molecule. That's the way we label the states.
58:01
So a state then is a collection of three n coordinates, x1 through x3n. 3n, not just n. Because each molecule has three coordinates, that's x, y, and z.
58:20
We'll just lump them all together, uh, in this fashion. What about the momenta? The momenta are p1 through p3n. And a state of the system is just a point in this six n dimensional space.
58:44
A set of values of x and a set of values of p label a point. What is a sum over states? A sum over states is replaced by an integral over the x's and the p's. In other words, we have to make use now of the relationship or the correspondence between sums and integrals.
59:10
I'll not go through that in detail. Uh, I think you know how to do that, how to replace an integral by a sum or a sum by an integral.
59:20
So let's just do it. Instead of writing a sum over i, we write an integral over d3nx and d3np, over all the x's and p's. I'm interested in the partition function.
59:41
The partition function is a sum to begin with, where is it? A sum over i, that is replaced by an integral over x's and p's. Next, we want to write e to the minus beta.
01:00:02
times the energy of the i-th state. What is the energy in the i-th state of a particle in a box? It's just the kinetic energy. For the moment, we're ignoring the interaction between the particles. It's just the kinetic energy. So what is the kinetic energy of a particle?
01:00:21
It's P1 squared plus P2 squared plus P3 squared, times one over twice the mass of the particle. P1, P2, and P3 are the three components of the momentum, Px, Py, and Pz, so to speak. Notice the three directions of space enter in exactly the
01:00:41
same way. What if I take all of the particles? If I take all of the particles, all I do is add up all 3n contributions of this type here. So this just becomes a sum over little n, where little n labels the particles.
01:01:02
The little n labels which particle I'm talking about. Oh no, it labels which coordinate I'm talking about. From n equals 1 to 3n times Pn squared divided by twice the mass of the particle.
01:01:26
It's just e to the minus beta. The beta is the beta here, but the energy of a given state is just the sums of the squares of the momenta of all of the 3n momenta divided by twice the mass.
01:01:42
Yeah, let's take the masses to all be equal. Let's take, let me clean this up a little bit. Minus beta over twice the mass times P sub n squared. Now it's a sum in the exponent, but that means it's actually
01:02:03
just a product. It's a product over all n. I'm not going to write a fancy product sign. Maybe I'll just leave it as sum here, but remember this is a fairly simple expression.
01:02:20
It's just a product, one factor for each coordinate e to the minus P squared over 2m times beta. Where? Well, the sum goes from n equals 1 to 3n.
01:02:40
There are 3n momenta all together. There's 3n particles. Yeah, but then- No, no, there's n particles. n particles, 3n coordinates. Right. Why would I have 3n particles? That would really be perverse.
01:03:01
Well, 3n coordinates. Okay. Professor? Yes? You're, um, you're summing over all the momenta, like, it seems like you're double counting versus the
01:03:20
formula for z. No, you're summing over n. Okay, let's write out what this is. Let's write out what this is. This is integral dP1 e to the minus beta over 2m P1 squared dP2 e to the minus beta over 2m P2 squared, dP3 e to the
01:03:47
minus beta over 2m P3 squared. You see what I've done? Oh, there's also, sorry, there's also the x integrations which I haven't included here. The momentum integrations factorize, they factorize into a
01:04:04
product, dP1 for the first component of momentum, e to the minus beta over 2m P1 squared. And in fact, if you think about it for a minute, this is just a 3nth power of one of these integrals.
01:04:21
Each integral is exactly the same as every other integral. The integral over P1 gives a certain thing, a number. The integral over P2, when the integral is done, gives the same number. It's just the 3nth power of one of
01:04:43
these integrals. That's going to make it very easy. But before we do, yeah, let's write it out. We might as well write that down now. It's a definite integral of all momentum, all possible
01:05:04
momentum that the particle could have. It's a definite integral. It's an integral over all possible momentum, all possible states. The sum over I is the sum over all possible states of the system.
01:05:21
The integrals dP and dX are integrals over all possible P and X. Now, the momentum in the box can be anything. What about the X integrations? They have to be inside the box. They're constrained to be inside the box.
01:05:41
So let's focus on them first of all. Let's take the first particle. That's the first particle. That's just integral d3X for the first particle. And it's an integral over the box. What's its value? The volume of the box, right? So the X integrations here, each particle, for each
01:06:03
particle separately, the X integral gives a volume of the box. So altogether, the X integrations and the integrations factor. They're not connected together. There's nothing in this integrand that depends on X.
01:06:20
The integrand only depends on P. So the integral over X is very simple. The integral over X gives a factor of volume to the 3N. No, volume to the N. Okay?
01:06:47
I'm going to put an extra factor in here. If you like, we can get rid of it. Well, let's not do it. Let's leave it. Let's not.
01:07:00
Let's not. We'll come back to it. No, let's. The factor 1 over N factorial is a little bit contentious whether it ought to be there or it oughtn't to be there. Fortunately, it doesn't matter whether you put it there or not.
01:07:22
I will keep track of it. I will put it there, but I'm going to tell you what it comes from. It comes from identifying, here's a state with molecule 1 over here and molecule 2 over here. Here's another state with molecule 2 over here and 1 over
01:07:43
here. Are they the same state or not? Well, it's a little bit ambiguous. Do particles carry labels which have names attached to them so that putting Larry over here and Fred over here is
01:08:02
different than putting Fred over here and Larry over here? Or are they sort of nameless things where you can just say the states correspond to a particle over here and a particle over here, just two of them, one here and one here? The answer is in classical physics, it doesn't matter.
01:08:21
In quantum mechanics, it does matter. And in quantum mechanics, particles do not carry names. Electron over here and electron over here is the same as an electron over here and an electron over here. So that means if there were two particles, you would say
01:08:21
you over count by a factor of 2: considering all possible configurations, giving the particles names and then allowing them to have all possible configurations, gives you a factor of 2 too many states.
01:09:04
How about 3 if there were 3 particles? What's the over counting factor? 3 factorial, 6. What if there are n particles? N factorial. So, the usual argument goes, as I said, it doesn't
01:09:24
matter. In the end, it would make no difference to any of the formulas. I'll try to show you where it goes in the end. But it's a common practice to divide this by n factorial and to say that we over counted everything by a factor
01:09:42
of 1 over n factorial. I'll show you what that does. That does a nice thing. Okay. So that's the volume integral. What about the p integral here? That gives us something to the 3n power because there's
01:10:02
3n of these integrals. It gives us something to the 3n power, namely integral dp e to the minus beta over 2m p squared, all of that to the 3n power.
01:10:25
Looks terribly complicated, but it's not. This integral is just an integral, a definite integral. What can it depend on? It could depend on beta. It could depend on m. It doesn't depend on p. That's the integration variable.
01:10:40
So, we have an integral to do. I'm gonna do the integral over here on this side of the blackboard. I'm not gonna, oh, no, I think I'll do it underneath.
01:11:00
Let's take the integral. Integral dp e to the minus beta over 2m p squared. Okay. First, I'm gonna change variables. I hate the fact that there's a complicated expression in the
01:11:21
exponent here. So, I'm gonna define beta over 2m p squared to be equal to q squared. That's just a change of variables. Then, the integral becomes e to the minus q squared.
01:11:44
That was easy. I got rid of all the stuff in here. But not quite, because what happens to dp? What happens to dp?
01:12:01
P is equal to the square root of 2m over beta times q. So, dp becomes the square root of 2m over beta dq.
01:12:21
But now, this here integral is just a number. This really is just a number. Dq e to the minus q squared is really just a number. From minus infinity to infinity. Anybody know what the number is? Square root of pi. Where did it come from? You'll look it up in a book.
01:12:43
I could show you some ways to do the integral, but the main thing is that the integral is just a number. It converges very quickly, because e to the minus q squared goes to zero very fast. The integrand is finite everywhere. It goes from minus infinity to infinity.
01:13:00
It's just a number. And that number happens to be square root of pi. So, we've calculated it. You just put a square root of pi inside the square root here. And that's your integral. Ladies and gentlemen, we've calculated the partition function.
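If you'd rather not just look it up in a book, the claim that the Gaussian integral is the square root of pi is easy to check numerically. A sketch using a simple Riemann sum:

```python
import math

# Riemann sum for the Gaussian integral over (-10, 10); the integrand dies off
# so fast that cutting the infinite range there costs essentially nothing.
dq = 1e-3
total = sum(math.exp(-(k * dq) ** 2) for k in range(-10000, 10001)) * dq

print(total, math.sqrt(math.pi))  # both ≈ 1.7724539
```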
01:13:22
The partition function for the free gas, for the ideal gas, let's write it down now. Z is the volume to the n over n factorial times the square
01:13:43
root, I forgot what it is now, 2m over beta, 2m over beta, 2m pi over beta. To what power? To the 3n, so we can call that, we can take the square root
01:14:01
away and write that this is to the power 3n over 2. The 1 half is the square root. It's fine in this form, but let me just show you what this n factorial does for you if you include it.
01:14:24
If you don't include it, you just go through the calculation the same way and you find no difference, but still, let's, let's, I'll show you where it goes, but let's take v to the n over n factorial and make n big. We're talking about, you know, 10 to the 23rd molecule.
01:14:42
So, n is a huge number, n factorial is a huge number, and we can approximate n factorial by the Stirling formula. So, v to the n, that just stays v to the n, n factorial, that becomes n to the n, e to the minus n, remember that?
01:15:07
Or that gives us e to the n upstairs, e times v to the power, e times v over n, all to the power n.
01:15:25
Well, e is e, there's nothing I can do with that. I wish it weren't there, but it is there, so let's leave it there. It doesn't do very much. But what is the ratio of v to n? Better yet, what's the ratio of n to v? The density of particles.
01:15:40
So, we wind up being able to replace all of this volume dependence by e divided by rho, this is not the electric charge, incidentally, it's just the stupid number e, 2.7, divided by the density to the power n.
01:16:03
That's the factor coming from here. If I hadn't had this n factorial there, I wouldn't have been able to make this nice trick. The answer would have simply been, you know, okay, let's leave it that way.
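The Stirling step used above, n factorial is approximately n to the n times e to the minus n, can be checked quickly. This sketch compares log(n!) with n log n - n, using `math.lgamma(n + 1)` for log(n!):

```python
import math

# Compare log(n!) with the Stirling estimate n log n - n used on the board.
# math.lgamma(n + 1) is log(n!).
for n in [10, 100, 1000]:
    exact = math.lgamma(n + 1)
    stirling = n * math.log(n) - n
    print(n, exact, stirling)
# The relative error shrinks as n grows; for n ~ 10^23 it is utterly negligible.
```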
01:16:20
All right, so all together then, here's our formula, z is equal to this factor times e over rho to the power n. Let's get rid of what doesn't belong.
01:16:43
That's z. All right, so we, again, we knew what we wanted to calculate. We went on to autopilot, started doing integrals, and at the end of the day, here is z. And note what it is, it first of all depends on the density.
01:17:00
The density for a fixed number of particles in a fixed box, the density is just a number. All right, so it's interesting that it depends on the density, but the density is a number. Two m and pi are numbers, capital n is a number, only beta here is, it's also a number, of course, but it's an interesting number.
01:17:28
It's one that we may want to change in the course of the problem. Uh, for example, we may want to differentiate with respect to it, which we will do momentarily.
01:17:40
This is z, let's calculate logarithm of z. I told you that logarithm of z is the thing which comes up over and over, it comes up over here, so let's calculate logarithm of z. Incidentally, whenever you get a product and you want to differentiate it, I guess there's an old saying, uh, due to, um, I don't
01:18:11
know, Zarathustra, the great Zarathustra, that it's easier to, uh, to differentiate a sum than a product.
01:18:20
So, so that, that's why you take logarithms of things. This is a product, partition functions are products, typically it's good to take the logarithm because then they're easier to differentiate. Okay, so what's the log of z? The log of z is equal to n times three halves logarithm of two m pi over beta plus e over rho, I believe.
01:19:06
No, plus log e over rho. Or better yet, minus log rho over e.
01:19:23
That's the logarithm, but does everybody see what I did? Well, you can just say, plus one. From where? Log e is one. It is indeed. It is indeed.
01:19:40
Log e is one, so I think you're right, I think there's a plus one. Minus log rho plus one. Now we're gonna be doing things like differentiating, uh, the log of z and things like that. This one isn't gonna do anything.
01:20:02
When you differentiate a one, you don't get anything, so it's not, it's not in any way important in the formula. Logarithm of two m pi over beta, that's logarithm of pi plus logarithm of m plus logarithm of two minus log of beta.
01:20:22
So it contains a bunch of terms, the only one of which was, which is important when you differentiate it with respect to beta is the three halves minus three halves log beta. Everything else functions as a constant. So we can write then that this is three halves minus three halves logarithm of beta plus a bunch of constants.
01:20:53
Constants that include the density, which we're not different, we're not changing, pi, m, two, and one.
01:21:11
And there's also a constant, so indeed this is a constant times n, but for our purposes it's just another constant.
01:21:26
I think you're right, I think it's log one over, um, no, that's what the minus was here. The minus made it log of one over beta, okay? Everybody got it? Minus sign because it's log of one over beta.
01:21:43
Three halves, where did the three come from? Three dimensions of space. Suppose there were seven, eleven dimensions of space. Eleven, eleven halves. Where did the one half come from? No, it didn't, uh, it didn't come from there.
01:22:02
It came from the square root, which was this Gaussian integral. It came from the change of variables from this thing here to q. We had to take a square root. So, is that where it came from? No. No, it didn't come from there. I take that back.
01:22:21
Yeah, it did, it did, it did, it did, it did, it did. That's exactly where it came from. Yeah. Remember, the Gaussian integral is called the Gaussian integral. The Gaussian integral is the square root of this thing here. Okay, so that's where all the pieces and all the parts come from. Here's log z. What can we calculate?
01:22:41
We can calculate. We can calculate the entropy. But even more interesting for the moment and simpler is to calculate the total energy of the system. So, let's calculate the total energy of the system. We have the partition function. Oh, can't slide this one up. Okay. Um, let's do it over here.
01:23:08
The energy is the derivative of log z with respect to beta with a minus sign. Alright, the important term here and the only important term is minus three halves n log beta.
01:23:26
So, we just have to differentiate log z with respect to beta and that gives us the derivative of log z with respect to beta is equal to minus three n over two.
01:23:44
And what's the derivative of log beta with respect to beta? What is one over beta? Temperature. Uh oh, what about that minus sign? Oh, yeah, the minus sign is the energy is actually minus this thing.
01:24:05
So, the energy is plus three halves n times one over beta, three halves n times the temperature. I told you where the three and the two came from. Well, certainly it makes sense to say the total energy of the box of gas is proportional to the number of particles.
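That last differentiation can be checked numerically: take the ideal-gas log Z, differentiate with respect to beta by finite difference, and compare with 3N over 2 beta. The values of N, V, and m below are arbitrary placeholders of mine:

```python
import math

# Arbitrary placeholder values for the particle number, volume, and mass
N, V, m = 5, 2.0, 1.0

def log_Z(beta):
    # log of the ideal-gas partition function:
    # Z = (V^N / N!) * (2*pi*m/beta)^(3N/2)
    return (N * math.log(V) - math.lgamma(N + 1)
            + 1.5 * N * math.log(2 * math.pi * m / beta))

beta = 0.8
d = 1e-6
# E = -d(log Z)/d(beta), by central finite difference
E = -(log_Z(beta + d) - log_Z(beta - d)) / (2 * d)

print(E, 1.5 * N / beta)  # both ≈ 9.375, i.e. (3/2) N T with T = 1/beta
```

Note that the V, N!, and constant factors drop out of the derivative entirely, just as the lecture says: only the minus three halves N log beta term contributes.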
01:24:24
And this is telling us that for an ideal gas, this is not, this is not very general, although it's often a good approximation. For an ideal gas, it's an almost ideal gas, it's either exact or a good approximation to say that the energy per particle,
01:24:46
the energy per particle is three halves the temperature. Where would Boltzmann's constant be?
01:25:05
Whenever you see a formula like this and you want to convert it to laboratory units, the temperature becomes k Boltzmann times temperature. K Boltzmann is of course a very small number. And so if the temperature is 300 degrees, you multiply 300 degrees by this terribly small Boltzmann factor and you get the number of joules.
01:25:29
Uh, k Boltzmann is of order 10 to the minus 23rd or something. Well, then you multiply by n. Yeah, yeah, yeah, yeah, yeah, yeah, but you shouldn't, right. But, uh, without this k Boltzmann, you might say, hey, wait a minute, the energy per particle is a huge number, 300.
01:25:49
300 what? Okay. It's 300, um, but you gotta use the conversion factor k Boltzmann, which makes it a small number, the energy per particle.
01:26:03
But this is where the idea comes from that the particles move around with a kinetic energy which is proportional to the temperature. You can say it another way. Since the, since the kinetic energy is the sum of three terms, px squared, py
01:26:23
squared, and pz squared, you can say for each direction, for each direction of space, the energy stored in the momentum in that direction is one half kt. So for each direction of space, the energy is one half kt.
01:26:43
If there are three directions of space, then it's three, three halves kt. So the, uh, that result of the average energy per particle. Yeah. That does not depend on, um, what you did with the n factorial and approximating it with Stirling's formula.
01:27:03
That's right, because that would, that would factor out of the partition function. When you took the logarithm, it would be an additive constant, and so when you differentiate, it would have made no difference. Right. It's just a neat little trick to include the n factorial, and then things wind up depending on the density in an elegant way, but that's right.
01:27:27
The, the, the numerical factor, uh, in front of the partition function never makes any difference because when you take the logarithm, it makes an additive constant to the logarithm, and then when you differentiate it, it goes away.
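The equipartition statement from a moment ago, one half kT of kinetic energy per direction of space (in units where Boltzmann's constant is 1), can be checked by sampling momenta from the Boltzmann weight e to the minus beta p squared over 2m, which is a Gaussian of variance m over beta. A sketch with arbitrary beta and m:

```python
import math
import random

# Sample momenta from the Boltzmann weight exp(-beta p^2 / 2m): a Gaussian
# with variance m/beta. Arbitrary beta and m; units where k_Boltzmann = 1.
random.seed(0)
beta, m = 1.0, 1.0
sigma = math.sqrt(m / beta)

n_samples = 200000
avg_ke = sum(random.gauss(0.0, sigma) ** 2
             for _ in range(n_samples)) / (2 * m * n_samples)

# Average kinetic energy per direction of space should be T/2 = 1/(2*beta)
print(avg_ke)  # ≈ 0.5, up to sampling noise
```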
01:27:46
So in classical physics, there's no particular content to the one over n factorial. Um, in quantum mechanics, it is important. If, if you took that same box of gas, and you set it down on the surface of the neutron star, were you going
01:28:07
to do major damage to the z, um, would you be able to solve that by getting another Lagrange multiplier that, or how would you? Uh, I'm, I'm not sure what aspect of the neutron star is important to have. Well, you would have the, the, uh, the, um, probably the, would have very little gas at the very top of it.
01:28:30
It would all be concentrated at the bottom, so the stakes would not be much. Are you trying to use the g word, gravity? Yeah. No. Okay.
01:28:40
So, before we deliver our neutron star, let's ask how this might have changed if, um, if we had a gas in this room and we worried about the fact that the energy of the gas has a potential energy term. What's the potential energy? Y. I'm, I'm, I always use the vertical axis y.
01:29:01
X and z are the other ones. All right. Uh, so what's the potential energy? Um, the mass times g times the height. All right. So, there'd be another term up here, which would be one term for each particle. For each particle, we would have an e to the minus mg times y summed over n again.
01:29:30
Sorry. Summed in the exponent. So, it would be part of this product structure, summed in the exponent.
01:29:43
All right. So, let's see what that would do. First of all, the x and z integrations would be unchanged. This only depends on y. It would not change the x and z integrations. But the x and z integrations, instead of giving the volume to the nth, would give the area of the base to the n.
01:30:05
Okay. So, let's say area of the base of the box to the n. And then we would have another integral here, would be e to the minus mgy integral dy from where to where?
01:30:32
Let's call the bottom of the box y equals zero. It doesn't make any difference, but y equals zero to infinity. And what is this integral?
01:30:42
Hmm? Say it again. Oh, yeah. Sorry. Uh, right. You're right. To the height of the box. Good. Well, at zero, gravity is going to act in x and y, too. Hmm? What?
01:31:00
No. If you set the bottom of your box at... Y equals zero. ...y equals zero, now you're going to have gravity acting in... Stop pointing at the surface. Well, maybe now you're not pointing at the surface, right? Yeah. Yeah, it's just the bottom of the box. I'm thinking the box is bigger than your... Oh, okay.
01:31:22
Okay. So you're right. We, uh, you're absolutely right. We should integrate from zero to the height of the box, which we'll call L. Height of the box is L. And the area of the box. So we have to do the integral.
01:31:43
And, sorry, and this comes into the nth power. What's the integral of e to the minus mgy dy?
01:32:01
One of the, uh, you know, I'm going to let you do this. No, no, it's easy. The integral of e to the minus mgy is, um, minus, uh, one over mg, e to the minus mgy. But you have to evaluate it between the two limits, and there's two terms, so it's a little bit of a nuisance.
01:32:23
Um, I'll let you do the nuisance work. This gives you another factor. Okay? But notice that the other factor does not depend on beta. Oh, sorry, it does depend on beta. Whoops, I dropped the beta.
01:32:40
Somebody didn't catch me. It's e to the minus beta times the energy. Right? e to the minus beta times the energy. So there is a factor, there is some additional beta dependence there. Yes, there is. There's, uh, additional beta dependence there. Um, and you can work with it.
01:33:01
You can work it out. Maybe we'll do it next time. I'm getting a little tired. It's, uh, it's getting a little too late. But we could work out what the partition function is for the box of gas in the gravitational field. Um, it has the same factor that it had before plus another factor from here.
01:33:22
Okay. So, um, I'll let, uh, I'll let you work that out. Since this term is beta dependent, it will give some contribution to the energy per particle.
01:33:41
But of course it will. It will just be giving you the potential energy per particle. Okay, what the hell, let's go through it. It's not that hard. I'll tell you what: to simplify it a little bit, let's let the room be infinitely high.
01:34:06
So this is literally the problem of the atmosphere between the Earth's surface and infinity. Uh, okay, so what's the integral?
01:34:28
The integral over here is just one divided by beta m g, all to the power n.
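As a quick check of the result just quoted, the height integral can be done numerically. A minimal sketch (the function name and sample numbers are the editor's, not from the lecture; once beta m g times L is large, a finite ceiling is as good as an infinitely high room):

```python
import math

def boltzmann_column_integral(beta, m, g, L, steps=200000):
    """Trapezoid-rule estimate of the height integral in the partition
    function: integral from 0 to L of exp(-beta*m*g*y) dy."""
    h = L / steps
    total = 0.5 * (1.0 + math.exp(-beta * m * g * L))
    for i in range(1, steps):
        total += math.exp(-beta * m * g * i * h)
    return total * h

# For an effectively infinite room the result approaches 1/(beta*m*g)
beta, m, g = 1.0, 2.0, 9.8
print(boltzmann_column_integral(beta, m, g, L=3.0), 1.0 / (beta * m * g))
```

The two printed numbers should agree, since e to the minus beta m g L is already negligible at these values.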
01:34:42
And then the other one is exactly what it was before. Anybody remember what it was before? Uh, this area to the n over n factorial, that's fixed, that's just a number.
01:35:04
And what about the other stuff? Square root of two m pi over beta to the power three n. Yeah, okay, over two.
01:35:25
Notice everything is to the nth power. That's nice, because when you take the logarithm, that tells you the log of z; this is z. Log of z contains minus three n over two times log beta.
01:35:45
I'm keeping only the things which depend on beta now: minus three n over two, times log beta.
01:36:00
Plus a bunch of constants. Minus n log beta plus logarithm of the area, blah, blah, blah. We don't care about that. So when we differentiate with respect to beta, we will get the same term as before.
01:36:23
Three halves times the temperature for each particle. And then another term: when you differentiate this with respect to beta, you're going to get n over beta,
01:36:47
which is n times T, it looks like. And that must be the potential energy.
01:37:00
I can't remember, it must be the average potential energy. Oh, no, I guess that's true.
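The claim just made, that differentiating log Z gives three halves T of kinetic energy plus T of potential energy per particle, is easy to check numerically. A minimal sketch, assuming k_B = 1 and keeping only the beta-dependent terms of log Z derived above (function names and sample values are the editor's, not from the lecture):

```python
import math

def log_Z(beta, n=1.0, m=1.0, g=9.8):
    """Beta-dependent part of log Z for the ideal gas in gravity:
    -(3n/2) log(beta) from the momentum integrals, and -n log(beta*m*g)
    from the height integral. Beta-independent constants are dropped,
    since they vanish when we differentiate with respect to beta."""
    return -1.5 * n * math.log(beta) - n * math.log(beta * m * g)

def average_energy(beta, h=1e-6, **kw):
    """E = -d(log Z)/d(beta), estimated by a central difference."""
    return -(log_Z(beta + h, **kw) - log_Z(beta - h, **kw)) / (2 * h)

beta = 2.0  # temperature T = 1/beta = 0.5 with k_B = 1
# Expect (3/2)T + T = (5/2)T per particle: kinetic plus potential
print(average_energy(beta), 2.5 / beta)
```

The two printed numbers should agree, confirming that the gravitational factor contributes exactly T of average potential energy per particle.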
01:37:20
Looks like it. In any case, that's the procedure. That's an example, those are a couple of examples of very, very simple calculations of partition functions. And they show you, among other things, oh incidentally, as a homework problem, calculate the entropy of this gas.
01:37:43
Remember what the entropy is? The entropy is S equals beta E plus log Z. Calculate the entropy and the entropy per particle, just as an exercise.
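As a sketch of how that homework exercise might be set up numerically (the editor's illustration, with k_B = 1; the n! factor is omitted since we work per particle, and any dropped constants cancel in entropy differences):

```python
import math

def log_Z_per_particle(beta, m=1.0, g=9.8, A=1.0):
    """Single-particle log Z for the column of gas in gravity:
    log A + (3/2) log(2*pi*m/beta) - log(beta*m*g)."""
    return (math.log(A)
            + 1.5 * math.log(2 * math.pi * m / beta)
            - math.log(beta * m * g))

def entropy_per_particle(beta, h=1e-6):
    """S = beta*E + log Z, with E = -d(log Z)/d(beta) by central difference."""
    E = -(log_Z_per_particle(beta + h) - log_Z_per_particle(beta - h)) / (2 * h)
    return beta * E + log_Z_per_particle(beta)

# Entropy differences are free of the dropped constants:
# S(beta1) - S(beta2) = (5/2) log(beta2/beta1)
print(entropy_per_particle(1.0) - entropy_per_particle(2.0))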
01:38:01
And of course it's very straightforward, very easy. Maybe we'll use it for something. And that finishes us up for tonight. Any questions? So these equations are effectively the Boltzmann distribution, is that right?
01:38:21
The Boltzmann distribution is this one over here. And that's equilibrium and maximum entropy? Maximum entropy subject to the constraint of a given total energy, which has to be computed from here, and the constraint that the total probabilities add to one, which is hardly a constraint, but you have to include it in the formal mathematics.
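For reference, the distribution being pointed to on the board is the standard Boltzmann form; written out in common notation (with k_B = 1):

```latex
P_i \;=\; \frac{e^{-\beta E_i}}{Z}, \qquad Z \;=\; \sum_i e^{-\beta E_i},
```

which is what maximizes $S = -\sum_i P_i \log P_i$ subject to the two constraints just mentioned: $\sum_i P_i E_i = E$ and $\sum_i P_i = 1$.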
01:38:49
It's pretty nifty. After a while, you never get to the point where you're not surprised by the equations.
01:39:01
But after a point, you get familiar with the simple tricks and you know which equations to use one following the other. But it's always the case, it's always the case that you follow a set of procedures and then all of a sudden you look at the thing and say, whoa, that's an interesting equation.
01:39:22
All the ones before were totally uninteresting. So it's a very curious subject. So since you've already introduced the G word. The G word, yes. Will we be able to at some point apply that to maybe a collapsing star and get on the inside of that what the entropy looks like?
01:39:45
If you mean a black hole, yes, that's easy. If you mean a collapsing star or a star, stars are hard. But they're hard because it's complicated details, not because it's conceptually hard.
01:40:01
Okay, well, what I'm trying to wrap my head around is how that collapsing star, or black hole, whichever one you want, winds up with increasing entropy, since everything gets pulled into a smaller and smaller space. The entropy does go up, to a point, until it gets to the Hawking-Bekenstein entropy, and then it cannot go higher than that.
01:40:27
Not unless you add more energy. For a given amount of energy in a given volume, that's about it. Okay, yes. I seem to have some inkling from my statistics background.
01:40:41
You can use a partition function to calculate the fluctuations, the distribution. The fluctuations in energy, for example. Yeah. Okay, I hadn't intended to do that tonight and I'm not going to, but I'll tell you what the trick is. If you want to calculate the fluctuations in a quantity, what you want to first calculate is the average of the squares.
01:41:07
The average of the square of that quantity, right? Excuse me. Okay. So to calculate the average of the square of the energy, you want to differentiate twice with respect to beta.
01:41:22
And you'll find a formula involving some second derivatives, and I'll work it out, maybe next time: the fluctuations. And this is a very nice, rather beautiful formula for the fluctuations of energy. Fluctuations in energy means the width of the energy distribution, and it's an important formula.
01:41:45
It relates the fluctuations in energy to the specific heat, and it's a neat formula. We'll work it out next time. Right. Fluctuations were not part of thermodynamics as it was originally formulated.
01:42:03
But it was really Einstein and Gibbs who understood that the way to find out whether what you're talking about is the statistical mechanics of a finite collection of atoms, as opposed to a complete continuum with no statistical basis, is to study fluctuations.
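The trick described above can be sketched numerically for the gas column worked out earlier. This is the editor's illustration, not the lecture's worked example, assuming k_B = 1, in which units the advertised relation reads: variance of E equals the second beta-derivative of log Z, which equals C times T squared, with C the specific heat:

```python
import math

def log_Z(beta, n=1.0):
    """Beta-dependent part of log Z for the gas column: -(5n/2) log(beta)."""
    return -2.5 * n * math.log(beta)

def energy(beta, h=1e-5):
    """<E> = -d(log Z)/d(beta), by a central difference."""
    return -(log_Z(beta + h) - log_Z(beta - h)) / (2 * h)

def energy_variance(beta, h=1e-4):
    """<E^2> - <E>^2 = d^2(log Z)/d(beta)^2, central second difference."""
    return (log_Z(beta + h) - 2 * log_Z(beta) + log_Z(beta - h)) / h**2

def heat_capacity(beta, dT=1e-5):
    """C = dE/dT, with T = 1/beta."""
    T = 1.0 / beta
    return (energy(1.0 / (T + dT)) - energy(1.0 / (T - dT))) / (2 * dT)

beta = 2.0
T = 1.0 / beta
print(energy_variance(beta), heat_capacity(beta) * T**2)  # should agree
```

Both numbers estimate the width squared of the energy distribution, exhibiting the fluctuation-specific heat relation the lecture promises to derive next time.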
01:42:26
So we'll do a little bit of that next time. For more, please visit us at stanford.edu.