Lecture 22: Ideal Quantum Gases Part 1

Description: This is the first of five lectures on Ideal Quantum Gases.

Instructor: Mehran Kardar

PROFESSOR: Particles in quantum mechanics. In particular, the ones that are identical and non-interacting. So basically, we were focusing on a type of Hamiltonian for a system of N particles, which could be written as the sum of contributions that correspond respectively to particle 1, particle 2, up to particle N. So essentially, a sum of terms that are all the same. And one-particle terms are sufficient because we don't have interactions.

So if we look at one of these H's-- so one of these one-particle Hamiltonians, we said that we could find some kind of basis for it. In particular, typically we were interested in particles in a box. We would label them with some wave number k. And there was an associated one-particle energy, which for the case of one particle in a box was h bar squared k squared over 2m. But in general, for the one-particle system, we can think of a ladder of possible values of energies. So there will be some k1, k2, k3, et cetera. They may be distributed in any particular way corresponding to different energies.

Basically, you would have a number of possible states for one particle. So for the case of the particle in the box, the wave functions in coordinate space, x k, were of the form e to the i k dot x divided by square root of V. The energies were h bar squared k squared over 2m. And this discretization was because the values of k were multiples of 2 pi over L, with integers in the three different directions, assuming periodic boundary conditions, or the appropriate discretization for closed boundary conditions, or whatever you have.

So that's the one-particle state. If the Hamiltonian is of this form, it is clear that we can multiply a bunch of these states and form another eigenstate for HN. Those we were calling product states. So you basically pick a bunch of these k's and you multiply them. So you have k1, k2, kN. And essentially, that would correspond, let's say, in coordinate representation to taking a bunch of x's and the corresponding k's and having a wave function of the form e to the i sum over alpha k alpha dot x alpha, divided by V to the N over 2.

So in this procedure, what did we do? We had a number of possibilities for the one-particle state. And in order to, let's say, make a two-particle state, we would pick two of these k's and multiply the corresponding wave functions. If you had three particles, we could pick another one. If you had four particles, we could potentially pick a second one twice, et cetera. So in general, basically we would put N of these crosses on these one-particle states that we've selected.

Problem was that this was not allowed by quantum mechanics for identical particles. Because if we took one of these wave functions and exchanged two of the labels, x1 and x2, we could potentially get a different wave function. And in quantum mechanics, we said that the wave function has to be either symmetric or anti-symmetric with respect to exchange of a pair of particles. And also, whatever it implied for repeating this exchange many times to look at all possible permutations.

So what we saw was that product states are good as long as you are thinking about distinguishable particles. But if you have identical particles, you had to appropriately symmetrize or anti-symmetrize these states. So what we ended up with was a way of, for example, anti-symmetrizing things for the case of fermions. So we could take a bunch of these k-values again.

And a fermionic, or anti-symmetrized version, was then constructed by summing over all permutations. And for N particles, there would be N factorial permutations. Basically, doing a permutation of all of these indices k1, k2, et cetera, that we had selected for this.

And for the case of fermions, we had to multiply each permutation with a sign that was plus for even permutations, minus for odd permutations. And this would give us N factorial terms. And the appropriate normalization was 1 over square root of N factorial. So this was for the case of fermions. And this was actually a minus-- I should have put in a minus here-- so this would have been minus 1 to the P.

For the case of bosons, we basically dispensed with this factor of minus 1 to the P. So we just had the permuted set of k's.

Now, the corresponding normalization that we had here was slightly different. The point was that if we were doing this computation for the case of fermions, we could not allow a state where there is a double occupation of one of the one-particle states. Under exchange of the two particles that correspond to the repeated k-value, I would get the same state back, but the exchange would give me a minus 1, and so the state would have to be 0. So the fermionic wave function that I have constructed here, appropriately anti-symmetrized, exists only as long as there are no repeats.

Whereas, for the case of bosons, I could put two in the same place. I could put three somewhere else, any number that I liked, and there would be no problem with this. Except that the normalization would be more complicated. And we saw that the appropriate normalization was a product over k of nk factorial. So this was for fermions. This is for bosons.

And the two I can merge into one formula by writing a symmetrized or anti-symmetrized state, respectively indicated by eta, where we have eta is minus 1 for fermions and eta is plus 1 for bosons, which is 1 over square root of N factorial times the product over k of nk factorials. And then the sum over all N factorial permutations, with this phase factor for fermions and nothing for bosons, of the appropriately permuted set of k's.
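Written out, the state being described is, as a sketch in standard notation (with eta = -1 for fermions and +1 for bosons):

```latex
% Symmetrized/anti-symmetrized N-particle state built from one-particle states (sketch):
% eta = -1 (fermions), eta = +1 (bosons); n_k are the occupation numbers of the chosen k's.
\[
  |\{k\}\rangle_\eta
  = \frac{1}{\sqrt{N!\,\prod_k n_k!}}
    \sum_{P} \eta^{P}\,
    |k_{P(1)}\rangle \otimes |k_{P(2)}\rangle \otimes \cdots \otimes |k_{P(N)}\rangle ,
\]
% with the sum over all N! permutations P, and eta^P = +1 for even and -1 for odd permutations.
```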

And in this way of noting things, I have to assign values nk which are either 0 or 1 for fermions. Because as we said, multiple occupations are not allowed. But there is no restriction for bosons. So in this perspective, as I go along this k-axis, I have, say, 0, 1, 0, 2, 1, 3, 0, 0 for the occupations. Of course, what I need to ensure, whether I am dealing with bosons or fermions, is that the sum over k of nk is the total number of particles that I have.

Now, the other thing to note is that once I have given you a picture such as this in terms of which one-particle states I want to look at, or which set of occupation numbers I have nk, then there is one and only one symmetrized or anti-symmetrized state.

So over here, I could have permuted the k's in a number of possible ways. But as a result of symmetrization or anti-symmetrization, various ways of permuting the labels here ultimately come to the same set of occupation numbers. So it is possible to actually label the state, rather than by the set of k's, by the set of nk's. It is kind of a more appropriate way of representing the system.

So that's essentially the kinds of states that we are going to be using. Again, in talking about identical particles, which could be either bosons or fermions. Let's take a step back, remind you of something that we did before that had only one particle. Because I will soon go to many particles. But before that, let's remind you what the one particle in a box looked like.

So indeed, in this case, the single-particle states were the ones that I told you before, with k being 2 pi over L times a set of integers, and epsilon of k being h bar squared k squared over 2m.

If I want to calculate the partition function for one particle in the box, I have to do a trace of e to the minus beta h for one particle. The trace I can very easily calculate in the basis in which this is diagonal. That's the basis that is parameterized by these k-values. So I do a sum over k of e to the minus beta h bar squared k squared over 2m.

And then in the limit of a very large box, we saw that the sum over k I can replace with V times the integral over d cubed k divided by 2 pi cubed-- this was the density of states in k-- of e to the minus beta h bar squared k squared over 2m. And this was three Gaussian integrals that gave us the usual formula of V over lambda cubed, where lambda was the thermal de Broglie wavelength, h over the square root of 2 pi m k T.
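Schematically, the steps just described are:

```latex
% One-particle partition function for a particle in a box of volume V (sketch of the steps above):
\[
  Z_1 = \mathrm{tr}\, e^{-\beta H_1}
      = \sum_{\vec k} e^{-\beta \hbar^2 k^2/2m}
      \;\longrightarrow\; V \int \frac{d^3k}{(2\pi)^3}\, e^{-\beta \hbar^2 k^2/2m}
      = \frac{V}{\lambda^3},
  \qquad
  \lambda = \frac{h}{\sqrt{2\pi m k_B T}} .
\]
```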

But we said that the essence of statistical mechanics is to tell you about probabilities of various micro-states, various positions of the particle in the box, which in the quantum perspective means the probability becomes a density matrix. And we evaluated this density matrix in the coordinate representation. And in the coordinate representation, essentially what we had to do was to pass through the basis in which rho is diagonal. So we had a factor of x prime k.

In the k basis, the density matrix is just this formula. It's the Boltzmann weight appropriately normalized by Z1. And then we go from k back to x. And basically, again replacing the sum with V times the integral of d cubed k over 2 pi cubed of e to the minus beta h bar squared k squared over 2m. These two factors of k x and x prime k gave us a factor of e to the i k dot x prime minus x.

Completing the square. Actually, I had to divide by Z1. There is a factor of 1 over V from the normalization of these wave functions. The two V's here cancel, but Z1 is proportional to V. The lambda cubed factors cancel, and so what we have is 1 over V times e to the minus x minus x prime squared, times pi over lambda squared.
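As a single formula, the one-particle density matrix just obtained reads:

```latex
% One-particle density matrix in the coordinate representation (sketch):
\[
  \langle x'|\,\rho_1\,|x\rangle
  = \frac{1}{V}\,\exp\!\left[-\frac{\pi}{\lambda^2}\,(x-x')^2\right].
\]
% Diagonal elements (x = x') give 1/V; off-diagonal elements decay over the thermal wavelength lambda.
```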

So basically, what you have here is that we have a box of volume V. There is a particle inside at some location x. And the probability to find it at location x is the diagonal element of this entity. It's just 1 over V. But this entity has off-diagonal elements reflecting the fact that the best that you can do to localize something in quantum mechanics is to make some kind of a wave packet. OK.

So this we did last time. What we want to do now is to go from one particle to the case of N particles. So rather than having one x and one x prime, I will have a whole bunch of x's and x primes labeled 1 through N. And I want to calculate the N-particle density matrix that connects me from one set of points x to another set of points x prime.

So if you like in the previous picture, this would have been x1 and x1 prime, and then I now have x2 and x2 prime, x3 and x3 prime, xN and xN prime. I have a bunch of different coordinates and I'd like to calculate that. OK.

Once more, we know that rho is diagonal in the basis that is represented by these occupations of one-particle states. And so what I can do is I can sum over a whole bunch of plane waves. And I have to pick N factors of k out of this list in order to make one of these symmetrized or anti-symmetrized wave functions.

But then I have to remember, as I said, that I should not over-count distinct sets of k-values, because permutations of this list of k's that I have over here, because of symmetrization or anti-symmetrization, will give me the same state. So I have to be careful about that.

Then, I go from x prime to k. Now, the density matrix in the k-basis I know. It is simply e to the minus beta times the energy, which is the sum over alpha of h bar squared k alpha squared over 2m. So I sum over the list of k alphas that appear in this series. There will be N of them.

I have to appropriately normalize that by the N-particle partition function, which we have yet to calculate. And then I go back from k to x. Now, let's do this.

The first thing that I mentioned last time is that I would, in principle, like to sum over k1 going over the entire list, k2 going over the entire list, k3 going over the entire list. That is, I would like to make the sum over k's unrestricted. But then I have to take into account the over-counting that I have.

If I am looking at the case where all of the k's are distinct-- they don't show any double occupancy-- then I have over-counted by the number of permutations. Because any permutation would have given me the same state. So I have to divide by the number of permutations to avoid the over-counting due to symmetrization here.

Now, when I have something like this, which is a multiple occupancy, I have overdone this division. I have to multiply back by this factor of nk factorials, and that's the correct number of over-countings that I have. And as I said, this was a good thing, because the quantity that was hardest to deal with, which comes in the normalization that occurs here, is this factor of 1 over nk factorial.

Naturally, again, all of these things do depend on the symmetry. So I better make sure I indicate the index. Whether I'm calculating this density matrix for fermions or bosons, it is important. In either case-- well, what I need to do is to do a summation over P here for this one and P prime here, or P prime here and P here. It doesn't matter; there are two sets of permutations that I have to do. In each case, I have to take care of this eta P, eta P prime. And then the normalization.

So I divide twice by the square root, that is, by its square: I get the N factorial times the product over k of nk factorial. And very nicely, the over-counting factor here cancels the normalization factor that I would have had here. So we got that. Now, what do we have?

We have the P prime permutation of these objects going to x-- actually, the first one I got wrong. I start with x prime and go through P prime to k. And then we have here the P permutation of these k's going to x. And again, symmetries are already taken into account. I don't need to write that.

And I have the factor of e to the minus beta h bar squared sum over alpha k alpha squared over 2m divided by ZN.

OK, so let's bring all of the denominator factors out front. I have a ZN. I have an N factorial squared-- two factors of N factorial. I have a sum over two sets of permutations P and P prime. The product of the associated phase factors of their parities, and then I have this integration over k's. Now, unrestricted.

Since it is unrestricted, I can integrate independently over each one of the k's, or sum over each one of them. When I sum, the sum becomes V times the integral over d cubed k alpha divided by 2 pi cubed. Basically, that's the density of states in replacing the sum over k alpha with the corresponding integration. So basically, this set of factors is what happened to that. OK, what do we have here?

We have e to the i x-- well, let's be careful here. I have e to the i x prime alpha dotted into k of P prime of alpha, because I permuted the k-label that went with, say, the alpha component here by P prime.

From here, I would have a minus because it's the complex conjugate. I have x alpha times k of P of alpha, because I permuted this one by P. I have one of these factors for each particle. With each one of them, there is a normalization of square root of V, so the two of them together will give me a 1 over V. But that's only for one of the N particles, so there are N of them. So if I want, I can extend this product to also encompass this term. And then having done so, I can also write here e to the minus beta h bar squared k alpha squared over 2m within the product.

AUDIENCE: [INAUDIBLE] after this-- is it quantity xk minus [INAUDIBLE].

PROFESSOR: I forgot an a here. What else did I miss out?

AUDIENCE: [INAUDIBLE] quantity.

PROFESSOR: So I forgot the i. OK, good?

So the V's cancel out. All right, so that's fine. What do we have? We have 1 over ZN N factorial squared. Two sets of permutations summed over, p and p prime. Corresponding parities eta p eta of p prime. And then, I have a product of these integrations that I have to do that are three-dimensional Gaussians for each k alpha. What do I get?

Well, first of all, if I didn't have this, if I just was doing the integration of e to the minus beta h bar squared k squared over 2m, I did that already. I get a 1 over lambda cubed. So basically, from each one of them I will get a 1 over lambda cubed. But the integration is shifted by this amount.

Actually, I already did the shifted integration here also for one particle. So I get the corresponding factor of e to the minus-- ah. I have to be a little bit careful over here because what I am integrating is over k alpha squared. Whereas, in the way that I have the list over here, I have x prime alpha and x alpha, but a different k playing around with each. What should I do?

I really want this integration over k alpha to look like what I have over here. Well, as I sum over all possibilities in each one of these terms, I am bound to encounter k alpha. Essentially, I have permuted all of the k's that I originally had. So the k alpha has now been sent to some other location. But as I sum over all possible alpha, I will hit that.

When I hit that, I will find that the thing that was multiplying k alpha in the first factor is x prime at the inverse permutation, P prime inverse of alpha. And the thing that was multiplying k alpha here is x at the inverse permutation, P inverse of alpha. So then I can do the integration over k alpha easily. And so what do I have?

I have e to the minus x prime of P prime inverse alpha-- the inverse permutation-- minus x of P inverse alpha, squared, times pi over lambda squared.

Now, this is still inconvenient because I am summing over two sets of N factorial permutations. And I expect that since the summand only involves a comparison of the two permutations, as I go over the list of N factorial squared pairs of permutations, I will get the same thing appearing many times. So it is very much like when we are doing an integration over x and x prime, but the function only depends on x minus x prime. We get a factor of volume.

Here, it is easy to see that one of these sums I can very easily do because it is just a repetition of all of the results that I have previously. And there will be N factorial such terms. So doing that, I can get rid of one of the N factorials. And I will have only one permutation left, Q. And what will appear here would be the parity of this Q, which is the combination, or if you like, the relative permutation of these two.

And I have an exponential of minus the sum over alpha of x alpha minus x prime of Q alpha, squared, times pi over lambda squared. And I think I forgot a factor of 1 over lambda cubed to the N-- this factor of lambda to the 3N. So this is actually the final result.
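Collecting everything, the result being described is, schematically:

```latex
% N-particle density matrix in the coordinate representation (sketch; eta = +1 bosons, -1 fermions):
\[
  \langle \{x'\}|\,\rho_N\,|\{x\}\rangle
  = \frac{1}{Z_N\, N!\, \lambda^{3N}}
    \sum_{Q} \eta^{Q}
    \exp\!\left[-\frac{\pi}{\lambda^2}
      \sum_{\alpha=1}^{N}\bigl(x_\alpha - x'_{Q(\alpha)}\bigr)^{2}\right],
\]
% where Q runs over the N! permutations that remain after one permutation sum has been carried out.
```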

And let's see what that precisely means for two particles. So let's look at two particles. So for two particles, I will have on one side coordinates of 1 prime and 2 prime. On the right-hand side, I have coordinates 1 and 2. And let's see what this density matrix tells us.

It tells us that to go from x1 prime x2 prime, a two-particle density matrix connecting to x1 x2 on the other side, I have 1 over the two-particle partition function that I haven't yet calculated. Lambda to the sixth. N factorial in this case is 2. And then for two things, there are two permutations. So the identity maps 1 to 1, 2 to 2. And therefore, what I will get here would be exponential of minus x1 minus x1 prime squared pi over lambda squared, minus x2 minus x2 prime squared pi over lambda squared. So that's Q being identity, and identity has essentially 0 parity. It's an even permutation.

The next thing is when I exchange 1 and 2. That would have odd parity. So I would get minus 1 for fermions, plus for bosons. And what I would get here is exponential of minus x1 minus x2 prime squared pi over lambda squared, minus x2 minus x1 prime squared pi over lambda squared.

So essentially, one of the terms-- the first term is just the square of what I had before for one particle. I take the one-particle result, going from 1 to 1 prime, going from 2 to 2 prime and multiply them together. But then you say, I can't tell apart 2 prime and 1 prime. Maybe the thing that you are calling 1 prime is really 2 prime and vice versa. So I have to allow for the possibility that rather than x1 prime here, I should put x2 prime and the other way around. And this also corresponds to a permutation that is an exchange. It's an odd parity and will give you something like that.
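For reference, the two-particle case just described can be written compactly as follows (a sketch; Z2 is still to be evaluated):

```latex
% Two-particle density matrix: identity permutation plus the single exchange.
\[
  \langle x'_1 x'_2|\,\rho_2\,|x_1 x_2\rangle
  = \frac{1}{2\,Z_2\,\lambda^{6}}
    \left\{ e^{-\frac{\pi}{\lambda^2}\left[(x_1-x'_1)^2+(x_2-x'_2)^2\right]}
          + \eta\, e^{-\frac{\pi}{\lambda^2}\left[(x_1-x'_2)^2+(x_2-x'_1)^2\right]} \right\}.
\]
```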

You say, OK, I have no idea what that means. I'll tell you. OK, you were happy when I put x prime equal to x here because that was the probability to find the particle somewhere. So let me look at the diagonal term here, which is a probability. This should give me the probability to find one particle at position x1, one particle at position x2. Because the particles were non-interacting, one particle-- it could be anywhere. I had the 1 over V. Is it 1 over V squared or something like that?

Well, we find that there is a factor out front that we haven't yet evaluated. It turns out that this factor will give me a 1 over V squared. And if I set x1 prime to x1, x2 prime to x2, which is what I've done here, this factor becomes 1. But then the other factor will give me eta e to the minus 2 pi over lambda squared x1 minus x2 squared.

So the physical probability to find one particle-- or more correctly, a wave packet here and a wave packet there-- is not 1 over V squared. It's some function of the separation between these two particles. So that separation is contained here.

If I call that separation r, this is an additional weight that depends on r-- this x1 minus x2 squared is r squared. So you can think of this as an interaction, which arises solely because of quantum statistics. And what is this interaction?

This interaction V of r would be minus kT log of 1 plus eta e to the minus 2 pi r squared over lambda squared. I will plot out for you what this V of r looks like as a function of how far apart the centers of these two wave packets are.
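Here is a minimal numerical sketch of this statistical potential, with the separation measured in units of the thermal wavelength and the potential in units of kT; the function name and sample values are illustrative choices, not part of the lecture.

```python
import numpy as np

def statistical_potential(r_over_lambda, eta):
    """Effective two-body 'statistical' potential v(r)/(k_B T) for ideal quantum particles.

    eta = -1 for fermions (repulsive, diverges as r -> 0),
    eta = +1 for bosons  (attractive, saturates at -ln 2 as r -> 0).
    r_over_lambda is the separation in units of the thermal wavelength lambda.
    """
    x = np.asarray(r_over_lambda, dtype=float)
    return -np.log(1.0 + eta * np.exp(-2.0 * np.pi * x**2))

r = np.linspace(0.01, 2.0, 200)
v_fermi = statistical_potential(r, eta=-1)   # positive, grows without bound as r -> 0
v_bose = statistical_potential(r, eta=+1)    # negative, approaches -ln 2 ~ -0.693 as r -> 0
```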

You can see that the result depends on eta. If eta is minus 1, which is for the case of fermions, this is 1 minus something. It's something that is less than 1, so its log would be negative. So the whole potential would be positive, or repulsive. At large distances, indeed it would be exponentially going to 0 because I can expand the log at large distances. So here I have a term that decays as e to the minus 2 pi r squared over lambda squared.

As I go towards r equal to 0, actually things become very bad, because as r goes to 0 I will get 1 minus 1 and the log will diverge. So basically, there is, if you like, an effective potential that says you can't put these two fermions on top of each other. So there is a statistical potential. So this is for eta minus 1, or fermions.

For the case of bosons, eta plus 1. It is log of 1 plus something, so it's a number larger than 1 inside the log. The potential will be attractive. And it will actually saturate to a depth of kT log 2 when r goes to 0. So this is, again, eta of plus 1 for the case of bosons.

So the one thing that this formula does not have yet is the value for this partition function ZN. It gives you the qualitative behavior in either case. And let's calculate what ZN is.

Well, basically, that would come from noting that the trace of rho has to be 1. So ZN is the trace of e to the minus beta H. And essentially, I can take this ZN to the other side and evaluate this as x e to the minus beta H x. That is, I can calculate the diagonal elements of this matrix that I have calculated-- that I have over there. So there is an overall factor of 1 over lambda cubed to the power of N. I have a 1 over N factorial. And then I have a sum over permutations Q of eta of Q. The diagonal element is obtained by putting x prime to be the same as x.

So I have exponential of minus x-- sum over alpha x alpha minus x of Q alpha. I set x prime to be the same as x. Squared. And then there's an overall pi over lambda squared.

And if I am taking the trace, it means that I have to do integration over all x's. So I'm evaluating this trace in coordinate basis, which means that I should put x and x prime to be the same for the trace, and then I have to sum or integrate over all possible values of x. So let's do this.

I have 1 over N factorial lambda cubed raised to the power of N. OK. Now I have to make a choice because I have a whole bunch of terms because of these permutations. Let's do them one by one.

Let's first do the case where Q is identity. That is, I map everybody to themselves. Actually, let me write down the integrations first. I will do the integrations over all of the coordinates of these Gaussians. These Gaussians I will evaluate for different permutations.

Let's look at the case where Q is identity. When Q is identity, essentially I will put all of the x prime to be the same as x. It is like what I did here for two particles and I got 1. I do the same thing for more than one particle. I will still get 1.

Then, I will do the same thing that I did over here. Here, the next term that I did was to exchange 1 and 2. So this became x1 minus x2. I'll do the same thing here. I look at the case where Q corresponds to exchange of particles 1 and 2. And then that will give me a factor which is e to the minus pi over lambda squared x1 minus x2 squared. There are two of these making together 2 pi over lambda squared, which I hope I had there, too.

But then there is a whole bunch of other terms that I can do. I can exchange, let's say, 7 and 9. And then I will get here e to the minus 2 pi over lambda squared x7 minus x9 squared. And there's a whole bunch of such exchanges that I can make, in which I just switch between two particles in this whole story.

And clearly, the number of exchanges that I can make is the number of pairs, N times N minus 1 over 2. Once I am done with all of the exchanges, then I have to go to the next thing that doesn't have an analog here for two particles. But if I take three particles, I can permute them like a triangle. So presumably there would be a next set of terms, which is a permutation that is like 1, 2, 3 going to 2, 3, 1. There's a bunch of things that involve three-particle exchanges, four-particle exchanges, and so forth. So there is a whole list of things that would go in here, where these two-particle exchanges are the simplest class.

Now, as we shall see, there is a systematic way of looking at things where the two-particle exchanges are the first correction due to quantum effects. Three-particle exchanges would be higher-order corrections. And we can systematically do them in order. So let's see what happens if we compare the case where there is no exchange and the case where there is one exchange.

When there is no exchange, I am essentially integrating over each position over the volume. So what I would get is V raised to the power of N. The next term?

Well, I have to do the integrations. The integrations over x3, x4, x5, all the way to xN, have no factors in them. So they will give me factors of V. And there are N minus 2 of them. And then I have to do the integration over x1 and x2 of this factor, but it's only a function of the relative coordinate. So there is one other integration that I can trivially do, which is the center of mass; it gives me a factor of V. And then I am left with the integral over the relative coordinate of e to the minus 2 pi r squared over lambda squared.

And I forgot-- it's very important. This will carry a factor of eta because any exchange is odd. And so there will be a factor of eta here. And I said that I would get the same expression for any of my N times N minus 1 over 2 exchanges. So the result of all of these exchange calculations would be the same thing. And then there would be the contribution from three-body exchanges and so forth. So let's re-organize this.

I can pull out the factor of V to the N outside. So I would have V over lambda cubed to the power of N. So the first term is 1. The next term has the parity factor that distinguishes bosons and fermions, and goes with a multiplicity of pairs, which is N times N minus 1 over 2. Since I already pulled out a factor of V to the N and I really had V to the N minus 1 here, I better put a factor of 1 over V here. And then I am just left with having to evaluate these Gaussian integrals. Each one-dimensional Gaussian integral will give me the square root of 2 pi times the variance, and 2 pi times the variance here is lambda squared over 2. And there are three of them, so I will have the power 3/2. So what I get here is lambda cubed divided by 2 to the 3/2.
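Putting the pieces together, the partition function to this order is, schematically:

```latex
% Partition function with the leading (pairwise) exchange correction (sketch):
\[
  Z_N \approx \frac{1}{N!}\left(\frac{V}{\lambda^3}\right)^{\!N}
  \left[\, 1 \;+\; \eta\,\frac{N(N-1)}{2}\,\frac{1}{V}\,\frac{\lambda^3}{2^{3/2}} \;+\; \cdots \right],
\]
% each of the N(N-1)/2 pair exchanges contributing \int d^3r\, e^{-2\pi r^2/\lambda^2} = \lambda^3/2^{3/2}.
```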

Now, you can see that any time I go further in this series of exchanges, I will have more of these Gaussian factors. And whenever I have a Gaussian factor, I have an additional integration to do that has an x minus something squared in it. I will lose a factor of V-- I don't have that factor of V. And so subsequent terms will be even smaller by powers of 1 over V, presumably compensated by corresponding factors of lambda cubed.

Now, first thing to note is that in the very, very high temperature limit, lambda goes to 0. So I can forget even this correction. What do I get?

I get 1 over N factorial times V over lambda cubed to the power of N. Remember that many, many lectures back we introduced by hand the factor of 1 over N factorial for measuring the phase space of identical particles. And I promised you that we would get it when we did identical particles in quantum mechanics, so here it is.

So automatically, we did the calculation, keeping track of identity of particles at the level of quantum states. Went through the calculation and in the high temperature limit, we get this 1 over N factorial emerging.

Secondly, we see that the corrections to ideal gas behavior emerge as a series in powers of lambda cubed over V. And for example, if I were to take the log of the partition function, I would get the log of what I would have had classically, which is this V over lambda cubed to the power of N divided by N factorial, and then the log of this bracket. And this N times N minus 1 over 2 I'm going to replace with N squared over 2-- there is not that much difference.

And since I'm regarding this as a correction, the log of 1 plus something I will replace with just the something: eta N squared over 2V, times lambda cubed over 2 to the 3/2, plus higher orders. What does this mean?
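That is, schematically:

```latex
% Logarithm of the partition function with the leading exchange correction (sketch):
\[
  \ln Z_N \approx \ln\!\left[\frac{1}{N!}\left(\frac{V}{\lambda^3}\right)^{\!N}\right]
  \;+\; \eta\,\frac{N^2}{2V}\,\frac{\lambda^3}{2^{3/2}} \;+\; \cdots .
\]
```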

Once we have the partition function, we can calculate pressure. Reminding you that beta P was d log Z by dV. The first part is the ideal gas that we had looked at classically. So once I go to the appropriate large-N limit of this, what this gives me is the density N over V.

And then when I look at the derivative here, the derivative of 1 over V will give me a minus 1 over V squared. So I will get minus eta times N over V, the whole thing squared. So I will have minus eta n squared lambda cubed over 2 to the 5/2, and so forth. So I see that the pressure of this ideal gas with no interactions is already different from the classical result that we had calculated, by a factor that actually reflects the statistics. For fermions, eta of minus 1, you get an additional pressure because of the kind of repulsion that we have over here. Whereas for bosons, you get an attraction.

You can see also that the thing that determines this-- so basically, this corresponds to a second virial coefficient, which is minus eta lambda cubed over 2 to the 5/2-- is the volume of these wave packets. So essentially, the corrections are of the order of n lambda cubed, that is, how many particles you will encounter within one of these wave packets. As you go to high temperature, the wave packets shrink. As you go to low temperature, the wave packets expand. If you like, the interactions become more important and you get corrections to ideal gas behavior.
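A small numerical sketch of this leading correction to the pressure follows; the numbers are made up purely to illustrate a degeneracy parameter n lambda cubed of 0.1, and the function name is a placeholder.

```python
def beta_pressure(n, lam, eta):
    """Leading quantum correction to the ideal-gas equation of state (a sketch).

    beta * P ~ n - eta * n**2 * lam**3 / 2**2.5, valid only when n * lam**3 << 1.
    n: number density, lam: thermal wavelength, eta = -1 (fermions) or +1 (bosons).
    """
    return n - eta * n**2 * lam**3 / 2**2.5

# Illustrative numbers only: degeneracy parameter n * lam**3 = 0.1.
n, lam = 0.1, 1.0
print(beta_pressure(n, lam, eta=-1))  # fermions: slightly above the classical value n
print(beta_pressure(n, lam, eta=+1))  # bosons: slightly below the classical value n
```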

AUDIENCE: You assume that we can use perturbation, but the higher terms actually had a factor [INAUDIBLE]. And you can't really use perturbation in that.

PROFESSOR: OK. So what you are worried about is the story here, that I took the log of 1 plus something here, and I'm interested in the limit of N going to infinity at finite density N over V. So already in that limit, you would say that this correction really is overwhelmingly larger than 1. And as you say, the next term will be even larger. So what is the justification in all of this?

We have already encountered this same problem when we were doing these perturbations due to interactions. And the answer is that what you really want to ensure is that not log Z, but Z, has a form that is e to the N times something. And that something will have corrections that are, potentially, powers of the density, which is N over V. And if you try to force it into a perturbation series such as this, naturally things like this happen. What does that really mean?

That really means that the correct thing that you should be expanding is, indeed, log Z. If you were to do the kind of hand-waving that I did here and do the expansion for Z, if you also try to do it over here you will generate terms that look like they are at the wrong order. But the higher-order terms that you would get would naturally conspire so that when you evaluate log Z, they come out right.

You have to do this correctly. And once you have done it correctly, then you can rely on the calculation that you did before as an example. And we did it correctly when we were doing these cluster expansions and the corresponding calculation we did for Q. We saw how the different diagrams were appearing in both Q and the log Q, and how they could be summed over in log Q. But indeed, this mathematically looks awkward and I kind of jumped a step in writing log of 1 plus something that is huge as if it was a small number.

All right. So we have a problem. We want to calculate the simplest system, which is the ideal gas. So classically, we did all of our calculations first for the ideal gas. We had exact results. Then, let's say we had interactions. We did perturbations around that and all of that. And we saw that having to do things for interacting systems is very difficult.

Now, when we start to do calculations for the quantum problem, at least in the way that I set it up for you, it seems that quantum problems are inherently interacting problems. I showed you that even at the level of two particles, it is like having an interaction between the particles, whether they are bosons or fermions. For three particles, it becomes even worse, because it's not only the two-particle interaction. Because of the three-particle exchanges, you would get an additional three-particle interaction, four-particle interaction-- all of these things emerge.

So really, if you want to look at this from the perspective of a partition function, we already see that the exchange term involved having to do a calculation that is equivalent to calculating the second Virial coefficient for an interacting system.

The next one, for the third virial coefficient, I would need to look at the three-body exchanges, kind of like the three-point clusters; then four-point clusters, all kinds of other things are there. So is there any hope?

And the answer is that it is all a matter of perspective. And somehow it is true that these particles in quantum mechanics because of the statistics are subject to all kinds of complicated interactions. But also, the underlying Hamiltonian is simple and non-interacting. We can enumerate all of the wave functions. Everything is simple. So by looking at things in the right basis, we should be able to calculate everything that we need.

So here, I was kind of looking at calculating the partition function in the coordinate basis, which is the worst case scenario because the Hamiltonian is diagonal in the momentum basis. So let's calculate ZN trace of e to the minus beta H in the basis in which H is diagonal. So what are the eigenvalues and eigenfunctions?

Well, the eigenfunctions are the symmetrized or anti-symmetrized quantities. The eigenvalues are simply e to the minus beta sum over alpha of h bar squared k alpha squared over 2m. So this is basically the thing that I could write as the set of k's, appropriately symmetrized or anti-symmetrized, sandwiching e to the minus beta sum over alpha h bar squared k alpha squared over 2m.

Actually, I'm going to-- rather than go through this procedure that we have up there in which I wrote these, what I need to do here is a sum over all k in order to evaluate the trace. So this is inherently a sum over all sets of k's. But this sum is restricted, just like what I had indicated for you before.

Rather than trying to do it that way, I note that these k's I could also write in terms of these occupation numbers. So equivalently, my basis would be the set of occupation numbers. The Boltzmann factor is then e to the minus beta sum over k epsilon k nk, where epsilon k is h bar squared k squared over 2m. But I could do this in principle for any epsilon k that I have over here. So the result that I am writing for you is more general.

Then I sandwich it again, since I'm calculating the trace, with the same state. Now, the states have this restriction that I have over there. That is, for the case of fermions, my nk can be 0 or 1. But there is no restriction on nk for the bosons. Except, of course, that there is this overall restriction that the sum over k of nk has to be N, because I am looking at N-particle states.

Actually, I can remove this because in this basis, e to the minus beta H is diagonal. So I can, basically, remove these entities. And I'm just summing a bunch of exponentials. So that is good because I should be able to do for each nk a sum of e to something nk.

Well, the problem is that I can't sum over each nk independently. Essentially, in the picture that I have over here, I have some n1 here. I have some n2 here. Some n3 here, which are the occupation numbers of these things. And for that partition function, I have to do the sum of these exponentials, e to the minus beta epsilon 1 n1, e to the minus beta epsilon 2 n2. But the sum of all of these n's is constrained to add up to N. I cannot independently sum over them going over the entire range.

But we've seen previously how those constraints can be removed in statistical mechanics. So our usual trick: we go to the ensemble in which N can take any value. So we go to the grand canonical prescription. We remove this constraint on N by evaluating a grand partition function Q, which is a sum over all N of e to the beta mu N times ZN. So we do, essentially, a Laplace transform. We exchange our N with the chemical potential mu.

Then, we no longer need to worry about this constraint. So now I can sum over all of the nk's without worrying about any constraint, provided that I multiply with e to the beta mu N, where N is the sum over k of nk. And then there's the factor that I have here, which is e to the minus beta sum over k epsilon of k nk. So essentially, for each k, I can independently sum over its nk of e to the beta mu minus epsilon of k, times nk.
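In symbols, the step being described is, roughly:

```latex
% Removing the particle-number constraint via the grand canonical ensemble (sketch):
\[
  \mathcal{Q} = \sum_{N=0}^{\infty} e^{\beta\mu N}\, Z_N
  = \prod_{\vec k}\; \sum_{n_k} e^{\beta(\mu-\epsilon_k)\, n_k},
\]
% with n_k \in \{0,1\} for fermions and n_k = 0, 1, 2, \dots for bosons.
```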

Now, the symmetry issues remain. This answer still depends on whether I am calculating things for bosons or fermions, because these sums are constrained differently depending on whether I'm dealing with fermions, in which case nk is only 0 or 1, or bosons, in which case there is no constraint. So what do I get?

For the case of fermions, I have a Q minus, which is a product over all k. And for each k, the nk takes either 0 or 1. So if it takes 0, I will write e to the 0, which is 1. Or, if it takes 1, it is e to the beta mu minus epsilon of k.

For the case of bosons, I have a Q plus. Q plus is, again, a product over k. In this case, with nk going from 0 to infinity, I am summing a geometric series that starts at 1, and the subsequent terms differ by factors of e to the beta mu minus epsilon of k.

Actually, for future reference note that I would be able to do this geometric sum provided that this combination beta mu minus epsilon of k is negative. So that the subsequent terms in this series are decaying.

Typically, we would be interested in things like partition functions, grand partition functions. So we have something like log of Q, which would be a sum over k. And I would have either the log of this quantity or the log of this quantity with a minus sign.

I can combine the two results together by putting a factor of minus eta, because in taking the log, over here for the bosons I would pick up a factor of minus 1 because the thing is in the denominator. And then I would write the log of 1 plus or minus a factor which is e to the beta mu minus epsilon of k, occurring with different signs for the bosons and fermions, which again I can combine into a single expression by putting a minus eta here. So this is a general result for any Hamiltonian that has the characteristic that we wrote over here.
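So the combined result is, as a sketch:

```latex
% Grand partition function for either statistics (sketch):
\[
  \ln \mathcal{Q}_\eta = -\eta \sum_{\vec k}
  \ln\!\left[1 - \eta\, e^{\beta(\mu-\epsilon_k)}\right],
  \qquad \eta = -1 \;(\text{fermions}), \quad \eta = +1 \;(\text{bosons}).
\]
```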

So this does not have to be particles in a box. It could be particles in a harmonic oscillator. These could be energy levels of a harmonic oscillator. All you need to do is to make the appropriate sum over the one-particle levels of the harmonic oscillator, or whatever else you have, of these factors that depend on the individual energy levels of the one-particle system.

Now, one of the things that we will encounter having made this transition from canonical, where we knew how many particles we had, to grand canonical, where we only know the chemical potential, is that we would ultimately want to express things in terms of the number of particles. So it makes sense to calculate how many particles you have given that you have fixed the chemical potential. So for that we note the following.

That essentially, we were able to do this calculation for Q because it was a product of contributions that we had for the individual one-particle states. So clearly, as far as this normalization is concerned, the individual one-particle states are independent. And indeed, what we can say is that in this ensemble, there is a classical probability for a set of occupation numbers of the one-particle states, which is simply a product over the different one-particle states of e to the beta mu minus epsilon k, times nk, appropriately normalized.

And again, the restriction on n's being 0 or 1 for fermions or anything for bosons would be implicit in either case. But in either case, essentially the occupation numbers are independently taken from distributions that I've discussed [INAUDIBLE]. So you can, in fact, independently calculate the average occupation number that you have for each one of these single-particle states.

And it's clear that you could get that by, for example, bringing down a factor of nk here. And you can bring down a factor of nk by taking a derivative of Q with respect to beta epsilon k, with a minus sign, and normalizing it-- so effectively you are taking the derivative of log Q. So you would have an expression such as this.

So you basically would need to calculate, since you are taking the derivative with respect to epsilon k, only the term in the log in which this epsilon k appears. Actually, for the case of fermions, really there are two possibilities: n is either 0 or 1. So you would say that the expectation value would be-- when it is 1, you have e to the beta epsilon of k minus mu.

Oops-- e to the beta mu minus epsilon of k. And the normalization from the two possibilities is 1 plus e to the beta mu minus epsilon of k. So when I look at some particular state, it is either empty, in which case it contributes 0, or it is occupied, in which case it contributes this weight, which has to be appropriately normalized.

If I do the same thing for the case of bosons, it is a bit more complicated, because rather than the geometric series 1 plus x plus x squared plus x cubed, I have to look at the series x plus 2 x squared plus 3 x cubed in the numerator, which can be obtained by taking the derivative of the appropriate log. Or you can fall back on your calculations of geometric series and convince yourself that the result is essentially the same thing, but with a minus sign here. So this is fermions, and this is bosons.

And indeed, I can put the two expressions together by dividing through by this factor in both of them and write it as 1 over z inverse e to the beta epsilon of k, minus eta, where for convenience I have introduced the fugacity z, which is e to the beta mu. So for this system of non-interacting particles that are identical, we have expressions for the log of the grand partition function-- the grand potential-- and for the average number of particles, which is an appropriate derivative of this, expressed in terms of the single-particle energy levels and the chemical potential.
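A minimal numerical sketch of these occupation numbers follows; the function name and the sample parameter values are illustrative only.

```python
import numpy as np

def mean_occupation(eps, mu, beta, eta):
    """Average occupation <n_k> of a one-particle level with energy eps (a sketch).

    <n_k> = 1 / (z**-1 * exp(beta*eps) - eta), with fugacity z = exp(beta*mu);
    eta = -1 gives the Fermi-Dirac form, eta = +1 the Bose-Einstein form.
    """
    z = np.exp(beta * mu)
    return 1.0 / (np.exp(beta * eps) / z - eta)

# Illustrative values only; for bosons we need mu below all energy levels.
eps = np.linspace(0.0, 5.0, 6)
print(mean_occupation(eps, mu=-0.5, beta=1.0, eta=-1))  # Fermi-Dirac: always between 0 and 1
print(mean_occupation(eps, mu=-0.5, beta=1.0, eta=+1))  # Bose-Einstein: can exceed 1
```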

So next time, what we will do is we will start with this expression for the case of particles in a box to get the pressure of the ideal quantum gas as a function of mu. But we want to write the pressure as a function of density, so we will invert this expression to get the chemical potential as a function of density, and therefore get the expression for pressure as a function of density.