Description: In this lecture, Prof. Kardar introduces Series Expansions, including Low-temperature Expansions, High-temperature Expansions, and Exact Solution of the One Dimensional Ising Model.
Instructor: Prof. Mehran Kardar
Lecture 15: Series Expansions
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: OK, let's start. So it's good to remind ourselves why we are doing what we are doing today. So we've seen that in a number of cases, we look at something like the coexistence line of gas and liquid that terminates at the critical point. And that in the vicinity of this critical point, we see various thermodynamic quantities and correlation functions that have properties that are independent of the materials that are considered.
So this led to this concept of universality, and we were able to justify that by looking at properties of this statistical field. And we ended up with the renormalization group procedure, which classified the different universality classes according to the number of components of the order parameter, the thing that categorizes the coexisting phases, and the dimensionality of space.
In particular, something like a liquid-gas system would correspond to n equals to 1. Another example that would correspond to that would be a mixture of two metals in a binary alloy. You can have the different components mixed or phase separated from each other.
So the renormalization group method gave us the reason why there is this universality, but we found that calculating the exponents was a hard task, requiring an expansion coming down from four dimensions. So the question is, given that these numbers, the singularities here, are universal, can we obtain them from a different perspective?
And so let's say we are focused on this kind of liquid-gas system, which belongs to this n equals to 1 universality class. So we can try to imagine the simplest model that we can try to solve that belongs to that universality class. And again, maybe thinking in terms of a binary alloy, something that has two possible values.
In the liquid-gas case, it could be cells that are either empty or filled with a particle. And so this binary model is the Ising model, where, at each site of a lattice, we put a variable that is plus or minus 1.
And so the idea is, again, if I take any one of these Ising models and I coarse-grain them, I will end up with the same statistical field, and it would have the same universality class. But if I make a sufficiently simple version of these models, maybe I can do something else and solve them in a manner that these critical behaviors can come up in an easier fashion.
So let's say we are interested in two dimensions or three dimensions. I can draw two dimensions better. We draw a square lattice. At each site of it, we put one of these variables. And in order to capture this tendency that there is a possibility of coexistence where you have patches that are made of liquid or gas, or made of copper or zinc in our binary alloy, we need to have a tendency for things that are close to each other to be in the same state. That we can capture by a Hamiltonian, which is a sum over nearest neighbors, that gives an enhanced weight if they are parallel.
And whatever that coupling is, once it is rescaled by kT-- this combination, the energy divided by kT-- we can parametrize by a dimensionless number K. And calculating the behavior of the system as a function of temperature, or of the strength of the coupling in this simplified model, amounts to calculating the partition function as a function of the parameter K, which is a sum over-- if I'm in a system that has N sites-- all 2 to the N configurations, of a weight that tries to make variables that are next to each other be in the same state.
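In equation form, the model just described reads (with the angle brackets denoting nearest-neighbor pairs):

```latex
-\beta \mathcal{H} = K \sum_{\langle ij \rangle} \sigma_i \sigma_j ,
\qquad
Z(K) = \sum_{\{\sigma_i = \pm 1\}} \exp\Bigl[\, K \sum_{\langle ij \rangle} \sigma_i \sigma_j \Bigr]
```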
So clearly, what is captured here is a competition between energy-- energy would like everybody to be in the same state-- versus entropy. Entropy wants to have different states at each site. So you'll have a factor of 2 per site as opposed to everybody being aligned, which is essentially one state. And so that competition potentially could lead you to a phase transition between something that has coexistent at low temperature and something that is disordered at high temperatures.
So now we have just recast the problem. Rather than having a partition function which was a functional integral over all configurations of the statistical field, I have to do this partition function, which is a sum over a finite number of configurations, but it's still an interacting theory. I cannot independently move the variable at each site.
So the question is, are there approaches by which I can calculate this? And one set of approaches is to start with a limit that I can solve and start expanding around that. These expansions, analogous to the perturbation expansions for interacting systems that we learned about in 8.333, are in this case usually called series expansions. One performs them on a lattice.
Now, I kind of hinted at two limits of the problem that we know exactly what is happening, and those lead to two different series expansions. One of them is the low-temperature expansions. And here the idea is that I know what the system is doing at T equals to 0. At T equals to 0, I have to find the configuration that minimizes the energy.
T equals to 0 is also equivalent to k going to infinity. I have to find a state that maximizes this weight, and that's obviously the case where all of the spins are either plus or minus. So all sigma i equals to plus 1 or all sigma i equals to minus 1.
But for the sake of doing one or the other, let's imagine that they are all plus, and that I am solving the problem on the generalization of the square or cubic lattice to d dimensions. After all, we were doing d dimensions in general. So in d dimensions, each spin will account for d bonds.
And so if I ask, what is the weight that I will get at zero temperature, essentially each spin would have d factors of K. So the weight that I would get at T equals to 0-- let's call that Z at T equals to 0-- is simply e to the dNK. There are N sites, each one contributing d bonds in d dimensions. Of course, each one of them has 2d neighbors, but then I have to count the number of bonds per site.
So basically, each bond is shared by two neighbors, so half of it contributes to this site. And there are two possibilities, all plus or all minus. So the partition function at T equals to 0 is simply this. It's just the contribution of the two ground states.
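Written out, the zero-temperature limit is:

```latex
% N sites, dN bonds on the d-dimensional hypercubic lattice,
% every bond satisfied, two degenerate ground states (all plus, all minus):
Z(T \to 0) \simeq 2\, e^{dNK}
```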
Now, we are interested in the limit where T goes to 0. So at T equals to 0, I know what is happening. Now, what will I get as I allow temperature to be larger? At some cost, I am able to flip some of these spins from, say, plus to minus. And I will get, in this case, islands of minus spins in a sea of pluses.
And these islands will give a contribution that is exponentially small in K, and has to do with the bonds that I have broken. And by broken, I mean gone from the satisfied plus-plus state to the unsatisfied plus-minus state-- in fact, a cost of 2K per broken bond, so a weight of e to the minus 2K times the number of broken bonds.
So we can very easily write the first few terms in this series. So let's make a list of the excitations, or islands, that I can make; how many ways I can make each one, which I will call the degeneracy; and the number of broken bonds. So clearly, the simplest thing that I can do in a sea of pluses is to make one island, which is simply a site that was previously plus and now has gone to become a minus.
And this particular excitation can occur at any one of N places if I have a lattice of size N. And I'm going to ignore any corrections that I may have from the edges. If you want, you can include them, and that would be more precise. But let's focus, essentially, on things that are proportional to N.
Then how many bonds have I broken? You can see that in two dimensions, I have broken four bonds. In three dimensions, it would have been six. So essentially, it is 2d, twice the number of dimensions. And so the contribution to the weight is going to be e to the minus 2K times 2d, which is e to the minus 4dK-- each bond went from plus K to minus K, so each broken bond costs a factor of 2K.
Now, the next thing that I can do, the next lowest energy excitation, is to put two minuses that are next to each other in this sea of pluses. OK? Now, you can see that in two dimensions, I can orient this pair along the x-direction or along the y-direction. And in general, there would be d directions, so I would have a degeneracy of dN.
Roughly, you would say that the number of bonds that you have broken is twice what you had before, if the two were separate. But there is this bond in between that is now actually a satisfied bond. So you can convince yourself that if the two of them were separate, these two minus excitations, I would have 4d broken bonds. But because I joined them, essentially I have 2d minus 1 from each one of them, and there are two of those.
And of course, the next lowest excitation would indeed be to have two minuses that share no site, that are totally separate from each other. And the contribution-- the number of these, well, this is something to count. The first one can be at any one of N places.
The next one can be at any one of N minus 2d minus 1 places. It cannot be on the same site, and it cannot be at any of the 2d neighbors. And I should correct for double counting, so there is a factor of 1 over 2 here. And the cost of this is simply twice that of a single flip, so this is 4d broken bonds.
So if I want to start writing a partition function expanded beyond what I have at zero temperature, what I would have would be 2 e to the dNK, the zero-temperature contribution, times 1 plus N e to the minus 4dK, plus dN e to the minus 4(2d minus 1)K, and then, from the other one that I have written, N(N minus 2d minus 1) over 2, e to the minus 8dK. And I can keep going and adding higher and higher order terms of the series. OK?
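Collected as a formula, the low-temperature series so far is:

```latex
Z = 2\, e^{dNK} \Bigl[\, 1 + N e^{-4dK} + dN\, e^{-4(2d-1)K}
    + \frac{N(N-2d-1)}{2}\, e^{-8dK} + \cdots \Bigr]
```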
OK. Once I have the partition function, I can start calculating the energy, which would be minus d log Z by d beta. What is beta? Well, I said that this factor K is something like J over kT, which is beta times J.
So assuming that I have a fixed coupling J and I'm changing temperature, so that the variations of K reflect the inverse temperature beta, I can certainly multiply here by a J and divide by a J, which is a constant. And all I need to do is to take minus J, d by dK, of the log of the expression that I have above there. OK?
So let's take the log of that expression. I have log of 2 plus dNK from here. And then I have the log of 1 plus the terms in the series that I have calculated perturbatively. Now, the log of 1 plus a small quantity x I can always expand as x minus x squared over 2, and so on. You may worry whether or not, with N ultimately going to infinity, this is a small quantity.
Neglecting that for the time being, if I look at this as log of 1 plus a small quantity, from here I would get N e to the minus 4dK, plus dN e to the minus 4(2d minus 1)K, plus N(N minus 2d minus 1) over 2, e to the minus 8dK, and so forth.
But then remember that log of 1 plus x is x minus x squared over 2 plus x cubed over 3, and so forth. So if x is my small quantity, I will have a correction, which is minus x squared over 2. Let's just do it for this first term. I will get minus N squared over 2e to the minus 8dk, and there will be a whole bunch of higher-order terms. OK?
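Carrying out this cancellation of the order-N-squared pieces, the log becomes:

```latex
\ln Z = \ln 2 + dNK + N e^{-4dK} + dN\, e^{-4(2d-1)K}
        - \frac{(2d+1)N}{2}\, e^{-8dK} + \cdots
```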
Now, where am I going with this? Ultimately, I want to calculate various quantities that are extensive in the sense that they are proportional to N, and when I divide by N I will get something like energy per site.
So if I do that, if I divide this whole thing by N, I can see that here I have a term that is log 2 divided by N. In the N goes to infinity limit, that's a term of order 1 over N that I can neglect. But all of these other terms are proportional to N, and when I divide by N, I can drop these factors of N.
Well, except that I have a couple of terms that, if left by themselves, potentially could have been order of N squared. I have N squared over 2, but fortunately, you can see that it cancels out over there.
Now, the reason this happens, and also the reason this series is legitimate, is because we already did something very similar in 8.333 when we were doing cumulant expansions. There, we obtained a series for the grand partition function, which was a whole bunch of terms.
But when we took the log, only the connected terms survived. And the connected terms were the things that, because they had a center of mass, were giving you a factor that was proportional to volume. And here you expect that ultimately everything that I will get here, if I calculate, let's say, log Z properly and then divide by N, it should be something that is order of 1. It shouldn't be order of N, or N cubed, or any of these other terms.
So essentially, the purpose of all of these higher-order terms is really to subtract off things such as this that would arise in the counting when we look at islands and excitations that are disconnected. So I could have something right here, something right here.
So this would be, essentially, a product of the contributions of these different islands. And whenever they are disconnected, there would be a subtraction from some term in the series that would get rid of that, and would ensure that these additional factors of N-- coming from moving each island independently over the entire lattice-- disappear.
So I have this series, and now I can basically take the derivative. So I have minus J, and I take d by dK of the various terms that have survived. The first one is d. So minus dJ is essentially the energy per site that I would have at zero temperature: d bonds per site, each of strength J. And then the excitations will start to reduce and correct that.
And so from here, I would get minus 4d e to the minus 4dK. From here, I would get minus 4(2d minus 1)d e to the minus 4(2d minus 1)K. The N-squared terms disappeared, so I would have minus (2d plus 1) over 2, but then it gets multiplied by minus 8d when I take the derivative. So I will get plus 4d(2d plus 1) e to the minus 8dK, and so forth in the series. OK?
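So the energy per site reads:

```latex
\frac{E}{N} = -J \Bigl[\, d - 4d\, e^{-4dK} - 4d(2d-1)\, e^{-4(2d-1)K}
              + 4d(2d+1)\, e^{-8dK} + \cdots \Bigr]
```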
So these terms that are subtractions, you can see, connect very easily to these primary excitations. If you like, this term corresponds to taking two of these and colliding them with each other. They cannot be on top of each other. They cannot be next to each other. And so there is a subtraction because a number of configurations are not allowed. So this is, in some sense, a kind of expansion in these excitations and the interactions among these excitations. OK?
Now presumably, what is happening is that at very low temperature, you are going to get these individual simple excitations with a little bit of interaction between them. As you increase the temperature, the size of these islands will get bigger and bigger.
They start to merge into each other. Configurations that you would see will be big islands in a sea. And presumably, the size of these islands is some measure of the correlation length that you have in this low temperature state.
Eventually, this correlation length will hit the size of the system. And then the starting point, that you had a sea of pluses and you're exciting around it, is no longer valid. If you like, that vacuum state has become unstable, and this series, the way that we are constructing it, cannot be continued beyond that point. OK?
So let's take another step. If I've calculated the energy, I could also calculate the heat capacity, which is dE by dT. Actually, I expect the heat capacity to be extensive also, so I'll divide by N. So I will look at the heat capacity per site. I know that the natural units of heat capacity are kB, which has dimensions of energy divided by temperature. So I divide by kB, which means that here I will be taking d by d of kBT.
But then I notice that kBT is related inversely to capital K: kBT is J over K. And d by d(1 over K) will give me a minus K squared d by dK, with a factor of 1 over J. That 1 over J cancels the factor of J in the energy.
So the expression that I have above, I have to take another derivative with respect to K and multiply by minus K squared over J. The J's cancel out, and so I will have a series that is proportional to K squared. Good, I made everything dimensionless.
And then the first term that will contribute will be 16 d squared e to the minus 4dK, from here. From here, I will get 16(2d minus 1) squared d e to the minus 4(2d minus 1)K. And from here, I would get minus 32 d squared (2d plus 1) e to the minus 8dK, and then so forth. OK?
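Putting it together (the last coefficient carries the factor d squared that follows from differentiating the energy series above):

```latex
\frac{C}{N k_B} = K^2 \Bigl[\, 16 d^2\, e^{-4dK} + 16 d (2d-1)^2\, e^{-4(2d-1)K}
                  - 32 d^2 (2d+1)\, e^{-8dK} + \cdots \Bigr]
```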
So you can see that this is something that is a kind of mechanical process, that in the '40s and '50s, without even the need for any computers, people could sit down and draw excitations, provide these terms, and go to higher and higher order terms in the series.
Now, the reason that they were doing this is the expectation that if I look at this heat capacity as a function of something like temperature, which goes like 1 over K, then it starts at 0. And I get corrections from these higher and higher order terms in the series-- I calculated the first few-- but I don't know what will happen if I were to include higher and higher order terms.
But my expectation is that, say, at least at some point when this expansion from low temperature breaks down, I will have a divergence, let's say, of the heat capacity. Or maybe I calculated susceptibility or some other quantity, and I expect to have some singularity. And maybe by looking and fitting more terms in the series, one can guess what the exponent and the location of the singularity is.
So you can see that, actually in this case, the natural variable that I am expanding in is not K, but e to the minus 2K, because each excitation has some number of broken bonds, and each broken bond makes a contribution like this. So maybe we can call this our new variable, u. And we have a series that, as a function of this or some other variable, has a singularity.
Actually, you should be able to, first of all, convince yourself that the nature of the singularity is not modified by any mapping that is analytic at the point of the singularity. So if the heat capacity as a function of K has a particular divergence, as a function of u it will have exactly the same divergence.
In particular, we expect that as u approaches some critical value uC, the kinds of functions that we are interested in have a singular behavior that is something like (1 minus u over uC) to the power of minus alpha. Let's say for the heat capacity, I would expect some kind of a singularity such as this.
If I had a pure function such as this and I constructed an expansion in u, what do I get? I will get 1 plus alpha u over uC, plus alpha(alpha plus 1) over 2 times u squared over uC squared, and so forth. It's just a binomial series expanded. The l-th term in the series would be alpha(alpha plus 1) ... (alpha plus l minus 1), divided by l factorial, times u over uC to the power of l, and so forth. OK?
Now, typically, one of the ways that you look at a series and decide whether it's singular or convergent, or what its behavior is, is to look at the ratio of subsequent terms. So let's say that when I calculated my function C as a function of u, I constructed a series whose terms have coefficients that I will call al. OK?
So here, if you had exactly this series, you would say that the ratio al divided by al minus 1 is essentially the ratio of one of these factors compared to the previous one. Every time you add one of these factors, you add a term like alpha plus l minus 1; l factorial compared to l minus 1 factorial has a factor of l; and then you have a uC. And I can rewrite this as uC inverse times 1 minus (1 minus alpha) over l. OK?
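In formulas, the assumed singularity and the resulting ratio are:

```latex
C(u) \sim A\Bigl(1 - \frac{u}{u_c}\Bigr)^{-\alpha} = \sum_{\ell} a_\ell\, u^\ell ,
\qquad
a_\ell \propto \frac{\alpha(\alpha+1)\cdots(\alpha+\ell-1)}{\ell!\; u_c^{\ell}}
\quad \Longrightarrow \quad
\frac{a_\ell}{a_{\ell-1}} = \frac{1}{u_c}\Bigl[ 1 - \frac{1-\alpha}{\ell} \Bigr]
```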
So a pure divergence of the form that I have over here would predict that the ratio of subsequent terms would be something like this. And presumably, if you go sufficiently high in the series, in order to reproduce this divergence you must have that form.
So what you could do as a test is to plot, for your actual series, the ratio of subsequent terms as a function of 1 over l. So you can start with the ratio of the second to the first term; you would be at 1/2. Then you would go to 1/3, then 1/4, then 1/5, and basically you would have a set of points: the location of the ratio for the first terms in the series, the next term in the series, the next term, and so forth.
And if you are lucky, you would be able to pass a straight line through the points deep in the series. The intercept of that extrapolated line would be the inverse of the singular point, 1 over uC. And the slope of this line, measured in units of the intercept, would give you minus (1 minus alpha). OK?
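As a minimal numerical sketch of this ratio-test extrapolation (the coefficients here are generated from the pure singularity itself, purely for illustration; in practice they would come from the series expansion):

```python
import numpy as np

# Illustrative coefficients a_1..a_L for C(u) = sum_l a_l u^l, generated
# from the pure singularity (1 - u/u_c)^(-alpha); alpha and u_c are
# placeholder values, not results of any actual expansion.
alpha, u_c = 0.11, 0.5
L = 20
a = np.ones(L + 1)
for l in range(1, L + 1):
    # a_l / a_{l-1} = (alpha + l - 1) / (l * u_c)
    a[l] = a[l - 1] * (alpha + l - 1) / (l * u_c)

# Ratio test: a straight-line fit of a_l / a_{l-1} against 1/l gives
# intercept = 1/u_c and slope/intercept = -(1 - alpha).
ls = np.arange(2, L + 1)
ratios = a[2:] / a[1:-1]
slope, intercept = np.polyfit(1.0 / ls, ratios, 1)

print("estimated u_c   =", 1.0 / intercept)
print("estimated alpha =", 1.0 + slope / intercept)
```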
So there is really, a priori, not much reason to hope that that will happen. Because you can say: suppose I look at the function A (1 minus u over uC) to the minus alpha, plus an analytic part, which is a sum from l equals 1 to, say, 52, of bl u to the l. For any bl, this function has exactly the same singularity as the original one. And yet the first 52 terms in the series, because of this additional analytic part, have nothing to do with the eventual singularity. They're going to be masking it.
So there is no reason for you to expect that this should work. But when people do this, they find that, let's say, for d equals to 2, up to some jumping up and down, they get a reasonable straight line. And the exponent that they get corresponds very closely to alpha equals 0, which is the logarithmic divergence that one gets in two dimensions.
So this is for d equals to 2. And then they repeat it, let's say, for d equals to 3, and they get a different set of points. OK? Maybe not perfectly on a straight line, but you can still extrapolate and conclude from that that you have an alpha which is roughly 0.11 when d equals to 3, which is quite good.
So for some reason or other, these lattice models are sufficiently simple that, in an appropriate expansion, they don't seem to give you that much of a problem. And so people have gone and calculated series-- let's say, this was in the '50s and '60s, just by drawing things by hand, and maybe some primitive computers. You can go to order of 20 terms in this series, and then extrapolate exponents for various quantities. OK?
But it's not as simple as that. And the reason I calculated the first three terms for you was to show you that what I told you here was clearly a lie. Why is that? Because of the three terms that I explicitly calculated for you in that series, the third one is negative. Right? So clearly, if I were to plot the corresponding ratio, I would get something down here. Right?
So what's going on there is a different issue. And people have developed methodologies and ways to look at series, guess what is going on, and still continue to extract exponents. So one potential origin for alternating signs-- and any series that describes a divergence such as the one that I have indicated for you must eventually have all positive signs-- has to do with the following.
Let's say I take a function which is 1 over (1 minus z/2). OK? This has a very nice series: it's 1 plus z/2, plus z squared over 4, plus z cubed over 8, and so on. You could apply this ratio test to this series and conclude that you have a divergence, a simple pole, at z equals to 2.
Now, suppose I multiply that by 1 over (1 plus z squared), which is a function that's perfectly well-behaved as a function of real z. If I expand this new factor, I will get 1 minus z squared plus z to the fourth minus z to the sixth. And what it does is it kind of distorts what is happening over here. Actually, you can see the product series kind of becomes ill-defined when z is of order of 1. It changes signs, et cetera.
But the function itself has a perfectly good singularity that appears at z equals to 2. And starting from an expansion around 0, there should be no problem along the real line until you hit z equals to 2. So what is the reason for these alternating signs? It is because you should be looking at the complex z plane. And in the complex z plane, you have poles at plus and minus i, which are located closer to the origin than the pole at 2.
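Working out the first few terms of this example explicitly:

```latex
% The radius of convergence is set by the poles at z = +i and z = -i (|z| = 1),
% not by the physical singularity at z = 2:
\frac{1}{(1 - z/2)(1 + z^2)}
 = 1 + \frac{z}{2} - \frac{3 z^2}{4} - \frac{3 z^3}{8} + \frac{13 z^4}{16} + \cdots
```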
So basically, your series will start to have problems by the time you hit that radius, and that problem is reflected in the alternating behavior. It's also showing up over there. Yet it has nothing to do with going along the real axis and encountering the singularity that you are after. OK?
So one thing that you can do is to say, well, who said I should use z as my variable? Maybe I can choose some other variable, v of z. OK? And when I choose the appropriate thing, the singularity on the real axis will be pushed to v of 2. But maybe I chose an appropriate function v of z such that the other singularities are pushed very far away, so that the first singularity that I encounter is the physical one over here. OK?
And it turns out that if you take this series over here and, rather than working with e to the minus 2K, recast things in terms of tanh K-- let's call that v-- which is (e to the K minus e to the minus K) over (e to the K plus e to the minus K). Actually, tanh K I can also write as (e to the 2K minus 1) over (e to the 2K plus 1). I mean, it's just a transformation. So I can express e to the minus 2K in terms of v, substitute for u in that series, and I will have a different function as an expansion in powers of v.
And once people do that, the same thing happens as here: you find a series whose terms are, in fact, all positive, and the things that I mentioned to you over here can be applied. After such a transformation, you get very nice behaviors. OK? So there seems to be some guesswork in finding the appropriate transformation.
There are other methods for dealing with series and extracting singularities called Pade approximants, et cetera, which I won't go into. But there are kind of, again, clever mathematical tricks for extracting singularity out of series such as this. OK?
So I'll tell you shortly why this tanh K is really a good expansion variable. It turns out that for Ising models, it's actually the right expansion variable if we go to the other limit of high temperatures. OK? So basically, now at T going to infinity, you would say that sigma i is minus or plus 1 with equal probability.
As T goes to infinity, this coupling that encodes the tendency of neighboring spins to be in the same state has been scaled to 0, so I know exactly what is going on at infinite temperature. Basically, at each site, I have an independent variable that is decoupled from everything else. So I can start expanding around that for, say, the partition function.
Let's think of it for a general spin system. So I will write the partition function as a trace over, let's say-- if I have a Potts model rather than two values, I would have q values per site-- of something like e to the minus beta H, again, trying to be reasonably general. And the idea is that as you go to infinite temperature, beta goes to 0, and this function you can expand in a series: 1 minus beta H plus beta squared H squared over 2, and so forth.
Now, the trace of 1 is essentially summing over all possible states-- the two states that you would have for the Ising model at each site, or however many you have for the Potts model-- independently at each site. So that will give me some partition function that I will call Z0. It is simply 2 to the N for the Ising model.
But once I factor that, you can see that the rest of the terms in the series can be regarded as expectation values of this Hamiltonian with respect to this weight in which all of the degrees of freedom are treated as independent, unconstrained variables.
And of course, the thing that I'm interested in is the log of the partition function. And so that will give me log of Z0, and then I have the log of this series. And then you can see that that series is a generating function for the moments of the Hamiltonian. So its log will be the generating function for the cumulants-- H to the l, zero, cumulant. So the variance at second order, and the appropriate cumulant at higher orders. OK?
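So the high-temperature expansion has the structure:

```latex
Z = \operatorname{tr}\bigl[ e^{-\beta \mathcal{H}} \bigr]
  = Z_0 \,\bigl\langle e^{-\beta \mathcal{H}} \bigr\rangle_0
\quad \Longrightarrow \quad
\ln Z = \ln Z_0 + \sum_{\ell = 1}^{\infty} \frac{(-\beta)^{\ell}}{\ell!}
        \bigl\langle \mathcal{H}^{\ell} \bigr\rangle_0^{\,c}
```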
So let's try to calculate this for the Ising model, where my minus beta H is K, sum over nearest-neighbor pairs i, j, of sigma i sigma j. OK? Then at the lowest order, what do I get? The average of minus beta H is K, sum over i, j, of the average of sigma i sigma j with this zeroth-order weight.
But as I emphasized, with the zeroth-order weight, every site independently can be plus or minus. Because of the independence, I can do this average site by site. And then, since each site has equal probability to be plus or minus, its average is 0. So basically, this will be 0. OK?
So the first nonzero thing in that series comes if I go to the next order. At the next order, (beta H) squared would involve K squared, sum over pairs i, j and k, l, of sigma i sigma j sigma k sigma l. And I have to take an average of this, which means that I have to take an average of something like this.
OK. And you would say, well, again, everything is 0. Well, there is one case where it won't be 0-- if these two pairs are identical. Right? So this is going to give me K squared, sum over the pair i, j being the same as k, l. Then I will get, essentially, sigma i squared sigma j squared. Sigma i squared is 1. Sigma j squared is 1. So basically, I will get 1. And this is going to give me K squared times the number of bonds. OK?
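That is, at second order:

```latex
% Only terms with <ij> = <kl> survive, since <sigma>_0 = 0 and sigma^2 = 1:
\bigl\langle (\beta \mathcal{H})^2 \bigr\rangle_0
 = K^2 \sum_{\langle ij \rangle} \sum_{\langle kl \rangle}
   \bigl\langle \sigma_i \sigma_j\, \sigma_k \sigma_l \bigr\rangle_0
 = K^2 \times (\text{number of bonds})
```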
So you can see that I can start thinking of this already graphically. Because what I did over here, I said that on my lattice this sum says you pick one sigma i sigma j. If I were to pick a different sigma k sigma l over here, the average would be 0. I am forced to put two of them on top of each other.
If I go to third order, there is no way that I can draw a diagram that involves three pairs in which every single site occurs twice, which is what I need. Because a single site appearing by itself, or three times, will give me sigma i cubed, which is the same as sigma i. It will average to 0.
So the next thing that I can do is to go to fourth order. At fourth order, I can certainly do something like this: I can put all four of them on top of each other, and then I get a K to the fourth contribution. Or I could put a pair here and a pair there-- but for log Z that would be unacceptable, because it's not a connected piece, it's a disconnected piece, and it will get subtracted out when I calculate the cumulant. But I could have something like this, two pairs doubled up like this. So that's four bonds.
But really, the one that is nontrivial and interesting is when I do something like this, like a square. So I go here: sigma 1 sigma 2, sigma 2 sigma 3-- that sigma 2 has been repeated twice, becomes sigma 2 squared, and goes away. Then sigma 3 sigma 4 and sigma 4 sigma 1: sigma 3 is repeated twice, sigma 4 is repeated twice, sigma 1 is repeated twice, so everything survives the average. OK?
So you can see that this kind of expansion will naturally lead you into an expansion in terms of loops on a lattice. So the natural form of high temperature expansions are these closed strings or loops, if you like, that you have to draw on the lattice.
Now, it's also clear that the line that goes between two sites, which I'm indicating by a factor of K, can in all cases be repeated by putting more and more lines on top of each other without modifying the constraint. So I can go between two sites 2 or 4 times, or indeed 1 or 3 times, and things like that. So basically, you can see that I should really do a summation over the contribution of 2, 4, et cetera, all on top of each other, or 1, 3, 5 on top of each other, and call them new variables.
So when we were doing the cluster expansion for interacting particles, we first thought the potential V was a good variable to expand in. But then, because of these repeats, we decided that e to the minus beta V, minus 1, was a good variable to expand in.
So a similar thing happens here. And for the Ising model, it is a very natural thing to recast this series in a slightly different way. You see that the contribution of each bond to the partition function-- and by a bond I mean a pair of neighboring sites-- is a factor of e to the K sigma i sigma j. OK?
Now, since we are dealing with binary variables, this product sigma i sigma j can only take two values, so the exponent is either plus K or minus K depending on whether the spins are aligned or misaligned. So I can capture the binary nature of this in the following fashion. I can write this as (e to the K plus e to the minus K) over 2, plus sigma i sigma j times (e to the K minus e to the minus K) over 2.
So that when I'm dealing with sigma sigma being plus, I add those two factors: the e to the minus K's disappear, and I will get e to the K. When I'm dealing with sigma sigma being minus, the e to the K's disappear, and I will get e to the minus K. So it's a correct rewriting of that factor.
The first term you, of course, recognize as the hyperbolic cosine of K, the second one as the hyperbolic sine of K. And so I can write the whole thing as cosh K times (1 plus tanh K sigma i sigma j). OK?
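So the per-bond identity is:

```latex
e^{K \sigma_i \sigma_j} = \cosh K + \sigma_i \sigma_j \sinh K
 = \cosh K \,\bigl( 1 + t\, \sigma_i \sigma_j \bigr),
\qquad t \equiv \tanh K
```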
So this tanh is really the same thing as here. It's the high-temperature expansion variable. As K goes to 0 at high temperature, tanh K also goes to 0. And it turns out that a much nicer variable to expand in is this quantity tanh K. And so that I don't have to repeat it throughout, I will give it the symbol t. So small t stands not for reduced temperature anymore, but for the hyperbolic tangent of K.
So my partition function now, Z-- maybe I'll go to another page. So my partition function is a sum over the 2 to the N binary variables of e to the K sigma i sigma j, summed over all bonds in the exponent. I can write that as a product of these exponential factors over all bonds.
Each of these exponential factors I can write as cosh K times (1 plus t sigma i sigma j). All the factors of cosh K I will take to the outside. So I will get cosh K raised to the power of the number of bonds that I have in my lattice, because each bond will contribute one of these factors. And then I have this sum over sigma i of a product over bonds of these 1 plus t factors.
So for each-- maybe I'll do it over here. So for each i, j, I have to pick one of these factors. I can either pick 1, nothing, or I can pick a factor of t sigma i sigma j. OK? So the first term in this series-- since it's a series in powers of t, the first term in the series is to pick 1 everywhere.
The next term is to pick one factor at some point. But then when I pick that factor, that term in the series, I have to sum over sigma i. And when I sum over sigma i, since this sigma i can be plus or minus with equal probability, it will give me 0. OK?
So I cannot leave this sigma i by itself. So maybe I will pick, from another bond, a higher-order term in the series that has a t and a sigma i, which would make this into a sigma i squared, and then I will have a sigma k left over here. OK?
Now, note this is kind of similar to what I was doing over there. But there I could pick as many factors of K as I liked on each bond. What has happened here is that, effectively, I have only two choices.
One choice is having gone many, many times, so summing all of the terms that had 2, 4, et cetera. That's what gives you the cosh K. Or including something like this, sum of 1, 3, 5, et cetera. That's what gives you tanh K. But the good thing is that it's really now a binary choice. You either draw one line, or you don't draw anything. OK?
So again, your only choice is to somehow complete the loop by drawing something like this. And quite generically-- OK, so after that has happened, then this is sigma i squared, this is sigma j squared. These have all gone to 1. And then you do the sum over each sigma i, and you will get a factor of 2.
So the answer is going to be 2 to the number of sites, N, times cosh K to the power of the number of bonds. And then I would have a series, which is the sum over all graphs with an even number of bonds per site, like here. So I either have 0 bonds going into a site, or I can have two bonds. I could very well have something like this, four bonds. That doesn't violate anything.
So all I need to ensure, in order that the sum over sigma i does not give 0, is that I have an even number of bonds per site. And then the contribution of the graph is t to the number of bonds in the graph. And at this stage, when I'm calculating a partition function, there is no reason why I could not have disconnected graphs. For the partition function, there is no problem. Presumably, when I take the log, the disconnected pieces will go away. OK? Yes?
AUDIENCE: Where does the 2 to the N come from again?
PROFESSOR: OK. So at each site, I have to sum over sigma i, which is either minus 1 or plus 1. What I'm doing is summing, over the two values of sigma i, sigma i to some power p. And this is either going to give me 2 or 0, depending on whether p is even or p is odd. All right? OK?
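So the general result of this bookkeeping is:

```latex
% Per site: the sum over sigma = +1, -1 of sigma^p gives 2 for p even, 0 for p odd.
Z = 2^N \,(\cosh K)^{\text{number of bonds}}
    \sum_{\substack{\text{graphs with an even}\\ \text{number of bonds per site}}}
    t^{\,\text{number of bonds in graph}}
```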
So you can try to calculate general terms for this series. Let's say we go to the hypercubic lattice, which is what we were doing before. The number of bonds per site is d, so for the hypercubic lattice, the total number of bonds will be dN. You could do this calculation for a triangular lattice or an FCC lattice. You don't have to stick with these hypercubic lattices.
The first diagram that you can create is always the square. OK? And in d dimensions, one leg has a choice of d directions; the next one would be d minus 1, divided by 2 for symmetry. So this would be d(d minus 1) over 2, t to the fourth. And you could start it from any site on the lattice, so you would have N d(d minus 1) over 2, t to the fourth.
The next term that you would have in the series is something that involves six bonds. So the next term will be of the form of some numerical factor times N t to the 6. I think at some point I convinced myself what that numerical factor was, but it doesn't matter; you could calculate it in the same way. Yes?
AUDIENCE: What if we have diagrams of order of t squared, just [INAUDIBLE] there and back?
PROFESSOR: OK. Where would I get a t squared from here? So from this bond, I have this factor, 1 plus t sigma i sigma j. There is no t squared term. I would have had K squared, K to the fourth, et cetera, but I re-summed all of them into the hyperbolic cosine and the hyperbolic sine. So this--
AUDIENCE: So [INAUDIBLE] taking this product along all the bonds, you can kind of go along the same bond.
PROFESSOR: We already summed all of those things together into this factor t.
AUDIENCE: OK.
PROFESSOR: Yeah? OK? Yeah, it's good. And that's why this tanh is such a nice variable. OK? So actually, the nicer series to work with, in terms of trying to extract exponents, is this high-temperature series in terms of these new diagrams, et cetera. But I'm not going to be doing diagrammatics.
What I will be using this high-temperature series for is the following: one, to show in a few minutes that we can use it to exactly solve the one-dimensional Ising model and gain a physical understanding of what's happening; and two, to re-derive the Gaussian model.
Turns out that there is a close connection between all of these loops that you can draw on a lattice through some kind of a path integral way of thinking about it with the Gaussian model. And that we actually will use as a stepping stone towards where we are really headed, which is the exact solution of the 2D Ising model. OK?
So, the 1D Ising model. And actually, the method is sufficiently powerful that we can compare and contrast two cases, one where you have an open chain. So this is a system that is composed of sites 1, 2, 3, 4, up to N minus 1, N. On each one of them I have an Ising variable.
And if I follow my nose, Z is 2 to the number of sites times cosh K to the power of the number of bonds. Clearly, with open boundaries, the number of bonds is one less than the number of sites, so I can be extremely precise: it is N minus 1. And then I have to multiply by the sum over all graphs that I can draw on this lattice that have an even number of bonds emanating from each site. Find one.
[LAUGHTER]
OK. Since you won't find one, that 1 stands by itself: the zeroth-order term is essentially the only term in this series. So you can take the log of that, and you have the free energy, whatever you like. Yes? That was the question. OK. All right?
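In other words, for the open chain only the trivial graph survives:

```latex
Z_{\text{open chain}} = 2^N (\cosh K)^{N-1}
```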
You can use the same thing, same technology, to calculate spin-spin correlation. So I pick spins m and n on this chain. Let's say this is spin m here, and somewhere here I put spin n. And I want to know the average of that quantity. What am I supposed to do?
I'm supposed to sum over all configurations with the weight e to the K sigma i sigma i plus 1, product over all-- well, actually, we can be general with this. Let's call it a product over all bonds, which, in this case, are nearest neighbors, of e to the K sigma i sigma j. That weight I have to multiply by sigma m sigma n. And then I have to divide by the partition function so that this is appropriately normalized. OK?
So I can do precisely the same decomposition over here. So I will have 2 to the N cosh K to the number of bonds-- in fact, this I can do in any dimension; it's not something that holds only in one dimension. And the partition function, you have seen, is the sum over all graphs g of t to the number of bonds in graph g.
Now I can do the same kind of expansion that I did over here. If I multiply with an additional sigma m sigma n, it is just like I already have a sigma m and a sigma n somewhere. And when I sum over sigmas, I have to make sure that these things don't average to 0.
So what I need to do is to draw graphs that have an even number of bonds at all sites, and an odd number at these two sites. All right? So this is a sum, over the subset of graphs that are even everywhere except at m and n, where you have to have an odd number, of t to the number of bonds in the graph. OK?
So if I do this for the 1D model, sigma m sigma n, I have to draw graphs that have, essentially, an odd number. Essentially, sigma m and sigma n should be the origins or ends of lines. And clearly, I can draw a graph that connects these two. And so what I will get is t to the number of steps that I have to make between the two of them.
The rest of it is going to be the same: the 2 to the N cosh K to the N minus 1 in the numerator and the denominator cancel each other. OK? So you can see explicitly that this is a function that decays as I go further and further out, since t is less than 1. And it is a pure exponential.
So you remember that we said in general you would have a power law in front of the exponential, with some exponent theta. And when we did the RG, I told you, well, theta came out in one dimension such that you have a pure exponential.
Well, here is the proof. And furthermore, from this we see that the correlation length xi is minus 1 over the log of the hyperbolic tangent of K. And if you expand that, you will find that as K goes to infinity, it has precisely that e to the 2K divergence that we had calculated before.
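Explicitly, for the open chain:

```latex
\langle \sigma_m \sigma_n \rangle = t^{\,|n - m|} = e^{-|n-m|/\xi},
\qquad
\xi = -\frac{1}{\ln \tanh K} \;\simeq\; \frac{e^{2K}}{2}
\quad \text{as } K \to \infty
```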
So you can see that calculating things using this graphical method is very simple. And essentially, the interpretation of t is that it is the fidelity with which information goes from one site to the next site. And so the further away you go every time, you lose a factor of t in how sure you are about the nature of where you started with. And so as you go further, you have this exponential decay. OK?
And the other thing that we can do at no cost is periodic boundary conditions. So we take, again, our spins 1, 2, 3, except that we then bend it such that the last one comes and gets connected to the first one. OK? So what's the partition function in this case? It is 2 to the N.
The number of bonds, in this case, is exactly the same as the number of sites-- it's one more than before-- so I get cosh K raised to the power of N. And then, is it just the 1? There is one graph that goes all the way around the ring, so I have 1 plus t to the N. This is an exponentially small correction as the system gets bigger and bigger. You can kind of regard it as a finite-size correction.
I can similarly calculate the expectation value of sigma m sigma n. OK? And in the denominator, from the partition function, I have this factor of 1 plus t to the N. In the numerator, again, you should be able to see two graphs: we can either connect this way, or we can connect the other way around the ring. So you'll have t to the power of n minus m; but you don't know which arc is the shorter one, so you'll have to also include t to the power of N minus (n minus m). OK?
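Collecting the periodic-chain results:

```latex
Z_{\text{ring}} = 2^N (\cosh K)^N \bigl( 1 + t^N \bigr),
\qquad
\langle \sigma_m \sigma_n \rangle
 = \frac{t^{\,n-m} + t^{\,N-(n-m)}}{1 + t^N}
```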
So again, if we take N to infinity with these two spins sufficiently close, you can see that all of these finite-size effects, boundary effects, et cetera, disappear. But this is, again, a toy model in which to think about what the effects of boundaries are, et cetera. You can see how nicely this graphical method enables you to calculate things very rapidly. We'll see that, again, it provides the right tools, conceptually, to think about what happens in higher dimensions.