Description: In this lecture, Prof. Kardar introduces the Perturbative Renormalization Group, including the Niemeijer-van Leeuwen Cumulant Approximation and the Migdal-Kadanoff Bond Moving Approximation.
Instructor: Prof. Mehran Kardar
Lecture 14: Position Space ...
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: OK. Let's start. So last lecture we started on the topic of doing renormalization in position space. And the idea, let's say, was to look at something like the Ising model, whose partition function is obtained, if you have N sites, by summing over all 2 to the N configurations of a weight that tends to align binary variables that are next to each other. And this next-to-each-other condition is indicated by this nearest-neighbor symbol, sigma i sigma j. Potentially, we may want to add a magnetic field-like term that [INAUDIBLE].
The idea of the renormalization group is to obtain a similar Hamiltonian that describes interactions among spins that are further apart. We saw that we could do this easily in the case of the one-dimensional system. Well, let's say you have a line, and you have sites on the line. Each one of them wants to make their neighbor parallel to itself.
And what we saw was that I could easily get rid of every other spin and keep one set of spins. If I do that, I get a weight that operates between the remaining spins. It was very easy to sum over the two values that the spin in between these two could have and conclude that after this step, which corresponds to removing half of the degrees of freedom, I get a new interaction, K prime, which was 1/2 log hyperbolic cosine of 2K, if h was zero.
And we saw that that, which is also a prototype of other systems in one dimension, basically is incapable of giving you long-range order or a phase transition at finite temperature, which corresponds to finite K. So the only place where you could potentially have ordering over large length scales is when K becomes very large, at zero temperature. We saw how the correlation length behaves and diverges as you approach zero temperature in this type of model.
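A minimal Python sketch of this one-dimensional recursion; iterating K' = 1/2 log cosh(2K) from any finite coupling flows toward K = 0:

    import math

    def decimate_1d(K):
        """One decimation step of the 1D Ising chain (b = 2): K' = 0.5 * ln cosh(2K)."""
        return 0.5 * math.log(math.cosh(2.0 * K))

    for K0 in (0.5, 1.0, 2.0):
        K, flow = K0, []
        for _ in range(6):
            K = decimate_1d(K)
            flow.append(round(K, 4))
        print(K0, flow)   # every row heads toward zero coupling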
Now, the next step would be to look at something that is two-dimensional. And in this context, I described how it would be ideal if, let's say, we start with a square lattice. We have interactions K between neighbors. And we could potentially do the same thing. Let's say remove one sublattice of spins, getting interactions among the other sublattice of spins. That would again correspond to removing half of the spins in the system. But in terms of length scale change, it corresponds to square root of 2. The new length compared to the old length is different by a factor of square root of 2.
But the thing that I also indicated was that this spin is now coupled to all four of them. And once I remove the spin, I can generate new interactions operating between these spins. In fact, you will generate also a four-spin interaction, so your space of parameters is not closed under this procedure. The same applies to all higher dimensional systems, and so they are not really solvable by this approach unless you start making some approximations.
So the particular approximation that I introduce here was done and applied shortly after Kadanoff brought forth this idea of removing degrees of freedom and renormalization, by-- I'll write this once, and hopefully I will not make a mistake-- Niemeijer and van Leeuwen. And it's a kind of cumulant expansion, as I will describe shortly. It's an approximation.
And rather than doing the square lattice, it is applied to the triangular lattice. And that's going to be the hardest part of this class for me, to draw a triangular lattice. Not doing a good job.
So basically, we put our spins, sigma i equal to plus or minus 1, on the sites of this lattice. And we put an interaction K that operates between neighbors. And we want to do a renormalization in which we reduce the number of degrees of freedom. What Niemeijer and van Leeuwen suggested was the following. You can group the sites of the triangular lattice into three sublattices. I have indicated them by 1, 2, 3. Basically, there is going to be some selection of sublattice sites on this lattice.
What they suggested was to basically define cells-- so this would be one cell. This would be another cell. This would be another cell. This would be another cell over here-- such that every site of the original lattice belongs to one and only one of these cells. OK. So basically, I guess the next one would come over here. All right.
So let's call the sites by the label i. And let's give the cells labels that I will indicate by Greek letters. So sites I will indicate by i, j; cells by Greek letters alpha, beta, et cetera. So that, for example, we can regard this triangle as forming cell alpha, and this one as forming cell beta.
Now, the idea of Niemeijer and van Leeuwen was that to each cell, we are going to assign a new spin that is reflective of the configuration of the site spins. So for this, they proposed the majority rule. Basically, they said that we call the spin for cell alpha the majority of the site spins.
So basically, if all three spins are plus or all three of them are minus, you basically go to plus or minus. If two of them are plus and one of them is minus, you would choose the sign that is in the majority. So you can see that this also has only two possibilities, plus 1 and minus 1, which would not have been the case if I had tried to take a majority of two sites. It would clearly have worked if I had chosen a majority of three sites on a one-dimensional lattice.
So that's the rule. So you can see that for every configuration that I have originally, I can do these kinds of averaging and define a configuration that exists for the cells. And the idea is that if I weigh the initial configurations according to the weight where the nearest neighbor coupling is k, what is the weight that governs these new configurations for the averaged or majority cell spins?
Now, to do this problem exactly is subject to the same difficulty that I mentioned before. That is, if I somehow do an averaging over the spins in here to get the majority, then I will generate interactions that run over further neighbors, as we will see shortly. So to get around that, they introduced a kind of uncontrolled break-up of the Hamiltonian that governs the system. That is, they wrote minus beta H, which is K times the sum over all neighboring site spins, as the sum of a part, minus beta H0, that we can solve exactly, and a part, minus U, that we will treat perturbatively as a correction.
Minus beta H0 is a sum over all cells alpha. What you do is basically you just include the interactions among the site spins within each cell. So I have K times sigma alpha 1 sigma alpha 2 plus sigma alpha 2 sigma alpha 3 plus sigma alpha 3 sigma alpha 1. So basically, these are the interactions within a cell.
What have I left out? I have left out the interactions that operate between cells-- so all of these bonds, which, of course, have the same strength. But, for lack of better things to do, they said, OK, we are going to treat those as the perturbation. So minus U is a sum over, you can see now, neighboring cells. The terms that I left out for, let's say, the interaction between this cell alpha and beta involve the spin number 1, in this labeling, of beta times spin number 2 of alpha, and spin number 1 of beta times spin number 3 of alpha.
Now, of course, what I call 1, 2, or 3 will depend on the relative orientation of the neighboring cells. But the idea is the same, that basically, for each pair of neighboring cells, there will be two of these interactions. So again, there is no a priori reason to regard this as a perturbation. Both pieces clearly carry the same strength for the bond, this parameter K. OK? The justification is only solvability.
So the partition function that I have to calculate, which is the sum over all spin configurations with the original weight, I can write as a sum over all spin configurations of e to the minus beta H0 minus U. And the idea of perturbation theory is, of course, to treat the part that depends on U as a perturbation by expanding the exponential.
Now, solvability relies on the fact that the term that just multiplies 1 is a sum over triplets that are only interacting among themselves. They don't see anybody else. So that's clearly very easily solvable, and we can calculate the partition function Z0 that describes that. And then we can start to evaluate all of those other terms, once I have pulled out e to the minus beta H0 summed over all configurations as Z0. The series that I will generate involves averages of this interaction calculated with the zeroth-order Hamiltonian.
So my log of Z-- OK, that's what I would do if I were to calculate the problem perturbatively. Now I'm going to do something that is slightly different. So what I will do is, rather than do the sum that I have indicated above, I will do a slight variation. I will sum only over configurations-- maybe I should write it in this fashion-- I will sum only over configurations that under averaging give me a particular configuration of cell spins.
So, basically, let's say I pick a configuration in which this cell spin is plus, this cell spin is minus, whatever, some configuration of cell spins. Now, depending on this cell spin being plus, there are many configurations-- not many-- there are four configurations of the site spins that would correspond to this being plus. There are four configurations that would correspond to that one. So basically, I specify what configuration of cell spins I want, and I do this sum. So the answer is some kind of a weight that depends on my choice of the cell spins sigma alpha prime. OK.
So then I have the same thing over here. And then, in principle, all of these quantities will become a function of the choice of my configuration. OK.
So this is a weight for cell configurations once I average out over all site configurations that are compatible with that. So if I take the log of this, I can think of that as an effective Hamiltonian that operates on these variables. And this is what we have usually been indicating by the prime interactions.
And so, if I take the log of that expression, I will get the log of the Z0 that is compatible with the choice of cell spins. And then the log of this series-- we've seen this many times-- starts with minus the average of U, given the specified cell spins. And then I have the variance, again computed compatibly with the specified cell spins. OK?
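Written out as a formula, the series being described is the standard cumulant expansion, with all sums and averages restricted to site configurations whose cell majorities reproduce the given cell spins sigma alpha prime:

    -\beta H'[\sigma'_\alpha] \;=\; \ln Z_0[\sigma'_\alpha] \;-\; \big\langle U \big\rangle_0 \;+\; \frac{1}{2}\Big( \big\langle U^2 \big\rangle_0 - \big\langle U \big\rangle_0^2 \Big) \;-\; \cdots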
So now it comes to basically solving this problem. And I pick some particular cell. And I look at what configurations of the site spins are compatible with a particular sign of the cell spin. And I will also indicate the weight that I have to put for that cell coming from minus beta H0.
One thing that I can certainly do is to have the cell spin plus. And I indicated that I can get that either by having all three spins plus or just a majority, which means that one of the three can be minus. So there are these four configurations that are consistent, all of them, with the cell spin sigma alpha prime being plus. And there are four configurations that correspond to minus, which we obtain by essentially flipping everything.
And the weights are easy to figure out. Basically I have a triplet of spins that are coupled by interaction K. In this case, all three are positive. So I have three e to the K factors, so I will get e to the 3K. Whereas if one of them becomes minus and two remain plus, you can see that there are two unhappy, misaligned bonds. So I will get e to the minus K, e to the minus K, e to the plus K-- that is, e to the minus K overall. It doesn't matter which one of the three it is.
If all three are minuses, then again, the sites are aligned. So I will get e to the 3K. If there are two minuses and one plus, two bonds are unhappy, one is happy. I will get e to the minus K, e to the minus K, e to the plus K-- again e to the minus K.
So once I have specified what my cell spin is, the contribution to the partition function is obtained by summing over the contributions of the configurations that are compatible with that. So what is it if I specify that my cell spin is plus? The contribution to the partition function is e to the 3K plus 3e to the minus K. It's actually exactly the same thing if I had specified that it is minus.
So we see that this factor, irrespective of whether the cell spin is chosen to be plus or minus, is log of e to the 3K plus 3e to the minus K per cell. And how many cells do I have? One third of the number of sites. With the number of sites indicated by N, this would be N over 3. OK.
Now, let's see what this U average is. So minus U-- I put the minus sign here-- is plus K times a sum over all pairs of cells that are neighboring each other, for example, like the pair alpha beta I have indicated, but any other pair of neighboring cells as well.
For each such pair, I have to write an expression such as this. So I have the K. I have sigma beta 1 sigma alpha 2 plus sigma beta 1 sigma alpha 3. So this is the expression that I have for U. I have to take the average of this quantity, basically the average of the sum, so I will have two of these averages per pair.
Now in my zeroth-order weight, there is no coupling between this cell and any other cell. So what the spin on each cell is, on average, cares nothing about what the spin is on any other cell, which means that these averages are independent of each other. I can write the average of the product as a product of averages. So all I need to do is to calculate the average of one of these site spins, given that I have specified what the cell spin is.
So let's pick, let's say, sigma alpha 1 and average it with this zeroth-order weight. Now I can see immediately that I will have two possibilities, the top four rows or the bottom four. The top four correspond to the cell spin being plus. The bottom four correspond to the cell spin being minus. So for the top four, essentially I have to look at the average of this column. Either it is plus, and then I get a weight e to the 3K.
Or it is minus in one of the configurations of weight e to the minus K, while in the other two it is plus with weight e to the minus K. So once I add two e to the minus K and subtract one e to the minus K, I get e to the 3K plus e to the minus K. Now, of course, I have to normalize by the weight that I have per cell. And that weight is just these factors summed, e to the 3K plus 3e to the minus K.
Whereas if I had specified that the cell spin is minus, and I wanted to calculate the average here, I would be dealing with these numbers. You can see that I will have a minus e to the 3K. Among the configurations of weight e to the minus K I will have one plus and two minuses, so I will get minus e to the minus K as well, all divided by e to the 3K plus 3e to the minus K, which is the normalizing weight. So it's just minus the other one.
And I can put these two together and write the average as e to the 3K plus e to the minus K, divided by e to the 3K plus 3e to the minus K, times sigma alpha prime. So the average of any one of these three site spins is simply proportional to what you said was the cell spin. The constant of proportionality depends on K according to this ratio.
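A short Python check of these intra-cell results, enumerating the 2 to the 3 states of a single cell (the value K = 0.3 is an arbitrary choice for the test): the restricted sum gives e to the 3K plus 3e to the minus K for either cell spin, and the restricted average of a site spin is the ratio above times the cell spin.

    import itertools, math

    K = 0.3  # arbitrary test coupling

    def cell_weight(s1, s2, s3):
        """Intra-cell Boltzmann weight exp[K (s1 s2 + s2 s3 + s3 s1)]."""
        return math.exp(K * (s1 * s2 + s2 * s3 + s3 * s1))

    def majority(s1, s2, s3):
        """Niemeijer-van Leeuwen cell spin: the sign of the majority."""
        return 1 if (s1 + s2 + s3) > 0 else -1

    for cell_spin in (+1, -1):
        states = [s for s in itertools.product((+1, -1), repeat=3)
                  if majority(*s) == cell_spin]
        Z_cell = sum(cell_weight(*s) for s in states)
        avg_s1 = sum(s[0] * cell_weight(*s) for s in states) / Z_cell
        print(cell_spin, Z_cell, avg_s1)

    # Compare with the expressions quoted in the lecture.
    print(math.exp(3 * K) + 3 * math.exp(-K))
    print((math.exp(3 * K) + math.exp(-K)) / (math.exp(3 * K) + 3 * math.exp(-K)))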
So now if I substitute this over here, what do I find? I will find that in minus the average of U, at the lowest level, each one of these factors gives me the same thing. So this K becomes 2K. I have a sum over alphas and betas that are neighboring. Each one of these sigmas I replace by the corresponding average here, at the cost of multiplying by one of these factors. And there are two such factors. So I basically get this.
So at this order in the series that I have written, if I forget about all of the other terms, what has happened? I see that the weight that governs the cell spins is again something that only couples nearest-neighbor cells, with a new interaction that I can call K prime. And this new interaction, K prime, is simply 2K times the square of e to the 3K plus e to the minus K over e to the 3K plus 3e to the minus K.
So presumably, again, if I think of the axis of possible values of K, running all the way from no coupling at 0 to very strong coupling at infinity, this tells me under rescaling where the parameters go. If I start here, which way do I flow? So let's follow the path that we followed in one dimension. We expect something to correspond to essentially no coupling at all. So we look at the limit where K goes to 0. Then you can see that what is happening here is that K prime is 2K times this ratio squared. And when K goes to 0, all of these exponential factors become 1. So the numerator is 2, the denominator is 4, and the whole thing is squared. So basically in that limit, the interaction gets halved.
So if I have a very weak coupling of 1/8, then it becomes 1/16, and then it becomes 1 over 32. I get pulled towards this. So presumably anything that is here will at long distance look disordered, just like one dimension.
But now let's look at the other limit. What happens when K is very large, K goes to infinity? Then K prime is-- well, there's the 2K out front, but that's it. Because e to the 3K is going to dominate over e to the minus K when K is large, and this ratio goes to 1.
So we see that if I start with a K of 1,000, then I go to 2,000 to 4,000, and basically I get pulled towards a fixed point at infinity. So this is different from one dimension. In one dimension, you were always going to 0. Now we can see that in this two-dimensional model, weak coupling disappears, goes to no coupling. Strong enough coupling goes to everybody falling in line and doing the same thing at large scales.
So we can very well guess that there should be some point in between that separates these two types of flows. And that is going to be the point where I would have KC, or let's call it-- I guess I call it K star in the notes. So let's call it K star.
So K star is 2K star times the square of e to the 3K star plus e to the minus K star over e to the 3K star plus 3e to the minus K star. So we can cancel the K star. You can see that what you have to solve is that e to the 3K star plus e to the minus K star divided by e to the 3K star plus 3e to the minus K star-- this ratio-- is 1 over square root of 2.
I can multiply numerator and denominator by e to the plus K star so that the e to the minus K star becomes 1, and I have an algebraic equation to solve for e to the 4K star. So I will get root 2 e to the 4K star plus root 2 equals e to the 4K star plus 3. And I get the value of K star, which is 1/4 log of 3 minus root 2 divided by root 2 minus 1. You put it in the calculator, and it becomes something that is of the order of 0.34.
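A small Python sketch that locates this fixed point numerically from the recursion quoted above, by bisection on K prime minus K; it lands on the same closed-form value, roughly 0.336:

    import math

    def K_prime(K):
        """First-order Niemeijer-van Leeuwen recursion on the triangular lattice."""
        lam = (math.exp(3 * K) + math.exp(-K)) / (math.exp(3 * K) + 3 * math.exp(-K))
        return 2.0 * K * lam ** 2

    lo, hi = 0.1, 1.0          # flow is downward at 0.1 and upward at 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if K_prime(mid) > mid:
            hi = mid           # mid is above the fixed point
        else:
            lo = mid
    print("K* ~= %.4f" % mid)  # about 0.336 = (1/4) ln[(3 - sqrt(2)) / (sqrt(2) - 1)]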
So yes?
AUDIENCE: So, first thing, do we want to name what is the length factor by which we change the characteristic length?
PROFESSOR: Absolutely. Yes. So the next-- yeah.
AUDIENCE: But we never kind of bothered to do it so far.
PROFESSOR: We will need to do that immediately. So just hold on a second. The next thing I need is this b factor. But it's obvious: I have reduced the number of degrees of freedom by a factor of 3, so the length scale must have, in two dimensions, increased by square root of 3.
And you can also do the geometry, analogous to this, to convince yourself that the distance, let's say, from the center of this triangle to the center of that triangle is exactly square root of 3 times the original lattice spacing.
AUDIENCE: Also, when you're writing the cumulant expansion--
PROFESSOR: Yes.
AUDIENCE: In all of our previous occasions when we did perturbations, the convergence of the series was kind of assured because every perturbation was proportional to some scalar number that we claimed to be small, and thus the series would hopefully converge.
PROFESSOR: Right.
AUDIENCE: But in this case, how can you be sure that for the modified interaction in the renormalized version, you don't need [INAUDIBLE]?
PROFESSOR: Well, let me first slightly correct what you said-- though I think you meant it correctly-- which is that previously we had parameters that we were ensuring were small. That did not guarantee the convergence of the series. In this case, we don't even have a parameter that we can make small. So the only thing that we can do, and I will briefly mention this, is to basically see what happens if we include more and more terms in that series, compare results, and see whether there is some convergence or not. Yes?
AUDIENCE: Can you explain again how we got the K prime equation?
PROFESSOR: OK. So I said that I have some configuration of the cell spins. Let's say the configuration is plus plus minus plus. Whatever, some configuration. Now there are many configurations of site spins that correspond to that. So the weight of this configuration is obtained by summing over the weights of all configurations of site spins that are compatible with that. And that was a series that we had over here.
And K prime, or the interaction, typically we put in the exponent, so I have to take a log of this to see what the interactions are. The log has this series that starts with the average of this interaction. OK? So this was the formula for U. It's over here.
And then here, it says I have to take an average of it. Average, given that I have specified what the cell spins are. And I see that that average is really product of averages of site spins. And I was able to evaluate the average of a site spin, and I found that up to some proportionality constant, it was the cell spin.
So if the cell spin is specified to be plus, the average of each one of the site spins tends to be plus. If the cell spin is specified to be minus, since I'm looking at this subset of configuration, the average is likely to be minus. And that proportionality factor is here. I put that proportionality factor here, and I see that this average is a product of neighboring cell spins, which are weighted by this factor, which is like the original weights that you write, except with a new K. Yes?
AUDIENCE: So after renormalization, we get some new kind of lattice, which is not random. It's completely new. Because what you did here is you take out certain cells--
PROFESSOR: Yeah.
AUDIENCE: And call them [INAUDIBLE].
PROFESSOR: Right. But what is this new lattice? This new lattice is a triangular lattice that is rotated with respect to the original one. So it's exactly the same lattice as before. It's not a random lattice.
AUDIENCE: Yes. But on the initial lattice, you specified that these cells would contribute to--
PROFESSOR: Yes. I separated K and K prime. Yes. K and U, yes.
AUDIENCE: OK. So if you want to do a renormalization group again, we'll need to--
PROFESSOR: Yeah. Do this.
AUDIENCE: Again [INAUDIBLE].
PROFESSOR: Exactly. Yeah. But we do it once, and we have the recursion relation. And then we stop.
AUDIENCE: Yeah.
PROFESSOR: OK. Yes?
AUDIENCE: Is this possible for other odd number lattices? Will you still preserve the parameter?
PROFESSOR: Yes. It's even possible for square lattices with some modification, and that's what you'll have in one of the problems. OK? Fine. But the point is-- OK.
So I stopped here. So K star was about 0.34, which is the coupling that separates places where you flow to uncorrelated spins from places where you flow to everything ordered together. It turns out that the triangular lattice is something that one can solve exactly. It's one of the few things. And you'll have the pleasure of solving that also in a problem set. And you will show that KC, the correct value of the coupling, is something like 0.27. So that gives you an idea of how good or bad this approximation is.
But the point in any case is that the location of the coupling is not that important. We have discussed that it is non-universal. The thing that maybe we should be more interested in is what happens if I'm in the vicinity of this, how rapidly do I move away? And actually I have to show that we are moving away. But because of topology, it's more or less obvious that it should be that way.
So what I need to do is evaluate this derivative at K star. OK. Now you can see that K prime is a function of K. So what you need to do is to take derivatives. So there's some algebra involved here. And then, once you have taken the derivative, you have to put in the value of K star. And here, some calculator work is necessary. And at the end of the day, the number that you get, I believe, is something like 1.62. Yes. And since it's larger than 1, that says that you will be pushed away.
But these things have been important to us as indicators of the exponents. In particular, I'm on the subspace that has the symmetry, so I should be calculating yt here. As was pointed out, important to this step is knowing what the value of b is, which we can either get from the ratio of the lattice constants or from the fact that I have reduced the number of spins by a factor of 3. This derivative has to be root 3 to the power yt. So my yt is log of 1.62 divided by log of root 3. So again you go and look at your calculator, and the answer comes out to be 0.88.
Now the exact value of yt for all two-dimensional Ising models is 1. So again, this is an indicator of how good or bad you have done at this order in the perturbation theory. OK.
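As a numerical check of this step (a sketch, taking the derivative numerically rather than by the algebra mentioned above, and reusing the fixed point found earlier):

    import math

    def K_prime(K):
        # Same first-order recursion as before.
        lam = (math.exp(3 * K) + math.exp(-K)) / (math.exp(3 * K) + 3 * math.exp(-K))
        return 2.0 * K * lam ** 2

    K_star, eps = 0.3356, 1e-6     # fixed point found above
    slope = (K_prime(K_star + eps) - K_prime(K_star - eps)) / (2 * eps)
    y_t = math.log(slope) / math.log(math.sqrt(3.0))   # b = sqrt(3) for this cell construction
    print(round(slope, 3), round(y_t, 3))              # about 1.62 and 0.88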
Now, answering the question that you had before, suppose I were to go to order of U squared? Now, at order of U squared, I have to take this kind of interaction, which is bilinear-- let's say a pair of spins here-- and multiply two of them, so I will get a pair of spins here and a pair of spins there. As long as they are at distinct locations, when I subtract the average squared, they will cancel out. So the only place where I will get something non-trivial is if I pick one here and one here.
And by that kind of reasoning, you can convince yourself that what happens at next order is that, in addition to interactions between neighbors, you will generate interactions between things that are two apart and things that are, well, three apart-- so basically next-nearest neighbors and next-next-nearest neighbors. So even if you start with a form such as this, you will generate next-nearest-neighbor and next-next-nearest-neighbor interactions. Let's call them K, L, M.
So to be consistent, you have to go back to the original model and put the three interactions and construct recursion relations from the three parameters, K, L, M, to the new three parameters. More or less following this procedure, it's several pages of algebra. So I won't do it. Niemeijer and van Leeuwen did it, and they calculated the yt at next order by finding the fixed point in this three-dimensional space. It has one relevant direction, and that one relevant direction gave them an eigenvalue that was extremely close to 1.
So I don't believe anybody has taken this to the next order. You've gotten good enough; you might as well stop. I don't think it's going to keep improving and getting better, because this is an uncontrolled approximation. So it's likely to be one of those cases where you asymptotically approach the good result and then move away.
Now once I have yt, I can naturally calculate exponents such as alpha. First of all nu, which is 1 over yt. 1 over 0.88 is something like 1.13. And the exact result would be the inverse of 1, which is 1. And I can calculate alpha, which is 2 minus d nu. With that value of nu, I will get minus 0.26. Again, the correct result would be 0, corresponding to a logarithmic divergence. So this zeroth order gets those exponents, let's say, to within 10%, 20%.
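Written out, the arithmetic of this step is:

    \nu = \frac{1}{y_t} \approx \frac{1}{0.88} \approx 1.13, \qquad \alpha = 2 - d\,\nu = 2 - 2(1.13) \approx -0.26 .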
You would say, OK, what about other exponents, such as beta, gamma, and so forth? Clearly, to get those exponents, I also need to have yh. OK. So to get yh, I will add, as an additional perturbation, a term which is h sum over i sigma i, which is, of course, the same thing as h sum over alpha of sigma alpha 1 plus sigma alpha 2 plus sigma alpha 3.
And if I regard this as a perturbation, you can see that in the perturbative scheme, this goes under the transformation to the average of this quantity. And the average of this quantity will give me three terms for each cell, so I will get 3h. And for each cell, I will get the average of a site spin, which is related to the cell spin through this factor that we calculated: e to the 3K plus e to the minus K over e to the 3K plus 3e to the minus K, times sigma alpha prime. So we can see that we end up generating h prime, which is 3h times e to the 3K plus e to the minus K over e to the 3K plus 3e to the minus K.
And I can evaluate b to the yh as dh prime by dh evaluated at the fixed point. So I will get essentially 3 times this factor evaluated at the fixed point. But we can see that at the fixed point, this factor is 1 over root 2. So the answer is 3 over root 2. And my yh would be the log of 3 over root 2 divided by the log of b, which we said is square root of 3. Put it in the calculator, and you get a number that is of the order of 1.4. And the exact yh is 1.875. So again, once you have yh, you can go and calculate, through the exponent scaling relations, all the other exponents that you have, like beta.
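Collected as formulas, this step reads:

    h' = 3h\,\frac{e^{3K} + e^{-K}}{e^{3K} + 3e^{-K}}, \qquad b^{y_h} = \left.\frac{\partial h'}{\partial h}\right|_{K^*} = \frac{3}{\sqrt{2}}, \qquad y_h = \frac{\ln\!\big(3/\sqrt{2}\big)}{\ln\sqrt{3}} \approx 1.37 .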
So not bad, considering how much difficulty you would have if you wanted to go through the epsilon expansion. And in any case, we are at two dimensions, which is far away from four. And getting epsilon expansion results at two dimensions is worse than trying to get results at three dimensions.
Now we want to do a procedure, an approximation, that is even simpler than this. And for that-- so that was the Niemeijer-van Leeuwen procedure. The next one is due again to Kadanoff, and to Migdal. And it's called bond moving.
And again, we have to do an approximation. You can't do things exactly. So let's demonstrate that with a square lattice, which is much easier to draw than the triangular lattice.
And let's kind of follow the procedure that we had for the one-dimensional case. Let's say we want to do rescaling by a factor of 2. And I want to keep this spin, this spin, this spin, this spin, this spin-- the ones I have circled-- and get rid of all of the other spins that I have, much as I did for the one-dimensional case.
And the problem is that if I'm summing over this spin over here, there are paths that connect that spin to other spins. So by necessity, once I sum over all of these spins, I will generate all kinds of interactions. So the problem is all of these paths that connect things. So maybe-- and this is where bond moving comes in-- maybe I can remove all of these bonds that are going to cause the problem.
So if I do that, then the only connection between this spin and this spin comes from that side, and between this spin and that spin comes from that side. And if the original interaction was K and I sum over this, I will get K prime, which is what I have over there, 1/2 log cosh 2K, because the only thing that I did was to connect this site to two neighbors, and then effectively it's the same thing as I was doing for one dimension.
So clearly this is a very bad approximation, because I have reproduced the same result as one dimension for the two-dimensional case. And the reason is that I weakened the lattice so drastically-- I removed most of the bonds-- that there isn't that much weight left for the lattice to order. What Kadanoff and Migdal suggested was, OK, let's not remove these bonds. Let's just move them to some place where they don't cause any harm.
So I take this bond and I strengthen this bond. I take this bond, strengthen this one. This one goes to this one. Essentially what happens is that each one of the remaining bonds has been strengthened by a factor of 2. So I have 2K because of this strengthening.
So this is about as simple a construction as you can get for a potential recursion relation for the square lattice. This is a way that the parameter K changes, going from 0 to infinity. And we can do the same checks that we did over there. So we can check that for K going to zero, if I look at K prime, it is approximately 1/2 log of the hyperbolic cosine of something that is close to 0. That hyperbolic cosine becomes 1 plus the square of this quantity over 2, which is 1 plus 8K squared, and taking half the log of that, K prime becomes approximately 4K squared.
The factor of 4 does not really matter. If K is very small, like 1 over 100, K squared would be 10 to the minus 4. So basically, you certainly have the expected behavior of becoming disordered if you have a weak interaction. If you have a strong enough coupling, however, are we different from what we had for the one-dimensional case? Well, the answer is that in this case, K prime is 1/2 log hyperbolic cosine of 4K, and the hyperbolic cosine starts as e to the 4K plus e to the minus 4K divided by 2. The e to the minus 4K I can ignore.
So you can see that in this case, I will have 2K. I can even ignore the minus one-half log 2, which previously was so important, because previously we had a coefficient of 1 here and now it became 2. Which means that if I start at 10,000, it will become 20,000, then 40,000, and now you're going in this direction. So again, almost by necessity, I must have a fixed point at some value in between. So I essentially have to solve for K star such that K star is 1/2 log cosh of 4K star. You can recast this as some algebraic equation in terms of e to the 4K star and manipulate it. And after you do your algebra, you will eventually come up with a value of K star, which I believe is about 0.3.
You can ask how good this is. Well, the square lattice we will solve in class; I said the triangular lattice I will leave for you to solve. KC for the square lattice is something like 0.44. So you are off by about 25%. Of course, again, the quantity that you're interested in is b to the yt. b is 2 in this case; the length scale has changed by a factor of 2. b to the yt is dK prime by dK evaluated at K star-- again, a combination of doing the algebra of derivatives, evaluating at K star, and then ultimately taking the log to convert it to a yt. And you come up with a value of yt that is around 0.75. And, as I said, the exact yt, which doesn't depend on whether you are dealing with a square lattice or a triangular lattice-- it's only a function of symmetry and dimensionality-- is 1.
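A minimal Python sketch of this two-dimensional Migdal-Kadanoff recursion, K prime = 1/2 log cosh 4K, locating the fixed point by bisection and estimating yt from a numerical derivative:

    import math

    def mk_2d(K):
        """Migdal-Kadanoff step in d = 2, b = 2: move bonds (K -> 2K), then decimate."""
        return 0.5 * math.log(math.cosh(4.0 * K))

    lo, hi = 0.1, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if mk_2d(mid) > mid else (mid, hi)
    print("K* ~= %.3f" % mid)       # about 0.30, versus the exact square-lattice Kc ~ 0.44

    eps = 1e-6
    slope = (mk_2d(mid + eps) - mk_2d(mid - eps)) / (2 * eps)
    print("y_t ~= %.2f" % (math.log(slope) / math.log(2.0)))   # about 0.75, versus the exact 1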
So you can see that gradually we are simplifying the complexity. Now we could, within this approximation, solve everything within one panel of the board. This kind of approximation, again, is not particularly good. But it's a quick and dirty way of getting results. And the advantage of it is that you can do this not only in two dimensions, but in higher dimensions as well.
So let's say that you had a cubic lattice and you were doing rescaling by a factor of 2, which means that you want to keep the spins at the corners of blocks of size 2 by 2 by 2, and get rid of the interactions involving all of the sites that you are not interested in. And the way that you do that is precisely as before. You move those interactions and strengthen the bonds that you keep over here.
Now whereas the number of bonds that you had to move for the square lattice was such that the enhancement factor was 2, it turns out that the enhancement factor in three dimensions would be 4. You essentially have to take one from here, one from there, one from there, so 1 plus 3 becomes 4. And you can convince yourself that if I had done this on a d-dimensional hypercubic lattice, what I would have gotten is again the one-dimensional recursion relation, except for this enhancement factor, which is 2 to the power of d minus 1 in d dimensions.
Actually, I could even do this for rescaling not by a factor of 2 but by a factor of b. And hopefully you can convince yourself that the enhancement factor will become b to the d minus 1. Essentially, that factor is the cross section of bonds that you have to move, and the cross-sectional area that you encounter grows as the size to the power of d minus 1. And a kind of obvious consequence of that is that if I go to the limit of K going to infinity, you can see that K prime would go like b to the d minus 1 times K. Essentially, if you were to have some kind of a system with pluses on one side and minuses on the other side, then to break it, the number of bonds that you would have to break would grow like the cross-sectional area. So that's where that comes from.
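In formula form, for a rescaling factor of 2 this bond-moving prescription enhances each retained bond by 2 to the d minus 1 and then applies the one-dimensional decimation result, so that

    K' = \frac{1}{2}\ln\cosh\!\big(2 \cdot 2^{\,d-1} K\big), \qquad K' \approx 2^{\,d-1} K - \tfrac{1}{2}\ln 2 \quad (K \to \infty).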
It turns out that, again, this approach is exact, as we've seen, for one dimension. As we go to higher dimensions, it becomes worse and worse. So I showed you how bad it was in two dimensions. If I calculate the fixed point and the exponents in three dimensions compared to our best numerical results, they are off by, I don't know, 40%, 50%, whereas it was 25% over there. So it gradually gets worse and worse. And so one approach that people have tried, which, again, doesn't seem to be very rigorous, is to convert this into an expansion around dimensionality of 1. So it's roughly correct close to one dimension. But as opposed to the previous epsilon expansion, there doesn't seem to be a controlled way to do this.
I showed you how to do this for Ising models. Actually, you can do this for any spin model. So let's imagine that we have some kind of a model in one dimension. At each site, we have some variable s i; I will not specify what it is or how many values it takes. But it interacts only with its neighboring sites. And so presumably there is some interaction that depends on the two sites. There may be multiple couplings implicit in this if I try to write it in terms of the dot product of spins or things like that.
So if I were to calculate the partition function in one dimension-- I already mentioned this last time-- I have to do a sum over what each spin is of a product over subsequent sites of e to the K of s i, s i plus 1. And if I regard this as a matrix, which is generally called a transfer matrix, you can see that this multiplication involving the sum over all of the spins is equivalent to matrix multiplication. And, in particular, if I have periodic boundary conditions in which the last spin couples to the first spin, I would have the trace of T to the power of N, where T is essentially this, e to the K of s i, s i plus 1.
Now, clearly I can write this as the trace of T squared to the power of N over 2. Right. And this I can regard as the partition function of a system that has half as many spins. So I have performed a renormalization group step like what we were doing in one dimension: T prime is T squared.
And in general, you can see that I can write this as the trace of T to the b, raised to the power N over b. So the result of renormalization by a factor of b in one dimension is simply to take the matrix that you have and raise it to the b-th power. OK. And so I could parameterize my T by a set of interactions K, like we do for the Ising model, raise it to the power of b, and I would generate a matrix that I could then parameterize by K prime. And I would have the relationship between K prime and K in one dimension. So this is d equals 1.
And the way to generalize this, Migdal-Kadanoff style, to a very general system is simply to enhance the couplings. So basically, what I would write down is that T prime, which, after rescaling by a factor of b, is a function of a set of parameters that I will call K prime, is obtained by taking the matrix that I have for one set of couplings and raising it to the power of b. This is the exact one-dimensional result. And if I want to construct this approximation in d dimensions, I just do this with the couplings enhanced by b to the d minus 1.
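A minimal numpy sketch of the exact d = 1 version of this transfer-matrix renormalization for the Ising case: squaring T(K) and reading off the new coupling from the ratio of its elements reproduces K prime = 1/2 log cosh 2K (an overall constant is also generated, which is dropped here).

    import numpy as np

    def transfer_matrix(K):
        """T[s, s'] = exp(K s s') for the 1D Ising model, with s, s' = +/- 1."""
        return np.array([[np.exp(K), np.exp(-K)],
                         [np.exp(-K), np.exp(K)]])

    K = 0.7
    T2 = transfer_matrix(K) @ transfer_matrix(K)   # decimation by b = 2: T' = T^2
    # Write T' = exp(g') exp(K' s s'); then K' is half the log of the element ratio.
    K_new = 0.5 * np.log(T2[0, 0] / T2[0, 1])
    print(K_new, 0.5 * np.log(np.cosh(2 * K)))     # the two numbers agree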
So for a while, before people had sufficiently powerful computers to simulate things easily, this was a good way to estimate the locations of phase boundaries, critical exponents, et cetera, for essentially complicated problems that could have a whole set of parameters here. Nowadays, as I said, you can probably do things much more easily by computer simulations.
So I guess I still have another 10 minutes. I probably don't want to start on the topic of the next lecture. But maybe what I'll do is I'll expand a little bit on something that I mentioned very rapidly last lecture, which is that in this one-dimensional model where I solve the problem by transfer matrix, what I have is that the partition function is trace of some matrix raised to the N power. And if I diagonalize the matrix, what I will get is the sum over all eigenvalues raised to the N power. Now note that we expect phase transitions to occur, not for any finite system, but only in the limit where there are many degrees of freedom. And, actually, if I have such a sum as this in the limit of very large number of degrees of freedom, this becomes lambda max to the power of N.
Now in order to get any one of this series of eigenvalues, what should I do? I should take the matrix, which is this T-- e to the strength of the interactions. What did I write? e to the K of S and S prime. So there is a matrix. All of its elements are Boltzmann weights. They are all positive. And I find the eigenvalues of this matrix.
Now for the case of the Ising model without a magnetic field, the matrix is 2 x 2. It's e to the K on the diagonal, corresponding to the terms where the spins are parallel, and e to the minus K when the spins are antiparallel. And clearly you can see that the eigenvalues, corresponding to 1, 1 and 1, minus 1 as eigenvectors, are e to the K plus e to the minus K and e to the K minus e to the minus K. You can see that to get this, all I had to do was to diagonalize a matrix that corresponded to one bond, if you like.
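Explicitly, for the single-bond transfer matrix:

    T = \begin{pmatrix} e^{K} & e^{-K} \\ e^{-K} & e^{K} \end{pmatrix}, \qquad T\begin{pmatrix}1\\1\end{pmatrix} = \big(e^{K} + e^{-K}\big)\begin{pmatrix}1\\1\end{pmatrix}, \qquad T\begin{pmatrix}1\\-1\end{pmatrix} = \big(e^{K} - e^{-K}\big)\begin{pmatrix}1\\-1\end{pmatrix}.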
And just as in here, there is no reason to expect that these eigenvalues, which depend on this set of parameters, should be non-analytic functions. There is no reason for non-analyticity as long as you are dealing with a single bond. We expect non-analyticities only in the limit of large N.
So if each one of these is an analytic function of K, the only way, scanning the K axis, that I could encounter a non-analyticity is if two of these eigenvalues cross, because we have seen a potential mechanism for phase transitions-- we discussed this in 8.333-- that if we have a sum of contributions, each of which is exponentially large in N, and two of these contributions cross, then your partition function will jump from one hill to another hill. And you will have a discontinuity, let's say, in derivatives or whatever.
So the potential mechanism that I could have is that, as a function of changing one of my parameters, or a bunch of these parameters, I follow the ordering of these eigenvalues-- let's say lambda 0 is the largest one, lambda 1 is the next one, lambda 2, et cetera. So each one of them is going its own way. If the largest one suddenly gets crossed by something else, then you will basically abandon one eigenvalue for another, and you will have a mechanism for a phase transition.
So what I told you was that there is a theorem that for a matrix where all of the elements are positive, this will never happen. The largest eigenvalue will remain non-degenerate. And there is some analog of this, which you've probably seen in quantum mechanics: if you have a potential, the ground state is non-degenerate. The ground state is always a function that is positive everywhere. And the next excitation would have to have a node, go from plus to minus. And somehow you cannot have the eigenvalues cross. So in a similar sense, it turns out that for these matrices, the eigenvector corresponding to the largest eigenvalue has all of its elements positive, and the largest eigenvalue cannot become degenerate. And so you are guaranteed that this will not happen.
Now the second part of this story that I briefly mentioned was that you can repeat this for two dimensions, three dimensions, higher-dimensional things. So one thing that you could do is, rather than solving the Ising model on a line, you can solve it on a ladder, or a strip that has three rows. Solving the Ising model on this structure is not very difficult, because you can say that there are eight possible values that a column of three spins can take.
And so I can construct a matrix that is 8 x 8 that tells me how I go from the choice of eight possibilities here to the eight possibilities there. And I will have an 8 x 8 matrix that has these properties and will satisfy this theorem. The same will be true if I go to a strip of width 4; it will be 16 x 16. No problem. I can keep going.
And that would say, well, the two-dimensional model-- and also the three-dimensional or higher-dimensional models-- should not have a phase transition. Well, it turns out that all of this relies on having a finite matrix. And what Onsager showed was that, indeed, for any finite strip, you would have a situation such as this-- actually, more accurately, a situation such as this, where two eigenvalues approach each other but never cross.
And one can show that the gap between them will scale as something like 1 over this length. And so in the limit where you go to a large enough system, you have the possibility of approaching a singularity as the two eigenvalues touch each other. So this scenario is very well known and studied in two dimensions. In higher dimensions, we actually don't really know what happens. OK. Any questions?
AUDIENCE: So it appears that there are or there are not phase transitions [INAUDIBLE]?
PROFESSOR: Well, we showed-- we were discussing phase transitions for the triangular lattice, for the square lattice. I even told you what the critical coupling is.
AUDIENCE: But it seems to me that the conclusion of what you're-- of this part is that there aren't.
PROFESSOR: As long as you have a finite strip, no. But if you have a two-dimensionally infinite system, you do. So what I've shown you here is the following. If I have an L x N system in which you keep L finite and let N go to infinity, you won't see a singularity. But if I have an N x N system, and I let N go to infinity, I will encounter a singularity in the limit of N going to infinity.
Again, very roughly, one can also develop a physical picture of what's going on. So let's imagine that you have a system that is finite in one direction-- the width can be two, can be three, whatever, some finite size-- but in the other direction, you basically can go as large as you like.
Now presumably this two-dimensional model has a phase transition if it was infinite x infinite. And on approaching that phase transition, there would be a correlation length that would diverge with this exponent nu. So let's say I am sufficiently far away from the phase transition that the correlation length is something like this. So this patch of spins knows about each other.
If I go closer to the transition, it will grow bigger and bigger. At some point, it will fit the size of the system, and then it cannot grow any further. So beyond that, what you will see is essentially there is one patch here, one patch here, one patch here. And you are back to a one-dimensional system. So what happens is that your correlation length starts to grow as if you were in two dimensions or three dimensions.
Once it hits the size of the system, then it has to saturate. It cannot grow any bigger. And then this block becomes independent of the next block. So essentially, you would say that you would have effectively a one-dimensional system where the number of blocks that you have is of the order of N over L.
So what we are going to do starting from next lecture is to develop again a more systematic approach, which is a series expansion about either low temperatures or, more usefully, about high temperatures. And then we will take that high-temperature expansion and gradually go in the direction of solving these two-dimensional Ising models exactly. And so we will see where some of these results I told you about-- the exact value of KC, the exact value of yt-- come from.