Description: In this lecture, Prof. Kardar continues his discussion on the Perturbative Renormalization Group, including Perturbative RG (Second Order), and the ε-expansion.
Instructor: Prof. Mehran Kardar
Lecture 11: Perturbative Renormalization Group, Part 2
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: OK, let's start. So today hopefully we will finally calculate some exponents. We've been writing, again and again, how to calculate partition functions for systems, such as a magnet, by integrating over all configurations of a statistical field. And we have given weights to these configurations that are constructed as some kind of a functional of these configurations.
And the idea is that presumably, if I could do this, then I could figure out the singularities that are possible at a place where, for example, I go from an unmagnetized to a magnetized phase. Now, one of the first things that we noted was that in general, I can't solve the types of Hamiltonians that I would like. And maybe what I should do is to break it into two parts: a part that I can calculate exactly, and a contribution that I can then treat as a perturbation.
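In symbols, and in the notation that appears throughout the rest of the lecture, the split being described is, schematically,

\[
\beta\mathcal{H}[\vec m] = \beta\mathcal{H}_0[\vec m] + U[\vec m],
\qquad
Z = \int \mathcal{D}\vec m\; e^{-\beta\mathcal{H}_0[\vec m] - U[\vec m]},
\]

with beta H 0 the exactly solvable piece and U the perturbation.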
Now, we saw that there were difficulties if I attempted straightforward perturbation type of calculations. And what we did was to replace this with some kind of a renormalization group approach. The idea was something like this: these statistical field theories that we write down have been obtained by averaging true microscopic degrees of freedom over some characteristic length scale.
So this field m certainly does not have fluctuations that are very short wavelength. And, for example, if we were to describe things in the perspective of Fourier components, presumably the variables that I would have would have some maximum q that is related to the inverse of that averaging length. So there is some lambda. And if I were to in fact Fourier transform my modes in terms of q, then these modes will be defined inside this sphere of radius lambda.
And, for example, my beta H 0, in the language of Fourier modes, would be the part that I can do exactly, which is the part that is quadratic and Gaussian. And the q vectors would be in the interval from 0 to whatever this lambda is. And the kind of thing that I can do exactly are things that are quadratic. So I would have m of q squared. And then some expansion in powers of q that has a constant t plus K q squared and potentially higher order terms. So this is the Gaussian theory that I can calculate.
The problem with this Gaussian theory is that it is only meaningful for t positive. And in order to go to the regime where t is negative, I have to include higher order terms in the magnetization, and those I cannot treat exactly. And, for example, if I go back to the description in real space, I was writing something like u m to the fourth plus higher order terms, with this coefficient u.
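Putting the two pieces together, and with the conventional factor of one half in the Gaussian part, the weight being described is, schematically,

\[
\beta\mathcal{H}_0 = \int_0^{\Lambda}\frac{d^d\mathbf{q}}{(2\pi)^d}\,
\frac{t + K q^2 + \cdots}{2}\,\big|\vec m(\mathbf{q})\big|^2,
\qquad
U = u\int d^d\mathbf{x}\,\big(\vec m(\mathbf{x})^2\big)^2 + \cdots
\]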
When we attempted to do straightforward perturbative calculations, we encountered some singularities. And the perturbation didn't quite make sense. So we decided to combine that with the idea of the renormalization group. The idea there was basically, rather than integrate over all modes, to subdivide the modes into two classes: the modes that are long wavelength, which I would like to keep, and which I'll call m tilde; and the modes that are sitting out here, which give rise to no singularities and which I would therefore like to get rid of; call those sigma.
So my integration over all configurations is really an integration over both this m tilde and the sigma. And if I regard the field, depending on which range of wave numbers it spans, to be either m tilde or sigma, I can basically write it as m tilde plus sigma, and the argument of the weight is m tilde plus sigma also. So this is just a rewriting of the partition function where I have just changed the names of the modes.
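In symbols, the relabeling of the modes is

\[
\vec m(\mathbf{q}) =
\begin{cases}
\tilde{\vec m}(\mathbf{q}), & 0 \le |\mathbf{q}| < \Lambda/b,\\
\vec\sigma(\mathbf{q}), & \Lambda/b \le |\mathbf{q}| \le \Lambda,
\end{cases}
\qquad
Z = \int \mathcal{D}\tilde{\vec m}\int \mathcal{D}\vec\sigma\;
e^{-\beta\mathcal{H}_0[\tilde{\vec m}] - \beta\mathcal{H}_0[\vec\sigma] - U[\tilde{\vec m}+\vec\sigma]},
\]

where the Gaussian part separates into the two pieces because it is diagonal in q.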
Now, the first step in the renormalization group is the coarse graining, which is to average out fluctuations whose wavelengths lie between the original averaging scale a and b times a; in Fourier space, that corresponds to wave numbers between lambda over b and lambda. So getting rid of those modes amounts to changing the scale over which you're averaging by a factor of b. Once I do that, if I can do the integration over sigma, what I will be left with is just an integral over m tilde. OK?
Now, what would be the form of the result of this integration? Well, first of all, if I take the Gaussian and separate it out between 0 to lambda over b and lambda over b to lambda, and integrate over the modes between lambda over b and lambda just as if I had the Gaussian alone, then I would get essentially the contribution of the logarithm of the determinants of all of these Gaussian variances. So there will be a contribution to the free energy that is independent of m tilde but will depend on the rescaling factor that you are looking at. But it's a constant. It doesn't depend on the different configurations of the field m tilde.
The other part of the Gaussian-- so essentially, I wrote the Gaussian as 0 to lambda over b plus lambda over b to lambda. The part from 0 to lambda over b will simply remain, so I will have a beta H 0 that now depends only on these m tildes. Well, what do I have to do with this term? It's an integration over sigma that has to be performed. I do that integration by treating the sigmas as Gaussian-distributed variables.
So effectively, the result of the remaining integration is the average of e to the minus u. And when I take the log, I will get plus log of the average of e to the minus u, where u is a function of m tilde and sigma, and where I have integrated out the modes that are out here, the sigmas. So what remains is only a function of m tilde. The sigmas have been integrated out using a Gaussian weight, such as the one that I have over here.
So that's formally exact. But it hasn't given me any insights because I don't know what that entity is. What I can do with that entity is to make an expansion in powers of u. So I will have minus the average of u. And then the next term would be one half of the variance of u; that is, the average of u squared minus the square of the average of u. And then higher order terms. So basically, this term can be expanded as a power series, a cumulant expansion, as I have indicated.
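Written out, this cumulant expansion is

\[
\log\big\langle e^{-U}\big\rangle_\sigma
= -\,\langle U\rangle_\sigma
+ \frac{1}{2}\Big(\langle U^2\rangle_\sigma - \langle U\rangle_\sigma^2\Big)
+ \mathcal{O}(U^3).
\]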
And again, just to make sure, these averages are performed with this Gaussian weight. And in particular, we've seen that when we have a Gaussian weight, the different components and the different q values are independent of each other. So I get here a delta alpha beta, I get a delta function of q plus q prime, and in the denominator I get t plus K q squared and potentially higher order powers of q that appear in this series.
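In equations, the Gaussian average being used for the sigma modes is

\[
\big\langle \sigma_\alpha(\mathbf{q})\,\sigma_\beta(\mathbf{q}')\big\rangle_0
= \frac{\delta_{\alpha\beta}\,(2\pi)^d\,\delta^d(\mathbf{q}+\mathbf{q}')}{t + K q^2 + \cdots},
\qquad \Lambda/b < |\mathbf{q}| < \Lambda .
\]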
Now, we kind of started developing a diagrammatic perspective on all of this. Something that is m to the fourth, since it was the dot product of two factors of m squared, we represent as a graph such as this. And we also introduced a convention where solid lines correspond to m tilde and wavy lines correspond to sigma. And essentially, what I have to do is to write this object according to this, where each factor of m is replaced by a sum of two terms, this entity plus that entity, diagrammatically.
So that's two to the fourth, or 16 different possibilities that I could have once I expand this. And what was the answer that we got for the first term in the series? So if I take the average of u, the kind of diagrams that I can get is essentially keeping this entity as it is. So essentially, I will get the original potential that I have. Rather than m to the fourth, I will simply have the equivalent m tilde to the fourth. So basically, diagrammatically this would correspond to this entity.
There was a whole bunch of things that canceled out to zero: the diagrams that had only one wavy leg gave me zero when I took the average, because a single sigma by itself makes an odd average. So I didn't have to put any of these. And then I had diagrams where two of the lines were replaced by wavy lines. And from those I would get a contribution proportional to u.
There was a factor of 2n plus 4. The 2n came from diagrams in which I took two of the legs that were together and kept them, and the other two I made wavy and joined them together. And essentially, I had the choice of picking this pair of legs or that pair of legs, so that gave me a factor of two.
And, something that we will see again and again, whenever we have a loop that closes on itself, it corresponds to something like a delta alpha alpha, which, when you sum over alpha, will give you a factor of n. The other contribution, the four, came from diagrams in which the two wavy lines were on different branches. And since they came from different branches, there wasn't a repeated index to sum over and give me a factor of n. I just have a factor of two from the choice of leg on one branch, and another factor of two from the other branch. So that was a factor of four.
And then, associated with each one of these diagrams, there was an integration over the wave number k that characterizes the sigmas that have been integrated over. So I would have an integral from lambda over b to lambda, d d k over 2 pi to the d, of 1 over the variance, which is what I have here: t plus K k squared, and so forth.
There are diagrams then with three wavy lines, which again gave me zero because the average of an odd number of sigmas with a Gaussian weight is zero. And then there were a bunch of things that correspond to all legs being wavy. There was something like this, and there was something like this. And basically, I didn't really have to calculate them. So I just wrote the answer to those things as being a contribution to the free energy, an overall constant, such as the constant that I have over here, but now at first order in u, and independent of the configurations.
So this was straightforward perturbation. I forgot something very important here, which is that this entire coefficient was also coupled to these solid lines, whose meaning is that it is an integral of d d q over 2 pi to the d of m tilde of q squared, where the wave numbers that are sitting on these solid lines naturally run from 0 to lambda over b.
So we can see that if I add this to what I have above, my Z has now been written as an integral over these modes that I'm keeping, of a new weight that I will call beta H tilde, depending on these m tildes. This beta H tilde contains, first of all, terms proportional to the volume V: contributions to the free energy coming from the modes that I have integrated out, either at zeroth order or at first order so far.
I have the u, exactly the same u as I had before, but now acting on m tilde. So four factors of m tilde. The only thing that happened is that the Gaussian contribution now running from 0 to lambda over b, that is proportional to m tilde of q squared, is now still a series, such as the one that I had before, where the coefficient that was a constant has changed. All the other terms in the series, the term that is proportional to q squared, q to the fourth, et cetera, are left exactly as before.
So what happened is that this beta H tilde pretty much looks like the beta H that I started with, with the only difference being that t tilde is t plus essentially what I have over there: 4u times n plus 2, times the integral from lambda over b to lambda of d d k over 2 pi to the d, of 1 over t plus K k squared, and so forth. But quite importantly, the parameter K that I would associate with the coefficient of q squared is left unchanged. If I had a coefficient of q to the fourth, it would be unchanged. And the coefficient u of the quartic term is unchanged also.
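So, to first order in u, the coarse-grained parameters are

\[
\tilde t = t + 4u\,(n+2)\int_{\Lambda/b}^{\Lambda}\frac{d^d\mathbf{k}}{(2\pi)^d}\,
\frac{1}{t + K k^2 + \cdots},
\qquad
\tilde K = K, \qquad \tilde u = u .
\]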
So the only thing that happened is that the parameter that corresponded to t got modified. And you actually should recognize this as the inverse susceptibility, if I were to integrate all the way from 0 to lambda. And when we did that, this contribution was singular. And that's why straightforward perturbation theory didn't make sense. But now we are not integrating to 0, which would have given the singularity. We are just integrating over the shell that I have indicated outside.
So this step was the first step of the renormalization group that we call coarse graining. But RG had two other steps. The next was to rescale. Basically, the theory that I have has a cut-off that is lambda over b. So it looks grainier in real space. So what I can do in real space is to shrink it. In Fourier space, I have to blow up my momenta. So essentially, whenever I see q, I replace it with b inverse q prime, so that q prime, which is b q, runs from zero to lambda, restoring the cut-off that I had originally.
And the next step was to renormalize, which amounted to replacing the field m tilde with a new field m prime after multiplying or rescaling by a factor of z to be determined. Now, this amounts to simple dimensional analysis. So I go back into my equation, and whenever I see q, I replace it with b inverse q prime. So from the integration, I get a factor of b to the minus d, multiplying t tilde, replace m tilde by z times m prime. So that's two factors of z. So what I get is that t prime is z squared b to the minus d, this t tilde that I have indicated above.
Now, k prime is also something that appears in the Gaussian term. So it has a z squared. It came from two factors of m. But because it had an additional factor of q squared rather than b to the minus d, it is b to the minus d minus 2. And I can do the same analysis for higher order terms going with higher powers of q in the expansion that appears in the Gaussian.
But then we get to the non-linear terms, and the first non-linearity that we have kept is this u. And what we see is that it goes with four factors of m. So there will be z to the fourth. If I write things in Fourier space, m to the fourth in real space would involve m of q1, m of q2, m of q3, and the fourth m at minus q1 minus q2 minus q3. But there will be three independent integrations over q, which give me three factors of b to the minus d.
So these are pretty much exactly what we had already seen for the Gaussian model-- forgot the k-- except that we replaced this t that was appearing for the Gaussian model with t tilde which is what I have up here. Now, you have to choose z such that the theory looks as much as possible as the original way that I had. And as I mentioned, our anchoring point would be the Gaussian.
So for the Gaussian model, we saw that the appropriate choice, so that ultimately we were left with the right number of relevant directions, was to set this combination to 1, which means that I have to choose z to be b to the power of 1 plus d over 2. Now, once I choose that factor for z, everything else becomes determined. This clearly has two factors of b with respect to the original. So this becomes b squared.
This you have to do a little bit of work. Z to the fourth would be b to the 4 plus 2d, then minus 3d gives me b to the 4 minus d. And I can similarly determine what the dimensions would be for additional terms that appear in the Gaussian, as well as additional nonlinearities that could appear here. All of them, by this analysis, I can assign some power of b.
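Collecting the rescaling and renormalization factors, the dimensional analysis just described gives

\[
t' = b^{-d} z^2\,\tilde t,\qquad
K' = b^{-d-2} z^2\,\tilde K,\qquad
u' = b^{-3d} z^4\,\tilde u,
\]

and, with the Gaussian choice z = b^{1+d/2},

\[
t' = b^{2}\,\tilde t,\qquad K' = \tilde K,\qquad u' = b^{4-d}\,\tilde u .
\]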
So this completes the RG in the sense that, at least at this order in perturbation theory, I started with my original theory, and I see how the parameters of the new theory are obtained if I rescale and renormalize by this factor of b. Now, we did one other thing, which is quite common, which is rather than choosing factors like b equal to 2 or 3, making the rescaling infinitesimal, at least on the picture that I have over there.
What I'm doing is I'm making this b very close to 1, which means that effectively I'm putting the modes that I'm getting rid of in a tiny shell just below lambda. So I have chosen b to be slightly larger than 1 by an amount delta l. And I expect that all of the parameters will also change very slightly, such that this t prime evaluated at scale b would be what I had originally, plus something that vanishes as delta l goes to zero and presumably is linear in delta l: dt by dl. And similarly, I can do the same thing for u and all the other parameters of the theory.
Once I do that, these jumps from one parameter to another parameter can be translated into flows. And, for example, dt by dl gets a contribution from writing b squared as 1 plus 2 delta l. That is proportional to 2 times t. And then there's another contribution that is order of delta l. Clearly, if b equals to 1, this integral would vanish.
So if b is very close to 1, this integral is of the order of delta l. And what it is is just evaluating the integrand at k equal to lambda, on the shell, and then multiplying by the volume of that shell, which is the surface area times the thickness. So I will get from here a contribution of order delta l. Dividing through by delta l, what is left is 4u times n plus 2, times 1 over t plus K lambda squared and so forth, which is the integrand on the shell.
And then I have the surface area divided by 2 pi to the d, which we have always called K d. And then there is lambda to the d, which is the product of lambda to the d minus 1 from the surface and lambda from the thickness lambda delta l; the delta l I have taken out. And this whole thing is the order u contribution. And then there is the equation for du by dl, which at this order is just 4 minus d times u.
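Put together, the first-order flow equations for an infinitesimal rescaling are

\[
\frac{dt}{dl} = 2t + \frac{4u\,(n+2)\,K_d\,\Lambda^d}{t + K\Lambda^2 + \cdots},
\qquad
\frac{du}{dl} = (4-d)\,u,
\qquad
K_d \equiv \frac{S_d}{(2\pi)^d},
\]

with S_d the surface area of the unit sphere in d dimensions.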
So this is the result of doing this perturbative rg to the lowest order in this parameter u. Now, these things are really the important parameters. There will be other parameters that I have not specifically written down. And next lecture, we will deal with all of them. But let's focus on these two.
So I have one parameter, which is t, and the other parameter, which is u. But u can only be positive for the theory to make sense. I said that originally the Gaussian theory only makes sense if t is positive, because once t becomes negative, the weight gets shifted to large values of m. It is unphysical. So for the Gaussian theory to be physical, I need to confine myself to t positive. Now that I have u, I can have t that is negative, and u m to the fourth, as long as u is positive, will keep the weight well behaved. So this entire half-plane with u positive is now accessible.
Within this plane, there is a point which corresponds to a fixed point, a point that if I'm at that location, then the parameters no longer change. Clearly, if u does not change, u at the fixed point should be 0. If u at the fixed point is 0 and t does not change, t at the fixed point is 0. So this is the fixed point.
Since I'm looking at a two-dimensional projection, there will be two eigendirections associated with moving away from this fixed point. If I stick with the axis where u is 0, you can see that u will stay 0. But then dt by dl is 2t. So if I'm on the axis that u equals to 0, I will stay on this axis. So that's one of my eigendirections. And along this eigendirection, I will be flowing out with an eigenvalue of 2.
Now, in general, however, let's say I go to t equal to 0. You can see that if t is 0 but u is positive, dt by dl is positive. So if you start on the t equals 0 axis, you will not keep going along the u direction; you will generate a positive t. And the typical flows that you would have would be in this direction. Actually, I should draw it with a different color. So quite generically, the flows are like this.
But there is a direction along which the direction of the flow is preserved. So there is a straight line. This straight line you can calculate by setting dt by dl divided by du by dl to be the ratio of t over u. You can very easily find that it corresponds to a line of t being proportional to u with a negative slope.
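A minimal way to see that, linearizing the flow around the origin and treating t as small compared with K lambda squared, is

\[
\frac{dt}{dl} \simeq 2t + c\,u,
\qquad c = \frac{4(n+2)\,K_d\,\Lambda^d}{K\Lambda^2} > 0,
\qquad
\frac{du}{dl} = (4-d)\,u .
\]

The second eigendirection, the one with eigenvalue 4 minus d, then satisfies 2t + cu = (4-d)t, that is, t = -cu/(d-2): a line of negative slope for d greater than 2.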
And the eigenvalue along that direction is determined by 4 minus d. So that the picture that I have actually drawn for you here corresponds to dimensions greater than four. In dimensions greater than four along this other direction, you will be flowing towards the fixed point. And in general, the flows look something like this.
So what does that mean? Again, the whole thing that we wrote down was supposed to describe something like a magnet at some temperature. So when I fix my temperature of the magnet, I presumably reside at some particular point on this diagram. Let's say in the phase that is up here, eventually I can see that I go to large t and u goes to 0. So the eventual weight is very much like a Gaussian, e to the minus t m squared over 2. So this is essentially independent patches of the system randomly pointing in different directions.
If I change my system to have a lower temperature, I will be looking at a point such as this. As I lower the temperature further, I will be looking at some other point, presumably. But all of these points that correspond to temperatures above the transition, if I now look at increasing length scale, will flow up here.
Presumably, if I go below Tc, I will be flowing in the other direction, where t is negative, and then the u is needed for stability, which means that I have to spontaneously choose a direction in which I order things. So the benefit of doing this renormalization and this study was that in the absence of u, I could not access the low temperature phase of the system.
With the addition of u, I can describe both sides, and I can see under the rescaling which set of points goes to what is the analog of high temperature and which set of points goes to what is the analog of low temperature. And the point that corresponds to the transition between the two is on the basin of attraction of the Gaussian fixed point; that is, asymptotically the theory would be described by just gradient of m, squared.
But this picture does not work if I go to d that is less than four. In d less than four, I can again draw u, I can again draw t, and I will again find the fixed point at 0, 0. I will again find an eigendirection at u equals 0, which pushes things out along the u equals 0 axis.
Going from d of above four to d of below four does not really materially change the location of this other eigendirection by much. It pretty much stays where it was. The thing that it does change is the eigenvalue. So basically, here I will find that the flow is in this direction. And if I were to generalize the picture that I have, I would get things that would be going like this or going like this.
Once again, there are a set of trajectories that go on one side, a set of trajectories that go on the other side. And presumably, by changing temperature, I will cross from one set of trajectories to the other set of trajectories. But the thing is that the point that corresponds to hitting the basin that separates the two sets of trajectories, I don't know what it corresponds to.
Here, for d greater than 4, it went to the Gaussian fixed point. Here currently, I don't know where it is going. So I have no understanding at this level of what the scale invariant properties are that describe magnets in three dimensions at their critical temperature.
Now, the thing is that the resolution and everything that we need comes from staring more at this expansion that we had. We can see that this is an alternating series because I started with e to the minus u. And so the next term is likely to have the opposite sign to the first term. So I anticipate that at the end of doing the calculation, if I go to the next order, there will be a term here that is minus B u squared.
Actually, there will be a contribution to dt by dl also that is minus, let's say, a u squared. So I expect that if I were to do things at the next order, and we will do that in about 15 minutes, I will get these kinds of terms. Once I have that kind of term, you can see that I anticipate a fixed point occurring at the location u star, which is 4 minus d divided by B.
And then, by looking in the vicinity of this fixed point, I should be able to determine everything that I need about the phase transition. But then you can ask, is this a legitimate thing to do? I have to make sure I do things self consistently. I did a perturbation theory assuming that u is a small quantity, so that I can organize things in power of u, u squared, u cubed.
But what does it mean that I have control over powers of u, once I have landed at this fixed point, where u has a value that is fixed and determined? It is this 4 minus d over B. So in order for the series to make sense and be under control, I need this u star to be a small parameter.
So what knob do I have to ensure that this u star is a small parameter? It turns out that practically the only knob that I have is that this 4 minus d should be small. So I can only make this into a systematic theory by making it an expansion in a small quantity, which is 4 minus d. Let's call that epsilon. And now we can hopefully, at the end of the day, keep track of appropriate powers of epsilon.
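Schematically, writing B for the positive coefficient anticipated above, the expansion is organized as

\[
\frac{du}{dl} = \epsilon\,u - B\,u^2 + \mathcal{O}(u^3)
\quad\Longrightarrow\quad
u^* = \frac{\epsilon}{B} = \mathcal{O}(\epsilon),
\qquad \epsilon \equiv 4-d,
\]

so the fixed-point coupling, and with it the whole perturbation series, is controlled by the smallness of epsilon.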
So the Gaussian theory describes properly the behavior at four dimensions. At 4 minus epsilon dimensions, I can figure out where this fixed point is and calculate things correctly. All right? So that means that I need to do this calculation of the variance of u. So what I will do here is to draw a table to help me do that. So let's do something like this.
Six-- seven rows and seven columns. The first row is just to tell you what we are going to plot. So basically, I need a u squared average, which means that I need to have two factors of u. Each one of them depends on m tilde and sigma. And so I will indicate the two sets. We already saw, when we were doing the first order calculation, how to decompose this object that has four lines.
And we said, well, the first thing that I can do is to just use the m's. The next thing that I can do is to replace one of the m's with a sigma, and there was a choice of four ways to do so. Or I could choose to replace two of the m's with wavy lines, and the question was, the right branch or the left branch? So there's two of these.
I could put the wavy lines on two different branches. And there was four ways to do this one. I could have three wavy lines, and the one solid line could then be in one of four positions. Or I had all wavy lines, so there is this. So that's one of my factors of u on the vertical for this table. On the horizontal, I will have the same thing. I will have one of these. I will have four of these. I will have two of these. I will have four of these. I will have four of these, and one which is all wavy lines.
Now I have to put two of these together and then do the average. Now clearly, if I put two of these together, there's no average to be done. I will get something that is order of m to the fourth. But remember that I'm calculating the variance. So that would subtract from the average squared of the same quantity. It's a disconnected piece.
And I have stated that anything that is disconnected will not contribute. And in particular, there is no way to join this to anything. So everything that we would put here in this row corresponds to no contribution, once I have subtracted out the square of the average of u. And there is symmetry in this table, so the corresponding column also consists entirely of disconnected entities.
All right. Now let's see the next one. I have a wavy line here, a sigma here, and a sigma here. I can potentially join them together into a diagram that looks something like this. So I will have this, this. I have a leg here; this line gets joined to that line. And then I have this, this, this.
Now, what is that beast? It is something that has six factors of m tilde. So this is something that is order of m tilde to the sixth power. So the point is that we started here saying that I should put every term that is consistent with symmetry. I just focused on the first fourth order term, but I see this is one of the things that happens under renormalization group.
Everything that is consistent with symmetry, even if you didn't put it there at the beginning, is likely to appear. So this term appeared at this order. You have to think of ultimately whether that's something to worry about or not. I will deal with that next time. It is not something to worry about. But let's forget about that for the time being.
Next term, I have one wavy line here and two wavy lines there. So it's something that is sigma cubed. Against the Gaussian weight, it gives me 0. So because it is an odd term, I will get a 0 here. Somehow I need this row to be larger in connection with future needs.
The next one is also something that involves three factors of sigma, so it is 0 by symmetry. And again, since this table has symmetry along the diagonal, there will be 0's over here.
Next diagram. I can somehow join things together and create something that has four legs. It will look something like this. I will have this leg. This leg can be joined, let's say, with this leg, giving me something out here. And these two wavy lines can be joined together. That's a possibility.
You say, OK. This is a diagram that corresponds to four factors of m tilde. So that should contribute over here. Actually, the answer is that diagram is 0. The reason for that is the following.
Let's look at this vertex over here. It describes four momenta that have come together. And the sum of the four has to be 0. The same thing holds here: the sum of these four has to be 0. Now, if we look at this diagram, once I have joined these two together, I have ensured that the sum of these two is 0. The sum of all four is 0. The sum of these two is 0.
So the sum of these two should be 0 too. But that's not allowed, because one of them is outside this shell, and the other is inside. So just kinematically, there's no choice of momenta that I could make that would give a contribution to this. So this is 0 because of what I will label as momentum conservation. Again, because of that, I will have a 0 from momentum conservation down here as well.
The next diagram has one sigma from here and four sigmas from there. So that's an odd number of sigmas. So this will be 0 too, just because the Gaussian average of an odd number of sigmas is zero. So we are gradually getting rid of places in this table.
But the next one is actually important. I can take these two and join them to those two and generate a diagram that looks like this. So I have these two hands. These two hands get joined to the corresponding two hands. And I have a diagram such as this. Yes.
AUDIENCE: [INAUDIBLE] Is there another way for them to join also?
PROFESSOR: Yes, there is another way which suffers exactly the same problem. Ultimately, because you see the problem is here. I will have to join two of them together, and the other two will be incompatible. Now, just to sort of give you ultimately an idea, associated with this diagram there will be a numerical factor of 2 times 2 from the horizontal times the vertical choices.
But then there's another factor of 2 because this diagram has two hands. The other diagram has two hands. They can either join like this, or they can join like this. So there's two possibilities for the crossing.
If you kind of look ahead to the indices that are carried around: these two are part of the same branch, so they carry the same index; these two carry the same index, let's say j; and these two carry the same index, j prime. So when I do the sum, I will have a sum over j and j prime of delta j j prime, delta j j prime, which becomes a sum over j of delta j j, and that will give me a factor of n.
Any time you see a closed loop, you generate a factor of n, just like we did over here. It generated a factor of n. OK, so there's that. The next diagram looks similar, but does not have the factor of n. I have from over there the two hands that I have to join here. I have to put my hands across, and I will get something like this.
So it's a slightly different-looking diagram. The numerical factor that goes with it is 2 times 4 times 2. There is no factor of n. Now, again, because of symmetry, there's a corresponding entity that we have over here. If I just rotate that, I will essentially have the same diagram. Going the opposite way, the two hands reach across and join to these, giving me something that is like this. And the corresponding thing here looks like this.
Numerical factors, this would be 2 times 4 times 2. It is exactly the same as this. This would be 4 times 4 times 2. At the end of the day, I will convince you that this block of four diagrams is really the only thing that we need to compute. But let's go ahead and see what else we have.
If I take this thing that has two hands and try to join it with this thing that has three hands, I will get, of course, 0, based on symmetry. If I take this term with two hands and join it with this thing with four hands, I will generate a bunch of diagrams, including, for example, this one. I can do this. There are other diagrams also.
So these are ultimately diagrams with two hands left over. So they will be contributions to m tilde squared. And they will indeed give me modifications of this term over here. But we don't need to calculate them. Why? Because we want to do things consistently to order of epsilon.
In the second equation, we start already with epsilon u, which, since u star will be of order epsilon, is of order epsilon squared. The u squared term is also of order epsilon squared. So the two terms I have to evaluate there are both of the same order. But in the first equation, I already have a contribution that is order of epsilon. If I'm calculating things consistently to lowest order, I don't need to calculate this u squared correction to t explicitly.
I would need to calculate it explicitly if I wanted to calculate things to order of epsilon squared, which I'm not about to do. But to our order, this diagram exists; we just don't need to evaluate it. Again, because of the symmetry along the diagonal of the table, we have something here that is of order m tilde squared that we don't evaluate. OK, let's go further.
Over here, we have two hands, three hands. By symmetry, it will be zero. Over here, we have two hands, four hands. I will get a whole bunch of other things that are order of m tilde squared. So there are other terms that are of this same form that would modify the factor of a, which I don't need to explicitly evaluate.
All right. What do we have left? There is a diagram here that is interesting because it also gives me a contribution that is order of m tilde squared, which we may come back to at some point. But for the time being, it's another thing that gives us a contribution to a.
Here, what do we have? We have three hands, four hands, zero by symmetry, zero by symmetry. Down here, we have no solid hands. So we will get a whole bunch of diagrams, such as this one, for example, other things, which collectively will give a second order correction to the free energy. It's another constant that we don't need to evaluate.
So let's pick one of these diagrams, this one in particular, and explicitly see what it is. It came out of putting two factors of u together. Let's be explicit. Let's call the momenta here q1, q2, and k1, k2. And for the other u, before I joined them, there was a q3, q4, and a k1 prime, k2 prime.
So this is a diagram that will contribute at order of u squared. Second order terms in the series all come with a factor of one half; it is u to the n divided by n factorial. So this would be explicitly u squared over 2.
For the choice of the left diagram, we said there were two possibilities. For the choice of right diagram, there were two branches, one of which I could have taken. In joining the two hands together, I had a degeneracy of two, so I have all of that. A particular one of these is an integral over q1, q2. And from here, I would have integrations over q3, q4.
These q integrations are for variables that are in the inner region, so they run from 0 to lambda over b. I have integrations from lambda over b to lambda for the variables k1, k2, k1 prime, k2 prime. And since I explicitly decided to write all four momenta associated with a particular vertex, I have to explicitly include the delta functions that say the four momenta at each vertex have to add up to 0.
Now, what I did was to join these two sigmas together. So I calculated one of those Gaussian averages that I have over there. Actually, before I do that, I note that these pairs are dotted together. So I have m tilde of q1 dotted with m tilde of q2, and m tilde of q3 dotted with m tilde of q4. These two are a dot product, and these two are a dot product.
Here, I joined the two sigmas together. The expectation value gives me 2 pi to the d, a delta function of k1 plus k1 prime, divided by t plus K k1 squared, and so forth. And for the indices, if I call these indices j, j and j prime, j prime, I will have a delta j j prime. And from the lower two that I have connected together, I have 2 pi to the d, a delta function of k2 plus k2 prime, another delta j j prime, divided by t plus K k2 squared, and so forth.
Now I can do the integrations. But first of all, numerical factors, I will get 4u squared. As I told you, delta j j prime, delta j j prime will give me delta jj. Sum over j, I will get a factor of n. That's the n that I anticipated and put over there.
I have the integrations from 0 to lambda over b of d d q1, d d q2, d d q3, d d q4, over 2 pi to the 4d. And then this m tilde q1 dotted with m tilde q2, times m tilde q3 dotted with m tilde q4. Now, note the following. If I do the integration over k1 prime, k1 prime is set to minus k1. If I do the integration over k2 prime, k2 prime is set to minus k2. If I now do the integration over k2, k2 is set to minus q1 minus q2 minus k1, which, if I insert it over here, will give me a delta function that simply says that the four external q's have to add up to 0.
So there is one integration that is left, which is over k1. So I have to do the integral from lambda over b to lambda of d d k1 over 2 pi to the d. Basically, the k1 running across the upper line gives me a factor of 1 over t plus K k1 squared, and so forth. And then there is what is running along the bottom line, which is k2. And k2 squared is the same thing as q1 plus q2 plus k1, the whole thing squared.
So the outcome of doing the averages that appear in this integral is to generate a term that is proportional to m to the fourth, which is exactly what we had, with one twist. The twist is that the coefficient that is appearing here actually depends on q1 and q2. Of course, q1 and q2, being inner momenta, are much smaller than k1, which is one of the shell momenta.
So in principle, I can expand this. I can write it as the integral from lambda over b to lambda of d d k over 2 pi to the d -- I've renamed k1 to k -- of 1 over t plus K k squared, times 1 over t plus K times the square of q1 plus q2 plus k. To lowest order in the q's, the second factor is also 1 over t plus K k squared. And then I can expand the corrections as 1 plus terms involving K times q1 plus q2, and so forth, divided by t plus K k squared, raised to the minus 1 power.
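Written out, the shell integral coming from this diagram, with the external momenta set to zero at leading order, is

\[
\int_{\Lambda/b}^{\Lambda}\frac{d^d\mathbf{k}}{(2\pi)^d}\,
\frac{1}{\big(t+Kk^2\big)\big(t+K(\mathbf{q}_1+\mathbf{q}_2+\mathbf{k})^2\big)}
= \int_{\Lambda/b}^{\Lambda}\frac{d^d\mathbf{k}}{(2\pi)^d}\,
\frac{1}{\big(t+Kk^2\big)^2} + \mathcal{O}(q^2).
\]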
The point is that if I set the q's to 0, I have obtained a constant addition to the coefficient of my m to the fourth. But I see that further down, I have generated also terms that depend on q. What kind of terms could these be? If I go back to real space, these are terms that are of the order of m to the fourth, which was, if you remember, m squared m squared, but carry additional gradients with them.
So, for example, it could be something like this: it has two factors of q and various factors of m. Or it could be something like m squared gradient of m squared. The point is that we again have the possibility, when we write our most general term, to introduce lots and lots of non-linearities that I didn't explicitly include. But again, I see that if I leave them out at the beginning, the process will generate them for me.
So I should have really included these types of terms at the beginning, because they will be generated under the RG, and then I can track the evolution of all of the parameters. I started with 0 for this type of parameter, and I generated it out of nothing. So I should really go back and put it there. But for the time being, let's again ignore that. And next time, we'll see what happens.
So what we find at the end of evaluating all of these diagrams is that this beta h tilde evaluated at the second order, first of all, has a bunch of constants which in principle, now we can calculate to the next order. Then we find that we get terms that are proportional to m tilde squared, the Gaussian. And I can get the terms that I got out of second order and put it here.
So I have my original t tilde, now evaluated at order of u squared because of all those diagrams that I said I have to do. I will get a K tilde q squared and so forth, all of them multiplying m tilde squared. And I see that I generated terms that are of the order of m tilde to the fourth. So I have the integral of d d q1 through d d q4 over 2 pi to the 4d, times 2 pi to the d, a delta function of q1 plus q2 plus q3 plus q4, times m tilde q1 dotted with m tilde q2, times m tilde q3 dotted with m tilde q4.
And what I have to lowest order is u. And then I have a bunch of corrections that are proportional to the integral from lambda over b to lambda of d d k over 2 pi to the d, of 1 over t plus K k squared, squared. So essentially, I took this part of that diagram. That diagram has a contribution at order of u squared which is 4n; so if I had written it as u squared over 2, I would have put 8n, the 8n coming from just the multiplication that I have there, 2 times 2 times 2, times n.
Now, if you calculate the other three diagrams that I have boxed, you'll find that they give exactly the same form of contribution, except that the numerical factors for them are different. I will get 16, 16, and 32, adding up together to a factor of 64 here. And then the point is that I will also generate additional terms that are, let's say, of order q squared and so forth, which are the kinds of terms that I had not included.
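Collecting the 8n with the 16, 16, and 32 from the other three boxed diagrams, and remembering the overall one half of the second cumulant, the quartic coupling after coarse graining is

\[
\tilde u = u - \frac{8n+16+16+32}{2}\,u^2
\int_{\Lambda/b}^{\Lambda}\frac{d^d\mathbf{k}}{(2\pi)^d}\,\frac{1}{\big(t+Kk^2\big)^2}
= u - 4(n+8)\,u^2
\int_{\Lambda/b}^{\Lambda}\frac{d^d\mathbf{k}}{(2\pi)^d}\,\frac{1}{\big(t+Kk^2\big)^2}.
\]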
So what we find is that-- question?
AUDIENCE: Where did you get the t tilde?
PROFESSOR: OK. So let's maybe write this explicitly. So what would be the coefficient that I have to put over here? I have t at the zeroth order. At order of u, I calculated 4u times n plus 2, times the integral. The point is that when I add up all of those diagrams that I haven't explicitly calculated, I will get a correction here that is order of u squared, whose coefficient I will call a. But then this is the zeroth order in the momenta, and then I have to go and add terms that are at the order of q squared and higher.
So this was, again, the coarse graining step of RG, which is the hard part. The rescaling and renormalization are simple. And what they give me at the end of the day are the modifications to dt by dl and du by dl that we expected. dt by dl we already wrote: zeroth order is 2t; first order is a correction, 4u times n plus 2 times the integral, which, when evaluated on the shell, gives me K d lambda to the d over t plus K lambda squared and so forth.
Now, this a here would involve an integration. Again, this integration I evaluate on the shell. So the answer will be some a that depends on t, K, and other things, multiplying a contribution that is order of u squared. I haven't explicitly calculated what this a is. It will depend on all of these other parameters.
Now, when I calculate du by dl, I will get this 4 minus d times u to the lowest order. To the next order, I essentially get this integral. So I have minus 4 times n plus 8, u squared, times that integral evaluated on the shell: K d lambda to the d over t plus K lambda squared and so forth, squared. And presumably, both of these will have corrections at higher orders in u.
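Put together, the flow equations to this order read, with a standing for the coefficient of the uncalculated order u squared correction to t,

\[
\frac{dt}{dl} = 2t + \frac{4u\,(n+2)\,K_d\,\Lambda^d}{t+K\Lambda^2} - a\,u^2,
\qquad
\frac{du}{dl} = (4-d)\,u - \frac{4\,(n+8)\,K_d\,\Lambda^d}{\big(t+K\Lambda^2\big)^2}\,u^2 .
\]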
So this generalizes the picture that we had over here. Now we can ask, what is the fixed point? In fact, there will be two of them. There is the old Gaussian fixed point at t star equals u star equals 0. Clearly, if I set t and u equal to 0, I will stay at 0. So the old fixed point is still there. But now I have a new fixed point, which is called the O(n) fixed point because it explicitly depends on the symmetry of the order parameter, the number of components n, as well as on dimensionality.
So setting this to 0, I will find that u star is essentially epsilon divided by whatever I have here: in the denominator I have 4 times n plus 8, times K d lambda to the d, and in the numerator I have t star plus K lambda squared, squared. And then I can substitute that over here to find what t star is. So t star would be minus 2 times n plus 2, times K d lambda to the d, divided by t star plus K lambda squared, et cetera, times u star, which is what I have on the line above: epsilon times t star plus K lambda squared, et cetera, squared, divided by 4 times n plus 8, times K d lambda to the d.
Now, over here, this is in principle an implicit equation for t star. But I forgot the epsilon that I have here. But it is epsilon multiplying some function of t star. So clearly, t star is order of epsilon. And I can set t star equal to 0 in all of the calculation, if I'm calculating things consistently to epsilon. You can see that this kd lambda to the d cancels that. One of these factors cancels what I have over here. At the end of the day, I will get minus n plus 2 divided by n plus 8 k lambda squared epsilon.
And similarly, over here I can get rid of t star because it's already order of epsilon, and I have an epsilon out here. So the answer is going to be K squared lambda to the power of 4 minus d, divided by 4 times n plus 8, times K d, lambda to the d. Presumably, both of these come with corrections of order epsilon squared. So you can see that, as anticipated, there's a fixed point at a negative t star and some particular u star. There was a question. [INAUDIBLE]
AUDIENCE: [INAUDIBLE]
PROFESSOR: What is unnecessary?
AUDIENCE: [INAUDIBLE]
AUDIENCE: You already did 4 minus [INAUDIBLE].
AUDIENCE: The u started. Yeah, that one.
PROFESSOR: Here.
AUDIENCE: Erase it.
PROFESSOR: Oh, the lambda to the d should not be there. Right. Thank you.
AUDIENCE: For t star, is there a factor of 2?
PROFESSOR: t star, does it have a factor of 2? Yes, 2 divided by 4. There is a factor of 2 here. Thank you.
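With those two corrections, the O(n) fixed point to this order sits at

\[
u^* = \frac{K^2\,\Lambda^{4-d}}{4\,(n+8)\,K_d}\,\epsilon + \mathcal{O}(\epsilon^2),
\qquad
t^* = -\,\frac{n+2}{2\,(n+8)}\,K\Lambda^2\,\epsilon + \mathcal{O}(\epsilon^2).
\]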
Look at this. You don't really see much to recommend it. The interesting thing is to find what happens if you are not exactly at the fixed point, but slightly shifted. So we want to see what happens if t is t star plus delta t, and u is u star plus delta u. If I shift a little bit, linearizing the equations means I want to know how the new shifts are related to the old shifts. And doing things at the linear level means I want to construct a two-by-two matrix that relates the changes in delta t and delta u under the flow to the original shifts delta t and delta u.
What do I have to do to get this? What I have to do is to take derivatives of the terms for dt by dl with respect to t and with respect to u. Take the derivative with respect to t. What do I get? I will get 2. I will get minus 4u times n plus 2, times K d lambda to the power of d, divided by t plus K lambda squared, squared. So the derivative of 1 over t plus K lambda squared became minus 1 over t plus K lambda squared, squared.
There is a second order term, so there will also be a derivative of that with respect to t, multiplying u squared; I won't calculate it. For delta u: if I make a change in u, there will be a shift here, which is 4 times n plus 2, times K d lambda to the d, divided by t plus K lambda squared. From the second order term, I will get minus 2 a u.
For the second equation, if I take the derivative with respect to this variation in t, I will get plus 8 times n plus 8, u squared, times K d lambda to the d, divided by t plus K lambda squared and so forth, cubed. And in the fourth place, I will get epsilon minus 8 times n plus 8, times K d lambda to the d, times u, divided by t plus K lambda squared and so forth, squared.
Now, I want to evaluate this matrix at the fixed point. So I have to linearize in the vicinity of the fixed point, which means that I put the values of t star and u star everywhere here. And then I have to calculate the eigenvalues of this matrix. Now, note that this element of the matrix is proportional to u star squared. So, evaluated at the fixed point, it is certainly of order epsilon squared. Order of epsilon squared to me is zero; I don't see order of epsilon squared. So I can get rid of this; think of a zero here at this order. Which means that the matrix now has zeroes on one side of the diagonal, which means that the diagonal entries are exactly the eigenvalues.
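In matrix form, and dropping entries of order epsilon squared, the linearized flow about the fixed point is

\[
\frac{d}{dl}\begin{pmatrix}\delta t\\ \delta u\end{pmatrix}
=
\begin{pmatrix}
2 - \frac{4(n+2)K_d\Lambda^d}{(t^*+K\Lambda^2)^2}\,u^* &
\frac{4(n+2)K_d\Lambda^d}{t^*+K\Lambda^2} - 2a\,u^* \\
0 &
\epsilon - \frac{8(n+8)K_d\Lambda^d}{(t^*+K\Lambda^2)^2}\,u^*
\end{pmatrix}
\begin{pmatrix}\delta t\\ \delta u\end{pmatrix},
\]

so the diagonal entries are exactly the eigenvalues y_t and y_u.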
Let's calculate the eigenvalue that corresponds to this element. I will call it y u. It is epsilon, minus 8 times n plus 8, times K d lambda to the d, divided by t star plus K lambda squared, squared, times u star.
Well, since I'm calculating things to order of epsilon, I can ignore that t star down there. So the denominator is K lambda squared, squared; that is, K squared lambda to the fourth, and so forth. Multiplied by u star. Where is my u star? My u star is up here: K squared lambda to the 4 minus d, divided by 4 times n plus 8, times K d, times epsilon.
Right. Now the miracle happens. The K squared cancels the K squared. The lambda to the d times the lambda to the 4 minus d cancels the lambda to the fourth. The K d cancels the K d. The n plus 8 cancels the n plus 8. The 8 over the 4 gives 2. The answer is epsilon minus 2 epsilon, which is minus epsilon. OK?
[LAUGHTER]
So this direction has become irrelevant. The epsilon here turned into a minus epsilon, so this direction, which was relevant at the Gaussian fixed point, has disappeared as a relevant direction. There is the one relevant direction that is left, which is a slightly shifted version of what my original t direction was. And you can calculate y t: you go to that expression and do the same thing that I did over here. At the end of the day, you will find 2 minus n plus 2 over n plus 8, times epsilon.
All these unwanted things, like kd's, these lambdas, et cetera, disappear. You expected at the end of the day to get pure numbers. The exponents are pure numbers. They don't depend on anything. So we had to carry all of this baggage. And at the end of the day, all of the baggage miraculously disappears.
We get a fixed point that has only one relevant direction, which is what we always wanted. And once we have that exponent, we can calculate everything that we want; for example, the exponent for the divergence of the correlation length is the inverse of that. You can calculate how it has shifted from one half: the shift is n plus 2 over 4 times n plus 8, times epsilon.
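As a quick numerical illustration of these one-loop formulas, here is a small sketch that simply sets epsilon equal to 4 minus d (for instance epsilon = 1 in three dimensions), with no claim about the accuracy of stopping at first order in epsilon:

```python
# One-loop (order-epsilon) exponents at the O(n) fixed point:
#   y_u = -epsilon                       (the quartic coupling is irrelevant for d < 4)
#   y_t = 2 - (n + 2)/(n + 8) * epsilon  (the single relevant direction)
#   nu  = 1/y_t ~ 1/2 + (n + 2)/(4*(n + 8)) * epsilon
def exponents(n, d):
    eps = 4.0 - d                        # the small parameter of the expansion
    y_t = 2.0 - (n + 2.0) / (n + 8.0) * eps
    y_u = -eps
    nu = 1.0 / y_t                       # correlation-length exponent
    return y_t, y_u, nu

# Example: order parameters with n = 1, 2, 3 components in d = 3 (epsilon = 1).
for n in (1, 2, 3):
    y_t, y_u, nu = exponents(n, d=3)
    print(f"n={n}: y_t={y_t:.3f}, y_u={y_u:.3f}, nu={nu:.3f}")
```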
And we see that the exponents now explicitly depend on the dimensionality of space, because of this epsilon. They explicitly depend on the number of components n of your order parameter. So we have managed, at least in some perturbative sense, to demonstrate that there exists a kind of scale invariance that characterizes this O(n) universality class.
And we can calculate exponents for that, at least perturbatively. In the process of getting that number, I did things rapidly at the end, but I also swept a lot of things under the rug. So the task of next lecture is to go and look under the rug and make sure that we haven't put anything that is important away.