Description: In this lecture, Prof. Kardar continues his discussion on Continuous Spins at Low Temperatures, including Generic Scale Invariance in Equilibrium Systems, Non-equilibrium Dynamics of Open Systems, and Dynamics of a Growing Surface.
Instructor: Prof. Mehran Kardar
Lecture 26: Continuous Spin...
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu
MEHRAN KARDAR: OK, let's start. So in this class, we focused mostly on having some slab of material and having some configuration of some kind of a field inside. And we said that, basically, we are going to be interested close to, let's say, a phase transition, in some quantity that changes at the phase transition. We are interested in figuring out the singularities associated with that.
And we can coarse grain. Once we have coarse grained, we have the field m, potentially a vector, that is characterized throughout this material. So it's a field that's function of x. And by integrating out over a lot of degrees of freedom, we can focus on the probability of finding different configurations of this field.
And this probability we constructed on the basis of a number of simple assumptions such as locality, which implied that we would write this probability as a product of contributions of different parts, which in the exponent becomes an integral. And then we would put within this all kinds of things that are consistent with the symmetries of the problem. So if, for example, this is a field that is invariant under rotations, we would be having terms such as m squared, m to the fourth, and so forth. But the interesting thing was that, of course, there is some interaction with the neighborhoods. Those neighborhood interactions we can, in the continuum limit, implement by putting terms that are proportional to gradient of m and so forth.
So there are a lot of things that you could put consistent with symmetry, and presumably you want to be as general as possible. You could have this, and the coefficients of this expansion would be phenomenological parameters characterizing this probability-- functions of all kinds of microscopic degrees of freedom, as well as macroscopic constraints such as temperature, pressure, et cetera.
Now the question that we have is if I start with a system, let's say a configuration where everybody's pointing up or some other configuration that is not the equilibrium configuration, how does the probability evolve to become something like this? Now we are interested therefore in m that is a function of position and time. And since I want to use t for time, this coefficient of m squared that we were previously calling t, I will indicate by r, OK?
There are various types of dynamics that you can look at for this problem. I will look at the class that is dissipative. And its inspiration is the Brownian motion that we discussed last time, where we saw that when you put a particle in a fluid, to a very good approximation when the fluid is viscous, it is the velocity that is proportional to the force. You can ignore inertial effects such as mass times acceleration, and you write an equation that is first order in time. The velocity is the time derivative of the position, which is the variable that is of interest to you.
So here the variable that is of interest to us is this magnetization that is changing as a function of time. And the equation that we write down is the derivative of this field with respect to time. And again, for the Brownian motion, the velocity was proportional to the force. The constant of proportionality was some kind of a mobility that you would have in the fluid. So continuing with that inspiration, let's put some kind of a mobility here. I should really put a vector field here, but just for convenience, let's just focus on one component and see what happens.
Now what is the force? Presumably, each location individually feels some kind of a force. And typically when we had the Brownian particle, the force we were getting from the derivative of the potential with respect to the variation of the position-- the field that we are interested in.
So the analog of our potential is the energy that we have over here. And in the same sense, we want this effective potential, if you like, to govern the equilibrium behavior-- and again, recall that for the case of the Brownian particle, eventually the probability was related to the potential by e to the minus beta V. This is the analog of beta V, now for the entire field.
So the analog of the force is a derivative of this beta H, so this would be our beta H with respect to the variable that I'm trying to change. And since I don't have just one variable, but a field, the analog of the derivative becomes this functional derivative. And in the same sense that the Brownian particle, the Brownian motion, is trying to pull you towards the minimum of the potential, this is an equation that, if I kind of stop over here, tries to put the particle towards the minimum of this beta H.
Now the reason that the Brownian particle didn't go and stick at one position, which was the minimum, but fluctuated, was of course that we added the random force. So there is some kind of an analog of a random force that we can add, and we put it over here. Now for the case of the Brownian particle, the assumption was that if I evaluate eta at some time t1, and eta at another time t2, this was related to 2D times a delta function of t1 minus t2.
Now of course here, at each location, I have a noise term. So this noise carries an index, which indicates the position, which is, of course, a vector in D dimensions in principle. And there is no reason to imagine that the noise here, which comes from all kinds of microscopic degrees of freedom that we have integrated out, should have correlation with the noise at some other point. So the simplest assumption is to also put a delta function in the positions, OK?
So if I take this beta H that I have over there and take the functional derivative, what do I get? I will get that dm of x and t by dt is-- oops, and I forgot to put the minus sign here. The force is minus the derivative of the potential, and this is basically going down the gradient, so I need to put that.
So I have to take a derivative of this. First of all, I can take a derivative with respect to m itself. I will get minus rm. Well, actually, let's put the minus out front. And then I will get the derivative of m to the fourth, which is 4 u m cubed-- all kinds of terms like this.
Then there are the terms that come from the gradients. Taking the derivative with respect to the gradient of m will give me K times the gradient. But then in the functional derivative, I have to take another derivative, converting this to minus K Laplacian of m. And then I would have L times the fourth derivative from the next term, and so forth. And on top of everything else, I will have the noise whose statistics I have indicated above.
So this equation came from the Landau-Ginzburg model, and it is called the time-dependent Landau-Ginzburg equation. And this would be the analog of the Brownian type of equation, now for an entire field.
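The dissipative dynamics just described can be sketched numerically. What follows is an illustrative Euler-Maruyama discretization of the non-conserved, one-dimensional version of this equation on a lattice; all parameter values, the lattice setup, and the function name are assumptions chosen only to display the relaxation, not anything from the lecture.

```python
import numpy as np

def tdgl_step(m, dt, rng, mu=1.0, r=0.5, u=0.25, K=1.0, D=1.0):
    """One Euler-Maruyama step of the non-conserved 1D time-dependent
    Landau-Ginzburg equation (illustrative parameters):
        dm/dt = -mu * (r*m + 4*u*m**3 - K*laplacian(m)) + eta,
    with noise variance <eta eta'> = 2*D*delta(x - x')*delta(t - t')."""
    lap = np.roll(m, 1) + np.roll(m, -1) - 2.0 * m      # lattice Laplacian, spacing 1
    force = -mu * (r * m + 4.0 * u * m**3 - K * lap)    # downhill gradient of beta H
    noise = rng.normal(0.0, np.sqrt(2.0 * D * dt), m.shape)
    return m + dt * force + noise

# start from the "everybody pointing up" configuration and let it relax
rng = np.random.default_rng(1)
m = np.ones(64)
for _ in range(2000):
    m = tdgl_step(m, 0.01, rng)
# with r > 0, the uniform magnetization decays toward 0, up to noise
```

The deterministic part pulls the field toward the minimum of beta H, while the noise keeps it fluctuating around it, exactly as in the Brownian analogy above.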
Now we're going to have a lot of difficulty, in the short amount of time that is left to us, dealing with this nonlinear equation. So we are going to do the same thing that we did in order to capture the properties of the Landau-Ginzburg model qualitatively, which is to ignore nonlinearities-- a kind of Gaussian version of the model. So what we did was we linearized.
AUDIENCE: Question.
MEHRAN KARDAR: Yes?
AUDIENCE: Why does the sign in front of the K term change relative to the others?
MEHRAN KARDAR: OK, so when you have a functional of a field, its gradient, et cetera, you can show that in the functional derivative, the first term is the ordinary type of derivative. Then, if you think about the variations that are carried by the gradient of m, and you write the field as m plus delta m and make sure that you take the delta m outside, you need to do an integration by parts that changes the sign. So the next term is minus the divergence of the derivative with respect to grad m, and the term after that would be the Laplacian of the derivative with respect to the Laplacian of m, and so forth. It alternates. And so by explicitly calculating the difference of the two functionals, with this integration, evaluated at m and m plus delta m, and pulling out everything that is proportional to delta m, you can prove these expressions.
So we linearize this equation, which I've already done: I cross out the m cubed term and any other nonlinear terms, so I only keep the linear terms. And then I do a Fourier transform. So basically I switch from the position representation to the Fourier transform that I will call m tilde of q; x is replaced with q.
And then the equation for the field m, once linearized, separates out into independent equations for the components that are characterized by q. So the Fourier transform of the left-hand side is just this.
The Fourier transform of the right-hand side will give me minus mu times r plus K q squared-- the Laplacian gives a minus q squared-- plus L q to the fourth from the next term, et cetera. And I've only kept the linear terms, so this multiplies m tilde of q and t. And then I have the Fourier transform of the noise eta of x and t; call it eta tilde of q and t.
So you can see that each mode satisfies a separate linear equation. So this equation is actually very easy to solve for m tilde of q and t. If I didn't have the noise, I would start with some value at t equal to 0, and that value would decay exponentially with a characteristic time that I will call tau of q. And 1 over tau of q is simply this mu times r plus K q squared and so forth.
Now once you have noise, essentially each one of these noises acts like an initial condition. And so the full answer is an integral over all of these noises from 0 to the time t of interest: dt prime, the noise that occurs at a time t prime, and the noise that occurs at time t prime relaxes with this e to the minus t minus t prime over tau of q. So that's the solution to that linear noisy equation.
So one of the things that we now see is that essentially the different Fourier components of the field-- each one of them is independently relaxing to something, and each one of them has a characteristic relaxation time. As I go towards smaller and smaller values of q, this rate becomes smaller, and the relaxation time becomes larger. So essentially short wavelength modes, which correspond to large q, relax first. Longer wavelength modes will relax later on. And you can see that the largest relaxation time, tau max, corresponding to q equal to 0, is simply 1 over mu r.
So I can plot this tau max as a function of r. And again, this theory, the Gaussian theory, we saw only makes sense as long as r is positive. So I have to look only on the positive axis. And I find that the relaxation time for the entire system-- for the longest wavelengths-- actually diverges as r goes to 0. And recall that r, in our perspective, is really something that is proportional to T minus Tc.
So basically we find that as we are approaching the critical point, the time it takes for the entirety of the system, or the longest wavelength modes, to relax diverges as 1 over T minus Tc. There's an exponent that characterizes this so-called critical slowing down.
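These relaxation times are easy to tabulate. The sketch below (illustrative parameter values, not from the lecture) evaluates tau of q equal to 1 over mu times r plus K q squared, checks that short wavelengths relax first, and shows tau max growing as 1 over r, which is the critical slowing down.

```python
import numpy as np

def tau(q, mu=1.0, r=0.1, K=1.0):
    """Relaxation time of Fourier mode q in the linearized (Gaussian) theory:
    1/tau(q) = mu * (r + K*q**2). Parameter values are illustrative."""
    return 1.0 / (mu * (r + K * q**2))

qs = np.array([2.0, 1.0, 0.5, 0.0])   # from short to long wavelengths
times = tau(qs)
assert np.all(np.diff(times) > 0)     # short wavelengths (large q) relax first
# tau_max = tau(q=0) = 1/(mu*r) diverges as r ~ (T - Tc) goes to 0:
for r in (0.1, 0.01, 0.001):
    print(r, tau(0.0, r=r))           # grows like 1/r: critical slowing down
```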
Yes?
AUDIENCE: In principle, why couldn't you have r become negative, if you restrict your range of q to be outside of some value and not go arbitrarily close to the origin?
MEHRAN KARDAR: You could, but what's the physics of that? See.
AUDIENCE: I'm wondering if there is or is that not needed.
MEHRAN KARDAR: No. OK, so the physics of that could be that you have a system that has some finite size L. Then the smallest q that you could have would be 1 over L. So in principle, for that, r can go slightly negative. You still cannot go too negative, because ultimately this will overcome that. But again, we are interested in singularities that we know arise in the limit of [INAUDIBLE].
Also recall, there is a time that we see as diverging as r goes to 0. Of course, we identified before a correlation length from balancing these two terms, and the correlation length is the square root of K over r, which diverges with a square root singularity as T minus Tc goes to 0. So we can see that this tau max is actually related to xi squared over mu K.
And it is also related to this. We can see that for our tau of q, basically, if q is large, such that q xi is larger than 1, the characteristic time is going to be 1 over mu K times the inverse of q squared. And the inverse of q is something like a wavelength. Whereas, ultimately, this saturates for q xi less than 1-- long wavelengths-- to xi squared over mu K.
So basically you see that at very short range-- at length scales that are much less than the correlation length of the system-- the characteristic time will depend on the length scale that you are looking at, squared. Now you have seen time scaling as length squared from diffusion, so essentially this is some kind of a manifestation of diffusion. If you perturb the equilibrium system at some point, that perturbation will start to spread diffusively until it reaches the size of the correlation length, at which point it stops, because beyond the correlation length the blocks are essentially independent of each other, so the influence does not spread further.
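The crossover around the correlation length can be checked directly from the formula for tau of q. In this sketch (parameter values are hypothetical), deep in the short-wavelength regime tau goes as 1 over mu K q squared, and in the long-wavelength regime it saturates at xi squared over mu K.

```python
import numpy as np

# Crossover of the relaxation time around the correlation length xi = sqrt(K/r):
# for q*xi >> 1, tau(q) ~ 1/(mu*K*q**2)  (diffusive, wavelength-squared regime);
# for q*xi << 1, tau saturates at xi**2/(mu*K). Illustrative parameters.
mu, K, r = 1.0, 1.0, 1e-4
xi = np.sqrt(K / r)
tau = lambda q: 1.0 / (mu * (r + K * q**2))
q_big, q_small = 100.0 / xi, 0.01 / xi
assert abs(tau(q_big) * mu * K * q_big**2 - 1.0) < 1e-3      # ~ 1/(mu*K*q**2)
assert abs(tau(q_small) / (xi**2 / (mu * K)) - 1.0) < 1e-3   # saturated value
```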
So quite generally, what you find-- we solved the linearized version of the Landau-Ginzburg model, but we know that, say, the critical behavior for the divergence of the correlation length that is predicted here is not correct; in three dimensions, things get modified. So these kinds of exponents that come from diffusion also get modified. And quite generally, you find that the relaxation time of a mode of wave number q is going to behave like the wavelength, which is 1 over q, raised to some exponent z rather than squared. And then there is some function of the product of the wave number and the correlation length, so that you will cross over from one behavior to another as you look at length scales that are smaller than the correlation length or larger than the correlation length.
And to get what this exponent z is, you have to study the nonlinear model. In the same sense that, in order to get the correction to the exponent nu, we had to do the epsilon expansion, you have to do something similar, and you find that z acquires some correction that does not actually start at order of epsilon but at order of epsilon squared. But essentially, this modification of the qualitative behavior that we can ascribe to the diffusion of independent modes exists quite generally, and universal exponents different from 2 will emerge from that.
Now it turns out that this is not the end of the story, because we have seen that the same probability distribution can describe a lot of different systems. Let's say we focus on the case of n equal to 1. So then this Landau-Ginzburg that I described for you can describe, let's say, the Ising model, which describes magnetizations that lie along a particular direction. It can also describe liquid-gas phenomena, where the order parameter is the difference in density, if you like, between the liquid and the gas.
Yet another example that it describes is the mixing of an alloy. So let's, for example, imagine brass that has a composition x that goes between 0 and 1. On one end, let's say, you have entirely copper, and on the other end you have entirely zinc. And so this is how you make brass as an alloy.
And my other axis is the temperature. What you find is that there is some kind of phase diagram such that you get a nice mixture of copper and zinc only if you are at high temperatures, whereas if you are at low temperature, you basically will separate into chunks that are rich in copper and chunks that are rich in zinc. And you'll have a critical demixing point, which has exactly the same properties as the Ising model. For example, this curve will be characterized by an exponent beta, which would be the beta of the Ising model.
And in particular, if I were to take someplace in the vicinity of this and try to write down a probability distribution, that probability distribution would be exactly what I have over there, where m is, let's say, the difference in the amounts of the two types of atoms that I am comparing over here. So this is related to 2x minus 1 or something like that. So as you go across your piece of material close to here, there will be compositional variations that are described by that.
So the question is-- I know exactly what the probability distribution is for this system to be in equilibrium, given this choice of m, again with some set of parameters, r, u, et cetera. The question is-- is the dynamics again described by the same equation? And the answer is no. The same probability distribution can describe-- or can be obtained with-- very different dynamics.
And in particular, what is happening in this system is that, if I integrate this quantity m across the system, I will get the total number of one species minus the other, which is whatever was given to you, and it does not change as a function of time. d by dt of this quantity is 0. It cannot change.
OK. Whereas with the equation that I have written over here, in principle, locally, by adding the noise or by bringing things from the neighborhood, I can change the value of m. Here I cannot do that. So the process of relaxation that goes on in the alloy cannot be described by the time-dependent Landau-Ginzburg equation, because you have this conservation law here.
OK. So what should we do? Well, when things are conserved-- like, say, as the gas particles move in this fluid-- if I'm interested in the number of particles in some cube in this room, then the change in the number of particles in that cube is related to the divergence of the current that goes into that region. So there is an appropriate way of writing an equation that describes, let's say, the magnetization changing as a function of time.
Given that you have a conservation law, it is to write it as minus the divergence of some kind of a current. So this j is some kind of a current, and these would be vectors. This is the current of the particles moving in the system.
Now, in systems that are dissipative, currents are related to the gradient of some density, through the diffusion constant, et cetera. So it kind of makes sense to imagine that this current is the gradient of something-- or more precisely, minus the gradient of something-- that tries to bring the system, more or less, to its equilibrium state. The equilibrium state, as we said, is determined by this beta H, and we want to push the system in that direction. So we put our delta beta H by delta m over here, and we put some kind of a mobility, mu C, over here.
Of course, I would have to add some kind of a conserved random current also, which is the analog of this non-conserved noise that I added initially. OK.
Now, in the conserved version of the equation, you can see, you have two more derivatives compared to what we had before. So if we do something like this, dm by dt is mu C times the Laplacian of the whole thing, rather than the thing by itself. That is, it would be the Laplacian of something like r m plus 4 u m cubed-- and we are going to ignore this kind of term-- and then there would be higher order terms that show up, minus K times the Laplacian, giving the fourth derivative, and so forth. And then there is some kind of a conserved noise that I have to put outside.
OK. So when I Fourier transform this equation, what do I get? I will get that d m tilde by dt-- as a function of q and t-- is minus mu C-- because of this Laplacian, there's an additional factor of q squared-- times r plus K q squared plus L q to the fourth, et cetera. And then I will have the Fourier transform of this conserved noise.
OK. You can see that the difference between this equation and the previous equation is that all of the relaxation times will have an additional factor of q squared. And so eventually, the longest relaxation time actually will grow like the size of the system squared, whereas previously it saturated at the correlation length.
And because you have this conservation law, you have to rearrange a lot of particles while keeping their number constant, so you have a much harder time relaxing the system. All of the relaxation times, as we see, grow correspondingly and become longer.
OK. So indeed, for this class, one can show that z starts with 4, and then there will be corrections that would modify that. So the-- yes?
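The difference between the two dynamics shows up directly in how the relaxation time scales with wave number. A small sketch (Gaussian-level formulas, illustrative parameters; the function names are my own labels for the non-conserved and conserved classes):

```python
def tau_nonconserved(q, mu=1.0, r=0.0, K=1.0):
    """Non-conserved dynamics at criticality (r = 0):
    tau ~ 1/(mu*K*q**2), i.e. z = 2 at the Gaussian level."""
    return 1.0 / (mu * (r + K * q**2))

def tau_conserved(q, mu_c=1.0, r=0.0, K=1.0):
    """Conserved dynamics: the extra Laplacian contributes another q**2,
    so tau ~ 1/(mu_c*K*q**4), i.e. z = 4 at the Gaussian level."""
    return 1.0 / (mu_c * q**2 * (r + K * q**2))

q = 0.1
r2 = tau_nonconserved(q / 2) / tau_nonconserved(q)  # halving q: tau grows by 2**z
r4 = tau_conserved(q / 2) / tau_conserved(q)
assert abs(r2 - 4.0) < 1e-9 and abs(r4 - 16.0) < 1e-9
```

Halving q quadruples the relaxation time when z equals 2, but multiplies it by 16 when z equals 4, which is why the conserved system is so much slower to relax.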
AUDIENCE: How do we define, or how do we do, a realization of the conserved noise-- conserved current-- conserved noise in real space?
MEHRAN KARDAR: OK.
AUDIENCE: So it has some kind of self-correlation properties, I suppose, because, if current is flowing out of some region, doesn't it want to go in somewhere else?
MEHRAN KARDAR: If I go back here, I have a good idea of what is happening because all I need, in order to ensure conservation, is that the m by dt is the gradient of something.
AUDIENCE: OK.
MEHRAN KARDAR: So I can put whatever I want over here.
AUDIENCE: So if it's a scalar or an [INAUDIBLE] field?
MEHRAN KARDAR: Yes. As long as it is sitting under the gradient--
AUDIENCE: OK.
MEHRAN KARDAR: --it will be OK. Which means this quantity here that I'm calling eta C has a gradient in it. And if you wait for about five minutes, we'll show that. Because of that, in Fourier space, rather than having-- well, I'll describe the difference between non-conserved and conserved noise in Fourier space. It's much easier. OK.
So actually, as far as what I have discussed so far, which is relaxation, I don't really need the noise, because I can forget the noise. All I have said-- and I forgot the m tilde here-- is that I have a linear equation that relaxes your variable to 0, and I can immediately read off the corresponding relaxation times. What I need the noise for is so that, ultimately, I don't go to the minimum of the potential, but I go to this probability distribution.
So let's see what we have to do in order to achieve that. For simplicity, let's take this equation, although I could take the corresponding one for that. And let's calculate-- because of the presence of this noise, if I run many versions of the system, I will have different realizations of the noise, and so the quantity m tilde would be different from one realization to another. It would satisfy some kind of a probability distribution.
So what I want to do is to calculate averages, such as the average of m tilde of q1 at time t with m tilde of q2 at time t. And you can see already from this equation that, if I forget the part that comes from the noise, whatever initial condition I have will eventually decay to 0. So the thing that agitates this and gives it some kind of randomness really comes from the noise.
So let's imagine that we have looked at times that are sufficiently long that the influence of the initial condition has died down. I don't want to write the other term-- I could do it, but it's kind of boring to include it. So let's forget that and focus on the integral from 0 to t.
Now, if I multiply two of these quantities, I will have two integrals over t prime. Each one of them will decay with the corresponding tau of q-- in one case, tau of q1, in the other case, tau of q2-- coming from these factors. And then there's the noise at q1 at time t1 prime, and the noise at q2 at time t2 prime.
OK. Now if I average over the noise, then I have to do an average over here. OK.
Now, one thing that I forgot to mention right at the beginning is that, of course, the average of this noise we are going to set to 0. It's the variance that is important. Right?
So if I do that, clearly, the average of one of these in Fourier space would be 0 also, because the Fourier component is related to the real space noise just by an integral. So if the average inside the integral is 0, the average of this is 0.
So it turns out that, when you look at the average of two of them-- and it's a very simple exercise to just rewrite these things in terms of eta of x and t, and apply the average for eta of x and t that you have-- you find that things that are uncorrelated in real space are also uncorrelated in Fourier space. And so the variance of this quantity is 2D delta of t1 prime minus t2 prime. And then you have the analog of the delta function in Fourier space, which always carries a factor of 2 pi to the d, and it involves the sum of the q's, as we've seen many times before.
OK. So because of this delta function, this double integral becomes a single integral. So I have the integral from 0 to t. The two t primes I can write as just one t prime, and these two exponential factors merge into one: e to the minus t minus t prime over tau of q, except that it gets multiplied by a factor of 2, since I have two of them.
And outside of the integral, I will have this factor of 2D, and then 2 pi to the d delta function of q1 plus q2.
OK. Now, we are really interested-- and I already kind of hinted at that-- in the limit where time becomes very large. In that limit, essentially, I need to calculate the limit of this integral as time becomes very large.
And as time becomes very large, this is just the integral from 0 to infinity. And the integral is going to give me 2 over tau of q. Essentially, you can see that the upper end of the integration gives the finite piece, while the contribution from the lower end is exponentially small as t goes to infinity-- so there's this factor of 2 over tau. OK. Yes?
AUDIENCE: Are you assuming or-- yeah. Are you assuming that tau of q is even in q to be able to combine the two--
MEHRAN KARDAR: Yes.
AUDIENCE: --2 tau? So-- OK.
MEHRAN KARDAR: I'm thinking of the tau of q's that we've calculated over here.
AUDIENCE: OK.
MEHRAN KARDAR: OK. Yes?
AUDIENCE: The D here-- is it the [INAUDIBLE] of the equation or something? Right?
MEHRAN KARDAR: At this stage, I am focusing on this expression over here, where there is no conservation. But I will come to that expression also.
So for the time being, this D is just the same constant as we have over here. OK? So you can see that the final answer is going to be D over tau of q, 2 pi to the d delta function of q1 plus q2. And if I use the value of tau of q that I have, this becomes D over mu times r plus K q squared and so forth, times 2 pi to the d delta function of q1 plus q2.
So essentially, if I take this linearized time dependent line of Landau-Ginzburg equation, run it for a very long time, and look at the correlations of the field, I see that the correlations of the field, at the limit of long times, satisfy this expression.
Now, what do I know if I look at the top line that I have for the probability distribution? I can go and express that probability distribution in terms of Fourier modes. In the linearized version, I immediately get that the probability of m tilde of q is proportional to a product over different q's of e to the minus, r plus K q squared et cetera, times m tilde of q squared over 2.
All right. So when I look at the equilibrium linearized Landau-Ginzburg, I can see that, if I calculate the average of m of q1, m of q2-- now this is an equilibrium average-- what I would get is 2 pi to the d delta function of q1 plus q2, because clearly the different q's are decoupled from each other. And for a particular value of q, what I will get is 1 over r plus K q squared and so forth. Yes?
AUDIENCE: So isn't the 2 over tau-- [INAUDIBLE]?
MEHRAN KARDAR: Yes.
Over 2.
Yeah.
OK.
OK. I had 2D, which cancels the 2. I would have to put here tau of q over 2. So I had 2D times tau of q over 2. So it's D tau. And with 1 over tau of q, I have everything correct. OK.
So if you make an even number of errors, the answer comes out right. OK. But you can now compare this expression that comes from equilibrium and this expression that comes from the long-time limit of this noisy equation.
OK. So we want to choose our noise so that the stochastic dynamics gives the same values as equilibrium, just like we did for the case of the Brownian particle, where you have some kind of an Einstein relation that relates the strength of the noise and the mobility. And we see that here all I need to do is to ensure that D over mu should be equal to 1.
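This matching condition can be checked with a direct simulation of a single Fourier mode. The sketch below (all parameter values are illustrative choices, not from the lecture) integrates the linear Langevin equation for one mode with D set equal to mu, and compares the long-time variance to the equilibrium value 1 over r plus K q squared.

```python
import numpy as np

# A single Fourier mode obeys dm/dt = -mu*(r + K*q**2)*m + eta, with
# <eta eta'> = 2*D*delta(t - t'). Choosing D = mu (the Einstein-like relation
# D/mu = 1) should reproduce the equilibrium variance 1/(r + K*q**2).
mu, r, K, q = 1.0, 1.0, 1.0, 1.0
D = mu                         # Einstein relation: D/mu = 1
rate = mu * (r + K * q**2)     # this is 1/tau(q)
dt, n_steps, n_samples = 0.01, 5000, 4000
rng = np.random.default_rng(2)
m = np.zeros(n_samples)        # many independent realizations of the mode
for _ in range(n_steps):
    m += -rate * m * dt + rng.normal(0.0, np.sqrt(2.0 * D * dt), n_samples)
var = m.var()                  # long-time variance: approaches D*tau(q) = 0.5
```

With these numbers, both the dynamical steady-state variance D tau of q and the equilibrium value 1 over r plus K q squared equal one half, up to discretization and sampling error.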
OK. Now, the thing is that, if I am doing this, I, in principle, can have a different noise for each q and compensate with a different mobility for each q, and I would get the same answer. So in the non-conserved version of this time-dependent dynamics that we wrote down, the D was a constant and the mu was a constant. Whereas, if you want to get the same equilibrium result out of the conserved dynamics, you can see that, essentially, what we previously had as mu became something that is proportional to q squared. So essentially, here, this becomes mu C q squared.
So clearly, in order to get the same answer, I have to make my noise proportional to q squared also. And we can see that this kind of conserved noise that I put over here achieves that, because, as I said, this conserved noise is the gradient of something, which means that, when I go to Fourier space, it will be proportional to q. And when I take its variance, the variance will be proportional to q squared. Everything precisely cancels.
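The defining property of conserved noise-- that it is the gradient of something-- can be demonstrated on a lattice in a few lines. This is an illustrative construction (hypothetical setup; a periodic 1D lattice with a discrete gradient standing in for the divergence of a random current): however the field fluctuates locally, its spatial sum cannot change.

```python
import numpy as np

# Conserved noise realized as the lattice gradient of a random current:
# since dm/dt is then a total derivative, the spatial sum of m
# (the total "number difference") cannot change.
rng = np.random.default_rng(3)
m = rng.normal(size=128)
total0 = m.sum()
for _ in range(1000):
    current = rng.normal(size=128)
    conserved_noise = np.roll(current, -1) - current   # discrete gradient, periodic
    m += 0.01 * conserved_noise                        # dm/dt = -grad(j) form
assert abs(m.sum() - total0) < 1e-6                    # total m is conserved
```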
But you can see that this also had a physical explanation in terms of a conservation law. In principle, you can cook up all kinds of D of q and mu of q. As long as this equality is satisfied, you will have, for these linear stochastic equations, the guarantee that you will always get the same equilibrium result, because if you wait for this dynamics to settle down after long times, you will get to that answer. Yes?
AUDIENCE: I wonder how general is this result for stochastic [INAUDIBLE]?
MEHRAN KARDAR: OK.
AUDIENCE: But what I--
MEHRAN KARDAR: So what I showed you was for the linearized version, and the only thing that I calculated was the variance. And I showed that the variances were the same. And if I have a Gaussian probability distribution, the variance completely characterizes the distribution. So this is safe.
But we are truly interested in the more general non-Gaussian probability distribution. So the question really is-- if I keep the full nonlinearity in this story, would I be able to show that the probability distribution, which would be characterized by all kinds of moments, eventually has the same behavior as that?
AUDIENCE: Mm-hmm.
MEHRAN KARDAR: And the answer is, in fact, yes. There's a procedure that relies on converting this equation into one equation that governs the evolution of the full probability as a function of time. Right? So basically, I can start with an initial probability and see how this probability evolves as a function of time.
And this is sometimes called a master equation, sometimes called a [INAUDIBLE] equation. And we cover this, in fact, next spring in the statistical physics in biology class-- we spend some time talking about these things. So you can come back for the third version of this class. And one can ensure that, with the appropriate choice of the noise, the asymptotic solution for this probability distribution is whatever Landau-Ginzburg or other probability distribution that you need.
AUDIENCE: So is this true if we assume a Landau-Ginzburg potential for the weight?
MEHRAN KARDAR: Yes.
AUDIENCE: OK. Maybe this is not a very well-stated question, but is there kind of like an even more general level?
MEHRAN KARDAR: I'll come to that, sure. But currently, the way that I set up the problem was that we know some complicated equilibrium form of the probability that exists. And for these kinds of stochastic linear or non-linear evolution equations-- generally called Langevin equations-- one can show that, with the appropriate choice of the noise, we'll be able to asymptotically reproduce the probability distribution that we knew.
But now, the question is, of course, what if you don't know the probability distribution. And I'll say a few words about that.
AUDIENCE: OK. Thank you.
MEHRAN KARDAR: Anything else? OK. So the lesson of this part is that the field of dynamical critical phenomena is quite rich, much richer than the corresponding equilibrium critical phenomena, because the same equilibrium state can be obtained by various different types of dynamics. I explained to you just one conservation law, but there could be some combination of conservation of energy, conservation of something else. So there is a whole listing of different universality classes that people have tabulated for the dynamics.
But all of this was assuming that you know what the ultimate answer is because, in all cases, the equations that we were writing depended on some kind of a gradient descent-- conserved or non-conserved-- of something that corresponded to the log of the probability distribution that we eventually want to reach. But maybe you don't know that, and so let me give you a particular example in the context of, let's say, surface and interface fluctuations.
Starting from things that you know and then building to something that maybe you don't. Let's start first with the case of a soap bubble. So we take some kind of a circle or whatever, and we put a soap film on top of it. And in this case, the energy cost of a deformation comes from surface tension.
And let's say the cost of the deformation is the change in area times some sigma. So I neglect the contribution that comes from the flat surface, and see what happens if I make a deformation.
If I make a deformation, I have changed the area of the film. So there is a cost that is proportional to the surface tension times the change in area. The change in area locally involves the square root of 1 plus the square of the gradient of a height profile.
So what I can do is I can define at each point on the surface how much it has changed its height from being perfectly flat. So h equals 0 is flat. The change in area is the integral dx dy of the square root of 1 plus gradient of h squared, minus 1, where the minus 1 subtracts the flat reference.
And so then you expand that. The first term is going to be the integral of gradient of h squared over 2. So this is the analog of what we had over there, only the first term.
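As a quick numerical check of this expansion -- a hypothetical snippet, not part of the lecture -- write s for the square of the gradient of h; the areal cost per unit area, sqrt(1 + s) - 1, indeed approaches s/2 for gentle slopes.

```python
import numpy as np

# s stands for (grad h)^2; the cost per unit area is sqrt(1 + s) - 1,
# whose leading small-slope behavior is s/2, i.e. the (1/2)(grad h)^2 term
s = np.array([1e-2, 1e-4, 1e-6])
exact = np.sqrt(1.0 + s) - 1.0
leading = s / 2.0
ratio = exact / leading   # approaches 1 as the slope goes to zero
```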
So you would say that the equation that you would write down for this would be dh by dt equal to some constant mu times the variation of this, which will give me something like mu sigma Laplacian of h. But because of the particles from the air constantly bombarding the surface, there will be some noise that depends on where you are on the surface and on time. And this is the non-conserved version.
And you can from this very quickly get that the expectation value of h tilde of q squared is going to be something like D over mu sigma q squared, because of this q squared. And if you ask how much fluctuation you have in real space-- the typical scale of the fluctuations in real space-- it will come from integrating 1 over q squared. And it's going to be our usual result that has this logarithmic dependence, so there will be something that ultimately will grow logarithmically with the size of the system. The constant of proportionality will be proportional to kT over sigma. So you have to choose your D and mu to correspond to this.
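The logarithmic growth can be checked by doing the q integral numerically. This is a sketch: the cutoffs 2 pi/L and pi/a and the name `film_roughness` are assumptions of the snippet, not from the lecture.

```python
import numpy as np

def film_roughness(L, a=1.0, sigma=1.0, kT=1.0, n=4000):
    """Equipartition estimate of <h^2> for a tensioned film:
    integrate d^2q / (2 pi)^2 * kT/(sigma q^2) for 2 pi/L < q < pi/a.
    After the angular integral the integrand is kT/(2 pi sigma q)."""
    qmin, qmax = 2.0 * np.pi / L, np.pi / a
    q = np.logspace(np.log10(qmin), np.log10(qmax), n)
    f = kT / (2.0 * np.pi * sigma * q)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(q)))  # trapezoid rule

# <h^2> grows like (kT / 2 pi sigma) log L: multiplying L by 100 adds a
# fixed increment log(100)/(2 pi) to the roughness
w_small, w_large = film_roughness(1e2), film_roughness(1e4)
```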
So basically, a soap film is an example of all kinds of Goldstone mode-like things that we have seen. It's a 2-dimensional entity, so it will have logarithmic fluctuations-- not very big, but ultimately, at large enough distances, it will have fluctuations.
So that was non-conserved. I can imagine that, rather than this, I have the case of a surface of a pool. So here I have some depth of water, and then there's the surface of the pool of water. And the difference between this case and the previous case-- both of them can be described by a height function.
The difference is that if I ignore evaporation and condensation, the total mass of water is going to be conserved. So I would need to have d by dt of the integral dx dy of h of x and t be 0. So this would go into the conserved variety.
And so, if I create a ripple on the surface of this compared to the surface of that, the relaxation time through this dissipative dynamics would be much longer in this case as opposed to that case. But ultimately, if I wait a sufficiently long time, both of them would have exactly the same fluctuations. That is, they would grow logarithmically with the length scale over which [INAUDIBLE].
OK, so now let's look at another system that fluctuates. And I don't know what the final answer is. That was the question, maybe, that you asked.
The example that I will give is the following-- so suppose that you have a surface. And then you have a rain of sticky materials that falls down on top of it. So this material will come down.
You'll have something like this. And then as time goes on, there will be more material that will come, more material that will come, more material that will come. So there, because the particles are raining down randomly at different points, there will be a stochastic process that is going on.
So you can try to characterize the system in terms of a height that changes as a function of t and as a function of position. And there could be all kinds of microscopic things going on, like maybe these are particles that are representing some kind of a deposition process. And then they come, they stick in a particular way. Maybe they can slide on the surface. We can imagine all kinds of microscopic degrees of freedom and things that we can put.
But you say, well, can I change my perspective and try to describe the system the same way that we did in the case of coarse graining, going from the microscopic details to the phenomenological Landau-Ginzburg description? And so you say, OK, there is a height that is growing. And what I will write down is an equation that is very similar to the equations that I had written before.
Now I'm going to follow the same kind of reasoning that we did in the construction of this Landau-Ginzburg model, which is to say that this rate of change is going to depend on all kinds of things that relate to this height that I don't quite know. So let's imagine that there is some kind of a function of the height itself. And potentially, just like we did over there, the gradient of the height, higher derivatives of the height, et cetera.
And then I will start to make an expansion of this in the same spirit that I did for the Landau-Ginzburg Model, except that when I was doing the Landau-Ginzburg Model, I was doing the expansion at the level of looking at the probability distribution and the log of the probability. Here I'm making the expansion at the level of an equation that governs the dynamics.
Of course, in this particular system, that's not the end of the story, because the change in height is also governed by this random addition of the particles. So there is some function that changes as a function of position and time, depending on whether, at that time, a particle was dropped down. I can always take the average of this to be 0, and put that average into the expansion of this, starting from a constant. Basically, if I just have a single point and I randomly drop particles at that single point, there will be an average growth velocity, an average addition to the height, that goes over here. But there will be fluctuations that are going [INAUDIBLE].
OK, but the constant is the first term in an expansion such as this. And you can start thinking, OK, what comes at the next order? Can I put something like alpha h?
Potentially-- it depends on your system-- but if the system is invariant whether you started from here or whether you started from there-- if something like gravity, for example, is not important-- you say, OK, I cannot have any function of h if my dynamics will proceed exactly the same way were I to translate this surface further up or further down. If there's no change in the future dynamics on average, then the dynamics cannot depend on h. OK, so we've gotten rid of that.
Having ruled out any function of h-- can I put something that is proportional to gradient of h? Maybe for some systems I can, but here I cannot, because h is a scalar while gradient of h is a vector. I can't set something that is a scalar equal to a vector, so I can't have this.
Yes?
AUDIENCE: Couldn't you, in principle, make your constant term in front of the gradient also a vector and [INAUDIBLE]?
MEHRAN KARDAR: You could. So there's a whole set of different systems that you can be thinking about. Right now, I want to focus on the simplest system, which is a scalar field, so that my equation can be as simple as possible, but we will see it still has sufficient complication. So you can see that if I can't have those, the next order term that I can have would be something like a Laplacian.
So you can see that this kind of diffusion equation has to emerge as a low-order expansion of something like this. And this is the reason for the ubiquity of the diffusion equation, appearing all over the place. And then you could have terms that would be of the order of the fourth derivative, and so forth. There's nothing wrong with that.
And then, if you think about it, you'll see that there is one interesting possibility that is not allowed for that system, but is allowed for this system, which is something that is a scalar: the gradient of h, squared. Now I could not have added this term for the case of the soap bubble for the following reason-- if I reverse the soap bubble so that h becomes minus h, the dynamics would proceed exactly as before. So the soap bubble has a symmetry of h going to minus h, and that symmetry should be preserved in the equation. This term breaks that symmetry, because the left-hand side is odd in h, whereas the right-hand side of this term would be even in h.
But for the case of the growing surface-- and you've seen things that are growing. Typically, if I give you something that has grown, like a tree trunk, for example, and I take a picture of a part of it so that you don't see where the center is or where the end is, you can immediately tell from the shape of this object that it was growing in some particular direction. So for growth systems, that symmetry does not exist. You are allowed to have this term, and so forth.
Now the interesting thing about this term is that there is no beta H that you can write down that is local-- some functional of h such that if you take a functional derivative with respect to h, it will reproduce that term-- it just does not exist. So you can see that immediately, as soon as we liberate ourselves from writing equations that come from the functional derivative of something, we can write down new terms that potentially have physical significance. And actually, you can see this even for two particles.
A potential V of x1 and x2 will give forces that are derivatives of it. But if you write dynamical equations, there are dynamical equations-- for example, ones that rotate x1 around x2-- with terms that will never come from taking the derivative of a potential.
So fine. So this is a candidate equation that is obtained in this context-- something that has grown. We said we are not interested in its coming from some underlying weight, but presumably this system still, if I look at it at long times, will have some kind of fluctuations. Will the fluctuations of this growing surface be like the fluctuations of the soap bubble, with this logarithmic dependence? You have a question?
AUDIENCE: So why doesn't that term-- what if I put in h times the term that we want to appear, and then I vary with respect to h? Wouldn't a term like what we want pop out, together with other terms?
MEHRAN KARDAR: Yeah, but those other terms, what do you want to do with them?
AUDIENCE: Well, maybe they're not acceptable [INAUDIBLE]?
MEHRAN KARDAR: So you're saying, why not have a term that is h gradient of h squared? The functional derivative of that does give a gradient of h squared. But among the other terms that you would generate would be a term that is h Laplacian of h. This term violates the condition that we had over here, and you cannot separate this term from that term.
So what you describe, you already see at the level over here: it violates the translational symmetry in h. And you can play around with other functions. You come to the same conclusion.
OK, so the question is, well, you added some term here. If I look at this surface that has grown, at large times, does it have the same fluctuations as we had before? A simple way to ascertain that is to do the same kind of dimensional analysis which, for the Landau-Ginzburg model, was a prelude to doing renormalization.
So we did things like the epsilon expansion, et cetera. But to see that there was a critical dimension of 4, all we needed to do was to rescale x and m, and we would immediately see that u goes to u times b to the 4 minus d. So we can do the same thing here.
We can always move to a frame that is moving with the average velocity, so that we are focusing on the fluctuations. So we can basically ignore this constant term. I'm going to rescale x by a factor of b. I'm going to rescale time by a factor of b to the z.
And this z is kind of indicative of what we've seen before-- that somehow in these dynamical phenomena, the scaling of time and space are related to some exponent. But there's also an exponent that characterizes how the fluctuations in h grow if I look at systems that are larger and larger. In particular, if I had solved that equation, rather than for a soap bubble in two dimensions, for a line-- for a string that I was pulling so that I had line tension-- the one-dimensional version of it, the one-dimensional version of an integral of 1 over q squared would be something that would grow with the size of the system.
So there I would have a chi of 1/2, for example, in one dimension. So this is the general setup. And then I would say that the first term, dh by dt, gets a factor of b to the chi minus z, because h is scaled by b to the chi and t is scaled by b to the z. The term sigma Laplacian of h gets a factor of b to the chi minus 2 from the two derivatives here-- sorry, the z and the 2 look kind of the same. This is a z. This is a 2.
And then there's the term with four derivatives-- actually, it is very easy, and maybe worthwhile, to show that the fourth-derivative term goes with a factor of b to the chi minus 4. It is always down by a factor of two scalings in b with respect to the Laplacian-- the same reason that, when we were doing the Landau-Ginzburg model, we could terminate the series at order gradient squared, because higher-order derivatives were irrelevant. They were scaling to 0.
But this nonlinear term grows like b to the 2 chi, because it's h squared, minus 2, because there are two gradients. Now, thinking about the scaling of eta takes a little bit of thought. We said that the average of eta is 0, and for the average of eta at two different locations and two different times-- these particles that are raining down are uncorrelated at different times and uncorrelated at different positions, so there are delta functions in position and in time. There's some kind of variance here, but its magnitude is not important to us.
If I rescale t by a factor of b to the z, delta of b to the z t will get a factor of b to the minus z. The delta function in position will get a factor of b to the minus d. But the noise, eta, scales as the square root of that. So what I will have is b to the minus z plus d over 2, times eta, under the rescalings that I have indicated.
Now I get rid of the factor on the left-hand side by dividing everything through by b to the chi minus z. So then this becomes dh by dt equals sigma, b to the z minus 2-- maybe I'll write it in red-- b to the z minus 2, Laplacian of h. The fourth-derivative term becomes b to the z minus 4. Then lambda over 2: this term with the gradient of h squared becomes b to the chi plus z minus 2, gradient of h squared. And the final noise term becomes b to the power z minus d over 2, minus chi.
AUDIENCE: [INAUDIBLE]?
MEHRAN KARDAR: b to the minus chi-- you're right. So altogether the noise term carries minus chi, plus z minus d over 2. That's fine.
So now I want to make this equation scale invariant-- I want to find some kind of behavior that is invariant under this rescaling. You can see that, immediately, my choice for the first term has to be z equals 2.
So basically, it says that as long as you're governed by something that is diffusive-- so that when you go to Fourier space, you have q squared-- your relaxation times are going to have this diffusive character, where time scales as distance squared. Actually, you can see immediately from the diffusion equation that the diffusion time goes like distance squared. So this is just a statement of that.
Now, it is the noise that causes the fluctuations. And if I haven't made some simple error, you will find that the coefficient of the noise term becomes scale invariant provided that I choose chi to be z minus d, over 2. And since my z was 2, I'm forced to have chi equal to 2 minus d, over 2.
And let's see if it makes sense to us. So if I have a surface such as the case of the soap bubble in two dimensions, chi is 0. And chi of 0 is actually the limiting case that corresponds to a logarithm.
If I go to the case of d equals 1-- like pulling a line and having the line fluctuate-- then I have 2 minus 1, over 2, which is 1/2, which means that, because of thermal fluctuations, this line will look like a random walk. You go a distance x; the fluctuations in height will grow like the square root of that. OK?
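A sketch of this one-dimensional result -- the mode-sum convention, the 1/L normalization, and the name `line_width_sq` are assumptions of this snippet: summing the equipartition variances over modes q_n = 2 pi n / L gives a mean square width linear in L, so the height fluctuation itself grows like sqrt(L), as for a random walk.

```python
import numpy as np

def line_width_sq(L, sigma=1.0, kT=1.0, n_modes=100_000):
    """Mean square width of a tensioned line in one dimension, summing
    the equipartition result kT/(sigma q^2) over modes q_n = 2 pi n / L,
    with a 1/L normalization in this mode convention."""
    n = np.arange(1, n_modes + 1, dtype=float)
    q = 2.0 * np.pi * n / L
    return float(np.sum(kT / (sigma * q**2)) / L)

# with these conventions w^2 = kT L / (24 sigma): doubling L doubles
# the mean square width
w24 = line_width_sq(24.0)
```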
So all of that is fine. You would have gotten exactly the same answer if you had just done a scaling such as this for the case of the Gaussian model without the nonlinearity. But just as for the Gaussian model with nonlinearities, we can now estimate whether the nonlinearity is relevant.
So here we see that the coefficient of our nonlinearity, lambda, is governed by something that is chi plus z minus 2. Our z minus 2 is 0, and our chi is 2 minus d, over 2. So the exponent is 2 minus d, over 2.
So whether or not this nonlinearity is relevant, we can see, depends on whether you're above or below two dimensions. When you are below two dimensions, this nonlinearity is relevant. And you will certainly have different types of scaling phenomena than what you would predict from the diffusion equation plus noise.
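The bookkeeping above can be collected in a small helper; the function name and dictionary keys are this sketch's own. It returns the power of b carried by each term after the rescaling, once the equation has been divided through by b to the chi minus z.

```python
def rescaled_exponents(chi, z, d):
    """Power of b multiplying each term of the growth equation after
    x -> b x, t -> b^z t, h -> b^chi h, dividing through by b^(chi - z)."""
    return {
        "laplacian": z - 2,                 # sigma * grad^2 h
        "fourth_derivative": z - 4,         # higher-gradient term
        "nonlinearity": chi + z - 2,        # (lambda/2) (grad h)^2
        "noise": (z - d) / 2.0 - chi,       # eta
    }

# choosing z = 2 and chi = (2 - d)/2 makes diffusion and noise scale
# invariant; the nonlinearity then carries b^((2 - d)/2)
e1 = rescaled_exponents(chi=0.5, z=2, d=1)   # below d = 2: relevant
e2 = rescaled_exponents(chi=0.0, z=2, d=2)   # at d = 2: marginal
```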
Of course, the interesting case is when you are at the marginal dimension of 2. Now, when you do a proper renormalization group with this nonlinearity, you will find that, unlike the nonlinearity of the Landau-Ginzburg model, which is marginally irrelevant in four dimensions-- du by dl was minus u squared-- this lambda is marginally relevant: d lambda by dl is proportional to plus lambda squared.
And actually, the epsilon expansion gives you no information about what's happening in the system. So people have done numerical simulations. And they find that there is a roughness that is characterized by an exponent of something like 0.4, so that a surface that has grown is much, much rougher than the surface of a soap bubble or the surface of the pond.
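Growth of this kind is often illustrated with ballistic deposition, where particles rain down and stick on contact. The sticky rule below is a standard toy model believed to be in this universality class, but this particular implementation -- the names, the sizes, the periodic substrate -- is an assumption of the sketch, not something from the lecture.

```python
import numpy as np

def ballistic_deposition(L, n_particles, seed=0):
    """Drop particles onto random columns of a periodic 1D substrate.
    Each particle sticks at whichever is highest: on top of its own
    column, or level with the taller of its two neighbors."""
    rng = np.random.default_rng(seed)
    h = np.zeros(L, dtype=np.int64)
    for i in rng.integers(0, L, size=n_particles):
        h[i] = max(h[i] + 1, h[(i - 1) % L], h[(i + 1) % L])
    return h

h = ballistic_deposition(L=256, n_particles=256 * 400)
width = h.std()   # interface width after about 400 deposited layers
```

The interface generated this way roughens much faster than a noisy diffusion equation would allow, which is the hallmark of the relevant nonlinearity.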
And the key to all of this is that we wrote down equations on the basis of the generalities of symmetry that we had learned, now applied to this dynamical system; did an expansion; and found the first term to be relevant. And it's actually not that often that you find something that is relevant, so when you do, it is a reason to celebrate.
Because most of the time, things are irrelevant, and you end up with boring diffusion equations. So go find something that is relevant. And that's my last message to you.