Lecture 12: Catch Up and Review, Postulates

Description: This lecture covers the five postulates of quantum mechanics, as a catch-up and review before the exam.

Instructor: Prof. Robert Field

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

ROBERT FIELD: You know that there is an exam tomorrow night. It bears very heavily on the particle in a box, the harmonic oscillator, and the time-independent Schrodinger equation. It's really quite an amazing amount of stuff.

OK. So last time, we talked about some time-independent Hamiltonian examples. And two that I like are the half harmonic oscillator and the vertical excitation-- the Franck-Condon excitation.

Now, when you're doing time-dependent Hamiltonians-- or when you're doing time-dependent problems for a time-independent Hamiltonian-- you always start with the form of the wave function at t equals zero, and you automatically extend it to the time-dependent wave function, using the fact that you have a complete set of eigenfunctions of the time-independent Hamiltonian.
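In symbols, what presumably goes up on the board here is the expansion of the initial state in the complete set, with the time-dependent phase factors attached:

```latex
\Psi(x,0) = \sum_n c_n \psi_n(x), \qquad c_n = \int \psi_n^*(x)\,\Psi(x,0)\,dx
\quad\Longrightarrow\quad
\Psi(x,t) = \sum_n c_n \psi_n(x)\, e^{-iE_n t/\hbar}
```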

And with the time-dependent wave function, you're able to calculate almost anything you want. And I did several examples of things that you could calculate. One is the probability density as a function of time. Another is the survival probability.

And the survival probability is a really neat thing, because it says, we've got some object which is particle-like. And the time evolution makes the wave function move away from its birthplace. And that's an easy thing to understand, but what's surprising is that we think about a wave packet as localized in position, but it also has encoded in it momentum information.
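The survival probability, in the standard definition, is the squared overlap of the evolving state with itself at t equals 0. Written out using the expansion above:

```latex
P(t) = \left|\int \Psi^*(x,0)\,\Psi(x,t)\,dx\right|^2
     = \Big|\sum_n |c_n|^2\, e^{-iE_n t/\hbar}\Big|^2
```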

And so when the wave packet moves away from its starting point-- if it starts at rest-- the initial fast changes are in momentum. And the momentum-- the change of the momentum-- is sampling the gradient of the potential.

And that's something you usually want to know: the gradient of the potential at a turning point, at an energy you know. And that is a measurable quantity. And it's really a beautiful example of how you can find easily observable, or easily calculable, quantum mechanical things that reflect classical mechanics.

OK. I talked about grand rephasings for problems like the harmonic oscillator, and the rigid rotor, and the particle in a box-- you have this fantastic property that all of the energy level differences are integer multiples of a common factor. And that guarantees that you will have periodic rephasings at times related to that common factor.

And so that enables you to observe the evolution of something for a very long time and to see whether the rephasings are perfect or not quite perfect. And the imperfections are telling you something beyond the simple model-- they're telling you about anharmonicity or some other thing that makes the energy levels not quite integer multiples of a common factor.
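For the harmonic oscillator, as a sketch of why the rephasing is perfect: with E_n = h-bar omega (n + 1/2), every term in the expansion returns to itself, up to a common overall phase, after one classical period,

```latex
e^{-iE_n T/\hbar}\Big|_{T = 2\pi/\omega} = e^{-2\pi i\,(n + 1/2)} = -1 \quad\text{for every } n,
```

so at t = 2 pi / omega every term picks up the same overall phase, and the probability density returns exactly to its t = 0 form. Anharmonicity spoils the integer relationship, and the recurrences degrade.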

Now, one of the nicest things is the illustration of tunneling. And we don't observe tunneling directly. Tunneling is a quantum mechanical thing. But it's encoded in what's most easily observed-- the energy level pattern. It's encoded as a level staggering.

Now, we talked about this problem, where we have a barrier in the middle. And with a barrier in the middle, half of the energy levels are almost unaffected, and the other half are affected a lot. Now, here is a problem that you can understand-- in the same way, there's no tunneling here, but there is some extra stuff here.

And the level diagram can tell you the difference between these two things. Does anybody want to tell me what's different? What is the qualitative signature of this? Yes.

AUDIENCE: The spacings move up a little bit. The odd spacings are raised in energy, and the even spacings are just about unaffected.

ROBERT FIELD: Even symmetry levels are shifted up.

AUDIENCE: Right. Sorry.

ROBERT FIELD: And so now you're ready to answer this one. If the even symmetry levels are shifted up, because they feel the barrier, what about the even symmetry levels here? Yes?

AUDIENCE: They would be shifted down.

ROBERT FIELD: Right. And so you get a level staggering where, in this case, the lowest level is close to the next higher one. And in this one, the lowest level is shifted way down, and the next one is not shifted. And then we get the doubling, or the pairing.

So that's a kind of intuition that you get just by looking at these problems.

Now, one thing that is really beautiful is, when you have a barrier like this, since this part of the potential problem is something that is exactly solved, you propagate the wave function in from the sides and they have the same phase. So there is no accumulation of phase under the barrier.

And that means the levels that are trying to propagate under this barrier are shifted up, because they have to accumulate enough phase to satisfy the boundary conditions at the turning points. Here, you're going to accumulate more phase in this special region. Phase is really important.

OK. So today we're going to talk-- and this lecture is basically not on the exam, although it does connect with topics on the exam and makes it possible to understand them better. So instead of learning about the postulates in an abstract way at the beginning, before you know what they're for, now we're going to review what we understand about them.

And so one thing is, there is a wave function. And now we're considering not just one dimension, but any number of dimensions. And this is the state function that tells you everything you're allowed to know about the system.

And if you have this, you can calculate everything. If you know how observables relate to this, well, you're fine. You can then use that to describe the Hamiltonian.

Hermitian operators are the only kind of operators you can have in quantum mechanics. And they have the wonderful property that their eigenvalues-- all of them-- are real, because they correspond to something that's observable. And when you observe something, it's a real number. It's not a complex number. It's not an imaginary number.

So what is Hermitian? And why does that ensure that you always get a real number?

OK. You all know, in quantum mechanics, that if you do an experiment, and the experiment can be represented by some operator, you get an eigenvalue of that operator-- nothing else-- one experiment, one eigenvalue. 100 experiments-- you might get several eigenvalues.

And the relative probabilities of the different eigenvalues tell you something about: what is this state? What was the initial preparation?

So I've already said something about this. The expectation value of a particular operator for the wave function is related to probabilities times eigenvalues. OK. The fifth postulate is the time-dependent Schrodinger equation. And I don't need to talk much about that, because it's been all around us.

And then I'm going to talk about some neat stuff where we start using words that are very instructive-- so completeness, orthogonality, commutators-- simultaneous eigenfunctions.

And this is really important. Suppose we have an easy operator, which commutes with the Hamiltonian. And it's easy for us to find the eigenvalues of that operator-- the eigenfunctions of that operator. Those eigenfunctions are automatically eigenfunctions of the Hamiltonian, which is a hard operator. So we like them.

And so we are interested in, can we have simultaneous eigenfunctions of several operators? And the answer is yes, if the operators commute. We use the term basis set to describe the complete set of eigenfunctions of some operator.

And we use the words mixing coefficients and mixing fraction. So here we have a wave function that is expressed as a linear combination of basis functions. And the coefficient in front of each one is the mixing coefficient.

Now, if we're talking about probabilities, we care about mixing fractions, which are basically the mixing coefficient squared-- or, more precisely, the square modulus. So these are words that are part of the language of someone who uses quantum mechanics to understand stuff.

OK. So I'm going to try to develop this with some useful tricks. So we have a state function, which is a function of coordinates and time. And this thing is telling you, what is the probability of finding this system at that coordinate at the specified time? And the volume element is dx, dy, and dz.

Now, often, we use an abbreviation-- d tau-- for the volume element, because we're going to be dealing with problems that are not just single particle, but many particles. And so we use this notation to stand for the differential associated with every coordinate in the problem, and we're going to integrate over those coordinates-- or at least over some of them.
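For an N-particle system in three dimensions, for instance, the shorthand stands for

```latex
d\tau = dx_1\, dy_1\, dz_1 \cdots dx_N\, dy_N\, dz_N .
```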

So this is telling you about a probability within a volume element at a particular point in space and time. Now, one of the things that we're told is that the wave functions must be well behaved. And that says something about the wave functions and the derivatives of the wave functions.

So what's well-behaved? Well, the wave function is normalizable. Now, there are two kinds of normalization-- normalization to one, implying that the system is somewhere within a specified range of coordinates-- there's one particle in the system. Whatever.

And there's normalization to number density. When you have a free particle, the free particle is not confined. And so you can't say, I'm normalizing so there's one particle somewhere, because that means there is no particle density anywhere.

And so we can extend the concept of normalization to say, there's one particle in a particular length of the system.
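As a sketch of that second kind of normalization, for a standard plane-wave free particle:

```latex
\psi(x) = N e^{ikx}, \qquad |\psi(x)|^2 = |N|^2 = \text{(particles per unit length)},
```

so choosing |N| squared = 1/L puts one particle in each length L, even though the integral of |psi| squared over all space diverges.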

And that was a problem that got removed from the draft of the exam. But one thing that happens is, if I write a beautiful problem, and it gets bumped from an exam, it might appear in the final.

OK. So normalizable is part of well-behaved, along with continuous and single-valued-- we'll talk about all of these-- and square integrable.

OK. Continuous-- The wave function has to be continuous everywhere. The first derivative of the wave function, with respect to coordinate-- we already know from the particle in a box that that is not continuous at an infinite wall. So an infinite wall-- not just a vertical wall, but one that goes to infinity-- guarantees that this guy is not continuous.

But that's a pretty dramatic thing. The second derivative is not continuous when you have a vertical step, which is not infinite.

Now, when you have problems where you divide space up into regions, you're often trying to establish boundary conditions between the different regions or at the borders. And the boundary conditions are usually expressed in terms of continuity of the wave function and continuity of the first derivative. And we don't often need more than that.

But you're entitled-- if the problem is sufficiently well-behaved-- to assume all of these guys are continuous, and you can use them all. OK. So normalizable-- well, normalizable means it's square integrable, and you don't get infinity unless we use this other definition of normalizable.

So one of the things that has to happen is, at the coordinate plus and minus infinity, the wave function has to go to zero. Now, the wave function can be infinite in a very small region of space. So there are singularities that can be dealt with.

But normally, you say, the wave function is never going to be infinite, and it's never going to be anything but 0 at infinity, or you're in real trouble.

Now, there is a wonderful example of a kind of a problem called the delta function. A delta function is basically an infinite spike-- infinitely thin, infinitely tall. And what it does is, it causes the first derivative to be discontinuous-- by an amount determined by the value of the wave function at the delta function.

And delta functions are computationally wonderful, because they enable you to treat certain kinds of problems in a trivial way. Like, if you have a barrier-- Yes.

AUDIENCE: Does it relate at all to the integral of the spike?

ROBERT FIELD: Yes. So we have an integral of the delta function at x i times the wave function at x, dx. And that gives you the wave function at x i.
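Written out, that sifting property of the delta function is:

```latex
\int_{-\infty}^{\infty} \delta(x - x_i)\, \psi(x)\, dx = \psi(x_i).
```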

OK. I haven't really talked much about delta functions. But because it acts like a barrier and is a trivial barrier, it enables you to solve barrier problems or at least understand them in a very quick and easy way. And vertical steps are also not physical, but we like vertical steps because it's easy to apply boundary conditions.

And so these are all just computationally tricky things that are wonderful. And we don't worry about, is there a real system that acts like a delta function or a vertical step? No. There isn't. But everything you get easily, mathematically, from these simple things is great.

OK. Did I satisfy you on the--

AUDIENCE: Sure. I'll read into it.

ROBERT FIELD: OK. And there's also a notation where you have delta of x, x i, or delta of x minus x i, and they're basically all the same sort of thing. When the argument is 0, you get the infinite spike. And there's just lots of things like that.

OK. So for every classical mechanical observable, there is a quantum mechanical operator, which is Hermitian. And the main point of Hermitian, as I said, is that its eigenvalues are real. And so what is the thing that assures that we get real eigenvalues? Well, here is the definition in a peculiar form. So we have an integral from minus infinity to infinity of some function f, complex conjugated, times A hat, times some other function g, dx.

So this could be a wave function and a different wave function. And the definition of Hermitian is this abstract definition. We have, say, the integral from minus infinity to infinity of g, times A-- let's put a hat on it-- star, f star, dx.

Well, this is kind of fancy. So we have an operator. We can take the complex conjugate of the operator. We have functions. We can take the complex conjugates of the function. But here, what we're seeing is, the operator is operating on the g function-- the function that started out on the right.

And here, what we have is this operator operating on the f function, which was initially on the left. And so this is a prescription for operating on the left. And it's also an invitation to use a really convenient and compact notation.

And that is this-- put two subscripts on A. The first subscript says the first guy is the function over here, which is complex conjugated. And the second one is the function over here, which is not complex conjugated. And so this equation reduces to A f g equals A g f star, where, now, this is a wonderful shorthand.

And this is another way of saying that A has to be equal to A dagger. Where now we're talking about operators, and matrix representations of operators. Because here we have a number with two indices, and that's how we represent elements of a matrix. And we're soon going to be playing with linear algebra, and talking about matrices.

And so this is just saying, well, we can take one matrix, and it's equal to the matrix where we take the complex conjugate of every term and switch the order of the indices. So this is a warning that we're going to be using a notation which is way simpler than taking these integrals. And so once you recognize what this symbol means, you're never going to want to go back.

OK. Why real? So let's look at a specific example, where instead of having two different functions, let's just look at one. So we have this integral f star A f dx. So that's A f f. And the definition says, well, we're going to get the--

So it says, replace the original thing by moving this-- anyway, yes. And this is A f f star. If you know how this notation translates, now you can see-- oh, well, what is this? It's just taking the complex conjugate. And these two guys are equal. And so a number is real if it's equal to its complex conjugate.

And this is just a special case. It's-- the Hermitian property is a little bit more powerful than that, but the important thing is that it guarantees that if you calculate the expectation value of an observable quantity, you're going to get a real number, if the operator is Hermitian. And it has to be Hermitian, if it's observable.
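Collecting this in the subscript shorthand:

```latex
A_{fg} \equiv \int_{-\infty}^{\infty} f^*\, \hat{A}\, g\; dx, \qquad
\text{Hermitian:}\;\; A_{fg} = \left(A_{gf}\right)^* \;\Longleftrightarrow\; \hat{A} = \hat{A}^\dagger ;
\qquad
f = g:\;\; A_{ff} = A_{ff}^* \;\Longrightarrow\; \langle A \rangle \text{ is real.}
```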

Now, it's often useful, if you have a classical mechanical observable, to translate it into a quantum mechanical operator by doing the usual replacement of x with x, and p with minus i h bar times the derivative with respect to x. If you do all that sort of stuff, you might be unlucky, and you might get a non-Hermitian operator.

And so you can generate a Hermitian operator if you write the symmetrized combination-- that's guaranteed to be Hermitian. So you take classical mechanics, you generate something following some simple rules, and if you have bad luck, it doesn't come out to be Hermitian. This is the way we make it Hermitian.
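The construction on the board is presumably the usual symmetrization, which is Hermitian by inspection:

```latex
\tilde{A} = \tfrac{1}{2}\left(\hat{A} + \hat{A}^\dagger\right), \qquad \tilde{A}^\dagger = \tilde{A}.
```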

So if this is not Hermitian, and this is not Hermitian, but we're defining something that is Hermitian-- so let's just put a little twiddle over A; that's Hermitian. OK, then we're now talking about the third postulate: each measurement of A gives an eigenvalue of A.

We've talked about this enough. But your first encounter with this was the two-slit experiment. In the two-slit experiment, the experiment is an operator. And you have some initial photons entering this operator, and you get dots on the screen. Those are eigenvalues of this operator.

Now it may be that the operator has continuous eigenvalues, but they don't have uniform probabilities. And so what you observe is a whole distribution of dots that doesn't look like anything special, and it's not reproducible from one experiment to the other, but you have this periodic structure that's appearing, which is related to the properties of the operator.

And so there are not uniform probabilities for each of the eigenvalues, and so you get that. OK, these are simple, but really beautiful.

OK, if we have a normalized state, then we can say, OK-- and we've never used this notation before. But whenever you see this symbol, it means the expectation value of an operator for some wave function, so we could actually symbolically put that wave function down here, or some symbol saying, OK, which one?

And this is equal to psi star A psi d tau, or dx. And if the wave function is normalized, we don't need to divide by a normalization integral. If it's not normalized-- like if it's a free particle-- we divide by some kind of free particle integral.
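So, written out, with the normalization integral in the denominator:

```latex
\langle A \rangle_\psi = \frac{\int \psi^*\, \hat{A}\, \psi\; d\tau}{\int \psi^*\, \psi\; d\tau},
```

where the denominator is 1 for a normalized state.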

So now the next topic, which is related to this, is completeness and orthogonality. So we have a particular operator. There exists some complete set of eigenfunctions of that operator. Usually that complete set is infinite, but they're related to each other in a simple way. You have some class of functions, and you change an integer to get a new function.

And orthogonality is, well, if you have all the eigenfunctions of an operator, if they belong to different eigenvalues, they're guaranteed to be orthogonal. Which is convenient, because that means you get lots of 0s from integrals, and we like that, because we don't have to worry about them. And you want to be able to recognize the zero integrals, so that you can move very quickly through a problem.

Completeness means, take any function defined on the space of the operator you're interested in. You might have an operator that's only operating on a particular coordinate of a many electron atom or molecule. There's lots of ways of saying, it's not over all space, but for each operator you know what space the operator in question is dealing with.

And then it's always possible to write this: some general function, defined in the space of the operator, equals the sum over all of the eigenfunctions with mixing coefficients, c j. That's what completeness tells us. And this set of all of the eigenfunctions is called the basis set. It's a complete basis set.

So it's always true, you can do this. So you know from other problems, that if you have a finite region of space, you can represent anything within that finite region via sum over Fourier components. That's a discrete sum.

If you have an infinite space, you have to do a Fourier integral, but it's basically the same thing. You're expressing a function, anything you want, in terms of simple, manipulable objects. So sometimes these things are Fourier components, and sometimes they're just simple wave functions.
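In symbols, completeness and the recipe for the mixing coefficients-- assuming an orthonormal basis set-- are:

```latex
f = \sum_j c_j\, \psi_j, \qquad c_j = \int \psi_j^*\, f\; d\tau .
```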

OK, now suppose you have two operators, a and b. If they operate over the same space, the question is, can we take eigenfunctions of one and be sure that they're eigenfunctions of the other?

OK, but let's deal with something simpler. So suppose we have psi i and psi j, both belonging to a sub i. So they both have the same eigenvalue. Well, in that case, we cannot be sure that these two functions are orthogonal.

So there is a handy dandy procedure called Schmidt Orthogonalization that says, take any two functions, and let's construct an orthogonal pair. This is amazingly useful when you're trying to understand a problem using a complete orthonormal basis set.

We know-- I'm not going to prove it, because I don't know where it is in my notes, what the sequence is, and I'm just going to forget it; I don't think I'm going to do it-- if you have functions belonging to different eigenvalues, they are automatically orthogonal. That's also really valuable, because you don't have to check.

So you have harmonic oscillator functions, and they're all orthogonal to each other. You have, perhaps, one harmonic oscillator and a different harmonic oscillator, and the functions for these two different harmonic oscillators don't know about each other. They're not guaranteed to be orthogonal. But any two eigenfunctions of this guy are orthogonal, and any two of those are orthogonal. But not between the two sets.

OK, so let's just say we have now two eigenfunctions of an operator that belong to the same eigenvalue. And that happens. There are often very many, very high degeneracies. But we want to make sure that we've got two.

So let's say, here is a number, s-- the overlap integral of psi 1 star times psi 2, dx. This is a calculable number. So you have two original functions, which are not guaranteed to be orthogonal, because they belong to the same eigenvalue, and you can calculate this number. And then we can say, let us now construct, out of psi 2, a psi 2 prime which is guaranteed to be orthogonal to psi 1.

How do we do that? Well, we define psi 2 prime to be a normalization factor, times psi 2 the original, plus some constant a times psi 1. And then we say, let us calculate the overlap integral between psi 1 and psi 2 prime.

OK, so we do that. And so we have the integral, and we have psi 1 star times the quantity psi 2 plus a psi 1, dx. Psi 1 on psi 1 is 1. So we get an a. And psi 1 star times psi 2 gives s. So this integral, which is supposed to be 0, because we want psi 1 to be orthogonal to psi 2 prime, is going to be N times the quantity s plus a.

Well, how do we satisfy this? We just make a to be minus s. Guaranteed. Now this is one of the tricks that I use the most when I do derivations. I want orthogonal functions, and this little Schmidt Orthogonalization enables me to take my simple idea and propagate it into a complicated problem. It's very valuable. You'll probably never use it, but I love it.

So if a is equal to minus s, you've got orthogonality. And the general formula is, psi 2 prime is equal to 1 minus s squared, to the minus 1/2 power, times the quantity psi 2 minus s psi 1. So this is a normalized function which is orthogonal to psi 1. And it doesn't take much work. You calculate one integral, and it's done.
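Here is a minimal numerical sketch of exactly this two-function construction. The grid and the two displaced Gaussians are illustrative choices, not anything from the lecture-- they just stand in for two normalized, non-orthogonal functions belonging to the same eigenvalue:

```python
import numpy as np

# Two normalized but non-orthogonal functions on a grid; the displaced
# Gaussians are hypothetical stand-ins for degenerate eigenfunctions.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

def braket(f, g):
    """Overlap integral <f|g>, approximated as a Riemann sum."""
    return np.sum(np.conj(f) * g) * dx

def normalize(f):
    return f / np.sqrt(braket(f, f).real)

psi1 = normalize(np.exp(-(x - 0.5) ** 2))
psi2 = normalize(np.exp(-(x + 0.5) ** 2))

s = braket(psi1, psi2)  # the calculable overlap integral s

# psi2' = (1 - s^2)^(-1/2) (psi2 - s*psi1): normalized, orthogonal to psi1
psi2_prime = (psi2 - s * psi1) / np.sqrt(1.0 - s ** 2)

print("s             =", s.real)                               # nonzero
print("<psi1|psi2'>  =", braket(psi1, psi2_prime).real)        # ~0
print("<psi2'|psi2'> =", braket(psi2_prime, psi2_prime).real)  # ~1
```

The same recipe, applied function by function to a whole set, is the Gram-Schmidt procedure for orthogonalizing a basis.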

Now later in the course, we're going to talk about a secular determinant that you use to solve for all the eigenvalues and eigenfunctions of a complicated problem. And often, when you do that, you use convenient functions which are not guaranteed to be orthogonal.

And there is a procedure you apply to this secular matrix, which orthogonalizes it first. Then you diagonalize a simple thing, and then you undiagonalize, if you need to. Anyway, this is terribly valuable, especially when you're doing quantum chemistry, which you're going to see towards the end. OK.

Often, we would like to express a particular expectation value as a sum over P i a i. So this is a probability. And so how do we do that? Certainly, the average of this operator over psi can be written as the sum of the eigenvalues of A, each times its probability.

And what are they? Well, you can show that this is equal to c i-- I better be careful here. Well, let's just do it. Well, I'll just write it. Where this is the mixing coefficient of the eigenfunction of a in the original function.

So we get probabilities, mixing fractions, and we have mixing coefficients. And this is the language we're going to use to describe many things.
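In symbols-- with the correction that comes up in a moment-- the relations are:

```latex
\psi = \sum_i c_i\, \phi_i, \qquad P_i = |c_i|^2, \qquad
\langle A \rangle_\psi = \sum_i P_i\, a_i .
```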

AUDIENCE: [INAUDIBLE]

ROBERT FIELD: I'm sorry?

AUDIENCE: The probability doesn't have an a i, the average does. You just remove the a i.

ROBERT FIELD: So the probability--

AUDIENCE: The probability--

ROBERT FIELD: Oh, yes. You see, when I start lecturing from what's in my addled brain-- OK, thank you. OK. Now let's do some really neat things.

So we have a commutator of two operators. If that commutator is 0, then all non-degenerate eigenfunctions of A are eigenfunctions of B. If this is not equal to 0, then we can say something about the variances of A and B. So the product of these variances is greater than or equal to minus 1/4 times the expectation value of the commutator, squared-- I better write this on the board below.

And this is greater than 0, and real. This is the uncertainty principle. So it's possible to prove this, and it's really strange, because we have a square of a number here. We think the square of a number is going to be positive, but not if it's imaginary. Most non-zero commutators are imaginary. And so the square is negative, and the minus sign cancels it.

So the joint uncertainty is related to the expectation value of a commutator. And this all traces back to the commutator of x and p x, which is equal to i h bar. And this commutator is imaginary. And everything that appears in here comes from the non-commutation of coordinate and momentum.

And this is why this commutator is often regarded as the foundation of quantum mechanics. Because all of the strangeness comes from it. So yes, this is surprising. It's saying that, when we have a non-zero commutator, this is what determines the joint uncertainty of two properties.

This commutator is always imaginary-- that is a big, big surprise. And as a result, the joint uncertainty is greater than 0. If the two operators don't commute, it's because x and P don't commute. It's really scary. OK, what time is it? We've got five minutes left, and I can do one more thing. Well, I guess I'm going to be just talking about--

So the uncertainty principle. If we know operators A and B, we can calculate their commutator. This is a property of the operators. It doesn't have to do with constructing some clever experiment where you try to measure two things simultaneously. It says, all of the problems with simultaneous observations of two operators-- two things-- come from the structure of the operators. They come from their commutation rule, which traces all the way back to the commutator between x and P.

So at the beginning, I said I don't like these experiments, where we try to confine the coordinate of the photon or the electron, and it results in uncertainty in the measurement of the conjugate property, the momentum, or something like that. These experiments depend on your cleverness, but this doesn't. This is fundamental. So I like that a lot better.
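Collecting the inequality and the one commutator it all traces back to:

```latex
\sigma_A^2\, \sigma_B^2 \;\ge\; -\tfrac{1}{4}\left(\langle [\hat{A},\hat{B}] \rangle\right)^2 ;
\qquad [\hat{x},\hat{p}_x] = i\hbar
\;\Longrightarrow\;
\sigma_x^2\, \sigma_p^2 \ge -\tfrac{1}{4}(i\hbar)^2 = \tfrac{\hbar^2}{4},
\quad \sigma_x \sigma_p \ge \tfrac{\hbar}{2}.
```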

OK, the last thing I want to talk about, which I have just barely enough time to do, is, suppose we have a wave function. Let's call it psi "2"-- in quotes-- of x, for the particle in a box-- the particle in an infinite box. This is a wave function which is not the eigenfunction, but it is constructed to look like the eigenfunction, mainly because it has the right number of nodes.

And so suppose we call this thing some normalization factor, times x, times x minus a, times x minus a over 2. This guarantees you have a node at x equals 0. This guarantees you have a node at x equals a. This guarantees you have a node at x equals a over 2. So this has the generic property of the second eigenfunction for the particle in a box.

And this is a very clever guess, and often you want to make a guess. And so, how well do you do with a guess? And so this function-- this part of it looks sort of like this between 0 and a, and this part of it looks like this. And we multiply these two together, and we get something that looks like this. Which, at least as a sketch, looks like the n equals 2 eigenfunction for the particle in a box.

So let's just go through and see how well we can do. So first of all, we have to determine N. So we do the normalization integral-- psi 2 star psi 2 dx. That has to be 1. Now, what we do next is kind of a cheat, because we normally do this when we don't know the eigenfunctions. But we do know these eigenfunctions, so we can expand this function in terms of the particle in a box eigenfunctions.

So we use these things, which you know very well. Now we have a sine function here. That's because I've chosen the box that doesn't have its left edge at 0-- it's symmetric about x equals 0. And that would be appropriate for this kind of function.

So anyway, when we do this, we find the mixing coefficients. And I know I just said something wrong, and we don't have time to correct it. Because I said the wave function is 0 at x equals 0 and at x equals a, and now, all of a sudden, I'm using a symmetric box-- this does not matter, because the calculation is done correctly.

And what we end up getting is that the normalization factor is-- 800-- I'm sorry, the square root of 840-- this is algebra!-- times a to the minus 7/2. And then we take the overlap integrals of this with the square root of 2 over a times the sine functions, and we get that c 2n is equal to 6 times the square root of 1680, divided by the quantity 2n pi, cubed. And that becomes equal to 0.9914 times n to the minus 3.

So this is a general formula which you can derive. I don't recommend it, and I don't think it's really important. The important thing is what I'm about to say. And I have no time. This coefficient is almost 1. So when we calculate the energy using these functions, we get that the energy of this trial function is equal to 4 E1, times 0.983, times the sum from n equals 1 to infinity of n to the minus 4.

OK, the first term of this sum is 1. And so we have something that looks like 4 times E1. Now, the sum is larger than one, and the product of these two factors is larger than 1. And so what we get is that the trial energy is larger, but only slightly larger, than the exact result.

So this is sort of a taste of a variational calculation. We can solve for the form of a function by doing a minimization of the energy of that function. And that function will look like the true function, but its energy will always be larger than that of the true function. But it's great, because the bigger the calculation, the better you do.
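As a numerical sketch of this estimate-- not part of the lecture, and the grid size and the cutoff on the sum are arbitrary choices-- the trial function, its expansion coefficients, and its energy can all be checked directly:

```python
import numpy as np

# Check the trial function phi(x) = sqrt(840) a^(-7/2) x (x - a/2)(x - a)
# against the exact n = 2 particle-in-a-box energy, E_2 = 4*E_1.
a = 1.0
x = np.linspace(0.0, a, 20001)
dx = x[1] - x[0]
phi = np.sqrt(840.0) * a ** -3.5 * x * (x - a / 2) * (x - a)

print("normalization:", np.sum(phi ** 2) * dx)  # ~1

def c(n):
    """Mixing coefficient of the n-th box eigenfunction in phi."""
    psi_n = np.sqrt(2.0 / a) * np.sin(n * np.pi * x / a)
    return np.sum(psi_n * phi) * dx

for n in (2, 4, 6):  # odd-n coefficients vanish by symmetry
    print(f"c_{n} = {c(n):.5f}   vs  6*sqrt(1680)/(n*pi)**3 = "
          f"{6.0 * np.sqrt(1680.0) / (n * np.pi) ** 3:.5f}")

# Energy in units of E_1 = hbar^2 pi^2 / (2 m a^2):  <E>/E_1 = sum c_n^2 n^2
E_over_E1 = sum(c(n) ** 2 * n ** 2 for n in range(1, 400))
print("E/E1 =", E_over_E1)  # ~4.2555, slightly above the exact value 4
```

The printed energy comes out a few percent above 4 E1, which is the variational theorem doing exactly what the lecture says: the trial energy is always an upper bound on the true energy.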

And that's how most of the money-- the computer time in the world-- is expended: doing large variational calculations to find eigenfunctions of complicated problems. OK, good luck on the exam. I hope you find it fun, and I meant it to be fun.