Lecture 13: From Hij Integrals to H Matrices I

Description: This lecture covers the topic of Hij integrals and H matrices.

Instructor: Prof. Robert Field

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

ROBERT FIELD: OK. So, today is the first of a pair of lectures taking us from the Schrodinger picture to the Heisenberg picture. Or the wave function picture to the matrix picture.

Now almost everyone who does quantum mechanics is rooted in the Heisenberg picture. We use the Heisenberg picture because the structure of the problem is immediately evident when you write down what you know. And it's also mostly the way you program your computers to solve any problem.

So I'm going to get you to the Heisenberg picture by dealing with the two-level problem, which should be called one of the exactly solved problems, except it's abstract, as opposed to the harmonic oscillator, particle in an infinite box, or rigid rotor. It's an exactly solved problem, and it leads us to-- or guides us to-- the new approach.

I'm going to approach this problem the algebraic or Schrodinger way. And then I'm going to describe it in a matrix way and introduce the new language and the new notation. So I can say, at least at the end of this lecture, I'll be able to say the word matrix element and everybody will know what it is, and I'll stop talking about integrals.

And one of the scary things is, when we go to the matrix picture, we stop looking at the wave function. We never think about the wave functions. And so there are all sorts of things like phases that we have to make sure we're not screwing up. Because we're playing fast and loose with symbols, and sometimes they can bite you if you don't know what you're dealing with.

But mostly, we're going to get rid of wave functions, because we know all the solutions to standard problems, and we know a lot of integrals involving those solutions to standard problems. And so basically, all we need is an index saying this wave function or this state, this integral, and you have-- and then you can write down everything you need in matrix notation, and then you can tell your friendly computer, OK, solve the problem for me.

OK. So once I give you the matrix picture, then I will talk about how you find the eigenvalues and eigenvectors of the matrix picture using a unitary transformation. And then I'll generalize from the two-level problem, which is exactly soluble. You don't need a computer for it, but it's convenient-- because you don't have to write anything down-- to N levels where N can be infinite.

And the N-level problem is in principle difficult because even a computer can't diagonalize an infinite matrix. And all of your basis sets, all of your standard problems involve an infinite number of functions. And so I've introduced a notation and a concept of solving a matrix equation, but you can't do it unless you, say, let's make an approximation, and it's called nondegenerate perturbation theory. And this is the tool for learning almost everything you want to know about complicated problems.

And it's not complicated to apply nondegenerate perturbation theory. It's just ugly. But it's really valuable because you don't have to remember how to solve a particular complicated differential equation, you just write down what you know and you do some simple stuff, and bang, you've got a solution to the problem.

And this is really what I want you to come away from this course with-- the concept that anything that requires quantum mechanics you can solve using some form of perturbation theory. So hold onto your seats.

So the Schrodinger picture is differential equations. And they're often coupled differential equations. And you don't want to go there, usually. And we're going to replace that with linear algebra. Now many of you have not had a course in linear algebra.

And so that should make you say, can I do this? And the answer is, yeah, you can do this because the linear algebra you are going to need in this course is based on one concept, and that is that the solution of coupled linear homogeneous equations involves diagonalizing or solving a determinant.

And when you solve this determinantal equation, it's equivalent to diagonalizing a matrix. And that's the language we're going to be using all the time. And so the only thing I'm not going to do is prove this fundamental theorem of linear algebra, but the notation is easy to use and the tricks are very simple.

So, with exactly solved problems, we have complete sets of energy levels and wave functions. And we know a lot of integrals-- psi i star, operator, psi j, d tau. We know these not by evaluating them, but because the functions are so simple that we can write them as simply a function of the initial and final quantum numbers. And so all of a sudden, we forget about the wave functions, because all the work has been done for us. We could do it too, but why?

So we start with the two-level problem. And the two-level problem says we have two states, psi 1 and psi 2-- still in the Schrodinger picture. And this has an energy which we could call E1, and that would be H11, the diagonal integral of the Hamiltonian between the 1 function and the 1 function. And this is E2.

Now these two states have an interaction. They're connected by an interaction term which causes them to repel each other equal and opposite amounts. And so this is H12. Again, an integral, which you usually don't have to evaluate, because it's basically done for you. And then we get E plus and E minus, and the corresponding eigenfunction.

So this is the two-level problem, and it's expressed in terms of three parameters-- H11, H22, and H12. And we get the two energy levels and the two eigenfunctions. And we know how to solve this problem in the Schrodinger picture, and I'll do that first.

And so, H11 is just this integral, psi 1 star H psi 1 d tau-- and we put a hat on H for the time being. And H22 is a different integral, psi 2 star H psi 2 d tau. And H12 is psi 1 star H psi 2 d tau. And we know that the Hamiltonian operator is Hermitian, and so we can also write that this is the complex conjugate of psi 2 star H psi 1 d tau.

Anyway, we call this thing V. And there are some subtleties. If the integral between these two things is imaginary, then the 1-2 and 2-1 integrals have opposite signs, because the Hamiltonian is Hermitian. But let's just think of this as just one number. So we have E1, E2, and V.

So the two-level problem is going to let us find the eigenfunctions-- the plus and minus eigenfunctions, with coefficients like C1 plus. So we have eigenfunctions belonging to the eigenvalues E plus and E minus. And they are a linear combination of the two states.

Now, it's a two-level problem. This is a state space that contains only two states. It's not an approximation. And so completeness says we can write any function we want as a linear combination of the functions in the basis set. And so our job is going to be to find these coefficients and also to find the energy eigenvalues.

OK. So let's start to do some work. We know this thing, and we know that H psi plus-minus has to be equal to E plus-minus psi plus-minus. Now, in order to get something useful out of this, we left multiply by psi 1 star and integrate. OK. And when you do that-- well, let's just write it out.

So let's just write this out. We have H11 C1 plus-minus, because the integral of psi 1 with psi 1 through the Hamiltonian is H11. And then we get another term. I'm doing this differently from my notes, so the other term is C2 plus-minus V. OK. So plugging in what I have over here-- we know that V is the integral of psi 1 star H psi 2-- and so this is what we get.

We do the same thing, and we have the same left-hand side, but we can also now use the fact that psi plus-minus is an eigenfunction. And so we have the same left-hand side, but on the right-hand side, we now get the integral psi 1 star H psi plus-minus d tau. But that's equal to the integral of psi 1 star, E plus-minus, times C1 plus-minus psi 1 plus C2 plus-minus psi 2, d tau.

OK. And so now we use the orthonormality of the wave functions, and because we don't have any operators in here, this thing becomes simply E plus-minus times C1 plus-minus times 1, plus E plus-minus times C2 plus-minus times 0-- because psi 1 with itself is 1, and psi 1 with psi 2 is orthogonal. So now we have an equation that's quite useful.

So we're going to combine these two equations, and we get C1 plus-minus H11 plus C2 plus-minus V is equal to E plus-minus C1 plus-minus. Now we need another equation. And so we do the same thing and left multiply by psi 2 star and integrate. And we get another equation, and that is C1 plus-minus V plus C2 plus-minus times H22 minus E plus-minus is equal to 0.

So we combine the two equations that we have derived, solving-- because both equations can be solved for C1 plus-minus or C2 plus-minus. So we equate the C1 plus-minus over C2 plus-minus from the two equations. And we get this wonderful result, V over H11 minus E plus-minus is equal to H22 minus E plus-minus over V.

Now you're not going to ever do this. So yes, you can attempt to reconstruct this from what I read on the board or what's in your notes, but the important point is that we're just using what we know. We say we have eigenfunctions, and we want to find something about the coefficients of the basis functions in each of the eigenfunctions. And we want to find the eigenvalues, and we get this equation.

So, this is easy. We just multiply through, and we get V squared is equal to H11 minus E plus-minus times H22 minus E plus-minus. And this is a quadratic equation in E plus-minus.

It's a quadratic in E plus-minus, and so we have this result: E plus-minus is equal to H11 plus H22, plus or minus the square root of the quantity H11 plus H22 squared minus 4 times the quantity H11 H22 minus V squared, all over 2. This is just the quadratic formula. So we have the eigenenergies expressed in terms of the quantities we know.

We simplify the notation. E bar is H11 plus H22 over 2. And delta is H11 minus H22 over 2. So when we do that, we simplify the algebra, and we end up discovering that the eigenvalue equation is E plus-minus is equal to E bar plus or minus the square root of delta squared plus V squared. That's a simple result. This is something you should remember.

So if you have a two-level problem, the energy eigenvalues are the average energy plus or minus this quantity. This is an exact solution for a two-level problem. For all two-level problems.
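As a quick numerical sanity check on that formula (a Python sketch, not part of the lecture; the numbers are made up for illustration), the closed form E plus-minus = E bar plus or minus the square root of delta squared plus V squared can be compared against the quadratic it came from:

```python
import math

def two_level_energies(H11, H22, V):
    """Exact eigenvalues of the 2x2 Hamiltonian [[H11, V], [V, H22]]:
    E_pm = Ebar +/- sqrt(delta**2 + V**2)."""
    Ebar = (H11 + H22) / 2    # average energy
    delta = (H11 - H22) / 2   # half the zero-order splitting
    root = math.sqrt(delta ** 2 + V ** 2)
    return Ebar + root, Ebar - root

# Hypothetical numbers, chosen only for illustration:
E_plus, E_minus = two_level_energies(3.0, 1.0, 1.0)
# Consistency with the quadratic: sum = H11 + H22, product = H11*H22 - V**2.
```

The sum and product of the two roots reproduce the coefficients of the quadratic, which is exactly the kind of cheap check the lecture recommends.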

OK. And we even simplify the notation more. We call delta squared plus V squared X. And so this becomes E bar plus or minus the square root of X. Pretty compact. Now, the reason for simplifying the notation is that we're going to derive the eigenfunctions, and they involve a lot of symbols. And we want to make this as compact as possible, and so we'll do that.

OK. So when we started out, it looked like we were going to solve for these quantities, but we took a detour and solved for the eigenenergies first. And this is one of the things that happens in linear algebra. You get something you can get easily-- quickly before you get the other stuff that you want.

OK, but the second part of the job is to find these mixing coefficients. And so one thing you do is you say, well, we insist that the wave functions be normalized. After that, a lot of algebra ensues. And I'm not going to even attempt to work through it. You may want to work through it, but what I recommend doing is looking at the solution and then checking to see whether it does the things that you expect it has to do.

Because one of the things that that kind of inspection leads you to is factor-of-two errors and sign errors. OK. But C1 plus-minus is equal to the square root of 1/2 times the quantity 1 plus or minus delta over the square root of X. And C2 plus-minus is the square root of 1/2 times the quantity 1 minus or plus delta over the square root of X. OK?

So these two things are kind of simple-looking, but there's an awful lot of compression and clever manipulation in order to get this. But the important thing to notice is that we have the energy difference divided by this thing, the square root of delta squared plus V squared, which expresses the importance of the two-level interaction. And these two things enter with opposite signs.
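Those coefficients can also be checked numerically. A Python sketch (the overall phase of the minus state is my convention, not something fixed by the lecture; the test numbers are made up):

```python
import math

def two_level_state(H11, H22, V, sign):
    """Normalized eigenvector (C1, C2) of [[H11, V], [V, H22]] belonging to
    E = Ebar + sign*sqrt(X), where delta = (H11 - H22)/2 and X = delta**2 + V**2."""
    delta = (H11 - H22) / 2
    rootX = math.sqrt(delta ** 2 + V ** 2)
    C1 = math.sqrt((1 + sign * delta / rootX) / 2)
    C2 = math.sqrt((1 - sign * delta / rootX) / 2)
    if sign < 0:
        C2 = -C2   # relative phase chosen so that H c = E c holds for V > 0
    return C1, C2
```

Plugging the two states back into H c = E c, and taking their dot product, verifies normalization and orthogonality without touching a wave function.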

OK. And so one of the checks that's really easy to do is let's let V go to 0, and let's let V go to infinity. OK, remember, our Hamiltonian-- the two-level problem-- I can't write it as a matrix yet, so remember that we had E1, E2, and V. And this was the higher energy, this was the lower energy, and so delta is E1 minus E2-- I better not write that.

OK. So we normally write E1 over E2, and we have this V interaction between them. OK. And so if the interaction integral between the two functions is 0, then E1 is the higher energy, and it should correspond to psi 1 alone. Well, for the higher energy we're taking the plus combination here, and if V is equal to 0, then this is delta over delta, or 1. So 1 plus or minus 1 is either 2 or 0, and when it's the upper sign, it's 2 divided by 2, or 1-- exactly what you expect. And this one is 1 minus 1, or 0. So these two are behaving right in the V goes to 0 limit.

Now, in the V going to infinity limit, the square root of X is infinite. So that just means that this delta term goes away, right? And so we have C1 plus-minus is equal to 1 over the square root of 2. And same here. So what we get is what we call 50/50 mixing.

OK, so this also works. So I don't say that this confirms that I have not made an algebraic mistake-- I haven't-- but it's a good test because it's really easy to do. And if you're not getting what you expect, you know you've either blown a sign or a factor of two, which are the two things that you fear most in a wave-function-free approach. Because you've got nothing to do except check your algebra, and usually, you made the mistake because it was subtle, and you'll make it again when you're checking it. So these kinds of checks are really valuable.

OK. So once you know that this is likely to be correct, then you do some other things, and you check that the wave functions you've derived, with your C1 plus-minus and C2 plus-minus, are both normalized and orthogonal. And that the energy you get is such that the energy of psi plus is E plus.

And you know what E plus was. And so you just go in and you calculate what the energy should be, using the values for psi plus-minus. And so we plug those into the original equations. And you can also, again, show that the integral of psi plus star H psi minus is equal to 0, because the eigenfunctions are orthogonal. We already did that, but we plugged it into an equation here, and we got it a second time.

So in the lecture notes, there is a lengthy algebraic proof, or demonstration, that E plus-minus is equal to E bar plus or minus the square root of X, which we already derived-- there I just did it the long way. OK. So this is it-- we are about to move from the Schrodinger picture to the matrix picture.

So the trick now is to go to linear algebra-- go to the matrix picture-- and learn how to just write the equations, and what the language is, and show that it works. So suppose you have two square matrices, A and B-- they're N by N square matrices. You know the rules for matrix multiplication?

So if you wanted the mn element of the product of the two matrices, you would take the sum, j equals 1 to N, of Amj Bjn. And so when you multiply two square matrices, you get back a square matrix. And this picture is not a bad one. So you can say, all right.

So if you need a little cue to remind yourself, you take this row and multiply term by term and add the results-- this row in that column, and you get a number here. And you just repeat that. And it's really easy to tell your computer to do this, and it's rather tedious to do it yourself if it's more than a two-by-two matrix.
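That row-times-column rule really is just a few lines to hand to a computer. A minimal Python sketch (illustrative only):

```python
def matmul(A, B):
    """(AB)_mn = sum over j of A[m][j] * B[j][n]: row m of A times column n of B."""
    N = len(A)
    return [[sum(A[m][j] * B[j][n] for j in range(N)) for n in range(N)]
            for m in range(N)]

# A 2x2 example: multiplying by the "swap" matrix exchanges the columns.
C = matmul([[1, 2], [3, 4]], [[0, 1], [1, 0]])   # -> [[2, 1], [4, 3]]
```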

OK. So what about this thing C? Well, C is an N-row column matrix. So it's N rows, 1 column. It looks like this-- C1, C2, down to CN. And so if you want to multiply a square matrix by a vector, we know the rules too, OK? And again, this picture is a useful one. So let's just draw something like this. We do this and this, and that gives you one element in the column.

OK. Now, the last thing that I want to remind you of-- I better use this board. If we have a two-state problem, then the vector c1 is 1, 0, and the vector c2 is 0, 1. And these should be lowercase c's, because we tend to use lowercase letters for vectors and uppercase letters for matrices.

And so if we do c1 dagger c2, or c1 dagger c1-- now, this dagger means conjugate transpose, except, well, with real entries there isn't anything to do but transpose, so that one becomes a row and this one stays a column.

And so a row times a column gives a number. And that number is going to be-- well, let's do it. For c1 dagger c2, we have 1 times 0 is 0, and 0 times 1 is 0, and so this is 0. Well, that's orthogonality. For c1 dagger c1, we have 1 times 1 is 1, and 0 times 0 is 0. So it's 1 plus 0, or 1-- normalization. OK, all right.
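The same bookkeeping in Python (a sketch; `dagger_dot` is my name for the row-times-column product, not notation from the lecture):

```python
def dagger_dot(c, d):
    """c-dagger times d: conjugate each entry of the column vector c (turning
    it into a row) and multiply into the column d, giving a single number."""
    return sum(ci.conjugate() * di for ci, di in zip(c, d))

c1 = [1, 0]   # basis vector 1
c2 = [0, 1]   # basis vector 2
# dagger_dot(c1, c2) -> 0  (orthogonality); dagger_dot(c1, c1) -> 1  (normalization)
```

The `.conjugate()` call is what makes this work unchanged for complex entries, where the dagger really does more than transpose.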

OK. So the Schrodinger equation becomes in matrix language-- and now, one notation is-- I'm going to stop using the double underline. That means boldface. And we don't use hats anymore. Or at least if we were really consistent, when we go away from the Schrodinger picture, we don't put hats on operators, we make them boldface letters. OK.

Now, the thing is, we're so comfortable in matrix land that we don't use either. OK. But remember, we're talking about different things. So the Schrodinger equation looks like this. And so we have a matrix-- delta, V; V, delta-- times 1, 0. And that's the vector delta, V; or delta times 1, 0 plus V times 0, 1. Isn't that interesting?

Remember, the Hamiltonian, or any operator, operating on a function gives rise to a linear combination of the functions in the basis set. And so here's one of the functions, here's the other. OK. Nothing very mysterious has happened here. So this very equation is going to be HC is equal to EC.
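That statement-- the matrix times a basis vector returns a column whose entries are the expansion coefficients-- is easy to see numerically. A Python sketch with made-up values of delta and V:

```python
def matvec(H, c):
    """Apply a 2x2 matrix to a 2-component column vector."""
    return [H[0][0] * c[0] + H[0][1] * c[1],
            H[1][0] * c[0] + H[1][1] * c[1]]

delta, V = 1.0, 0.5
H = [[delta, V], [V, -delta]]       # the constant Ebar part taken out
print(matvec(H, [1, 0]))            # -> [1.0, 0.5], i.e. delta*(1,0) + V*(0,1)
```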

OK, so how do we approach this? Well, I have to introduce a new symbol, and that's going to be this symbol T. It's a matrix-- a unitary matrix. And we want it to have some special properties. Those special properties will be shown here.

OK, so first of all, we have this matrix-- T11, T--

AUDIENCE: Your Hamiltonian matrix is incorrect. So I think you mean to say E bar plus-minus?

ROBERT FIELD: I'm sorry?

AUDIENCE: I think the diagonal elements of the Hamiltonian matrix should be--

ROBERT FIELD: Yeah, OK. If we wanted the eigenvalues, OK? This is--

AUDIENCE: So the initial matrix in this case should be-- the diagonal elements should be E1 and E2. So I think you meant E bar plus delta, and then E bar minus delta.

ROBERT FIELD: Yeah. OK. There is something in the notes and something in my notes which-- we can always write the Hamiltonian as E bar, 0; 0, E bar; plus delta, V; V, minus delta. We can always take out this constant term. And this is the thing we're always working on.

And so we could call this H prime. Or we can simply say, oh, we have these two things-- this always gets added in, and it's not affected. If we take this diagonal constant matrix and apply a unitary transformation to it, you get that matrix again-- it does nothing.

OK, so we wanted to describe some special matrix where the transpose of that matrix is equal to the inverse of that matrix. Now-- yes?

AUDIENCE: So shouldn't there be a minus on there? On the prior equation, for the bottom-right entry-- the part with the delta, in the first line.

ROBERT FIELD: This is delta minus the--

AUDIENCE: Yes. But up there, you just had delta delta.

ROBERT FIELD: Yep. Thank you. OK. Now, when you have a matrix, you really like to have T minus 1 T is equal to the unit matrix. Getting the inverse of a matrix in a general problem is really awful. But for unitary matrices, all you do to get the inverse is-- you flip it on the diagonal.

Now strictly, the conjugate transpose of a unitary matrix is the inverse. But often we have real matrices. The important thing is always this flipping of the matrix on the diagonal. And that gives you the inverse, unless there is stuff in there that is complex-- then you also have to conjugate.
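A sketch of that "flip on the diagonal" rule (Python, illustrative; for a real rotation matrix the conjugation does nothing):

```python
import math

def dagger(T):
    """Conjugate transpose: flip T on the diagonal and conjugate each entry."""
    N = len(T)
    return [[T[j][i].conjugate() for j in range(N)] for i in range(N)]

def matmul(A, B):
    N = len(A)
    return [[sum(A[m][j] * B[j][n] for j in range(N)) for n in range(N)]
            for m in range(N)]

t = 0.3                                 # an arbitrary angle
T = [[math.cos(t), math.sin(t)],
     [-math.sin(t), math.cos(t)]]       # a real unitary (orthogonal) matrix
P = matmul(dagger(T), T)                # should come out as the unit matrix
```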

OK. So this conjugate transpose, it would look like-- OK? OK. Now, what we want to do is derive the matrix form of the Schrodinger equation using this unitary transformation. So we start out again with HC is equal to EC. And now, we insert TT dagger. And this is one of the things where you screw up. Whether you insert TT dagger or T dagger T, because they're both 1. And if you use the wrong one, all of your phases, everything is wrong. But it's still correct, the equations are correct, but the things you've memorized are no longer correct.

OK, so we're going to insert this unit matrix between H and C. And of course, on the right-hand side we would have E T T dagger C-- but that's 1, so we don't do anything there. And now we left multiply by T dagger. OK. And we call T dagger H T, H twiddle, and we call T dagger C, C twiddle. So we have H twiddle C twiddle is equal to E C twiddle. Now we're cooking.

OK, because we construct this unitary matrix to cause the transformed Hamiltonian to be in diagonal form. So we say that H twiddle, which is T dagger H T, is equal to E plus, 0; 0, E minus, for the 2-by-2 problem.

OK, that leads to some requirements. What are the elements of the T matrix? But the important thing is we say that T diagonalizes H. So this magic matrix gives you the energy eigenvalues. But we also have T dagger C is equal to C twiddle. And so this gives you the linear combination of the column vectors that correspond to the eigenvector.

So what is it? We have here T dagger, with elements T11 dagger, T12 dagger, T21 dagger, T22 dagger, times C. And that's supposed to be equal to C twiddle-- C1 twiddle, C2 twiddle. So we put everything together, and we discover that, OK, when we multiply this by that, we get the element that goes up here. And so the element on top is T11 dagger C1 plus T12 dagger C2, and we do the same thing for the bottom.

So we have a column vector, and it's composed of the original states. And so anyway, when we do everything, we discover that, using the solution to the two-level problem from the Schrodinger picture, we know everything. So H twiddle C twiddle plus is E plus C twiddle plus. And that's just E plus times 1, 0. And H twiddle C twiddle minus is E minus times 0, 1.

OK. This is all very confusing because the notation is unfamiliar. But the important thing is that everything you can do in the Schrodinger picture you can do in this matrix picture. And you can find the elements of this T matrix, and it turns out that, for the matrix that diagonalizes the Hamiltonian, the columns of T dagger are the eigenvectors.

So you find this matrix, it diagonalizes H. The computer tells you the eigenenergies, and it also tells you T dagger-- or T-plus, the conjugate transpose. And so with that, you know how to write, in the original basis set, what the eigenvector is for each eigenvalue.

And so the next step, which will happen in the next lecture, is the general unitary transformation for the two-level problem. Now, you expect that there is going to be a general solution for the two-level problem, because the two-level problem in the Schrodinger picture led to a quadratic equation. And that had an analytical solution.

And so there is going to be a general and exact solution to the two-level problem. And this unitary transformation is going to be written in terms of cosine theta, sine theta; minus sine theta, cosine theta. This is a matrix which is unitary, and when applied to your basis set, it conserves normalization and orthogonality.

And so the trick is to be able to find the theta that causes the Hamiltonian in question to be diagonalized. And the algebra for that will happen in the next lecture.
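That next-lecture algebra can be previewed numerically. Assuming the standard result that the diagonalizing angle satisfies tan 2 theta = 2V over H11 minus H22, up to a sign convention tied to how T is written (the choice below is mine, and it puts E plus in the upper-left corner), a Python sketch:

```python
import math

def diagonalizing_theta(H11, H22, V):
    """Rotation angle for which T-dagger H T is diagonal; this sign convention
    puts E+ in the upper-left corner.  atan2 also handles H11 == H22."""
    return 0.5 * math.atan2(-2 * V, H11 - H22)

def rotate_H(H11, H22, V, theta):
    """Compute T-dagger H T for T = [[cos t, sin t], [-sin t, cos t]]."""
    c, s = math.cos(theta), math.sin(theta)
    T = [[c, s], [-s, c]]
    Td = [[T[j][i] for j in range(2)] for i in range(2)]    # transpose (T is real)
    H = [[H11, V], [V, H22]]
    HT = [[sum(H[m][j] * T[j][n] for j in range(2)) for n in range(2)]
          for m in range(2)]
    return [[sum(Td[m][j] * HT[j][n] for j in range(2)) for n in range(2)]
            for m in range(2)]
```

With that theta, the off-diagonal elements of the transformed matrix vanish and the diagonal elements are E bar plus or minus the square root of delta squared plus V squared, matching the Schrodinger-picture answer.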

OK. Remember, you're not going to do this ever. You're going to use the idea of this unitary transformation, and you're going to use that to get-- to diagonalize the matrix. And this will lead to some formulas which you're going to like, because you can forget sines and cosines. Everything is going to be expressed in terms of things like this-- V over delta.

So we have a matrix element-- an off-diagonal matrix element-- over the energy difference. And that's the basic form of nondegenerate perturbation theory, which is applied not just to 2-by-2 problems, but to all problems. And so you want to remember this lecture as: this is how we kill the two-level problem. Then we can discover what we did and apply it to a general problem where it's not a two-level problem anymore-- it's an infinite number of levels.
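As a preview of how well that works, here is a Python sketch comparing the exact two-level answer with the standard second-order nondegenerate-perturbation-theory estimate, E upper approximately H11 plus V squared over H11 minus H22 (that formula is derived in later lectures, not this one; the numbers are made up):

```python
import math

def exact_upper(H11, H22, V):
    """Exact upper eigenvalue: Ebar + sqrt(delta**2 + V**2)."""
    Ebar, delta = (H11 + H22) / 2, (H11 - H22) / 2
    return Ebar + math.sqrt(delta ** 2 + V ** 2)

def pt_upper(H11, H22, V):
    """Second-order perturbation-theory estimate, good when |V| << |H11 - H22|."""
    return H11 + V ** 2 / (H11 - H22)

# With V/(H11 - H22) = 0.1, the two agree to about one part in 10**4.
exact, approx = exact_upper(10.0, 0.0, 1.0), pt_upper(10.0, 0.0, 1.0)
```

The agreement improves rapidly as V over the energy difference shrinks, which is exactly the "certain approximations" condition mentioned below.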

And when certain conditions are met, it applies, and it gives you the most accurate energy levels and wave functions you could want. And you know how accurate they are going to be. And this is liberating, because now you can take a not-exactly-solved problem and you can solve it approximately. And you can use the solution to determine, OK, there's going to be some function of the quantum numbers which describes the energy levels.

Well what is that function? And what are the coefficients in that function? And how do they relate to the things you know from the Hamiltonian? So it's incredibly powerful. And once you're given the formulas for nondegenerate perturbation theory, you can solve practically any problem in quantum mechanics. Not just numerically, but with insight.

It tells you, if we make observations of some system, we determine a set of energy levels, which is called the spectrum of that operator. And the spectrum of the operator is an explicit function of the physical constants-- the unique interactions between states. And so you will then know how the experimental data determines the mechanism of all the interactions, and you can calculate the wave function.

You can't observe the wave function, but you can discover its traces in the energy levels. And then you can reproduce the energy levels, and if you have the energy levels-- the eigenstates, you can also describe any dynamics using the same formulas in here.

So this is an incredible enablement. And you don't have to look at my derivations. I'm not proud of the derivations, I'm proud of the results. And if you can handle these results that come from perturbation theory as well as you've done so far in the course, you're going to be at the research level very soon.

OK, see you on Friday.