Lecture 19: Spectroscopy: Probing Molecules with Light

Description: This lecture discusses time-dependent quantum mechanics.

Instructor: Prof. Robert Field

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

ROBERT FIELD: This lecture is not relevant to this exam or any exam. It's time-dependent quantum mechanics, which you probably want to know about, but it's a lot to digest at the level of this course. So I'm going to introduce a lot of the tricks and terminology, and I hope that some of you will care about that and will go on to use this. But mostly, this is a first exposure, and there's a lot of derivation. And it's hard to see the forest for the trees.

OK, so these are the important things that I'm going to cover in the lecture. First, the dipole approximation-- how can we simplify the interaction between molecules and electromagnetic radiation? This is the main simplification, and I'll explain where it comes from. Then we have transitions that occur. And they're caused by a time-dependent perturbation where the zero-order Hamiltonian is time-independent, but the perturbation term is time-dependent. And what does that cause? It causes transitions.

We're going to express the problem in terms of the eigenstates of the time-independent Hamiltonian, the zero-order Hamiltonian, and we know that these always have this time-dependent factor if we're doing time-dependent quantum mechanics. The two crucial approximations are going to be the electromagnetic field is weak and continuous. Now many experiments involve short pulses and very intense pulses, and the time-dependent quantum mechanics for those problems is completely different, but you need to understand this in order to understand what's different about it.

We also assume that we're starting the system in a single eigenstate, and that's pretty normal. But often, you're starting the system in many eigenstates that are uncorrelated. We don't talk about that. That's something that has to do with the density matrix, which is beyond the level of this course.

And one of the things that happens is we get this thing called linear response. Now I went for years hearing the reverence that people apply to linear response, but I hadn't a clue what it was. So you can start out knowing something about linear response.

Now this all leads up to Fermi's golden rule, which explains the rate at which transitions occur between some initial state and some final state. And there is a lot more complexity in Fermi's golden rule than what I'm going to present, but this is the first step in understanding it. Then I'm going to talk about where do pure rotation transitions come from and vibrational transitions. Then at the end, I'll show a movie which gives you a sense of what goes on in making a transition be strong and sharp.

OK, I'm a spectroscopist, and I use spectroscopy to learn all sorts of secrets that molecules keep. And in order to do that, I need to record a spectrum, which basically is you have some radiation source. And you tune its frequency, and things happen. And why do the things happen? How do we understand the interaction of electromagnetic radiation and a molecule? And there's really two ways to understand it.

We have one way: molecules as targets, photons as bullets. It's a simple geometric picture. And the size of the target is related to the transition moments, and it works. It's very, very simple. There's no time-dependent quantum mechanics. It's probabilistic.

And for the first 45 years of my career, this is the way I handled an understanding of transitions caused by electromagnetic radiation. This is wrong. It has a wide applicability. But if you try to take it too seriously, you will miss a lot of good stuff.

The other way is to use the time-dependent Schrodinger equation, and it looks complicated because we're going to be combining the time-dependent Schrodinger equation and the time-independent Schrodinger equation. We're going to be thinking about the electromagnetic radiation as waves rather than photons, and that means there is constructive and destructive interference. There's phase information, which is not present in the molecules-as-targets, photons-as-bullets picture.

Now I don't want you to say, well, I'm never going to think this way because it's so easy to think about trends. And, you know, the Beer-Lambert law, all these things that you use to describe the probability of an absorption or emission transition, this sort of thing is really useful.

OK, so this is the right way, and the crucial step is the dipole approximation. So we have electromagnetic radiation being a combination of electric field and magnetic field, and we can describe the electric field in terms of-- OK. So this is a vector, and it's a function of a vector in time.

And there is some magnitude E0, which is a vector, times the cosine of k dot r minus omega t. This k is the wave vector, whose magnitude is 2 pi over the wavelength, and it points in the direction that the radiation is propagating. r is the position coordinate, and omega is the frequency. There's a similar expression for the magnetic part-- same thing. I should leave this exposed.
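Written out in symbols, the field being described is roughly this (a sketch in standard notation, not a verbatim transcription of the board):

\[ \vec{E}(\vec{r},t) = \vec{E}_0 \cos(\vec{k}\cdot\vec{r} - \omega t), \qquad |\vec{k}| = \frac{2\pi}{\lambda}, \]

with an analogous expression for the magnetic field \(\vec{B}(\vec{r},t)\).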

So we know several things. We know for electromagnetic radiation that the electric field is always perpendicular to the magnetic field. We know the relationship between the electric and magnetic field amplitudes, and we have this k vector, which points in the propagation direction.

Now the question is-- because we have a molecule and we have the electromagnetic radiation-- the question is, what's a typical size for a molecule in the gas phase? Well, anyone? Pick a number. How big is a molecule?

STUDENT: A couple angstroms?

ROBERT FIELD: I like a couple angstroms. That's a diatomic molecule. There are going to be people who like proteins, and they're going to talk about 10 or 100 nanometers.

But typically, you can say 1 nanometer or 2 angstroms or something like that. Now we're going to shine light at a molecule. What's the typical wavelength of light that we use to record a spectrum? Visible wavelength. What is that?

STUDENT: 400 to 700 nanometers?

ROBERT FIELD: Yeah, so the wavelength of visible light is on the order of, say, 500 nanometers. And if it's in the infrared, it might be 10,000 nanometers. In the ultraviolet, it might be as short as 100 nanometers.

But the point is that this wavelength is much, much larger than the size of a molecule, so this picture here is complete garbage-- even the ratio in the picture is garbage. The electric field or the magnetic field that the molecule sees is constant over the length of the molecule to a very good approximation.

So now we have this expression for the field, and this k dot r is a very, very small number times a still pretty small number, so it's very small. That means we can expand this in a power series and throw away everything except the omega t. That's the dipole approximation.

So all of a sudden, we have as our electric field just E0 cosine omega t. That's fantastic. So we've gotten rid of the spatial degree of freedom, and that enables us to do all sorts of things that would have required a lot more justification.
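To put rough numbers on that argument (an estimate, not from the board): for a molecule of size \(a \approx 1\) nm and visible light with \(\lambda \approx 500\) nm,

\[ |\vec{k}\cdot\vec{r}| \lesssim \frac{2\pi a}{\lambda} \approx \frac{2\pi \times 1\ \mathrm{nm}}{500\ \mathrm{nm}} \approx 10^{-2} \ll 1, \qquad \text{so} \qquad \vec{E}(\vec{r},t) \approx \vec{E}_0 \cos(\omega t). \]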

Now sometimes we need to keep higher-order terms in this expansion. We've kept none of them, just the zero-order term. And if we do keep them, that's called quadrupole or octupole or hexadecapole, and there are transitions that are not dipole-allowed but are quadrupole-allowed. And they're incredibly weak because k dot r is really, really small.

Now the intensity of quadrupole-allowed transitions is on the order of a million times smaller than dipole. So why go there? Well, sometimes the dipole transitions are forbidden. And so if you're going to get the molecule to talk to you, you're going to have to somehow make use of the quadrupole transitions. But it's a completely different kind of experiment because you have to have an incredibly long path length and a relatively high number density. And so you don't want to go there, and that's something that's beside-- aside from what we care about.

So now many of you are going to be doing experiments involving light, and that will involve the electric field. Some of you will be doing magnetic resonance, and they will be thinking entirely about the magnetic field. The theory is the same. It's just the main actor is a little bit different.

Now if we're dealing with an electric field, we are interested in the symmetry of this operator, which is the electric field dotted into the molecular dipole moment, and that operator has odd parity. And so now I'm not going to tell you what parity is. But because this has odd parity, there are only transitions between states of opposite parity, whereas this, the magnetic operator, has even parity. And so they only have transitions between states of the same parity.

Now you want to be curious about what parity is, and I'm not going to tell you. OK, so the problem is tremendously simplified by the fact that now we just have a time-dependent field, which is constant over the molecule. So the molecule is seeing an oscillatory field, but the whole molecule is feeling that same field.

OK, now we're ready to start doing quantum mechanics. So the interaction term, the thing that causes transitions to occur-- the electric interaction term-- we're going to call H1, because it's a perturbation. We're going to be doing something in perturbation theory, but it's time-dependent perturbation theory, which is a whole lot more complicated and rich than ordinary time-independent perturbation theory.

Now many of you have found time-independent perturbation theory tedious and algebraically complicated. Time-dependent perturbation theory for these kinds of operators is not tedious. It's really beautiful. And there are many, many cases. It's not just having another variable. There's a lot of really neat stuff.

And what I'm going to present today or I am presenting today is the theory for CW radiation-- that's continuous radiation-- really weak, interacting with a molecule or a system in a single quantum state initially. And it's important. The really weak and the CW are two really important features. And the single quantum state is just a convenience.

We can deal with that. That's not a big deal, but it does involve using a different, more physical, or a more correct definition of what we mean by an average measurement on a system of many particles. And you'll hear the word "density matrix" if you go on in physical chemistry. But I'm not going to do anything about it, but that's how we deal with it.

OK, so this is going to be minus mu-- it's a vector-- dot E of t, which is also a vector. Now a dot product, that looks really neat. However, this is a vector in the molecular frame, and this is a vector in the laboratory frame. So this dot product is a whole bunch more complicated than you would think.
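In symbols, the interaction term being described is (a sketch):

\[ \hat{H}^{(1)}(t) = -\vec{\mu}\cdot\vec{E}(t) = -\vec{\mu}\cdot\vec{E}_0 \cos(\omega t), \]

where \(\vec{\mu}\) lives in the molecule-fixed frame and \(\vec{E}_0\) in the laboratory frame.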

Now I do want to mention that when we talk about the rigid rotor, the rigid rotor is telling us the probability amplitude of the orientation of the molecular frame relative to the laboratory frame. So that is where all of the information about these two different coordinate systems resides, and we'll see a little bit of that.

OK, there's a similar expression for the magnetic term. I'm just not going to write it down because it's just too much stuff to write down. So the full Hamiltonian can be expressed as H0 plus H1 of t. This looks exactly like time-independent perturbation theory, except this guy, which makes all the complications, is time-dependent.

But this says, OK, we can find a whole set, a complete set of eigenenergies and eigenfunctions. And we know how to write the solutions of the time-dependent Schrodinger equation if this is the whole game. So we're going to use these as basis functions just as we did in ordinary perturbation theory. So H0 operating on some eigenfunction phi n gives En phi n, and the corresponding explicitly time-dependent function is phi n times e to the minus i En t over h-bar. This thing is a solution to the time-dependent Schrodinger equation.
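As equations (a sketch of what is on the board):

\[ \hat{H}^{(0)}\,\phi_n = E_n\,\phi_n, \qquad \Psi_n(t) = \phi_n\, e^{-i E_n t/\hbar}, \]

and each \(\Psi_n(t)\) satisfies the time-dependent Schrodinger equation as long as \(\hat{H}^{(1)}\) is off.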

And so when the external field is off, then the only states that we consider are eigenstates of the zero-order Hamiltonian, and they can be time dependent. But if we write psi n star of t times psi n of t, well, that's not time dependent if this is an eigenstate. So the only way we get time dependence is by having this time-dependent perturbation term.

OK, so let's take some initial state. And let us call that initial state some arbitrary state. And we can always write this as a superposition of zero-order states.

OK, and now, unfortunately, both the coefficients in this linear combination and the functions are time dependent. So this means when we're going to be applying the time-dependent Schrodinger equation, we take a partial derivative with respect to t, we get derivatives with this and this. So it's an extra level of complexity, but we can deal with it, because one of the things that we keep coming back to is that everything we talk about is expressed as a linear combination of t equals zero eigenstates of the zero-order Hamiltonian.
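In symbols, the superposition being described is (a sketch):

\[ \Psi(t) = \sum_n c_n(t)\,\Psi_n(t) = \sum_n c_n(t)\,\phi_n\, e^{-i E_n t/\hbar}, \]

where both the coefficients \(c_n(t)\) and the basis functions \(\Psi_n(t)\) carry time dependence.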

OK, so the time-dependent Schrodinger equation-- i h-bar, partial with respect to t, of the wave function is equal to the Hamiltonian operating on the wave function. OK, that's our friend-- or our new friend, because the old friend was too simple. And so, well, we can represent this partial derivative just using dots, because the equations I'm going to be putting on the board are hideous, and so we want to use every abbreviation we can.

This is written as a product of time-dependent coefficients and time-dependent functions. When we apply the derivative to it, we're going to get derivatives of each. And so that's the left-hand side. OK, and let's look at this left-hand side for a minute.

OK, so we've got something that we don't really know what to do with, but this guy, we know that this is-- this time-dependent wave function is something that we can use the time-dependent Schrodinger equation on and get a simplification. So the left-hand side-- I haven't written the right-hand side. I'm just working on the left-hand side of what we get when we start to write this equation. And what we get is we know that the time dependence of this is equal to 1 over i h-bar times the Hamiltonian operating on phi n. Is that what I want? I can't read my notes so I have to-- I have to be-- yeah, so we've just taken that 1 over i h-bar. This is going to be the time-independent Hamiltonian, the zero-order Hamiltonian. And we know what we get here. Yes?

STUDENT: So all your phi n's, those are the zero-order solutions?

ROBERT FIELD: That's correct.

STUDENT: So they're unperturbed states?

ROBERT FIELD: They're unperturbed eigenstates of H0. And if it's psi n of t, it has the e to the minus i En t over h-bar factor implicit, and we're going to be using that. All right, so when we take that partial derivative, we get a simplification.

OK, let me just write the right-hand side of this equation too. So we have the simplified left-hand side, which is psi n c-- I've never lectured on time-dependent perturbation theory before. And so although I think I understand it, it's not as available in core as it ought to be.

OK, so we have this-- where did the wave function go? Well, there's got to be a phi n in here-- and then it's minus i over h-bar, times En, times cn of t, times phi n of t. That's the left-hand side in the bracket here.
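Written out, the left-hand side being described is (a sketch of the board algebra):

\[ i\hbar\,\frac{\partial \Psi}{\partial t} = i\hbar \sum_n \left[ \dot{c}_n(t)\,\Psi_n(t) - \frac{i}{\hbar}\,E_n\, c_n(t)\,\Psi_n(t) \right] = i\hbar \sum_n \dot{c}_n(t)\,\Psi_n(t) + \sum_n E_n\, c_n(t)\,\Psi_n(t), \]

using \(\dot{\Psi}_n = (1/i\hbar)\,\hat{H}^{(0)}\,\Psi_n = -(i/\hbar)\,E_n\,\Psi_n\).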

OK, and the right-hand side of the original equation, that is just the sum over n of cn of t, times En plus H1 of t, acting on phi n of t. OK, it takes a little imagination, but this term and the corresponding term over there are the same. This happened when we did non-degenerate perturbation theory.

We looked at the lambda-to-the-one equation. There was a cancellation of two ugly terms. And so what ends up happening is we get a tremendous simplification of the problem. So the left-hand side of the equation has the form i h-bar times the sum over n of cn dot of t, psi n of t, and the right-hand side has the form over here without the extra term-- the sum over n of cn of t, H1 of t, psi n of t.
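So the simplified equation, written out, is (a sketch):

\[ i\hbar \sum_n \dot{c}_n(t)\,\Psi_n(t) = \sum_n c_n(t)\,\hat{H}^{(1)}(t)\,\Psi_n(t). \]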

OK, and now we have this equation. We have this simple thing here, and we have this ugly thing here. And we want to simplify this by multiplying on the left by psi f star of t and integrating with respect to tau. F is for final. So we're interested in the transition from some initial state to some final state. So we're going to massage this. And when we do that, we get-- I've clearly skipped a step, but it doesn't matter-- i h-bar cf dot of t is equal to the sum over n of cn of t, times the integral of psi f star of t, H1 of t, psi n of t, d tau.

This is a very important equation because we have a simple derivative of the coefficient that we want, and it's expressed as an integral. And we have an integral between an eigenstate of the zero-order Hamiltonian and another eigenstate. And this is just H1 f n of t. OK, so we have these guys.
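In symbols (a sketch, with the implicit phase factors written out):

\[ i\hbar\,\dot{c}_f(t) = \sum_n c_n(t) \int \Psi_f^*(t)\,\hat{H}^{(1)}(t)\,\Psi_n(t)\,d\tau = \sum_n c_n(t)\, e^{-i\omega_{nf} t} \int \phi_f^*\,\hat{H}^{(1)}(t)\,\phi_n\,d\tau, \qquad \omega_{nf} \equiv \frac{E_n - E_f}{\hbar}. \]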

So what we want to know is, all right, this is the thing that's making stuff happen. This is a matrix element of this term. Well, H1 of t, which is equal to V cosine omega t, can be written as V times 1/2 times the quantity e to the i omega t plus e to the minus i omega t. This is really neat, because you notice we have these complex oscillating field terms, and we have on each of these wave functions a complex oscillating term.

And what ends up happening is that we get this equation. i h-bar cf dot of t is equal to-- and this is-- you know, it's ugly. It gets big. A lot of stuff has to be written, and I have to transfer from my notes to here. And then you have to transfer to your paper.

And there is going to be-- there will be printed lecture notes. And in fact, there may actually be printed lecture notes for this lecture. But if not, there will be soon. OK, and so we get this differential equation, which is the sum over n of cn of t, times the integral of psi f star of t, times 1/2 V, times the quantity e to the i omega t plus e to the minus i omega t, times psi n of t, d tau.

Well, these guys have time dependence, and so we can put that in. And now this integral has the form psi f star at t equals 0, times 1/2 V, times psi n at t equals 0, multiplied by e to the minus i, omega nf minus omega, t, plus e to the minus i, omega nf plus omega, t-- where omega nf is En minus Ef over h-bar.
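Putting the pieces together, the equation being described is (a sketch):

\[ i\hbar\,\dot{c}_f(t) = \sum_n c_n(t)\,\tfrac{1}{2}\,V_{fn} \left[ e^{-i(\omega_{nf}-\omega)t} + e^{-i(\omega_{nf}+\omega)t} \right], \qquad V_{fn} = \int \phi_f^*\,\hat{V}\,\phi_n\,d\tau. \]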

So this is-- well, it's so ugly because of my stupidity here. But what we have here is a resonance integral. We have something that's oscillating fast minus something that's oscillating fast, and we have the same thing plus something that's oscillating fast. So those terms are zero, because we have an integrand that is oscillating.

I'm sorry. It's oscillating between positive and negative, positive, negative. And as long as omega is different from omega nf, those integrals are zero because this integrand, as we integrate to t equals infinity or to any time, is oscillating about zero, and it's small. However, if omega is the same as minus omega nf or plus omega nf, well, then this thing is 1 times t. It gets really big.

Now we're talking about coefficients, which are related to probabilities. And so these coefficients had better not get really big, because a probability is always going to be less than 1. OK, so what we're going to do now is collect the rubble in a form that turns out to be really useful.

So we have an equation for the time dependence of a final state, and it's expressed as a sum over n. But if we say, oh, let's make our initial state just one of those. So our initial state is-- let's call it ci. And we say, well, the system is not in any other state other than the i state, and this is weak.

So we can neglect all of the other states where n is not equal to i. And if they're not there, ci has to be 1, so we can forget about it. So we end up with this incredibly wonderful, simple equation. So we make the two approximations-- a single initial state, and the perturbation is really weak-- and we get cf of t is equal to Vfi-- the off-diagonal matrix element-- over 2 i h-bar, times the integral from 0 to t of e to the minus i, omega if minus omega, t prime, plus e to the minus i, omega if plus omega, t prime, dt prime.
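In symbols, with the system starting in the single state i, \(c_i \approx 1\), and every other coefficient neglected (a sketch):

\[ c_f(t) \approx \frac{V_{fi}}{2 i \hbar} \int_0^t \left[ e^{-i(\omega_{if}-\omega)t'} + e^{-i(\omega_{if}+\omega)t'} \right] dt', \qquad \omega_{if} \equiv \frac{E_i - E_f}{\hbar}. \]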

Well, all the complexity is gone. We have the amplitude of the final state, and it's expressed by a matrix element and some time dependence. And this is a resonant situation: if omega matches omega if, fine-- then the exponent in one term is zero, so that term just integrates to t. And we get essentially zero from the other term, because it's oscillating so fast it doesn't do anything.

But that's a problem, because this c is a probability amplitude. And so the square of c had better not be larger than 1, and this is cruising to be larger than 1. But we don't really care about cf itself. What we really care about is the rate, as opposed to the probability. The rate of increase of state f is something that we can calculate from this integral simply by dividing the probability by T in the limit that T goes to infinity. And now we get a new equation, which is called Fermi's golden rule.

OK, so I'm skipping some steps, and I'm doing things in the wrong order. But so first of all, the probability of the transition from the i state to the f state as a function of time. So the probability is going to keep growing. That's why we want to do this trick with dividing by t. What time is it? OK.

That's just cf of t squared, and that's just Vfi squared over 4 h-bar squared, times the square of the magnitude of this integral from 0 to t of the e-to-the-plus and e-to-the-minus terms, dt.

OK, the integrals survive only if omega matches omega if. And if we convert to a rate, the rate is going to be Wfi, which is proportional to Vfi squared times the sum of two delta functions-- one at Ei minus Ef minus h-bar omega, and one at Ei minus Ef plus h-bar omega. So the rate is just this simple thing-- a squared matrix element and a delta function-- saying either it's an absorption or an emission transition on resonance, and we're cooked.
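One common way of writing the result is the following (the numerical prefactor depends on the conventions chosen for the field amplitude and the delta function, so treat this as a sketch rather than the unique form):

\[ W_{f \leftarrow i} = \frac{\pi}{2\hbar^2}\,|V_{fi}|^2 \left[ \delta(\omega_{if}-\omega) + \delta(\omega_{if}+\omega) \right], \]

with one delta function picking out the stimulated emission resonance and the other the absorption resonance.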

OK, so now I want to show some pictures of a movie, which will make this whole thing make more sense. This is for a vibrational transition. So we have the electric field-- the dipole interacting with the electric field. And now let's just turn on the time dependence.

OK, so this is the interaction term. We add that interaction term to the zero-order Hamiltonian, and so we end up getting a big effect on the potential. The potential's going like this, like that. And so the eigenfunctions of that potential are going to be profoundly affected, so let's do that. Let's go to the next one.

All right, so here now we have a realistic, small field, and now this is small. You can hardly see this thing moving. OK, now what we have is the wave function of this. And what we see is that if omega is, say, 1/4 of the vibrational frequency-- if omega is much smaller than the vibrational frequency, or much larger-- we get very little effect of the time-dependent field.

You can see that the wave function is just moving a little bit. The potential is jiggling around, whether the perturbation is strong or weak. It's not on resonance. And now let's go to the resonance.

Now what's happening is the potential is moving not too much, but the wave function is diving all over the place. And if we ask, well, what does that really look like as a sum of terms, the thing that's different from the zero-order wave function is this. So zero-order wave function is one nodeless thing.

This is the time-dependent term, and it looks like the v equals 1 wave function. So what this shows is, yes, if we have a time-dependent field and it's resonant, then we get a very strong interaction even though the field is weak. And it causes the appearance of the other level, but oscillating.

And so resonance is really important, and the selection rule is really important. The selection rule for vibrational transitions has to do with the form of the dipole moment operator. Oh, I shouldn't be rushing at all. OK, so let's draw a picture. And this is the part that has puzzled me for a long time, but I've got it now.

So here we have a picture of the molecule. And this end is positive, and this end is negative. And we have a positive electrode, and we have a negative electrode. So that's an electric field.

And so now the positive electrode is saying, you'd better go away, and you'd better come here. So it's trying to compress the bond. And now this field oscillates, and so it's compressing and expanding the bond. Now that's what's going on.

But how does quantum mechanics account for it? Well, quantum mechanics says in order for the bond length to change, we have to mix in some other state. So we have the ground state, and we have an excited state that looks like that. And so the field is mixing these two.

Now that means that the operator is mu 0 plus the derivative of mu with respect to the displacement Q, times Q. So this is the thing that allows some mixing of an excited state into the ground state.

This is our friend the harmonic oscillator-- operator, displacement operator. It has selection rules delta v equals plus or minus 1, and that's all. So a vibrational transition is caused by the derivative of the-- yeah, that's-- no, derivative of the dipole moment with respect to Q.

So did I have it right? Yes. So this is something-- we can calculate how the dipole moment depends on the displacement from equilibrium, but this is the operator that causes the mixing of states.
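As equations (a standard result, sketched here rather than copied from the board): expanding the dipole moment about the equilibrium geometry gives

\[ \hat{\mu}(Q) \approx \mu_0 + \left( \frac{d\mu}{dQ} \right)_{0} Q + \cdots, \]

and because the harmonic oscillator displacement \(Q\) is proportional to \(\hat{a} + \hat{a}^\dagger\), the linear term connects only states with \(\Delta v = \pm 1\).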

So one of the things I've loved to do over the years is to write a cumulative exam in which I ask, well, what is it that causes a vibrational transition? What does a molecule have to have in order to have a vibrational transition? And also what does a molecule have to have to have a rotational transition? Well, this is what causes the rotational transition because we can think of the dipole moment interacting with a field, which is going like that or like that.

And so what that does is it causes a torque on the system. It doesn't change the dipole moment, doesn't stretch the molecule. It causes a transition, and this is expressed in terms of the interaction mu dot E. This dot product, this cosine theta, is the operator that causes this.

The relationship between the laboratory and the body-fixed coordinate systems is determined by the cosine of some angle, and the cosine of that angle is what's responsible for a pure rotational transition. And we have vibrational transitions, which are caused by the derivative of the dipole with respect to the coordinate.
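In symbols (a sketch): for a polar molecule in a laboratory-frame field along z,

\[ \hat{H}^{(1)}(t) = -\vec{\mu}\cdot\vec{E}(t) = -\mu_0\, E_0\, \cos\theta\, \cos(\omega t), \]

and the matrix elements of \(\cos\theta\) between rigid-rotor wave functions vanish unless \(\Delta J = \pm 1\), so a pure rotational transition requires a permanent dipole moment \(\mu_0 \neq 0\).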

Now let's say we have nitrogen-- no dipole moment, no derivative of the dipole moment. Suppose we have CO. CO has a very small dipole moment and a huge derivative of the dipole moment with respect to displacement. And so CO has really strong vibrational transitions and rather weak rotational transitions.

So if it happened that CO had zero permanent dipole moment, it would have no rotational transitions. But as you go up to higher v's, the dipole moment would not be zero, and you would see rotational transitions. And so there's all sorts of insights that come from this.

And so now we know what causes transitions. There is some operator, which causes mixing of some wave functions. And the time-dependent perturbation theory when it's resonant mixes only one state. We have selection rules which we understand just by looking at the wave fun-- looking at the matrix elements, and now we have a big understanding of what is going to appear in a spectrum. What are the intensities in the spectrum?

What are the transitions? Which transitions are going to be allowed? Which are going to be forbidden? And that's kind of useful. So there is this tremendously tedious algebra, which I didn't do a very good job displaying, but you don't need it because, at the end, you get Fermi's golden rule, which says transitions occur on resonance.

Now if you're a little bit off resonance, well, then the stationary phase in the oscillating exponential persists for a while, and then it goes away. And so you get a little bit of slightly off-resonance transition probability, and you get other things too. But you already now have enough to understand basically everything you need to begin to make sense of the interaction of radiation with molecules correctly, and this isn't bullets and targets.

This is waves with phases, and so there are all sorts of things you have to do to be honest about it. But you know what the actors are, and that's really a useful thing. And you're never going to be tested on this from me. OK, good luck on the exam tomorrow night.