Lecture 9: Frequency Response


Instructor: Dennis Freeman

Description: The response of a system to sinusoidal input gives valuable information about its behavior in the frequency domain, similar to convolution in the time domain. Eigenfunctions and vector plots are used to explore this frequency response.

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

DENNIS FREEMAN: So hello and welcome. No particular announcements today other than, in the unrelenting tradition of MIT, we're going to forge straight ahead and keep going with new material. Today, we're going to talk about frequency response. And just to set the context, remember that last time we talked about a way to characterize a system in terms of a single signal. By knowing the impulse response of a CT system, you could think about the response to any signal by thinking about the output as the convolution of the impulse response with the input signal.

And that gave rise to a very compact way of thinking about the output of a system. We think about the output y as being x convolved with h, regardless of whether it's DT or CT. And the way that you carry out that convolution is, in fact, very similar in the two systems.
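The convolution view described here can be sketched numerically. The following is an illustrative example, not from the lecture: a hypothetical DT first-order impulse response h[n] = (1/2)^n and a short pulse input.

```python
import numpy as np

# Hypothetical DT example (not from the lecture): a first-order system
# with impulse response h[n] = (1/2)^n for n >= 0, truncated once it
# has decayed to negligible size.
h = 0.5 ** np.arange(20)

# A short pulse as the input signal.
x = np.array([1.0, 1.0, 1.0])

# The output is the convolution of the input with the impulse response:
# y = x * h, exactly the compact description above.
y = np.convolve(x, h)

print(y[:4])  # first few output samples: 1, 1.5, 1.75, ...
```

The same one-line call works for any input once h is known, which is the point of characterizing the system by a single signal.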

That way of thinking about a system is particularly useful for certain kinds of systems. The examples we looked at last time were optical. In the case of an optical microscope, it's very convenient to think about the effect of the microscope as being to blur the target. So whatever the target was, it always comes out blurrier, and that blur function is something that in optics we call the point spread function. The point spread function, in 6.003 terms, is simply the three-dimensional impulse response.

We can think about three dimensions very similarly to the way we think about one dimension. And in fact, in the next homework assignment-- which, amazingly enough, is already issued-- we'll think about two-dimensional signal processing. And the difference between one and two is not very much, and the difference between two and three is even less than the difference between one and two.

So we'll be able to think about an optical system purely in terms of the impulse response, and that's particularly useful because it's so intuitive. Since the effect of the optical system is to blur, the impulse response is a direct measure of how much it blurs. That's true not only for microscopes, but for optics in general. In imaging with light, it's a fundamental principle that derives from the wavelength and the coherent nature of light.

And here's an example taken from the Hubble Space Telescope, where we see that there is much more blurring from a ground-based telescope than there is from the Hubble, simply because the blurring is the result of two things. Generally speaking, if you have a ground-based system, there are two kinds of blurring that we usually think about: blurring due to the atmosphere-- light bouncing off particles in the atmosphere-- and blurring due to the optics, the same as it was in the microscope. By going to space, we completely eliminate the blurring due to the atmosphere, and so we're left with what can be a much sharper point spread function.

Today, we're going to look at a completely different way of thinking about a system. Previously we worked in what we will call the time domain: h of t was a function of time, the impulse response was a function of time. We think about that as time-domain signal processing.

Rather than thinking about time, today we'll think about the frequency domain. The frequency domain, just like the time domain, is very convenient for certain kinds of signal processing tasks. One very natural example is audio. You all have lots of familiarity with this.

What I'm going to do now, to prove to you that you have great intuition, is play a clip followed by the same clip processed in two different ways. Clip 1 and clip 2 will each have some transformation applied: either the high frequencies (HF) or the low frequencies (LF) will be increased (up arrow) or decreased (down arrow). OK? Is that clear?

So I'm going to first play the original, then play clip 1. You should decide whether clip 1 sounds like the high frequencies are increased, the high frequencies are decreased, the low frequencies are increased, the low frequencies are decreased, or none of those. Then I'll play the original again, followed by clip 2. So there are going to be two answers: what happened to clip 1, and what happened to clip 2. OK. Nod your head yes-- everybody knows what's going to happen. OK, now everybody listen. You're supposed to figure out two different answers: what happened to clip 1, and what happened to clip 2.

[MUSIC PLAYING]

OK. What happened? Answer 1, answer 2, answer 3, answer 4, answer 5, raise your hand. Come on, come on, come on. This is the part that you blame on your neighbor. OK, no don't. Talk to your neighbor, that way you can blame it on your neighbor. Yeah, I forgot that part. OK, talk to your neighbor.

[SIDE CONVERSATIONS]

OK, everybody raise your hands-- what happened? And if you're wrong, just blame it on your neighbor. Just point to your neighbor as you raise your hand, and I'll understand what you mean. OK, you're only about half right-- you're supposed to be young people. I'm old, I'm not supposed to understand these things. So you're about half right. So now let's see if maybe I can do it again. So listen again.

[MUSIC PLAYING]

OK, now the answer is perfectly clear. Everybody raise your hand. What's the answer? OK. I know the problem. The problem is-- does anybody know the title of this piece?

AUDIENCE: No.

DENNIS FREEMAN: That's the problem. It's from the 1970s-- I wasn't thinking. This was new in 1970. I heard it when I was in-- but anyway. OK.

AUDIENCE: [LAUGHING]

DENNIS FREEMAN: The answer is that one. OK, so the high frequencies in clip 1 were increased. If you heard sort of a tapping in the background-- kind of a cymbal thing-- that was louder because the high frequencies were enhanced. In the second clip, it sounded sort of similar, except the volume was lower. That's because I killed the low frequencies. OK. So in the first clip the high frequencies were increased. In the second clip the low frequencies were decreased. And I'll assume that if I had used more up-to-date music, you would have got that. So I assume that you all got 100% correct, and we'll go on.

So the-- yes, please.

AUDIENCE: I have a question about when you say increasing frequencies [INAUDIBLE]?

DENNIS FREEMAN: Yes, you do.

AUDIENCE: Does that include the amplitude, when you say increasing frequencies?

DENNIS FREEMAN: It will become a little more clear-- so the question was, when I say increasing frequencies, what exactly do I mean by that? What I meant was that the magnitude of the frequency components was increased. We'll be developing some language, in the next 40 minutes, that will make that statement more precise, OK? So the idea here was to give you an intuitive feeling for something that we'll make more mathematically rigorous as we go along. By the end of the hour it should be completely clear what I mean by high frequencies increased or low frequencies decreased. If it's not, tell me.

So the idea, then, in frequency analysis, is to think about the input in terms of frequencies. Think about the input being cos omega t, and then think about what the system would do to a signal of the form cos omega t. We'll find out, as we go through the hour, that the signal that comes out of a linear time-invariant system, when the input is cos omega t, is also a signal with the same frequency. However, the amplitude and the phase-- which sort of addresses your question-- can be different. Linear time-invariant systems can change the amplitude and phase, but not the frequency, of a pure sinusoid.

So then the trick in thinking about what a linear time-invariant system does to a sinusoid is thinking about how the magnitude changes and how the phase changes. To motivate that a little more, I want to think about a mass, spring, dashpot system. We talked about this before. If I think about the input of the system being the position of the top of the spring, and the response of the system being the position of the mass, then I can characterize that system as a linear time-invariant system. And I can think about how the amplitude changes and how the phase changes as a function of the frequency of the sinusoid. OK, this is just like the music example, but this time it's mechanical.

And what I want to do first is do a demonstration of figuring out the frequency response for a physical system. OK, so here's my mass, spring, dashpot. Hopefully, you can all see that. It is 10 loops of a slinky connected up to a bolt. And the idea is that I want to characterize how does the magnitude of the response change, as a function of frequency, and how does the phase of the response change, as a function of frequency.

So if I turn it on to a low frequency--

[MOTOR BUZZING]

OK, so I've got a low voltage going to the motor. The motor has got a cam on it. The cam's turning, and that's giving the sinusoidal motion up and down. If you have very good eyes, you should be able to see this little knot in the string going up and down-- that's my input. Because the cam is not changing physically as I change the speed of the motor, the amplitude of the sinusoid won't change, but the frequency will. That's the idea.

OK, so what you're supposed to do is-- this is x and that's y-- how would you characterize the magnitude of the response, the ratio of the magnitude of the output to that of the input? They're the same. So for this low frequency, the magnitude is 1. How about the phase relationship between the two?

AUDIENCE: [INAUDIBLE].

DENNIS FREEMAN: They're in phase with each other. Right, when one goes up, the other one goes up. So at these low frequencies, the magnitude starts out at 1, and the phase starts out at 0.

Now I'll increase the speed.

[MOTOR BUZZING]

Now what? What's the magnitude? Up, up, down, don't have a clue, don't care.

AUDIENCE: [LAUGHING]

DENNIS FREEMAN: OK, it's up a little bit. So the magnitude is up a little bit. How about the phase? How about the phase?

AUDIENCE: [INAUDIBLE].

DENNIS FREEMAN: It's kind of similar. I'll just put a big dot.

OK, so now I'll turn up the speed a little more.

[MOTOR BUZZING]

AUDIENCE: I think the magnitude might have changed.

DENNIS FREEMAN: The magnitude might have changed. Up or down?

AUDIENCE: [INAUDIBLE].

DENNIS FREEMAN: Yeah, went way up. So it came up here someplace, right. How about the phase?

AUDIENCE: [INAUDIBLE].

DENNIS FREEMAN: It's totally-- it's going completely out of phase. So the phase-- in fact, we will call that a lag, but that's not perfectly clear now. We'll see later why I'm going to call it that.

What do you think will happen if I go even higher? Hit the roof, right. So this is 4 1/2 volts. If I go to a higher frequency--

[MOTOR BUZZING]

Ooh, funky. Why is it funky?

AUDIENCE: Out of sync.

DENNIS FREEMAN: Out of sync with what? It's only got one input. How can it be out of sync?

AUDIENCE: [INAUDIBLE].

DENNIS FREEMAN: So it's remembering some of the response that it had before. Right, so I really have to wait for the old response to die. If I were to kill the input, it wouldn't stop moving right away. Right, so it's-- so the response to the previous input is interfering with the response to the current input. If I wait long enough, it ought to settle down.

OK, magnitude up or down?

AUDIENCE: [INAUDIBLE].

DENNIS FREEMAN: Compared to one? It's still down. It's down even compared to one. How about phase?

AUDIENCE: [INAUDIBLE].

DENNIS FREEMAN: So it's almost out of phase. Out of phase would be the top is going up, while the bottom is going down. So it's almost out of phase, so there's even more delay. OK, so that's the idea.

So what this is supposed to motivate is why we like to think about systems in terms of frequency response. It's a natural way to think about certain kinds of systems. Just as the time response-- the impulse response, time-domain thinking, convolution-- was a convenient way to think about some kinds of systems, like the optical system, frequency response is a good way to think about other kinds of systems. Here's one. The reason it's a good way to think about it is that this particular system had a big response at a certain frequency. OK, now for something that has nothing to do with 6.003. Why is the CD there? Yes.

AUDIENCE: [INAUDIBLE].

DENNIS FREEMAN: It keeps it-- so it somewhat stabilizes it, that's correct.

AUDIENCE: See it better?

DENNIS FREEMAN: See it better that's true, but then I should've put one up here too. But of course, who knows, I make mistakes. Yes.

AUDIENCE: [INAUDIBLE].

DENNIS FREEMAN: Damping. What's damping? So damping controls how big the amplitude got at the resonance frequency. Without the CD, it went crazy. Right. In order to get the response to be less than the elongation of the spring, I had to make the displacements tiny, tiny, tiny without the CD. So the CD was putting some damping-- air resistance. OK.

Loss-- so in physics terms, the mass and the spring are lossless: no energy goes away. I'm continuously pumping energy into it from the motor, and the response gets bigger and bigger and bigger. With the CD, there's loss, because of viscous loss to the air. And so that means it doesn't reach as high a peak as it would have otherwise. OK? OK.

So now, having measured it, let's think about calculating it. Let's calculate the frequency response. We have a number of ways we could do it, because we have all those boxes-- all the different methods we thought about for characterizing systems.

So for example, we could figure out the differential equation for the system, and we could solve it with a particular input, cos omega t. That's one way to do it. Another way is to find the impulse response of the system, and then convolve. Right, we're all very familiar-- we could do all of those things.

So rather than doing something we know how to do, what I'll do is something that we don't know how to do. I'll use a different method, which has to do with eigenfunctions and eigenvalues. And we'll see why that's a convenient way of thinking about frequency response in just a minute.

So we'll define an eigenfunction to be sort of the same thing as in linear algebra-- it's very much the same concept. If we have a system, and we put in an input, and the output has the same shape as the input, just changed in amplitude, then we say the input was an eigenfunction, and the change in amplitude was the eigenvalue. So you put in x of t, you get out lambda x of t. Same idea that we have in linear algebra for eigenvalues. OK? So that's the idea.

And it turns out that certain very common functions are eigenfunctions of linear time-invariant systems. Consider this system: y dot plus 2y equals x. Are any of these functions-- e to the minus t, e to the t, e to the jt, cos t, or u of t-- eigenfunctions of that system? The answer is yes, and so the question is really, which ones?

So how do I think about whether e to the minus t is an eigenfunction of that system? What should I do? Let's see-- how many think it is? How many think it isn't? OK, it must be, right? Crowdsourcing-- it has to be true.

So how do I think about that? How could I set that up to convince somebody else that it is or is not an eigenfunction? What would I do? Yes.

AUDIENCE: [INAUDIBLE].

DENNIS FREEMAN: Close. So what should I make x be?

AUDIENCE: So you make x be e to the negative t, or make y be [INAUDIBLE] e to the negative t.

DENNIS FREEMAN: Exactly. So then y dot would be--

AUDIENCE: [INAUDIBLE].

DENNIS FREEMAN: So now if I plug that in, I get y dot, which is minus lambda e to the minus t, plus 2y, which is 2 lambda e to the minus t, and that should equal e to the minus t. Is that true for any lambda? What should lambda be? 1. So the answer is yes, this is true, and lambda would have to be 1 for that to be true.

How about this guy? Same thing-- does lambda come out the same? What's lambda for the second line?

AUDIENCE: 1/3.

DENNIS FREEMAN: 1/3, yeah. So now we use x equals e to the t, y equals lambda e to the t, and y dot equals lambda e to the t. Did I do that right? Yeah. So now we have lambda e to the t plus 2 lambda e to the t, which should equal e to the t. So lambda's going to have to be 1/3 this time. OK.

How about this guy? What's lambda?

AUDIENCE: 1 over 2 plus j?

DENNIS FREEMAN: 1 over 2 plus j, exactly. Everybody see that?

How about cosine? Yes, no, I'd like to break the tie. So there's a tie, the yes's and the no's are equal. So what do I do? You do the same thing, right, no difference. Oops, a little too hard.

So I want to say that x is cos t. So y should be lambda cos t, and y dot should be minus lambda sine t. So then I need y dot, minus lambda sine t, plus twice this one, 2 lambda cosine t, to equal that one, cosine t. What should lambda be?

AUDIENCE: [INAUDIBLE].

DENNIS FREEMAN: Complex exponentials. So if you're willing to somehow think about something over here as imaginary, maybe you can do that. If you think about these as purely real functions, there's no way to take a cosine and turn it into a sine. That's actually why we like the complex numbers. Had we thought about cosine t as the sum of complex exponentials-- e to the jt plus e to the minus jt, all over 2-- it would have turned out that we could think about the complex exponentials as being eigenfunctions. But for the cosine function itself, thought of as a strictly real function, there's no way you could choose the constants to get rid of the phase shift that would be introduced by the sine. OK, if we call the phase of this 0, this is phase shifted 90 degrees relative to that, and there's no way I could choose a real number lambda to make that come out with 0 phase. OK.

How about u of t, the unit step function? Same thing, right? I would think of the input as u of t, the output as lambda u of t. If the output is lambda u of t, what's its derivative? Lambda delta. So I'm left trying to make a u function out of the sum of a delta and a u, and there's no way you can do that. So the answer is that the first three are eigenfunctions, and the last two are not. OK.
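These eigenfunction checks can be verified numerically. Here is a sketch, assuming the example system y dot plus 2y equals x from the slide: substitute y = lambda e^(st) into the equation and confirm the residual vanishes for the candidate eigenvalue lambda = 1/(s + 2).

```python
import numpy as np

# Numerical check (a sketch, not from the lecture) that e^{st} is an
# eigenfunction of y' + 2y = x, with eigenvalue lambda = 1/(s + 2).
t = np.linspace(0, 2, 2001)

for s in [-1.0, 1.0, 1j]:          # the three eigenfunction cases
    x = np.exp(s * t)              # input e^{st}
    lam = 1 / (s + 2)              # candidate eigenvalue
    y = lam * x                    # proposed output: same shape as input
    ydot = np.gradient(y, t)       # numerical derivative
    residual = ydot + 2 * y - x    # should be ~0 if y solves the ODE
    assert np.max(np.abs(residual)) < 1e-2

# The eigenvalues for e^{-t}, e^{t}, e^{jt}: 1, 1/3, 1/(2+j).
print(1 / (-1 + 2), 1 / (1 + 2), 1 / (1j + 2))
```

Trying the same substitution with y = lambda cos t leaves a sine term that no real lambda can cancel, which is the failure described above.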

So now, what does this have to do with-- I mean, I set this all up by thinking about frequency response. So what we want to do is make the connection between the two. The first step in the connection is complex exponentials, just as motivated here.

So it turns out that all complex exponentials are eigenfunctions of all linear time-invariant systems. That's kind of amazing. It follows very directly from what we did last lecture. If we think about a linear time-invariant system as having an impulse response, then we can find the response of the system, with impulse response h of t, by convolving h of t with the input, which is a complex exponential. So if I say that my input is e to the st, and that my system is characterized by the impulse response h of t, all I need to do is convolve. So I convolve h, whatever it is, with x, which is e to the st.

You remember that when you convolve, you do the funny things with the axes. So instead of thinking about e to the s t as a function of t, I think about it as the function of t minus tau. Instead of thinking about h of t as a function of t, I think of it as h of tau, and I run an integral. And because of the special form of the exponential function-- which is no coincidence. Because of the special form of the exponential function the e to the st factors out.

So then I'm left with an integral over tau in which the taus all go away-- I'm left with purely a function of s. And in fact, it's extraordinarily friendly. If I put e to the st into a linear time-invariant system characterized by h of t, then the output has the shape e to the st, and the eigenvalue is H of s. Amazing. It's the value of the system function-- the same thing we've been doing since week two-- evaluated at the s that is the exponent in the complex exponential of interest. In my opinion, that's wholly remarkable.
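The integral the lecture describes can be checked numerically. A sketch, with an assumed impulse response h(t) = e^(-2t) u(t), which corresponds to the example system y dot plus 2y equals x: the eigenvalue H(s) is the integral of h(tau) e^(-s tau), and it should come out to 1/(s + 2).

```python
import numpy as np

# Sketch (assumed example, not from the slides): for h(t) = e^{-2t} u(t),
# the eigenvalue H(s) = integral of h(tau) e^{-s tau} d tau = 1/(s + 2).
tau = np.linspace(0, 20, 200001)   # truncate the integral; e^{-40} is tiny
h = np.exp(-2 * tau)
dtau = tau[1] - tau[0]

def H(s):
    f = h * np.exp(-s * tau)
    # trapezoid rule for the Laplace-transform integral
    return np.sum(0.5 * (f[1:] + f[:-1])) * dtau

# Compare the numerical integral with the closed form 1/(s + 2) at s = j:
print(H(1j), 1 / (1j + 2))
```

The e^(st) factor never appears inside the integral: it factors out, which is exactly why the output keeps the shape of the input.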

Then, knowing that we can form sinusoids out of complex exponentials, the problem is done. All complex exponentials are eigenfunctions of all LTI systems. And I can always write an eternal sine wave in terms of complex exponentials, using Euler's formula. And so I'm done. Furthermore, the eigenvalues that I need to do this are trivial-- they're the values of the system function.

That's even easier in the case that the LTI system can be written as a linear differential equation with constant coefficients. That is not true for all LTI systems. What I've said with the convolution part-- that's true for all LTI systems. If I specialize to the case that the system can be represented by a linear differential equation with constant coefficients, then the system function is always a rational polynomial in s-- the ratio of two polynomials in s.

The reason that's interesting is that we can factor it, and each of those factors has a very simple geometric interpretation. So if I can represent the LTI system in terms of a linear differential equation with constant coefficients, then the system function has the form of a rational polynomial in s. By the fundamental theorem of algebra, I can factor it, which makes it look like that, and then each of these terms looks like a vector in the s plane. The difference between two complex numbers-- that's a vector.

So for example, here's a system with a single pole at minus 2. Say I wanted to find the output when the input is e to the 2jt. E to the 2jt is a complex exponential. The system is linear and time-invariant, therefore I know that the complex exponential is an eigenfunction. Therefore, all I need to do is find the eigenvalue. The eigenvalue is the value of the system function at the s in question.

So all I need to do is look at this diagram, and I have the entire picture. The system is a single pole at minus 2. That's the x. I want to know the response when the input is e to the j 2 t. So the s in question is s equals 2 j. So it's that point right there.

So all I need to know is the length and direction of the vector that connects the pole to the frequency of interest. The length of that vector is 2 root 2, and the angle is plus 45 degrees. It's a pole, so that contribution is in the denominator. So the eigenvalue is 1 over the length of the vector, because it's a pole, and minus the angle, because it's in the denominator.
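This vector computation can be written out directly. A sketch of the example above: the system H(s) = 1/(s + 2), with a single pole at minus 2, evaluated at s = 2j.

```python
import numpy as np

# Vector picture for the example above: H(s) = 1/(s + 2), pole at -2,
# evaluated at the frequency of interest s = 2j.
pole = -2.0
s = 2j
vector = s - pole                        # vector from the pole to 2j

print(abs(vector))                       # length: 2*sqrt(2) ~ 2.828
print(np.angle(vector, deg=True))        # angle: +45 degrees

# Pole contributions go in the denominator, so the eigenvalue has
# reciprocal magnitude and negated angle.
eigenvalue = 1 / vector
print(abs(eigenvalue))                   # 1 / (2*sqrt(2)) ~ 0.354
print(np.angle(eigenvalue, deg=True))    # -45 degrees
```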

Is that clear? Point is it's really easy to do, that's why we do it. We only do things in this class that are easy to do.

And you can then divide and conquer. If you have a linear time-invariant system that can be represented by a linear differential equation with constant coefficients, write the system function as a rational polynomial in s, factor it, and do the same thing I just did for each factor. The magnitude of the response is the product of the magnitudes of each of those parts. The angle of the response is the sum of the angles of all of those parts. So the idea is that it's very simple.

One last step-- if I'm interested in eternal sine waves, like I was for the motor example, then I'll always be interested, according to Euler, in two complex exponentials. So if I were interested in cos omega naught t, I would be interested in the complex exponentials e to the j omega naught t and e to the minus j omega naught t, because I need both of those to sum to the purely real function cos omega naught t. So that means I can write the response to this eternal cosine wave as simply the sum of the system function evaluated at j omega naught times e to the j omega naught t, and the system function evaluated at s equals minus j omega naught times e to the minus j omega naught t. Done-- I have the answer.

It's even easier, because the system function has a symmetry that we call conjugate symmetry. That's easy to show also by convolution, that's why we did convolution before we did frequency response. You can prove the properties of a frequency response by resorting back to convolution. Convolution is a very powerful thought tool. Less powerful as a computational tool, very powerful thought tool.

So here again, think about the system function as the Laplace transform of h of t-- you remember that from two lectures ago. The system function is always the Laplace transform of h of t. If I have a physical system, like the mass-spring-dashpot, h of t is real-- how could it be complex? h of t is the response when I have a particular input, and the response of a real system is real. So if I have a system that is real, then h of t is real.

If I'm interested in frequencies that are the negatives of each other, plus or minus j omega naught, in order to form cos omega naught t, then I'm interested in two values of the system function, H of j omega naught and H of minus j omega naught, which happen to be complex conjugates of each other. Since h of t is real-- I just argued that for a physical system h of t has to be real-- the only j in this equation is that one. It's negated here, so negating j is the same as taking the complex conjugate. So I don't really have to compute two different eigenvalues; one is the complex conjugate of the other. That means that the response to the sum simplifies.
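The conjugate symmetry is easy to illustrate with the same assumed system function H(s) = 1/(s + 2): for any real omega, H(-j omega) equals the complex conjugate of H(j omega), so only one of the two eigenvalues needs computing.

```python
import numpy as np

# Conjugate symmetry sketch, using the assumed example H(s) = 1/(s + 2).
# Any system function coming from a real h(t) behaves the same way.
def H(s):
    return 1 / (s + 2)

for w in [0.5, 1.0, 2.0, 10.0]:
    # H(-jw) is the complex conjugate of H(jw)
    assert np.isclose(H(-1j * w), np.conj(H(1j * w)))

print(H(2j), H(-2j))  # a conjugate pair
```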

So if I want to think about the input as an eternal cosine wave, cos omega naught t for all time, I write it, by Euler, this way. I know from this eigenvalue-eigenfunction idea that the response to this guy can be written this way. It's a complex exponential, and complex exponentials are eigenfunctions of LTI systems, so the output has the same shape, times an eigenvalue.

This one can be written similarly, except there's a minus sign. Because this is the complex conjugate of this, and this is the complex conjugate of that, I've got the sum of a number and its complex conjugate. The sum of a number and its complex conjugate is just the real part of that number.

Then this H, this eigenvalue-- the system function evaluated at j omega naught-- has a magnitude and an angle. If I write the system function in terms of its magnitude and angle, I can factor the magnitude out of the real part, and combine the angle, by Euler, with the angle that generates the cosine wave, j omega naught t. And I'm left with my final answer: you can compute the output as the magnitude of the system function evaluated at j omega naught, with a phase shift on the cosine equal to the angle of H of j omega naught. So this is the final answer.

If I want to think about how this system has a response that depends on frequency, I think about cos omega t going in, and what comes out is also cos omega t-- except the magnitude is changed by the magnitude of the system function, and the phase is changed by the phase of the system function. That's the thing I said back on slide 2. Sinusoid in, sinusoid out: the magnitude can change, and the phase can change, but the frequency cannot change. Omega went in, omega comes out. OK?
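That final statement can be checked by brute-force simulation. A sketch, assuming the example system y dot plus 2y equals x, so H(s) = 1/(s + 2): drive the system with cos(omega_0 t), let the transient die out, and compare against |H(j omega_0)| cos(omega_0 t + angle H(j omega_0)).

```python
import numpy as np

# Simulation sketch for the assumed example y' + 2y = x, H(s) = 1/(s+2):
# sinusoid in, same-frequency sinusoid out, scaled by |H(j w0)| and
# phase-shifted by angle(H(j w0)).
w0 = 2.0
dt = 1e-4
t = np.arange(0, 20, dt)
x = np.cos(w0 * t)

y = np.zeros_like(t)                 # integrate from rest (forward Euler)
for n in range(len(t) - 1):
    y[n + 1] = y[n] + dt * (x[n] - 2 * y[n])

Hjw = 1 / (1j * w0 + 2)
y_pred = np.abs(Hjw) * np.cos(w0 * t + np.angle(Hjw))

# After the transient dies out, simulation and prediction agree.
err = np.max(np.abs(y[-10000:] - y_pred[-10000:]))
print(err)
```

The transient at the start of the run is the "old response" interference seen in the slinky demo; the eigenvalue formula describes only the steady state.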

So that leads to a very easy way of thinking about frequency responses, in terms of pole-zero diagrams. The idea is that if the system can be represented by a linear differential equation with constant coefficients, then the system can be represented by a collection of poles and zeros-- a handful of numbers.

How many poles are there, how many zeros are there, and where are they? If I have that representation, I can think about the frequency response just by thinking about the vector that connects each pole and each zero, in turn, to the point on the j omega axis at the frequency of interest. So suppose I have a system with a single zero-- here I have a zero at s equals z 1. Let's say that zero is at the point z 1 equals minus 2: a single zero at s equals minus 2.

Then I only need to think about one vector. If I'm interested in the response at very low frequencies-- very low frequencies are near omega equals 0-- I only need to think about the vector that connects the zero to the point j omega equals 0, which is at the origin. So the eigenvalue is the length of this vector-- that's the magnitude-- and the angle of this vector, which we always measure relative to the x-axis. So the angle of this vector is 0. So I get a magnitude which is that big, and an angle which is 0. And that's plotted over here. That's my result for a low frequency, a frequency close to 0.

Then if I think about a slightly higher frequency, what happens to the magnitude? Bigger. It's the contribution of a zero, and the zeros are in the numerator, so a bigger arrow translates to a slightly bigger magnitude. The angle made with the x-axis is now slightly positive, and that's illustrated by the fact that the angle deviates from 0. And in general, I can think about the frequency response as just how this vector changes as I change omega. The change in length tells me the magnitude; the change in angle tells me the phase.

The same thing happens if I do minus omega. What on earth is minus omega? Omega was in cos omega t-- it had to do with how fast the motor was going. What's minus omega? Why am I drawing minus omega here? Why am I interested in these negative frequencies that don't really exist? Because I have an hour to fill, and I don't have enough stuff to fill the hour? No. Why am I interested in negative frequencies? I'm interested in negative frequencies because--

AUDIENCE: [INAUDIBLE].

DENNIS FREEMAN: --they let me create sines and cosines. Euler-- I invent negative frequencies, because I like complex numbers. I like complex numbers, because they're eigenfunction-- because the corresponding complex exponentials are eigenfunctions. Cos omega t was not an eigenfunction, I don't like it. E to the j omega t is an eigenfunction, I like it.

I invent negative frequencies so that I can construct cos out of the sum of two complex things, one of them happens to be this imaginary frequency thing. Who cares, it's just a number, it's just math. So in general, we will think about positive and negative frequencies. Negative frequencies don't exist, we simply invent them to make the math easy.

OK, so generally speaking, because of the complex conjugate symmetry, you can predict what the negative-frequency part of the system function will look like if you know the positive part. So generally speaking, we'll only look at the positive parts when we know the system is real, right. This symmetry only comes about when the system is real. I proved it because I knew that h of t was real-valued.

OK, so now I can think about the same sort of thing, but this time with a pole. The vector diagram looks the same. The shape of the curve's upside down, because the pole's in the bottom. Longer arrows now make smaller eigenvalues.

So now as you go to higher frequencies, the idea is that a single pole gives you a frequency response whose magnitude decays as you deviate from 0. And you get increasing phase lag. You get increasing phase lag because this angle increases with omega, but it's in the bottom. An increase in the bottom subtracts, so that means the phase decreases. That's why the angle goes down, in this case.
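The single-pole case is the same computation with the vector in the denominator. A minimal sketch, with an assumed pole location p0 = -2: a longer arrow from the pole to j omega now means a smaller eigenvalue, and the arrow's growing angle subtracts, giving phase lag.

```python
import numpy as np

# Assumed example: a single pole at s = -2.
p0 = -2.0
omega = np.array([0.0, 1.0, 2.0, 10.0])

# The pole's vector sits in the bottom, so longer arrows shrink H.
H = 1.0 / (1j * omega - p0)

magnitude = np.abs(H)    # decays as omega deviates from 0
phase = np.angle(H)      # 0 at omega = 0, heads toward -pi/2 (phase lag)
```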

And if you have a pole and a zero, the answer is easily derived from the previous two answers. You think about two vectors, one connecting the zero to the point at the frequency of interest, the other connecting the pole. And the magnitude is the ratio of the lengths of those two vectors-- the zero's length divided by the pole's length-- and the angle is the difference of the angles of those two vectors.

So you can see that the zero's vector is slightly shorter at low frequencies, and the two vectors become asymptotically the same length. That means that at high frequencies the ratio goes to 1-- except, of course, I've got a constant out front, so the magnitude goes to 3. And at low frequencies, this length is shorter than that one, so the ratio is slightly smaller. The magnitude is smaller at lower frequencies.

Similarly for the angles, the angles both start out at 0, so the phase starts out at 0. If you go to a high enough frequency, both angles go to pi over 2. So the difference goes to 0 at high frequencies, as well. There's a little blip in the middle, because the angle of the zero increases more rapidly than the angle of the pole. The zero's in the top, so there's a little blip of positive phase.
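The pole-zero pair can be checked the same way. This is a sketch with assumed locations-- a zero at -1, a pole at -3, and the constant 3 out front-- chosen only to reproduce the qualitative shape described above: the magnitude starts at 1, rises toward 3 at high frequencies, and the phase has a little positive blip in the middle before returning to 0.

```python
import numpy as np

# Assumed example: H(s) = 3 (s - z0) / (s - p0) with z0 = -1, p0 = -3.
z0, p0, K = -1.0, -3.0, 3.0
omega = np.linspace(0.0, 100.0, 1001)

# Magnitude: constant times the ratio of the two vector lengths.
# Phase: the zero's angle minus the pole's angle.
H = K * (1j * omega - z0) / (1j * omega - p0)
magnitude = np.abs(H)
phase = np.angle(H)
```

At omega = 0 the magnitude is 3 * 1/3 = 1; at high frequency the two arrows have nearly the same length, so the magnitude approaches the constant, 3. The phase blip peaks where the zero's angle has grown but the pole's angle has not yet caught up.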

Same sort of thing works if I think about the mass, spring, dashpot system. So if I just write the differential equation and figure out the system function, I get a simple second-order form that looks like so. If I imagine very low damping-- b small-- then setting the denominator to 0 gives approximately m s squared plus k equals 0. So the poles are going to be complex.

So for example, if the poles were, as indicated here, about minus 1 plus or minus j 3, you do the same thing we did before: consider the lengths of the vectors, consider the angles of the vectors. Here you get some product, and you get angles that are equal and opposite, so plus and minus. So the angle starts out at 0. As you increase the frequency, one of the arrows gets longer by about a factor of 2, and the other gets shorter by a lot more than a factor of 2. So that means that the product of the lengths becomes small, but they're in the bottom, so the eigenvalue becomes large. So that's what's going on here.
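The resonant peak from the short pole vector can be seen numerically. This sketch assumes parameter values m = 1, b = 2, k = 10, chosen because they put the poles at -1 plus or minus j 3, roughly the locations in the lecture; they are not values quoted in the lecture itself. Near omega = 3, the vector from the upper pole to j omega gets very short, so the product of the lengths in the bottom shrinks and the magnitude of the frequency response peaks.

```python
import numpy as np

# Assumed parameters giving poles at s = -1 +/- 3j (s^2 + 2s + 10 = 0).
m, b, k = 1.0, 2.0, 10.0
omega = np.linspace(0.0, 10.0, 2001)

s = 1j * omega
H = 1.0 / (m * s**2 + b * s + k)   # second-order system function on j*omega

magnitude = np.abs(H)
peak_omega = omega[np.argmax(magnitude)]   # near 3, close to the pole
```

The peak sits a little below the pole's imaginary part (at the square root of 8, about 2.83, for these numbers), because the lower pole's lengthening vector pulls the product back up slightly.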

So that's the idea. I've written two check-yourself questions. Those are intended to be practice with complex numbers, because you need to know how to do complex numbers. But the point of today is just that there's a natural way to think about some systems that's completely complementary to thinking about convolution in time: in terms of the frequency response.

How does the system respond to an eternal sine wave? It's a very easy thing to compute. It follows very naturally from the things that we thought about with convolution, and the result's quite amazing. The result is that the magnitude and the angle can be computed from the Laplace transform of the impulse response, which just happens to be the system function. So hopefully, what you see is that we have lots of representations, and now we're seeing even more connections between them. OK. Have a good day.
