Lecture 16: Fourier Transform


Instructor: Dennis Freeman

Description: The concept of the Fourier series can be applied to aperiodic functions by treating them as periodic functions with period T = infinity. This new transform has key similarities to and differences from the Laplace transform, in both its properties and its domain.


DENNIS FREEMAN: So for the last couple of times, we've been looking at Fourier series as a way of thinking about signals as being composed out of sinusoids, much the way we had previously looked at frequency responses as a way of thinking about systems characterized by sinusoids. And for the past two sessions, we've looked at Fourier series not because they were terribly useful, but because they were terribly simple. Today, I want to take on the much more difficult, but much more interesting, task of thinking about the general case: a sinusoidal decomposition of an arbitrary signal, one that is not necessarily periodic.

So I should say upfront, what I'm going to talk about is motivational. It's not a proof. Proving Fourier series convergence is actually very complicated. It's something that mathematicians worked on for about 100 years. So I am not going to try to prove things in any rigorous fashion, but I am going to try to motivate things so that you should at least expect that such a thing should exist.

So the idea, the motivation is going to be how can I think about an aperiodic signal within a periodic framework because I already have worked out all the details. The details for Fourier series are relatively simple, well, at least compared to a Fourier transform, which is harder. Fourier series themselves are not that easy.

But if I believe the Fourier series idea, is there a way to leverage that to think about aperiodic signals? And the idea is going to be: let's take an aperiodic signal. I've tried to choose something terribly simple. It's the simplest thing that I could think of that isn't zero.

So it's one for a while and zero most of the time. But I could make that signal, which is clearly not periodic, by thinking about periodically extending it. Copy it. Add it to itself many times, each time shifted by a capital T.

This signal is obviously periodic. This transformation is obviously going to take any signal whatsoever and turn it into something that is periodic in capital T. So if I did that, then I could kind of trivially say, well, the aperiodic thing is just the limit when capital T goes to infinity of the periodic thing. OK, that's pretty trivial. OK, that's obviously true.
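In symbols (a notation added here, not read from the slides), the periodic extension z and its limit are:

$$
z(t) = \sum_{k=-\infty}^{\infty} x(t + kT),
\qquad
x(t) = \lim_{T \to \infty} z(t).
$$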

The trick is, what if I took a Fourier series in the middle? What if I periodically extended this thing to get something that is periodic, take a Fourier series of this thing, and then take the limit of the series? So that's the thing I'm going to do over the next three slides.

So think about a general, aperiodic signal, periodically extended so it's now periodic in capital T, and take a Fourier series-- just to motivate the kind of math that happens, I've written out the math for this particularly simple signal that is one for a while and zero most of the time. The Fourier series coefficient a sub k is obviously 1 over T, the period, times the integral over the period-- I took the symmetric period because it's the easiest one-- of the signal of interest times the basis function, integrated over time. And that's pretty trivial-- those integrals are easy. That was chosen that way.

And so I get an answer that looks like that. The thing I want you to see about the answer is that I can think about it as a function of omega or k. And that's what I've plotted here.

In particular, if I multiply a sub k by capital T, so as to kill this 1 over T thing, and if I plot T times a sub k, I get a relationship 2 sine omega S over omega, omega being 2 pi k over T. But the Fourier series only exists for k an integer. So that's what's represented by the blue bars.
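Written out, assuming the pulse is 1 for |t| < S, so that S is its half-width (that assumption is mine, but it matches the 2 sine omega S over omega in the lecture):

$$
a_k = \frac{1}{T}\int_{-T/2}^{T/2} x(t)\, e^{-j\omega t}\, dt
    = \frac{1}{T}\int_{-S}^{S} e^{-j\omega t}\, dt
    = \frac{2\sin(\omega S)}{\omega T},
\qquad \omega = \frac{2\pi k}{T},
$$

so that $T a_k = 2\sin(\omega S)/\omega$: the only place $T$ enters is through the sample points $\omega = 2\pi k / T$.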

But what I want you to see, just from the math, is that the envelope doesn't depend on capital T. OK, that's the trick. So the idea is I'm plotting the Fourier coefficients a sub k as a function of k. So k equals 0, 1, 2, 3, 4, 5, et cetera. But I notice that the envelope can be written strictly as a function of omega, where there is a simple relationship between omega and k. But omega is defined across the entire axis, and it's represented by this light black curve.

That's more apparent if I think about increasing capital T. What if I were to keep the base waveform the same, but change capital T? Say I double capital T. The thing that happens is the envelope stays the same, but the spacing of the k's becomes condensed. There are more k's in a given range of frequencies than there were before.

And if I double it again, it doubles again. The envelope didn't change. The k's did.
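Here is a minimal numpy/matplotlib sketch, not from the lecture, that reproduces this picture: it plots the samples T a_k at the harmonic frequencies for T = 4, 8, and 16 against the fixed envelope. The pulse half-width S = 1 is my assumption.

```python
import numpy as np
import matplotlib.pyplot as plt

S = 1.0  # pulse half-width (an assumption: x(t) = 1 for |t| < S)

def envelope(w):
    # 2 sin(wS)/w, patched to its limiting value 2S at w = 0
    return np.where(w == 0, 2 * S, 2 * np.sin(w * S) / np.where(w == 0, 1, w))

# Harmonics of the periodic extension for several periods T:
# the samples T*a_k march along the same envelope, just more densely.
for T in [4, 8, 16]:
    k = np.arange(-40, 41)
    w = 2 * np.pi * k / T  # harmonic frequencies, spaced by w0 = 2*pi/T
    plt.stem(w, envelope(w), label=f"T = {T}: T a_k")

w = np.linspace(-10, 10, 1001)
plt.plot(w, envelope(w), "k", label="envelope 2 sin(wS)/w")
plt.xlim(-10, 10)
plt.xlabel("omega")
plt.legend()
plt.show()
```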

The interesting thing about that construction is that it has separated out the part that depends on capital T from the part that doesn't depend on capital T. Capital T was this arbitrary thing that I used to take an aperiodic signal and turn it into a periodic signal. And it has an effect on the answer that can be separated from the other part of the answer.

Some part of the answer depends on the base waveform. Some other part of the answer depends on capital T. Well, that's nice because now if I think about taking the limit as capital T goes to infinity, I have a prayer of interpreting things because part of my answer is changing with capital T and part of it isn't. So all I need to do now is focus on the part that is changing with capital T and separate it from the part that's not.

So now, I can think about taking-- so I just plug in this expression here for this integral. And what I get then is something that looks a lot like a Fourier series or even a Laplace transform. I get an integral-- ignore the limit part for a moment. I get an integral of something times some sort of a weighting function.

And I get something over here where the integral was over time, but the function over here doesn't have time in it. It only has omega in it. That's the sense in which it sort of looks like the analysis formula for either Fourier series or Laplace transforms.

It looks like the analysis formula because I'm calculating T a sub k, the components of the series, or this new thing, E of omega-- neither of those depends on capital T. OK, so the idea is that when I do this kind of a limiting operation on the periodic extension, I get something that ends up looking like a transform relationship.

And if I think about going the other way, doing a synthesis operation, I can think about how I would construct x of t out of the Fourier coefficients. But now, there's a simple relationship between the Fourier series coefficients, a sub k, and this thing E of omega, which I've represented here.

And I don't like the T. So I'll do a substitution from here. T can be written as 2 pi over omega 0. 1 over T can be written as omega 0 over 2 pi.

And now, I've got everything I need to think about how that sum approaches an integral in a Riemann sum kind of sense. Think about as I add more and more-- as I make capital T get bigger and bigger, omega 0 gets smaller and smaller. As I make the capital T bigger and bigger, the spacing gets smaller and smaller. Increasingly, I can think about this function E of omega as being smooth and increasingly constant over the small interval between the bars. So I can think about the sum as a Riemann sum passed to the integral as the limit.

So when I do that, omega 0 is the spacing between two adjacent components. It's the region over which I want to think about that integrand being constant. And so in the limit, this omega 0 passes to d omega. And I'm left with something that looks like a synthesis equation.
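Here is the limiting argument in symbols (my transcription of the slide's logic, using $1/T = \omega_0/2\pi$ and $\omega = k\omega_0$):

$$
x(t) = \sum_{k=-\infty}^{\infty} a_k\, e^{jk\omega_0 t}
     = \sum_{k=-\infty}^{\infty} \frac{E(\omega)}{T}\, e^{j\omega t}
     = \frac{1}{2\pi}\sum_{k=-\infty}^{\infty} E(\omega)\, e^{j\omega t}\,\omega_0
\;\longrightarrow\;
\frac{1}{2\pi}\int_{-\infty}^{\infty} E(\omega)\, e^{j\omega t}\, d\omega .
$$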

So if I just write those equations here and think about this E of omega thing as being some kind of a transform, which I'll mysteriously write as x of j omega, then the result for the aperiodic case has a structure that looks very much like the Fourier series, or for that matter like the Laplace transform. What it says is that I can synthesize an arbitrary x of t by adding together a whole bunch of components, each depending on omega, weighted by some weighting function.

So this looks like a synthesis equation, very much like the synthesis equation for Fourier series or for Laplace transforms. And I get an analysis equation that similarly has the same form again. I take the x of t and figure out the component that should be at omega by multiplying by a complex exponential and integrating.

OK, I have to emphasize this is not a proof. All I wanted to do was kind of motivate the way you can think about an aperiodic signal as being periodic in some time interval and passed to the limit. And if you do that, you can sort of see where the equations are coming from. OK?

So the idea then-- whoops. So the idea then is that we will use these relationships to define an analysis and synthesis of aperiodic signals. And we'll refer to that as a Fourier transform.
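For reference, the two relations as the rest of the lecture uses them:

$$
X(j\omega) = \int_{-\infty}^{\infty} x(t)\, e^{-j\omega t}\, dt \quad\text{(analysis)},
\qquad
x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(j\omega)\, e^{j\omega t}\, d\omega \quad\text{(synthesis)}.
$$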

The Fourier transform will let us have insights that are completely analogous to the Fourier series, except they now apply for aperiodic signals. So in particular, we'll be able to think about a signal being composed of a bunch of sinusoidal components. And we'll be able to think about systems as filters.

OK, so I've already alluded to the fact that the Fourier transform relations look very similar in form to the Laplace transform relations. And so I've illustrated the analysis equations here just to emphasize the similarity. With the Laplace transform, you'll remember, we integrated some signal that was a function of time times a complex exponential, over time, to get a Laplace transform that was a function of s, not time.

It was a way of having an alternative representation for the signal. There was no new information. The same information was contained in x of s as was contained in x of t, except where it had been organized by time, now it's organized by s.

We get the same sort of thing with a Fourier transform. And in fact, this gives away the mysterious reason for calling it x of j omega in the previous slide. You can see that a different way to think about the Fourier transform-- a trivial way to think about it-- is that the Fourier transform is the value of the Laplace transform evaluated at s equals j omega. All you do is take this expression for the Laplace transform, and every place there was an s, make s equal j omega. And you get this equation.

So that's the reason we like the notation. The Fourier transform is x of j omega. There are confusions that arise from that, and I'll talk about those in a moment. But for the time being, the important thing is that the Fourier transform can be viewed as a special case of the Laplace transform, restricted to the j omega axis. OK?
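In symbols:

$$
X(j\omega) = X(s)\Big|_{s = j\omega} = \int_{-\infty}^{\infty} x(t)\, e^{-j\omega t}\, dt .
$$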

So that view points out two things. There's a lot of similarities, and there are some differences. First, the similarities-- because you can regard the Fourier transform as kind of a special case-- that's not really true. And I will say something about that by the end of the hour as well. But because it's kind of a special case of the Laplace transform, the Fourier transform inherits a lot of the important properties of a Laplace transform.

In particular, the property that we have leaned on most is linearity. Because the Laplace transform is linear, we can do all manner of things with it. It's the same as the way we use the properties of linear systems to simplify our view of a system: because systems are linear, we can look at the response of a system to a sum of inputs as the sum of the responses to the individual inputs. That's a very important property of systems that we used as a result of linearity.

We did the same thing with Laplace transforms. The Laplace transform of a sum is the sum of the Laplace transforms. And in conjunction with the differentiation rule, by which we knew that the Laplace transform of a derivative is s times the Laplace transform of the function, the combination of linearity and the differentiation rule allowed us to apply Laplace transforms to turn differential equations into algebraic equations.

Precisely the same thing will work with Fourier transforms. For reasons that should be clear, if the Laplace transform has the property of linearity, so does the Fourier. And if the Laplace transform is simply related to the Fourier transform, then there's a simple relationship between the Fourier transform of a derivative and the Fourier transform of the underlying function. So in the Laplace transform, you multiply by s. Not very surprisingly, in the Fourier transform, you multiply by j omega.
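So, for example (a standard illustration, not one worked in this lecture), a first-order differential equation turns into algebra exactly the way it did with Laplace:

$$
\dot y(t) + y(t) = x(t)
\;\Longrightarrow\;
j\omega\, Y(j\omega) + Y(j\omega) = X(j\omega)
\;\Longrightarrow\;
\frac{Y(j\omega)}{X(j\omega)} = \frac{1}{1 + j\omega}.
$$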

So there's enormous similarity. And in fact, most of what you know about Laplace, you can immediately carry over into Fourier. There are some differences. And if there weren't differences, we probably wouldn't bother with talking about both of them. Right?

There are some things that will be easy to think about with Fourier transforms. And that's the reason we do it. There are some things that are easy to think about with Laplace transforms. Otherwise, we would have just skipped straight to Fourier.

So there are some things that Fourier and Laplace share. There are some things that are different. One of the biggest differences is the domain.

When we think about a Laplace transform, we think about x of s. The domain of the Laplace transform is the domain of s, and s is a complex number. For that reason, when we thought about Laplace transforms, we always talked about what the Laplace transform looks like in the s plane. And we thought about the real part of s and the imaginary part of s.

When we think about Fourier transforms, we're thinking about a transform with a real domain. Rather than thinking about x of s, s a complex number, we're going to think of x of j omega, omega a real number. That's a little confusing, right? Just sort of to confuse you, we write the one that is a complex number as s-- no indication whatsoever that it's complex. And the one that is a real number, we put a j in front of it-- to remind you that it's real.

I apologize, I don't know why we do this. So just remember that s, which looks kind of real, isn't. It's complex. And j omega, which looks kind of complex-- well, it's the omega part that matters. It's real.

OK, so the important thing is: the domain of a Laplace transform is the complex number s, with real and imaginary parts, characterized by a plane. The domain of the Fourier transform is real. That's enormously important. And we'll come back to that over and over again.

But just to drive home the point, one of the things we thought about with the Laplace transform was this idea of eigenfunctions and eigenvalues. It was an idea built on linearity. It was the idea that we can think about a system by how you put in a function, like e to the st, and calculate the output. Well, if the system is linear, time-invariant, and can be characterized by a Laplace transform h of s, what's the output of that system when the input is e to the st?

Everybody shout. It will make me feel much better. If you all shout at once, I won't be able to understand a word you said, and I'll assume you said the right thing.

OK, I didn't understand a thing you said. So I assume you all said h of s e to the st. Right, e to the st is an eigenfunction of a linear time-invariant system. Eigenfunction means the function in is the same form as the function out, except it could be multiplied by a constant. The constant is the eigenvalue. The eigenvalue is h of s.

If we wanted to characterize a very simple system, for example, we might have a system of the form 1 over 1 plus s-- or a signal of the form 1 over 1 plus s. So let's say that x of s represents some kind of a system. Then we would have said that that's a pole.

Where's the pole? Minus 1-- we would have said we have a system with a single pole at minus 1. I would never have drawn this complicated picture at the bottom because it would be frightening. I would always draw something friendly like the picture over here. Right? The entire system can be understood from a single X.

OK, well, if you were computing eigenfunctions and eigenvalues, you would like to know the magnitude and phase. The h of s is a complex-valued function of a complex domain: s is a complex number, and the answer is a complex number. So we'd like to know the real and imaginary parts of h of s, or equivalently the magnitude and phase.

If we want to know the magnitude and phase of h of s, in principle, we need to know what the magnitude is for all the different s's. So what's plotted here is a picture of the magnitude of this function as a function of all the different s's. Any e to the st is an eigenfunction of the system, and that plot shows the magnitude of the associated eigenvalue.

The point is that I have to tell you a whole complex plane's worth of values. Right? There's a value for s equals 1, s equals minus 1, s equals 2, s equals minus 2, s equals j, s equals 2j, s equals 17 plus 5j. All the different points in the s plane have a different associated eigenvalue. And to completely characterize this system, I have to tell you all of those.
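A small matplotlib sketch (mine, not the lecture's figure) makes the contrast concrete for the one-pole example h(s) = 1 over 1 plus s: the left panel needs a whole plane of values, while the right panel is a single curve along the imaginary axis.

```python
import numpy as np
import matplotlib.pyplot as plt

# h(s) = 1/(1 + s): the single-pole example from the lecture (pole at s = -1).
sigma, omega = np.meshgrid(np.linspace(-3, 1, 300), np.linspace(-5, 5, 300))
H_plane = 1 / (1 + sigma + 1j * omega)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))

# Left: the Laplace view needs a whole plane of eigenvalues (clipped near the pole).
ax1.contourf(sigma, omega, np.minimum(np.abs(H_plane), 5), levels=30)
ax1.set_xlabel("Re(s)")
ax1.set_ylabel("Im(s)")
ax1.set_title("|h(s)| over the s plane")

# Right: the Fourier view is one curve along the imaginary axis, s = j*omega.
w = np.linspace(-5, 5, 500)
ax2.plot(w, np.abs(1 / (1 + 1j * w)))
ax2.set_xlabel("omega")
ax2.set_title("|h(j omega)|")

plt.tight_layout()
plt.show()
```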

By contrast, if I think about the Fourier transform, the Fourier transform maps a function of time to a function of omega. The complete characterization of the Fourier transform is shown here. All I need to worry about is all the possible values of omega.

I'm thinking now instead of thinking s, I'm thinking how would I compose x of t by summing together a bunch of sine waves. The reason I want to think about that is because I want to think about systems in terms of frequency responses. So I want to know which frequencies are amplified, which ones are attenuated, which ones are phase delayed, which ones are phase advanced. And in order to do that kind of construction, all I need to know is what's the magnitude and angle of the system function for all possible values of omega.

So that's an enormous difference. In the previous case, I had a function of time turning into a function on a two-dimensional space: a function of one dimension turned into a function of two dimensions. Here I have a function of one dimension turning into a function of one dimension. So that is conceptually a whole lot simpler.

Even more importantly, it is going to give rise to something that we'll spend most of the rest of the term on-- the notion of signal processing, where we can alternatively represent a signal x not by its time samples, but instead by its frequency samples. It would be very difficult to use that technique with the Laplace transform. Although it would work perfectly, there would be an explosion of information if we represented this one-dimensional signal by a two-dimensional transform: we would be increasing substantially the amount of information required to specify the signal.

When we do the Fourier, there is no such explosion. It was a one-dimensional function of time. It is a one-dimensional function of omega.

OK, OK, I've been talking too much. I would like you to make sure that you understand the mechanics of what I've just said. So here's a signal, x1 of t. Which of these, if any, represents the Fourier transform?

You're all very quiet. Look at your neighbor. Don't be quiet. And then start. And then you can go back to being quiet.

[SIDE CONVERSATIONS]

So it's quiet. So I assume that means convergence. So which function represents the Fourier transform of x1 of t? Everybody raise your hand. Indicate by a number of fingers.

And it's overwhelmingly correct, which is wonderful. That's the point. The point is Fourier transforms are easy. And you've all got it.

So it's trivial to run this kind of an integral. It's not very different from doing a Laplace transform. Here I've indicated the Laplace transform. Right? We do e to the minus st. x of t is 1 or 0. We change the limits to indicate the 1-or-zeroness.

Very trivial here, we get a slightly different looking answer because instead of e to the st, we have e to the j omega t. But otherwise, it's pretty much the same. The big difference, though, is again the domain.
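Running the Fourier integral out, for the record (this is the algebra behind answer number four):

$$
X_1(j\omega) = \int_{-1}^{1} e^{-j\omega t}\, dt
= \frac{e^{-j\omega t}}{-j\omega}\bigg|_{-1}^{1}
= \frac{e^{j\omega} - e^{-j\omega}}{j\omega}
= \frac{2\sin\omega}{\omega}.
$$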

So if you think about the answer-- the answer is number four, like all of you said-- from the point of view of eigenfunctions and eigenvalues, you have to think about a two-dimensional space. That space, even for this simple function, sort of the least complicated function I could think of, is illustrated here. And what you're supposed to see is that if I were to integrate x of t e to the minus st dt to get x of s, and I think of s as sigma plus j omega-- it has a real part and an imaginary part-- then as I make the real part big, the exponential becomes something that explodes.

And you can see that manifest here over in this region. So this is the real axis this way. This is the imaginary axis that way. You can see that as you go to bigger numbers in the positive real direction, the magnitude explodes. If you go in the negative direction because there was a sum here, the magnitude explodes again. You get this horrible function that spends a lot of its time near infinity. Right?

So that's a complicated picture by comparison to the picture that you get if you look at Fourier transform. So if you look at the Fourier transform, you get something that's relatively simpler. We're only looking along the imaginary axis now.

Furthermore, there's an easy way to interpret this. If this represented a system function, it's explicitly telling you that there's a simple way of thinking about how the system amplifies or attenuates each frequency you put into it. Right?

It likes frequencies near the middle. So if this represented a system function, it would pass frequencies near 0 with a gain of two. And the magnitude would be smaller for these others. And there is a phase relationship, too.

So there are insights that you can get from this Fourier representation that are less easy to get from the Laplace. I mean, the Laplace was a complete specification of a signal or a system, either one. So all the information is there. It's just that some of the information is more apparent in the Fourier representation.

OK, second question: what if I stretched the time axis? x1 was 1 between minus 1 and 1. x2 is 1 between minus 2 and 2. So all I'm doing is stretching the axis. What happens to the Fourier transform? Look at your neighbor. Choose a number.

[SIDE CONVERSATIONS]

OK, which answer tells me what happens when I stretch time? So everybody raise your hand and tell me some number between 0 and 5-- 1 and 5, actually. OK, 20% correct.

So what's going to happen? Well, it's pretty easy to simply do out the integral again. Right, so that's the sort of most primitive way you can think about it. If you simply run the integral, I've written it in a kind of funny way. Right?

So a lot of you said number one for the answer. This kind of looks like number one. Why is that not number one? That's actually number three.

Why do I like to write it that way-- instead of writing 2 sine 2 omega over omega, I like to write 4 sine 2 omega over 2 omega. Why do I like that? Because I'm completely random.

AUDIENCE: Omega is the same that way?

DENNIS FREEMAN: Excuse me.

AUDIENCE: Omega can be the same-- like you can have omega absorb the 2.

DENNIS FREEMAN: That's kind of right. So can you unscramble the sentence slightly? What is more-- yes.

AUDIENCE: Aren't they the same form that we use?

DENNIS FREEMAN: It's the same form in what sense? I mean what's the same about it? Yes. Yes.

AUDIENCE: Like, when omega is near 0, the value is 4.

DENNIS FREEMAN: Correct, correct. If you think about what happens for omega near 0, I've got the sine of 2 omega, which is-- what's the sine of 2 omega when you make it 0? 0. So I have 0 over 0. Bad. So what do I do? L'Hospital.

So if I do L'Hospital's rule, then I can make this thing look like 1. And if I write it in the form sine 2 omega over 2 omega, that part has a value that approaches 1 near omega equals 0. So the amplitude is 4. So that's a way of separating out the part that has unity amplitude from the constant that multiplies the amplitude.

So the amplitude is 4. And the frequency spacing, which had been pi, moves to pi over 2. So the point is that the answer is number three. The peak increases, and the frequency spacing decreases.
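The integral behind answer number three:

$$
X_2(j\omega) = \int_{-2}^{2} e^{-j\omega t}\, dt
= \frac{2\sin(2\omega)}{\omega}
= 4\,\frac{\sin(2\omega)}{2\omega},
$$

with peak value 4 at $\omega = 0$ and first zero at $\omega = \pi/2$ instead of $\pi$.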

But more generally, the point is that if I stretched time, I compress frequency. But I compress frequency in a very special way. I compress frequency in an area-preserving way. That's why the peak popped up.

So what I'd like to do is think about a general scaling rule. If I wanted to think about scaling x1 into x2, such that x2 is a time-scaled version of x1-- so if I wanted x2 of t to be x1 of at-- and if I wanted to stretch x1 to turn it into x2, should I make a bigger or smaller than 1? I'm trying to generalize the result that I just did. Right?

So I stretched x1 into x2. And what I saw is that frequency shrunk, and amplitude went up. So now, I'm thinking about what would happen if I did that in general. If I took x1, and I stretched it by setting x2 equal to x1 of at, would I want a to be bigger or less than 1 if I want to stretch x1 to turn it into x2?

AUDIENCE: Less than 1.

DENNIS FREEMAN: Less than 1, because then the logic is that if I wanted, for example, x2 of 2 to be x1 of 1-- stretch x1, sorry, so that its value at the position 2 is the same as the original function x1 at 1-- if I stretched it, then I clearly have to have a equal to a half in that case. And in general, stretching would correspond to a less than 1.

And now, I can think about where that fits in the transform relationship. Think about finding the Fourier transform of x2, substituting x1 of at for x2, and then making this relationship look more like a Fourier transform. So I don't want the at to be here. I want a function of t. So I can rewrite at as tau. Now, this looks like a Fourier transform except that I've changed all my t's to tau's.

And the point is that that transformation, tau equals at, shows up in two places. There's an explicit time here, in the exponent. And there's a time dependence in the dt. The one in the exponent is the one that gives me the shrinking and swelling of the frequency axis. And the 1 over a from the dt is the one that gives me the changing amplitude.

That's how you get the area-preserving property. However much it stretched in time, so that it became compressed in frequency, whatever the factor is that compressed it in frequency, by that same factor it also gets taller. We'd like to build up intuition for how the Fourier transform works. That's the reason for doing these kinds of properties.
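Here is that calculation written out, for a greater than 0:

$$
X_2(j\omega) = \int_{-\infty}^{\infty} x_1(at)\, e^{-j\omega t}\, dt
\;\overset{\tau = at}{=}\;
\frac{1}{a}\int_{-\infty}^{\infty} x_1(\tau)\, e^{-j(\omega/a)\tau}\, d\tau
= \frac{1}{a}\, X_1\!\Big(j\,\frac{\omega}{a}\Big).
$$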

So now, there's another way of thinking about that same thing by thinking about what we call the moment theorems. Here what we think about is what would happen if we evaluated the Fourier transform at omega equals 0. Well, omega equals 0 is associated with a particularly simple complex exponential. If the frequency is 0, e to the j0 t is 1.

So what you see is that the value of the Fourier transform at omega equals 0 is the area under the curve. So the idea then is that if I took an x of t, which was x1, which was 1 between minus 1 and 1, there's an area of 2. And that's a way of directly saying, well, the Fourier transform at 0 better be 2.

And the intuitive thing that you're supposed to take away from that is when you look at a Fourier transform, the value at 0 is the dc. How much constant is there in that signal?

So there's a very explicit representation for the frequency content. I mean that's what the Fourier transform is all about. And in particular, the zero frequency is dc. It's the average value.

That kind of a relationship works both ways. If you were to use the synthesis formula and think about how do you synthesize x of 0, well, it's the same sort of thing except now the t is 0 instead of the omega being 0. And what we get is 1 over 2pi times the area under the transform.
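The two moment relations side by side:

$$
X(j0) = \int_{-\infty}^{\infty} x(t)\, dt,
\qquad
x(0) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(j\omega)\, d\omega .
$$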

So what that says is: whatever is going on over in this wiggly thing, the net area divided by 2pi has to equal the value x1 of 0. x1 of 0 is clearly 1. So that means the area under this thing must be 2pi.

That wasn't particularly clear. I mean I don't know automatically the area under that curve. But that's such a frequently recurring thing that it's useful to notice that the area under this funny curve happens to be precisely the area of this inscribed triangle. So the height of the triangle is 2. Half the base is pi. So the area is 2pi.

I'm sure some Greek knew this. But I don't. So if somebody can think of a way to derive that answer without using Fourier transforms-- I can do it with Fourier transforms. And I can look it up in books where the authors also use Fourier transforms.

But I'm sure some ancient Greek can do this. So the question is if anybody can figure out how the ancient Greeks would have come to that conclusion, I would be very interested to know. Does everybody get-- so areas of inscribed whatevers, right, that's what they did. Right?

So I would like to know how to get the fact that the area under this wiggly function 2 sine omega over omega is 2pi without knowing Fourier transforms. So that's an open challenge. So try to figure out how to prove that without using Fourier transforms.
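Short of a Greek proof, here is a quick numerical check (mine, using scipy's oscillatory quadrature; the 1e-8 lower limit is a convenience that costs only about 1e-8 of area):

```python
import numpy as np
from scipy.integrate import quad

# Check numerically that the area under 2 sin(w)/w is 2*pi.
# QUADPACK's oscillatory rule integrates f(w)*sin(w); here f(w) = 1/w.
# Starting at 1e-8 instead of 0 drops only about 1e-8 of area,
# since the integrand is approximately 1 near w = 0.
half_line, err = quad(lambda w: 1.0 / w, 1e-8, np.inf, weight="sin", wvar=1)

# The curve is even, and it carries a factor of 2, so total area = 2 * 2 * half_line.
area = 4 * half_line
print(area, 2 * np.pi)  # both approximately 6.28319
```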

So if we use the moment idea, and we think about this scaling thing, we come up with a very interesting result. If you were to stretch x1, which had been 1 just between minus 1 and 1, to turn it into x2, which was 1 between minus 2 and 2, and just keep stretching, what would happen? Well, the transform gets skinnier and skinnier and skinnier. But in a very special way: the area is the same.

Even though it got skinnier and skinnier and skinnier and skinnier, the area is the same. If you keep doing that, it turns into an impulse. Well, that's pretty interesting. That's an alternative way of deriving an impulse.

An impulse-- we think about an impulse as a generalized function. Any function that has the property that, in some kind of a limit, the width shrinks towards 0 but the area doesn't change, turns into a delta function. That's a different way of thinking about the definition of a delta function.

And so we just found something very interesting. The Fourier transform of the constant 1 seems to be a delta function at 0 with area 2pi. Well, that's pretty interesting.
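Stated as a transform pair, with a sanity check through the synthesis equation:

$$
1 \;\longleftrightarrow\; 2\pi\,\delta(\omega),
\qquad
\frac{1}{2\pi}\int_{-\infty}^{\infty} 2\pi\,\delta(\omega)\, e^{j\omega t}\, d\omega = e^{j \cdot 0 \cdot t} = 1 .
$$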

What's the Laplace transform of 1? Too shocking. What's the Laplace transform of 1?

AUDIENCE: It's a delta function.

DENNIS FREEMAN: Delta function. What's the Laplace transform of 1? So, Laplace transform, right: x of s is the integral of 1 times e to the minus st dt. What's the Laplace transform of 1?

AUDIENCE: 1 over s.

DENNIS FREEMAN: 1 over s. How about 1 over s? Yes? No? 1 over s-- yes. 1 over s-- no. Me. 1 over s, no. Why not?

AUDIENCE: You would just get the transform of u of t.

DENNIS FREEMAN: Yes, exactly. So the Laplace transform of u of t is 1 over s. Right? You remember there was a region of convergence associated with Laplace transforms. We thought about the region of convergence like this: if you had a time function like u of t, then the integral would converge as long as you multiplied by some factor that generally attenuated.

So that bounded what kinds of s's worked. We needed the real part of s bigger than 0, because if the real part of s went the other way, the integral would diverge-- bad. Right, so we could find the Laplace transform of u of t-- real part of s bigger than 0.

Or we could find the Laplace transform of a backward step. The region of convergence would flip. And we got a sign change.

But the Laplace transform of 1 doesn't exist. There is no region of convergence for the function 1. That's a big difference between Fourier and Laplace as well.

Even though Fourier is, in some sense, a subset of Laplace, there are some signals that have Fourier transforms and not Laplace transforms; in that sense, Laplace is a subset of Fourier. So in fact, you'd better think of them as Venn diagrams that overlap. There are some signals that have both, but there are some signals that have one and not the other.

OK, so the final and maybe most important property of Fourier transforms is that they have a simple inverse relationship. You may remember that I talked about there being an inverse relationship for Laplace: you can think about x of t being 1 over j 2pi times the integral over some sort of a contour of x of s e to the st ds. And I told you: don't ever try to do that without going over to math and talking to those folks first. That's complicated.

The interesting thing about this relationship is that it's really simple. So there's a very simple relationship between a Fourier transform and its inverse. OK, so I think I'll defer talking about that until next time, the reason being that I want to end a little earlier today. So I'll finish talking about the rest of the slides on the next lecture.
