Lecture 6: Laplace Transform

Instructor: Dennis Freeman

Description: Building on concepts from the previous lecture, the Laplace transform is introduced as the continuous-time analogue of the Z transform. The lecture discusses the Laplace transform's definition, properties, applications, and inverse transform.

DENNIS FREEMAN: So today the idea is to think about CT systems in exactly the same way that we thought about DT systems last time. For the past couple of weeks, we've been looking at many representations of both CT and DT systems. Last lecture we made a picture of a bunch of different ways to think about DT systems, and we tried to think about relations between them. The new thing from last time was this idea of a z transform, which formally is the link between a system function, a function of z, and a unit sample response, a function of n. We showed last time that, because we already understand a whole lot of these other connections, that connection is very straightforward.

What we're going to do today is precisely the same thing for CT. The boxes barely changed. You should see the relationship immediately between the former set of boxes and this set of boxes. And we'll do a very similar thing. We'll look at a relationship between the system function, which is a function of s, and the impulse response, which is a function of t. We'll look at that similar function for CT, the thing that's analogous to the z transform in DT.

So the Laplace transform: just like in DT, where the z transform maps a function of time to a function of z, here it maps a function of time, which in CT we'll write as x(t), to a function of s. So X(s) is going to be the integral over all time of x(t) e^(-st) dt.

So the idea is going to be that x(t) was a function of time, while X(s) is only a function of s. We get the function of s by integrating some function of time, but we integrate time out, so time disappears. So the idea then is that we end up with a map linking a function of time to a function of s.
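
To make that definition concrete, here is a minimal numerical sketch, assuming Python with numpy and scipy available. The helper names are mine, not the lecture's; it approximates the defining integral for a real test value of s, using the right-sided exponential that serves as the first example below.

```python
# A minimal numerical sketch of X(s) = integral of x(t) e^{-st} dt over all
# time, for a real s. (laplace_at is a hypothetical helper name.)
import numpy as np
from scipy.integrate import quad

def x(t):
    # The first example from the lecture: e^{-t} for t >= 0, zero before.
    return np.exp(-t) if t >= 0 else 0.0

def laplace_at(s):
    # Split the bilateral integral at t = 0 so quad handles the corner in x.
    lower, _ = quad(lambda t: x(t) * np.exp(-s * t), -np.inf, 0)
    upper, _ = quad(lambda t: x(t) * np.exp(-s * t), 0, np.inf)
    return lower + upper

print(laplace_at(0.0))  # ~1.0, matching 1/(s+1) at s = 0 (inside the ROC)
```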

Presumably you've seen this before. This is a topic in 18.03. Smile. Nod your head. Make me feel like I'm connecting to you, right. Right, so you've all seen this before. Nothing's new.

Actually, there is one tiny thing that's new. In 18.03 they use a variant of the Laplace transform that we will call the unilateral Laplace transform, which means they start their integrals at 0. For reasons that will be clear at the very end of the course, when we do Fourier transforms, we will do something called a bilateral transform to make the transition between this kind of a transform and a Fourier transform easier. The only difference is that we will start our integrals at minus infinity.

There are subtle differences between those two transforms. Rest assured that the big picture is identical. And when there is an important difference we will point that out. So for the rest of today and for the next two weeks I will only talk about bilateral transforms, the kind where we integrate over all of time.

And the easiest way I know to get started is to just try some, right. You could study properties of the math. That's probably the way it was studied in 18.03. I find it easier to develop some intuition for what's going on here by just doing some.

So here's probably the simplest Laplace transform we'll look at. What's the Laplace transform of a function of time that decays exponentially for positive time and is 0 for time less than 0? As you might expect, the first transform we'll do involves simply plugging that mathematical formula into the definition. So X(s), the Laplace transform, is always the function of time integrated against e^(-st).

We're only doing bilateral, but notice that the 0 for t less than 0 cuts off the bottom. So in fact, you would get the same answer whether you did bilateral or unilateral. And then we're left with running this integral over infinite time of this kind of a function: X1(s) = ∫₀^∞ e^(-t) e^(-st) dt = ∫₀^∞ e^(-(s+1)t) dt. The only tricky thing is thinking about the implications of that infinity thing. There are going to be certain values of s for which that integral diverges.

So to think about that, think about the thing that we're integrating. We're integrating something that looks like e^(pt). The original function of time was e^(-t), and here we multiply by e^(-st), so the integrand e^(-(s+1)t) has that e^(pt) form. Let's just focus on that e^(pt) thing for a moment.

We're going to generally be worried about values of s, or here, values of p, that have complex values. We're going to be looking at the s plane just the same as last time we looked at the z plane. So we're going to think about how this thing might have a real and an imaginary component.

So we might have something that looks like p = sigma + j omega. Sigma would be the real part of p. Omega would be the imaginary part of p. Does somebody want to hazard a guess at what that might look like if I tried to expand it as a real and imaginary part? Does someone want to hazard a guess at the name of the equation I might use?

AUDIENCE: Euler.

DENNIS FREEMAN: Euler's equation, sure. So I would expand this by Euler's equation and I get something of the form e^(pt) = e^(sigma t) (cos(omega t) + j sin(omega t)). The point of writing that out is that I hope it's clear that convergence of the integral is going to depend critically on which: the real or the imaginary part? Real, right?

So the convergence is going to be determined by this real part. This is the thing that's affecting the magnitude. That's the thing that's getting smaller or bigger as I go to infinity or minus infinity. The cosine term, the sine term, those are oscillating. They don't make the function any more or less convergent than it was originally. So when I'm thinking about convergence I'm going to be thinking about the real part.

So over here if I want to make this integral converge I want to be thinking about t going from 0 to infinity. Well, 0 is not a problem. e to the 0 is 1, that's going to work. It's the infinity part that's a problem.

So if I have infinity plugged in for t, what's the range of values for s plus 1 that makes sense? That's the important question, right. So I'm going to want to constrain the real part of that number, s plus 1, to be bigger than 0. If the real part of that number is bigger than 0, then this thing will converge to 0.

If that converges to 0, then I have a simple answer: it's just the value at the bottom limit. It's at the bottom, so I have to put in a minus sign, and that kills the minus sign from the integration. So my answer is 1 over s plus 1. Easy, right?

And we will say that the Laplace transform has a functional form, 1 over s plus 1. And it has a region of convergence: the real part of s bigger than minus 1. That's just solving that inequality for the real part of s so I can draw it on an s plane.
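
As a sanity check, here is a sketch of that first example in sympy (assuming sympy is available; the variable names are mine). The Piecewise condition that sympy attaches to the answer amounts to exactly the region of convergence stated above.

```python
# A sympy check of the first example: integrate e^{-t} e^{-st} from 0 to
# infinity. The convergence condition amounts to Re(s) > -1.
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')

X1 = sp.integrate(sp.exp(-t) * sp.exp(-s * t), (t, 0, sp.oo))
print(sp.simplify(X1))  # Piecewise: 1/(s + 1) where the integral converges
```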

So we'll associate then with this Laplace transform a picture, which we will commonly call the pole-zero pattern. It's just a picture of the s plane which shows me the singularities, the poles and zeros of the Laplace transform, and the region of convergence. So there's a single pole. There's a pole at s equals minus 1, so I indicate that by the x. And I have a region of convergence, which I'll illustrate here with the gray area.

So it converges for all values of s that are in the gray area. OK, easy right? Trivial.

OK, so now you get to talk to your neighbor. OK, turn to your neighbor. Say, hi.

[INTERPOSING VOICES]

And figure out the Laplace transform of a slightly more, but not very complicated, function.

[INTERPOSING VOICES]

DENNIS FREEMAN: So what's the answer? Everybody raise your hands, show some number of fingers equal to the number of the correct solution. Wonderful. The universal answer-- I think it was universal-- was 1. So how do you get 1? What do I do? How do I find the Laplace transform of the top function?

[INAUDIBLE]

DENNIS FREEMAN: Precisely. So what I do is just stick it in the formula, do it twice. The Laplace transform, like the z transform, is linear. It will turn out that the Laplace of a sum is the sum of the Laplaces.

If I simply stick the expression for x2 into the definition, you can see it splits into two pieces. And I get two parts. 1 over s plus 1, just like I got-- so the first part looks just like x1, the first example. The second one looks almost the same except there's a 2 where there used to be a 1. Shockingly, it changes one of the ones into a 2.

And then the only issue is where is the region of convergence. So this part converges if the real part of s is bigger than minus 1. This converges if the real part of s is bigger than minus 2. So they both converge if the real part of s is bigger than minus?

AUDIENCE: 1.

DENNIS FREEMAN: 1, right. If I'm in the region of s where both the first part and the second part converge, then I'm fine. So that was all very straightforward. I get the answer that was number one, OK. So the transform is just the sum of those two pieces. And the region is the region of overlap.
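
Here is that linearity in a sympy sketch, assuming (consistent with the poles described) that the slide's x2 is e^(-t) u(t) + e^(-2t) u(t). The `conds='none'` option tells sympy to assume convergence, which holds on the intersection Re(s) > -1.

```python
# Linearity in action: transform each piece of x2 separately and add.
import sympy as sp

t, s = sp.symbols('t s')
part1 = sp.integrate(sp.exp(-t) * sp.exp(-s * t), (t, 0, sp.oo), conds='none')      # 1/(s+1), Re(s) > -1
part2 = sp.integrate(sp.exp(-2 * t) * sp.exp(-s * t), (t, 0, sp.oo), conds='none')  # 1/(s+2), Re(s) > -2
print(sp.factor(part1 + part2))  # -> (2*s + 3)/((s + 1)*(s + 2))
```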

A more interesting case, where things are a little bit different for the bilateral Laplace transform than for the unilateral, is that the regions can be more complicated. So here's an example where I've got a backwards traveling function. So now the function only exists for t less than 0. As you might expect, just stick it in the formula and see what happens.

So stick this expression into here. Now it lops off the top of the integral. But when you're grinding through integrals it's hard to tell the difference. You get something that looks very much the same.

The thing that is different, notice that the function was upside down. It was minus something rather than plus something like the first example was. So the first example was e to the minus t, t bigger than 0.

This example is minus e to the minus t, for t less than 0. That gave me a minus sign here, but the nontrivial limit is at the top rather than at the bottom. That's the reason I put the minus sign in, so it would kill the other minus sign.

Also, since I need convergence at t equals minus infinity, the region flips. I still need to have that sort of thing decay so that the integral exists. But now the functions are flipped in time. So the important range of s is now flipped. So now I get the same functional form exactly, but a region that's on the other side.

That's one of the central differences between the bilateral and the unilateral transform. I need to tell you the region of convergence for you to know which of those two functions that I'm talking about.

OK, so the important thing from this example is that the functional form looks exactly the same. I had the same functional form in time, e to the minus t, e to the minus t. It was that functional form in time that I cranked through the integration, and it's not surprising then that I get a functional form for the transform that looks the same. If you start with functional forms in time that look the same, you get functional forms in s that look the same.

However, since the region of t was different for the two functions the region of s is different. There's also the negative sign. There's the fact that this flipped this way, and that's related to the fact that my region of integration extends from minus infinity to 0 versus 0 to infinity. The important limit flipped from being the bottom limit to the top limit. OK, that's roughly it.
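
The same sympy sketch works for the left-sided example, with the lower limit moved to minus infinity (again using `conds='none'` to assume convergence; the lecture tells us that requires Re(s) < -1).

```python
# The left-sided counterpart: x3(t) = -e^{-t} for t < 0 (zero otherwise).
# The same functional form 1/(s+1) comes out, but the region flips.
import sympy as sp

t, s = sp.symbols('t s')
X3 = sp.integrate(-sp.exp(-t) * sp.exp(-s * t), (t, -sp.oo, 0), conds='none')
print(sp.simplify(X3))  # 1/(s + 1), valid only for Re(s) < -1
```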

So just to make sure that everybody is with me, what's the Laplace transform of this symmetric looking function?

So which of those functions is the Laplace transform of the symmetric function? It's about 3 calories to raise your hand, right? Good exercise, makes you breathe. The answer is? Oh, come on.

OK, it's more like 90% correct. So not everybody has it right. What am I going to do? It's just like the last one, right? Just stick it in.

If you think about writing the integral of this, the complicated part is this absolute value sign thing. So split it into two parts; that way the absolute value is one thing when t is bigger than 0 and a different thing when t is less than 0. Now it looks like just the sum of two functions that are both trivial. One is right sided, the other is left sided, so we might be expecting something a little funky compared to the last one. So this guy that is left sided in time ends up with a left-sided region of convergence, just like the previous examples.

So we see that this part looks like a pole at 1, so we're going to get a part of the transform that has a pole at 1. And it's going to be valid for the region of s space where the real part of s is less than 1, OK. Then we're going to get this other piece, which corresponds to a pole at minus 1. And it's right sided, so like the right-sided examples that we saw before, there's a right-sided region of convergence here. So it's going to be to the right of its pole, but its pole is at minus 1.

So in sum, the region is going to be to the right of the pole at minus 1, and to the left of the pole at 1, because we have to be in the part of s space where both of those two integrals converge. So we end up with the ROC being the intersection, OK. So we get a pole-zero pattern that looks like this. There are two poles now. When you add together a part that looks like a pole at minus 1 and a part that looks like a pole at 1, you get two poles.

And the region becomes the band in between. Again, because the regions of Laplace transforms, because of Euler's expression, are determined by the real part of s, we're always going to get some kind of vertical band. The band is always going to be delimited by a pole, just like it was in a z transform.

There are two poles. So all we really need to worry about is which region bounded by the two poles we're going to correspond to. OK, so the answer was two. It's written in a little bit of a funky form, but if you factor it you can see that there are two poles, one at 1 and one at minus 1.
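
A sympy sketch of the two-sided example, splitting e^(-|t|) at t = 0 as described above: the right-sided piece has a pole at -1 with ROC Re(s) > -1, the left-sided piece has a pole at +1 with ROC Re(s) < 1, and the transform is the sum, valid on the overlap.

```python
# Two-sided signal e^{-|t|}: transform each half and add.
import sympy as sp

t, s = sp.symbols('t s')
right = sp.integrate(sp.exp(-t) * sp.exp(-s * t), (t, 0, sp.oo), conds='none')  # 1/(s+1)
left = sp.integrate(sp.exp(t) * sp.exp(-s * t), (t, -sp.oo, 0), conds='none')   # 1/(1-s)
print(sp.factor(right + left))  # -> -2/((s - 1)*(s + 1)), ROC: -1 < Re(s) < 1
```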

So hopefully that's all trivial, that's all stuff you can do in your sleep. What I want to do is think a little bit about what the implications are. What are we doing? What is a region of convergence?

So I want to think now about four different functions, and four different pole-zero patterns, and four different regions. And I want to think about the defining integral, that one up there. I want to think about X(s) as the integral over all of time of x(t) e^(-st) dt.

What are we doing when we take a Laplace transform? And what are we doing when we specify a region of convergence? Because this can help you think about what the answer should be, which is helpful when you're trying to think about: does my answer make any sense? Or what am I doing anyway?

So x1, this was e to the minus t. So it had a pole at minus 1 and the region to the right. Right-sided function, right-sided region. Here we have a more complicated function, which can be written as the sum of two poles, one at minus 1 and one at minus 2. And we got a region that was to the right of the rightmost pole. Right-sided function, right-sided region.

Here was x3, which was the negative part of that one. We got the same pole at minus 1 with the left-sided region. And here was the symmetric one, where we found that the region was in between. So what I want to think about now is what happens when we have a particular value of s that is inside or outside those regions. What's really going on?

So let's start by thinking about what would happen if I considered an s depicted by this red x. I probably shouldn't have used x. I should have probably used a star. That's not a pole. That's the value of s in this integral.

What happens if I choose to integrate against the function e^(-st) where s is 0? If I choose to integrate against that function, that function is depicted in red here. So the convergence of the integral, the convergence of the thing that I'm calling a Laplace transform, depends entirely on the convergence of x. That's what s equals 0 means. s equals 0 says the convergence of the transform depends entirely on the convergence of the function.

So that means that this function converges, it's in the region of convergence. This one converges, it's in the region of convergence. This one does not converge, this one diverges.

As you go to minus infinity the function is unbounded. The transform doesn't exist. s equals 0 is not in the region. That's what it means. Here, if I integrate against e to the 0 both sides converge, there is no problem. I'm in the region.

So now if I think about a different s, what if I move s a little bit to the left? Say, minus 1/2. If I'm thinking about integrating against e to the minus st, and if s is minus 1/2, that's e to the 1/2 t. e to the 1/2 t is a function that for all time becomes greater as I go toward positive infinity, exponentially greater. So if I think about what happens if I multiplied this x of t by that weighting function, does the product converge or diverge?

It converges because the convergence of the blue line is faster than the divergence of the red line. Even though the red line is diverging, the product is overall convergent.

OK that's because of the relative positions of the red x and the blue x. The blue function is converging faster than the red function is diverging. The product converges. So this x is in the region of convergence. That's what it means.

Similarly here, this is diverging. The red curve is still diverging. It's the same red curve; all these red curves are diverging to the right. This one is converging. It's ultimately convergent as the sum of two things, e to the minus t and e to the minus 2t, both of which are fast compared to the explosion of e to the 1/2 t. So I'm in the region.

Here it's exploding in a region of time that I don't care about. And it's convergent for this region of time that I do care about, but it's not convergent enough. So the integral still diverges, I'm not in the region.

And finally, this one is more complicated. The integrand, this thing that I'm integrating against, it's always becoming greater as I go toward positive infinity. That tends to be convergent on this side and divergent on that side, but it's not divergent enough to make the function not converge. So I'm in the region.

If I make the exponent even bigger. So say I put the s in e^(-st) at minus 1 1/2; then that weighting grows fast enough that it breaks the product. The product is no longer integrable. So this divergent trend is enough to make that diverge. I'm not in the region anymore.

Similarly here, I'm not in the region, because I've made it fast enough that one of the parts diverges. One of the parts converges, but they both have to converge in order for the integral to converge. Here, I have finally made the left-hand side convergent enough to make the product converge. So that's fine. And here, this is accelerating so fast that although it was stabilizing here, it became non-convergent over here.

What I want you to get from this is the idea that you can think physically about what the region of convergence is. The region of convergence tells you which weighting functions e^(-st) you can stick into the integral and still have the integral converge. So the region of convergence corresponds to those exponents s for which the integral converges.
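
You can feel this out numerically too. A sketch, assuming Python with scipy: weight x(t) = e^(-|t|) by e^(-st) for a few real values of s and integrate. Inside the band -1 < Re(s) < 1 the integral settles to 2/(1 - s^2); outside, the integrand grows without bound.

```python
# Probe the region of convergence of e^{-|t|} by direct integration.
import numpy as np
from scipy.integrate import quad

def weighted_integral(s):
    f = lambda t: np.exp(-abs(t)) * np.exp(-s * t)
    lower, _ = quad(f, -np.inf, 0)
    upper, _ = quad(f, 0, np.inf)
    return lower + upper

for s in (-0.5, 0.0, 0.5):
    print(s, weighted_integral(s))  # finite: s is inside the region
# Try s = 1.5 or s = -1.5: the product diverges, so quad fails to
# converge -- exactly what "outside the region of convergence" means.
```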

OK, with that vast new insight this problem is now trivial. Enumerate all possible functions x of t for which that's the transform.

So how many functions are there? Keep going, keep going. I'm still looking for a right answer. That's a clue. Right, I don't see a right answer yet. So keep going. Keep going.

Actually, I do now. I see two right answers. Can we make it three? Going once. Going twice. Two right answers?

OK, the right answer is three. So now that I've told you the right answer, you rationalize for me why is the right answer three? Talk to your neighbor. Why is the right answer three?

[INTERPOSING VOICES]

DENNIS FREEMAN: OK, who can volunteer a concise statement for why the right answer is three and not four? Yes?

[INAUDIBLE]

DENNIS FREEMAN: That's exactly right. There are two poles, and there are three ways you can divide up the s plane with two poles. So there are four functions here; let's think about four s planes.

So each of these functions-- so here's a pole at minus 2. Here's a pole at 2. Minus 2, 2, minus 2, 2, minus 2, 2. They all have the same poles.

OK, where's the region that corresponds to function one? So here's a pole at minus 2. e to the minus 2t corresponds to a pole at minus 2. u of t is a right-sided function.

Where's the region that corresponds to the pole minus 2? To the right of minus 2. So this first function converges to the right of minus 2. This function is a pole at 2, e to the 2t and it's right sided. So it's right sided with regard to a pole at 2.

So what's the region that corresponds to a right-sided time function with a pole at 2? It's right sided with regard to here. So the net region of convergence is that region. Make sense?

So then for the second function I have a pole at minus 2, which is right sided. OK, so that's this. And a pole at 2, which is left sided. So that's this. So that corresponds to a region here.

Then the third one I have a pole at minus 2, which is left sided. So that means I want to have convergence here. And I have a pole at 2 which is right sided, and I need convergence here.

There's no way to make that convergent. If I choose my s to make the first part convergent then my second part is not convergent, and vice versa. So there's no way to do that one. So this is no. This is yes. This is yes.

And then finally, this is a pole at minus 2 to the left. And a pole at 2 to the left. So that's that one. So that's OK. So the idea is that there are only three ways to chop up a space delimited by two poles. OK, it looks like you ought to have four because we're used to thinking about the two-by-two matrix. That's not the way it works.
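
Here is the bookkeeping written out, assuming (as the discussion implies) a transform with poles at s = -2 and s = +2; the exact X(s) on the slide isn't reproduced in the transcript.

```latex
% Sidedness bookkeeping for a transform with poles at s = -2 and s = +2.
% Each pole's piece can be right- or left-sided; the ROCs must intersect.
\begin{align*}
\text{both right-sided} &\;\Rightarrow\; \text{ROC: } \operatorname{Re}(s) > 2 \\
\text{right at } -2,\ \text{left at } +2 &\;\Rightarrow\; \text{ROC: } -2 < \operatorname{Re}(s) < 2 \\
\text{left at } -2,\ \text{right at } +2 &\;\Rightarrow\; \operatorname{Re}(s) < -2 \text{ and } \operatorname{Re}(s) > 2:\ \text{empty, impossible} \\
\text{both left-sided} &\;\Rightarrow\; \text{ROC: } \operatorname{Re}(s) < -2
\end{align*}
```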

OK, two last things. First last thing. Probably the most important thing about the Laplace transform, probably the only reason we bother with it at all, is that you can use it to solve differential equations. That's probably the most important reason for even talking about it.

The fact that we can take it is of zero consequence. We can take lots of integrals. The interesting thing about this integral is that it helps us to solve differential equations. And the trick is that if we start with the differential equation we can take the Laplace transform of the whole differential equation and that will end up making sense.

Just like the z transform, the Laplace transform is linear. The Laplace transform of a sum is the sum of the Laplace transforms. That's pretty simple just by looking at the definition, because we can distribute the e to the minus st over a sum: if x were a sum, it distributes. So that's easy.

The other important thing is the derivative. What's the Laplace transform of a derivative? That turns out to be easy. If that weren't easy, we wouldn't even bother with it. But it is easy.

You can see that if we make the assumption that X is the Laplace transform of x, and then if we define y as the derivative of x, we can ask what's the Laplace transform of y, and that will tell us the Laplace transform of the derivative. Take the definition of the transform, stick in y equals x dot, and integrate by parts. You all can integrate by parts much better than I can because I took it 35 years ago. But it's very easy to see that if we call this u and this v dot, the integral of u dv is uv minus the integral of v du.

The trick is that taking integrals and derivatives of this part is easy, it's an exponential. Exponentials are the one function whose derivative has the same shape as the function. That's the reason it's easy.

So that means that I can easily integrate that part. I can easily differentiate that part to get something that looks just like this, except it's multiplied by minus s. And that minus s is the only thing that ends up being important. The Laplace transform of the derivative is s times the Laplace transform of the original function.
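
Written out, with one consistent choice of u and v (and assuming, as being inside the ROC guarantees, that the boundary term vanishes at both ends):

```latex
% The derivative rule by parts, with u = e^{-st} and dv = x'(t) dt.
\begin{align*}
Y(s) = \int_{-\infty}^{\infty} \dot{x}(t)\, e^{-st}\, dt
     &= \Big[\, x(t)\, e^{-st} \,\Big]_{-\infty}^{\infty}
        - \int_{-\infty}^{\infty} x(t)\, (-s)\, e^{-st}\, dt \\
     &= 0 + s \int_{-\infty}^{\infty} x(t)\, e^{-st}\, dt
      = s\, X(s).
\end{align*}
```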

OK, differentiating time is the same as multiplying by s in a Laplace transform. Because of that it's trivial to think about the Laplace transform of a differential equation. Linearity says you can do it term wise.

And a differential equation is something that has a bunch of derivatives in it. Those all turn into multiplications by s. So you end up then with this differential equation being replaced by this. So the Laplace transform of the derivative is s times Y. The Laplace transform of y is Y.

Now I have to figure out the Laplace transform of delta. OK, turns out that's easy. What's the Laplace transform of delta? That's why we like delta functions. Delta functions seem mathematically bizarre but they're so easy to work with, that's the only reason we use them. If you think about the Laplace transform of delta just stick it in the formula.

The interesting thing about the delta function is that it's 0 almost everywhere. So we don't need to worry about the product except where the delta function is not 0. Because the delta function makes the time axis mostly 0. The only place that's not 0 is 0. So the only value of this function that matters in the least is its value at 0.

Think about what that looks like as a picture. So we have something that looks like e to the minus st, and we multiply by an impulse. The impulse is 0 except at 0. The only thing that survives that multiplication is the value of e to the minus st at t equals 0. We can think about that as a limit.

We thought about the delta function as the limit of a rectangular pulse that runs from minus epsilon to epsilon with height 1 over 2 epsilon, so that the limit as epsilon goes to 0 becomes a delta. That's how we defined the impulse function. If we think about that definition in terms of this product you can see that you get exactly this expression.

So if you multiply this times this, all you pick out is this part. As you make epsilon smaller, and smaller, and smaller, you focus just on that point right at 0. The area of the impulse is 1. Therefore, the integral of this product is just the value of e to the minus st at 0, which is 1.

So this sifts out. That's what we'll call it. So this is called the sifting property. It's the nice part about the delta function. If you integrate a function with regard to time where the function is multiplied by delta, that integral is the value of the function at 0. It sifts out the 0 value.
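
In one line:

```latex
% Sifting: delta zeroes the integrand everywhere except t = 0,
% leaving the value of e^{-st} there.
\[
\int_{-\infty}^{\infty} \delta(t)\, e^{-st}\, dt \;=\; e^{-s \cdot 0} \;=\; 1 .
\]
```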

That means that the Laplace transform of the delta function is 1. It's the simplest function you can imagine. So the net effect is that the Laplace transform of the differential equation becomes an algebraic equation. That means that we can solve it by doing algebra.

So in particular, we can readily solve the equation to find an expression for Y. We saw that previously. We recognize Y(s). It's a recognition thing.

It's a table look-up sort of thing. We know the form 1 over s plus 1, so we know the time function that is the answer. So notice that when we do this, it works just the way it happened with the z transform.

When we solve a differential equation by using the Laplace transform, we basically don't use calculus. We use the Laplace transform to turn the differential equation, which is calculus, 18.03 and that kind of stuff, into algebra, which is high school. We do everything with algebra.
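
Here is the whole recipe as algebra, in a sympy sketch. The slide's equation isn't in the transcript; this assumes it was y'(t) + y(t) = delta(t), which matches the 1/(s+1) form recognized above. Derivatives become multiplication by s, the transform of delta is 1, and the rest is algebra plus a table lookup.

```python
import sympy as sp

s, t = sp.symbols('s t')
Y = sp.symbols('Y')

# Transform y' + y = delta(t) term by term: s*Y + Y = 1, then solve for Y.
Ys = sp.solve(sp.Eq(s * Y + Y, 1), Y)[0]
print(Ys)  # 1/(s + 1)

# The "table lookup" step, done by sympy (right-sided region assumed):
print(sp.inverse_laplace_transform(Ys, s, t))  # exp(-t)*Heaviside(t)
```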

The only annoying thing is this table look up thing. It's a little annoying that we had to do the inverse Laplace transform by table look up. It's a little dissatisfying.

So I'll just mention that there is a formal way of not doing table look up. For the kinds of problems we will look at, using this formula is so much more difficult than table look up that we will never use that formula. It will be useful for proving things; it will not be useful for inverting things. Because of the form of the things we do, linear differential equations with constant coefficients, it will always be easier to do table look up than it will be to run this integral.

If you're interested in that integral, fine, take 18.04. There's a bunch of people who know all about how that integral works. It's spectacularly interesting but we won't use it here.

So the upshot then is that we learn about the Laplace transform because it's very useful, and its utility comes from a bunch of properties. Of the ones we illustrated today, we used the fact that it's linear and we used the fact that there's a simple relationship with differentiation. Using just those facts it turns out to be easy to solve differential equations using the Laplace transform. And there are many other things illustrated in the last two slides, but I won't go over them right now.

The idea of taking limits using Laplace transforms: it turns out that the Laplace transform is what we call a reciprocal function. Since the integrand depends on the product of s and t, s getting large corresponds to t getting small. So you can take the limit for things with time getting small by looking at s getting big.

Similarly, you can do the reverse. You can look at time getting big by looking at s getting small. That's just a property of the Laplace transform. And there are lots of others.
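
That reciprocal idea is usually packaged as the initial and final value theorems; a standard statement (for right-sided signals, when the limits exist) is:

```latex
% Small t pairs with large s and vice versa.
\[
x(0^{+}) \;=\; \lim_{s \to \infty} s\,X(s),
\qquad
\lim_{t \to \infty} x(t) \;=\; \lim_{s \to 0} s\,X(s).
\]
```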

Besides the ones in the table, there are lots of properties we won't have time to go over, but I will hint at some of them. In particular, there's a very useful transformation where you take the Laplace transform of a circuit. And that's in the homework. So that's extremely useful.

The major point then is just that there's lots of relationships, there's lots of ways we think about CT systems. This Laplace transform is a new one and it's very useful because of the properties in that table. OK, thanks a lot.
