Lecture 15: Antiderivatives
Instructor: Prof. David Jerison
Topics covered: Differentials, antiderivatives
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: Today we're moving on from theoretical things, from the mean value theorem, to the introduction to what's going to occupy us for the whole rest of the course, which is integration. So, in order to introduce that subject, I need to introduce for you a new notation, which is called differentials. I'm going to tell you what a differential is, and we'll get used to using it over time. If you have a function y = f(x), then the differential of y is denoted dy, and it's by definition f'(x) dx. So that's the notation. And because y is really equal to f, it's also called the differential of f. It's the same thing as what happens if you formally take this dx, act like it's a number, and divide it into dy; in other words, it means the same thing as the statement dy/dx = f'(x). And this is more or less the Leibniz interpretation of a derivative, as a ratio of these so-called differentials. It's a ratio of what are known as infinitesimals.
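(A minimal sketch of this notation in SymPy, assuming SymPy is available; the symbol dx here is just an ordinary symbolic placeholder standing in for the increment, not a built-in differential object.)

```python
# Sketch: forming the differential dy = f'(x) dx symbolically.
import sympy as sp

x, dx = sp.symbols('x dx')
f = x**sp.Rational(1, 3)      # y = f(x) = x^(1/3)
dy = sp.diff(f, x) * dx       # dy = f'(x) dx
print(dy)                     # roughly dx/(3*x**(2/3))
```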
Now, this is kind of a vague notion, this little bit here being an infinitesimal. It's sort of like an infinitely small quantity. Leibniz perfected the idea of dealing with these intuitively, and subsequently mathematicians have used them all the time. They're way more effective than the notation that Newton used. You might think that notations are a small matter, but they can allow you to think much faster when you have the right names and the right symbols for everything. And in this case it made a very big difference. Leibniz's notation was adopted on the continent, Newton's dominated in Britain, and, as a result, the British fell behind by one or two hundred years in the development of calculus. It was really a serious matter. So it's well worth your while to get used to this idea of ratios. It comes up all over the place, both in this class and in multivariable calculus. It's used in many contexts.
So first of all, just to go a little bit easy, we'll illustrate it by its use in linear approximations, which we've already done. The picture here, which we've drawn a number of times, is that you have some function, and here's a value of the function, and it's coming up like that. So here's our function, and we go forward a little increment to a place which is dx further along. The idea of this notation is that dx is going to replace the symbol delta x, the change in x. This is a small quantity, and we're not going to think too hard about what that means. Now, similarly, you can see how much we've gone up - well, this is kind of low, so it's a small bit here.
So this distance here is what we previously called delta y, but now we're just going to call it dy. So dy replaces delta y. This is the change in the level of the function, and we'll represent it symbolically this way. Very frequently, this just saves a little bit of notation. For these purposes, we'll be doing the same things we did with delta x and delta y, but this is the way that Leibniz thought of it, and he would just have drawn it with this. So this distance here is dx and this distance here is dy. Now, for an example of linear approximation, let's ask: what is 64.1 to the 1/3 power, approximately? I'm going to carry this out in this new notation. The function involved is y = x^(1/3), and I want its differential, dy. I want to use this rule to get used to it, because what we're going to be doing all of today is differentiating, or taking the differential of, y. So dy is going to be just the derivative times dx: that's 1/3 x^(-2/3) dx. And now I'm just going to fill in exactly what this is. At x = 64, which is the natural place close by where it's easy to do the evaluations, we have y = 64^(1/3), which is just 4.
And how about dy? Well, this is a little bit more complicated; put it over here. dy = 1/3 64^(-2/3) dx, and that is 1/3 * 1/16 dx, which is 1/48 dx. And now I'm going to work out what 64.1 to this fractional power is. I just want to be very careful to explain to you one more thing, which is that we're using x = 64, and so we're thinking of x + dx as being 64.1. So that means that dx is going to be 1/10; that's the increment that we're interested in. And now I can carry out the approximation. The approximation says that 64.1^(1/3) is approximately what I'm going to call y + dy. Because, really, the dy that I'm determining here is determined by this linear relation, dy = 1/48 dx, this is only approximately true. What's really true is that this is equal to y + delta y, in our previous notation. So this is, in disguise, what this is equal to, and that's only approximately equal to what the linear approximation would give you. So, really, even though I wrote dy as this increment here, what it really is, if dx is exactly that, is the amount you would go up if you went straight up the tangent line. But I'm not going to write it that way, because that's not what people write, and that's not even what they think. They're really thinking of both dx and dy as being infinitesimally small, and here we're going to the finite level and doing it. So this is just something you have to live with: a little ambiguity in this notation.
This is the approximation, and now I can just calculate these numbers. y at this value is 4, and dy, as I said, is 1/48 dx. So that turns out to be 4 + 1/480, because dx is 1/10, which is approximately 4.002. And that's our approximation. Now, let's just compare it to our previous notation. This will serve as a review, if you like, of linear approximation. But what I want to emphasize is that these are supposed to be the same; it's just a different notation for the same thing. I remind you the basic formula for linear approximation is that f(x) is approximately f(a) + f'(a)(x - a). And we're applying it in the situation a = 64 and f(x) = x^(1/3). So f(a), which is f(64), is of course 4. And f'(a), which is 1/3 a^(-2/3), is in our case 1/16 - no, 1/48. OK, that's the same calculation as before. And then our relationship becomes: x^(1/3) is approximately equal to 4 plus 1/48 times (x - 64). So look, every single number that I've written over here has a corresponding number for this other method. And now if I plug in the value we happen to want, which is 64.1, this would be 4 + (1/48)(1/10), which is just the same thing we had before. So again, same answer. Same method, new notation.
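(A quick numerical check of this approximation - a minimal sketch in plain Python, assuming nothing beyond the standard interpreter.)

```python
# Check the linear approximation 64.1^(1/3) ~ 4 + (1/48)(1/10).
approx = 4 + (1/48) * (1/10)   # y + dy with dx = 0.1
exact = 64.1 ** (1/3)          # the quantity being approximated
print(approx)                  # 4.002083...
print(exact)                   # 4.002082...
```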
Well, now I get to use this notation in a novel way. So again, here's the notation, this notation of differentials. The way I'm going to use it is in discussing something called the antiderivative. Again, this is a new notation, but it's also a new idea, one that we haven't discussed yet. Namely, the notation that I want to describe here is what's called the integral of g(x) dx, and I'll denote that by a function capital G of x. So you start with a function g(x) and you produce a function capital G(x), which is called the antiderivative of g. Notice there's a differential sitting in here. This symbol, this guy here, is called an integral sign, and this whole thing is called an integral. Another name for the antiderivative of g is the indefinite integral of g. And I'll explain to you why it's indefinite very shortly.
Well, so let's carry out some examples. Basically what I'd like to do is as many examples as possible along the lines of all the derivatives that we derived at the beginning of the course. In other words, in principle you want to be able to integrate as many things as possible. We're going to start out with the integral of sin x dx. That's a function whose derivative is sin x. So what function would that be? Cosine x - minus, right. It's -cos x. So -cos x, differentiated, gives you sin x. So that is an antiderivative of sine, and it satisfies this property. So this function, G(x) = -cos x, has the property that its derivative is sin x. On the other hand, if you differentiate a constant, you get 0. So this answer is what's called indefinite, because you can also add any constant here, and the same thing will be true. So c is a constant, and, as I said, the integral is called indefinite. That's the explanation for this modifier in front of "integral": it's indefinite because we actually didn't specify a single function; we don't get a single answer. Whenever you take the antiderivative of something, it's ambiguous up to a constant.
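(A minimal SymPy sketch of this point, assuming SymPy is available: -cos x + C differentiates to sin x for any constant C, which is exactly why the constant can't be pinned down.)

```python
# Sketch: verifying that -cos(x) + C is an antiderivative of sin(x) for any C.
import sympy as sp

x, C = sp.symbols('x C')
candidate = -sp.cos(x) + C
print(sp.diff(candidate, x))   # sin(x), independent of C
```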
Next, let's do some other standard functions from our repertoire. We have the integral of x^a dx - some power, the integral of a power. And if you think about it, what you should be differentiating is one power larger than that. But then you have to multiply by 1/(a+1), in order that the differentiation be correct. So this is just the fact that d/dx of x^(a+1), or maybe I should say it in differential notation: d(x^(a+1)) = (a+1) x^a dx. So if I divide that through by a+1, then I get the relation above. And because this is ambiguous up to a constant, it could be any additional constant added to that function.
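(A small sketch checking the power rule for antiderivatives by differentiating the candidate, assuming SymPy; a is a general symbolic exponent.)

```python
# Sketch: differentiate x^(a+1)/(a+1) and confirm we recover x^a.
import sympy as sp

x, a = sp.symbols('x a')
candidate = x**(a + 1) / (a + 1)
print(sp.simplify(sp.diff(candidate, x)))   # x**a
```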
Now, the identity that I wrote down below is correct. But this one is not always correct. What's the exception? Yeah. a equals--
STUDENT: 0.
PROFESSOR: Negative 1. So this one is OK for all a, but this one fails because we've divided by 0 when a = -1. So this is only true when a is not equal to -1. And, in fact, what's happening when a = -1 is that a + 1 = 0, so x^(a+1) is a constant, and you get 0 when you differentiate the constant. So there's a third case that we have to carry out, which is the exceptional case, namely the integral of dx/x. And this time, what we're doing is thinking backwards, which is a very important thing to do in math at all stages. We got all of our formulas; now we're reading them backwards. And this one, you may remember, is ln x.
The reason why I want to do this carefully and slowly now is that I also want to write the more standard form in which this is presented. So first of all, we have to add a constant. And please don't put the parentheses here; the parentheses go there. But there's another formula hiding in the woodwork behind this one, which is that you can also get the correct formula when x is negative, and that turns out to be this one here, ln |x| + c. So I'm treating the case x positive as being something that you know, but let's check the case x negative. In order to check the case x negative, I have to differentiate the logarithm of the absolute value of x in that case. And that's the same thing, for x negative, as the derivative of the logarithm of negative x; that's the formula when x is negative. And if you carry that out, what you get - maybe I'll put this over here - is, well, it's the chain rule: it's 1/(-x) times the derivative with respect to x of -x.
So you see that there are two minus signs: there's a -x in the denominator, and then there's the derivative of -x in the numerator, which is just -1. So this part is -1, and -1 over -x is 1/x. The negative signs cancel. If you keep track of this in terms of ln(-x) and its graph, that's a function that looks like this, for x negative. And its derivative is 1/x, I claim. If you look at it a little bit carefully, you see that the slope is always negative, right? So here the slope is negative, so the derivative is going to be below the axis. And, in fact, the graph is getting steeper and steeper, more and more negative in slope, as we go down toward 0, and less and less negative as we go out horizontally. So the derivative is going like this, which is indeed the graph of 1/x, for x negative.
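(A quick sketch of that chain-rule check in SymPy, assuming SymPy is available; the negative assumption on x keeps ln(-x) real.)

```python
# Sketch: for x < 0, d/dx ln(-x) = 1/x; the two minus signs cancel.
import sympy as sp

x = sp.symbols('x', negative=True)
print(sp.diff(sp.log(-x), x))   # 1/x
```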
So that's one other standard formula. And very often we won't put the absolute value signs; we'll only consider the case x positive. But I just want you to have the tools to handle both positive and negative x. Now, let's do two more examples. The integral of sec^2 x dx - these are supposed to get you to remember all of your standard differentiation formulas. So the integral of sec^2 x dx is what? tan x. And here we have + c, all right? And then the last couple of this type - let's see, I should do at least this one here: the integral of dx over the square root of 1 - x^2. This is another notation, by the way, which is perfectly acceptable; notice I've put the dx in the numerator and the function in the denominator. So this one turns out to be sin^(-1) x, plus a constant. And, finally, how about the integral of dx / (1 + x^2)? That one is tan^(-1) x, plus a constant.
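(A sketch letting SymPy reproduce this short table, assuming SymPy is available; SymPy reports each answer without the arbitrary constant, and its printed forms may differ slightly from the blackboard ones.)

```python
# Sketch: standard antiderivatives, each determined only up to an additive constant.
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(sp.sec(x)**2, x))        # tan(x) (possibly printed as sin(x)/cos(x))
print(sp.integrate(1/sp.sqrt(1 - x**2), x)) # asin(x)
print(sp.integrate(1/(1 + x**2), x))        # atan(x)
```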
For a little while, because you're reading these things backwards and forwards, you'll find this happens to you on exams - it gets slightly worse for a little while: you will antidifferentiate when you meant to differentiate, and you'll differentiate when you meant to antidifferentiate. Don't get too frustrated by that. Eventually you'll get them squared away. And it actually helps to do a lot of practice with antidifferentiations, or integrations, as they're sometimes called, because that will solidify your memory of all the differentiation formulas. So, the last bit of information that I want to emphasize before we go on to some more complicated examples is this. It's obvious, because the derivative of a constant is 0, that the antiderivative is ambiguous up to a constant. But it's very important to realize that this is the only ambiguity that there is.
So the last thing that I want to tell you about is the uniqueness of antiderivatives up to a constant. The theorem is the following: if F' = G', then F(x) = G(x) plus a constant. That means not only that all these things with these plus c's are antiderivatives, but that these are the only ones. Which is very reassuring. And that's a kind of uniqueness; although it's uniqueness only up to a constant, that's acceptable to us. Now, the proof of this is very quick, but this is a fundamental fact. The proof is the following. If F' = G', then if you take the difference between the two functions, its derivative, which of course is F' - G', is equal to 0. Hence, F(x) - G(x) is a constant. Now, this is a key fact, a very important fact; we deduced it last time from the mean value theorem. It's not a small matter. It's the basis for calculus; it's the reason why calculus makes sense. If we didn't have the fact that derivative 0 implies the function is constant, calculus would be just useless for us. The point is, the rate of change is supposed to determine the function up to its starting value. So this conclusion is very important, and we already checked it last time. And now, just by algebra, I can rearrange this to say that F(x) is equal to G(x) plus a constant.
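(The same argument, compressed into symbols - a short worked chain, nothing beyond what was just said in words.)

```latex
\[
\begin{aligned}
F' = G' \;&\Longrightarrow\; (F-G)' = F' - G' = 0 \\
          &\Longrightarrow\; F(x) - G(x) = c \quad \text{(zero derivative forces a constant, by the MVT)} \\
          &\Longrightarrow\; F(x) = G(x) + c.
\end{aligned}
\]
```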
Now, maybe I should leave differentials up here, because I want to illustrate-- so let's go on to some slightly trickier integrals. Here's an example: the integral of, say, x^3 (x^4 + 2)^5 dx. This is a function which you actually do know how to integrate, because we already have a formula for all powers - namely, the integral of x^a is equal to this - and even if it were a negative power, we could do it. So it's OK. On the other hand, to expand the 5th power here is quite a mess, and that's just a very, very bad idea. There's another trick for doing this that evaluates it much more efficiently. And it's the only device that we're going to learn now for integrating. Integration actually is much harder than differentiation, symbolically. It's quite difficult, and occasionally impossible. And so we have to go about it gently. But for the purposes of this unit, we're only going to use one method, which is very good. That means whenever you see an integral, either you'll be able to divine immediately what the answer is, or you'll use this method. So this is it. The trick is called the method of substitution, and it is tailor-made for the notion of differentials, tailor-made for differential notation.
The idea is the following. I'm going to define a new function, and it's the messiest function that I see here: u = x^4 + 2. And then I'm going to take its differential, and what I discover, if I look at its formula and the rule for differentials, which is right here, is that du = 4x^3 dx. Now, lo and behold, with these two quantities I can substitute, I can plug in to this integral, and I will simplify it considerably. So how does that happen? Well, this integral is the same thing as - well, really, I should combine it the other way, so let me move this over. There are two pieces here: this one is u^5, and this one is 1/4 du. Now, that makes it the integral of u^5 du / 4, and that's relatively easy to integrate; that is just a power. So let's see. It's just 1/20 u to the-- whoops, not 1/20. The antiderivative of u^5 is u^6, with a 1/6, so it's 1/24 u^6 + c. Now, that's not the answer to the question; it's almost the answer. Why isn't it the answer? Because now the answer is expressed in terms of u, whereas the problem was posed in terms of the variable x. So we must change back to our variable here, and we do that just by writing it in. So it's 1/24 (x^4 + 2)^6 + c, and this is the end of the problem. Yeah, question.
STUDENT: [INAUDIBLE]
PROFESSOR: The question is, can you see it directly? Yeah. And we're going to talk about that in just one second. OK.
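(Before moving on, here is a sketch of that substitution done symbolically, assuming SymPy is available; u is introduced purely as a placeholder symbol, exactly as on the board.)

```python
# Sketch: u = x^4 + 2, integrate u^5/4 in u, substitute back, then check by differentiating.
import sympy as sp

x, u = sp.symbols('x u')
inner = sp.integrate(u**5 / 4, u)            # u**6/24
answer = inner.subs(u, x**4 + 2)             # (x**4 + 2)**6/24
# The derivative of the answer should recover the original integrand x^3 (x^4 + 2)^5.
print(sp.simplify(sp.diff(answer, x) - x**3 * (x**4 + 2)**5))   # 0
```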
Now, I'm going to do one more example and illustrate this method. Here's another example: the integral of x dx over the square root of 1 + x^2. Now, the method of substitution leads us to the idea u = 1 + x^2, du = 2x dx, etc. It takes about as long as the other problem did to figure out what's going on; it's a very similar sort of thing. It leads to the integral of u^(-1/2) du, and you end up integrating u^(-1/2). Is everybody seeing where this...? However, there is a slightly better method, so here's the recommended method, which I call advanced guessing. What advanced guessing means is that you've done enough of these problems that you can see two steps ahead, and you know what's going to happen. So the advanced guessing leads you to believe that, since here you had a power -1/2 and here you have the differential of the thing, it's going to work out somehow. And the advanced guessing allows you to guess that the answer should be something like (1 + x^2)^(1/2). So this is your advanced guess, and now you just differentiate it and see whether it works. Well, here it is: it's 1/2 (1 + x^2)^(-1/2) * 2x - that's the chain rule - which, sure enough, gives you x over the square root of 1 + x^2. So we're done, and the answer is (1 + x^2)^(1/2) + c.
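(A quick sketch of the "differentiate the guess and see" step, assuming SymPy is available.)

```python
# Sketch: advanced guessing - differentiate the guess sqrt(1 + x^2) and compare to the integrand.
import sympy as sp

x = sp.symbols('x')
guess = sp.sqrt(1 + x**2)
print(sp.diff(guess, x))   # x/sqrt(x**2 + 1), which is the original integrand
```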
Let me illustrate this further with another example. I strongly recommend that you do this, but you have to get used to it. So here's another example: the integral of e^(6x) dx. My advanced guess is e^(6x), and if I check, when I differentiate it, I get 6e^(6x). That's the derivative, and so now I know what the answer is: it's 1/6 e^(6x) + c. Now, it's also OK, but slow, to use a substitution, u = 6x. Then you're going to get du = 6 dx, dot, dot, dot. It's going to work; it's just a waste of time.
Well, I'm going to give you a couple more examples. So how about this one. x e^(-x^2) dx. What's the guess? Anybody have a guess? Well, you could also correct. So I don't want you to bother - yeah, go ahead.
STUDENT: [INAUDIBLE]
PROFESSOR: Yeah, so you're already one step ahead of me, because this one is too easy. When they get more complicated, you just want to make this guess here. So various people have said 1/2, and they understand that there's a 1/2 going here. But let me just show you what happens, OK? If you make this guess, e^(-x^2), and you differentiate it, what you get is e^(-x^2) times the derivative of -x^2, which is -2x. So now you see that you're off by a factor of not 2, but -2. A number of you were saying that. So the answer is -1/2 e^(-x^2) + c. And I can guarantee you, having watched this on various problems, that people who don't write this out make arithmetic mistakes. In other words, there is a limit to how much people can think ahead and guess correctly. Another way of doing it, by the way, is simply to write this thing in and then fix the coefficient by doing the differentiation here. That's perfectly OK as well.
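(A sketch of that last remark - write the guess with an unknown coefficient and let the differentiation fix it - assuming SymPy is available; the symbol k is purely illustrative.)

```python
# Sketch: guess k*e^(-x^2), then solve for the k that makes its derivative equal x*e^(-x^2).
import sympy as sp

x, k = sp.symbols('x k')
guess = k * sp.exp(-x**2)
integrand = x * sp.exp(-x**2)
k_value = sp.solve(sp.Eq(sp.diff(guess, x), integrand), k)[0]
print(k_value)   # -1/2
```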
All right, one more example. We're going to integrate sin x cos x dx. So what's a good guess for this one?
STUDENT: [INAUDIBLE]
PROFESSOR: Someone is suggesting sin^2 x. So let's try that - over 2; well, we'll get the coefficient in just a second. So sin^2 x, if I differentiate, gives 2 sin x cos x. That's off by a factor of 2, so the answer is 1/2 sin^2 x, plus a constant. But now I want to point out to you that there's another way of doing this problem. It's also true that if you differentiate cos^2 x, you get 2 cos x (-sin x). So another answer is that the integral of sin x cos x dx is equal to -1/2 cos^2 x + c. So what is going on here? What's the problem with this?
STUDENT: [INAUDIBLE]
PROFESSOR: Pardon me?
STUDENT: [INAUDIBLE]
PROFESSOR: Integrals aren't unique. That's part of the-- but somehow these two answers still have to be the same.
STUDENT: [INAUDIBLE]
PROFESSOR: OK. What do you think?
STUDENT: If you add them together, you just get c.
PROFESSOR: If you add them together you get c. Well, actually, that's almost right. That's not what you want to do, though; you don't want to add them, you want to subtract them. So let's see what happens when you subtract them. I'm going to ignore the c for the time being. I get 1/2 sin^2 x - (-1/2 cos^2 x). So the difference between them, we hope, is 0. But actually, of course, it's not 0. What it is, is 1/2 (sin^2 x + cos^2 x), which is 1/2. It's not 0, it's a constant. So what's really going on here is that these two formulas are the same, but you have to understand how to interpret them. There's a constant, c_1, associated to this one, and there's a different constant, c_2, associated to this one. And this family of functions, for all possible c_1's and all possible c_2's, is the same family of functions. Now, what's the relationship between c_1 and c_2? Well, if you do the subtraction, c_1 - c_2 has to be equal to 1/2. They're both constants, but they differ by 1/2. So this explains it: when you're dealing with families of things, they don't have to look the same. And there are lots of trig functions which look a little different, so there can be several formulas that actually are the same. And it can be hard to check that they're actually the same; you need some trig identities to do it.
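(A sketch checking exactly this point, assuming SymPy is available: the two antiderivatives of sin x cos x differ by a constant, so both families of answers are the same.)

```python
# Sketch: the two candidate antiderivatives differ by the constant 1/2.
import sympy as sp

x = sp.symbols('x')
F = sp.sin(x)**2 / 2
G = -sp.cos(x)**2 / 2
print(sp.simplify(F - G))   # 1/2
```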
Let's do one more example here: the integral of dx / (x ln x). Now, you may be thinking - and a lot of people are thinking - ugh, it's got a ln in it. If you're experienced, you can actually read off the answer, just the way several people were shouting out the answers when we were doing the rest of these problems. But you do need to relax, because in this case - and this is definitely not true in general when we do integrals, but for now - they'll all be manageable. And there's only one method, which is substitution. In the substitution method, you want to go for the trickiest part and substitute for that. So the substitution that I propose to you is u = ln x. The advantage that has is that its differential is simpler than the function itself: du = dx / x. Remember, we used that in logarithmic differentiation, too. So now we can express this using the substitution. What we get is the integral of - so I'll divide the two parts here - 1 / ln x, which is 1 / u, times dx / x, which is du. So it's the integral of du / u, and that is ln u + c. Which altogether, if I put back in what u is, is ln(ln x) + c. And now we see some uglier things. In fact, technically speaking, we could take the absolute value here, and then there would be absolute values there: ln |ln x| + c. So this is the type of example where I really would recommend that you actually use the substitution, at least for now. All right, tomorrow we're going to be doing differential equations, and we're going to review for the test. I'm going to give you a handout telling you just exactly what's going to be on the test. So, see you tomorrow.
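(As a closing sanity check on that last antiderivative - a minimal sketch, assuming SymPy is available and taking x positive for simplicity.)

```python
# Sketch: d/dx ln(ln x) = 1/(x ln x), confirming the substitution result.
import sympy as sp

x = sp.symbols('x', positive=True)
print(sp.simplify(sp.diff(sp.log(sp.log(x)), x)))   # 1/(x*log(x))
```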