Video Description: Herb Gross illustrates how the Jacobian arises when changing coordinates in order to calculate a double integral.
Instructor/speaker: Prof. Herbert Gross
Lecture 3: Multiple Integration and the Jacobian
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: Hi. In the reading assignment for today, you'll notice in the textbook that the topic is double integrals in terms of polar coordinates. Now, the trouble is that, aside from straight lines, polar coordinates are perhaps the only coordinate system that we've studied in terms of Euclidean geometry in a high school class. And consequently, if we were to have a change of variables other than strict linear changes of variables or polar coordinates, it might be difficult to determine geometrically what the new double integral looks like with respect to the new variables.
When you read the text-- I can't say "you may recall," since you haven't read it yet-- you'll notice at the end that Professor Thomas says there is a technique called the Jacobian, multiplying by the Jacobian determinant, that tells you how to transfer a double integral from x- and y-coordinates into another coordinate system.
At any rate, with that prologue as background, our aim in today's lecture is to show more generally how the Jacobian sneaks into the study of multiple integrals. In particular, we call today's lecture Multiple Integration and the Jacobian. And by way of review, let me pick a problem that we've solved in the past in great detail, but perhaps from a slightly different perspective, one that will lead into where the Jacobian matrix and the Jacobian determinant come from.
Recall that when we want to compute the definite integral from 1 to 3 of 2x times the square root of x squared plus 1 dx, we make the substitution u equals x squared plus 1. Or, inverting this, x equals the positive square root of u minus 1. And I emphasize the positive to point out that in general the inverse of a squaring function is not one to one. You see, a square root is usually double-valued. But notice that with the restriction that x must be on the interval from 1 through 3, x cannot be negative. And therefore, we certainly can assume that locally, meaning in the region in which we're interested, x is the positive square root of u minus 1.
From this we saw that du was 2x dx. We then went back to this equation here. We replaced 2x dx by its value du. We replaced the square root of x squared plus 1 by the square root of u. And then, noticing that when x equaled 1, u equaled 2 and when x equaled 3, u equaled 10, we wound up with the fact that the number named by this definite integral was the same as the number named by this definite integral.
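As a quick numerical sanity check-- not part of the lecture-- the following sketch compares the two definite integrals; it assumes NumPy and SciPy are available.

```python
# Numerical check that the substitution u = x^2 + 1, du = 2x dx
# turns the first integral into the second without changing its value.
import numpy as np
from scipy.integrate import quad

original, _ = quad(lambda x: 2*x*np.sqrt(x**2 + 1), 1, 3)   # integral from 1 to 3 of 2x*sqrt(x^2+1) dx
substituted, _ = quad(lambda u: np.sqrt(u), 2, 10)          # integral from 2 to 10 of sqrt(u) du

print(original, substituted)   # both are approximately 19.196
```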
Now, the only thing that I'd like to say here as an aside is the following: there is sometimes a tendency to think of dx as just being a symbol over here-- to think of it as saying that all we want is a function whose derivative with respect to x is this, and that other than that, it makes no difference what we put in here.
What I would like you to see at this time-- to review at this time, because we know it happens-- is that if we made the substitutions mentioned here, forgetting about the dx-- in other words, if we replaced x by the positive square root of u minus 1, if we replaced x squared plus 1 by u, and if we replaced the limits 1 to 3 by 2 to 10 and then just tacked on the du to indicate that we were doing this problem with respect to u-- the resulting definite integral would not be equivalent to the original one.
That is not to say that this couldn't be computed. What I mean is this number is incorrect if by this number you mean the value of this definite integral here. And notice that from a pictorial point of view, all we're really saying is that the integral from 1 to 3 of 2x times the square root of x squared plus 1 dx is the area of the region R, where R is that region in the xy-plane bounded between the lines x equals 1 and x equals 3, below by the x-axis, and above by the curve y equals 2x times the square root of x squared plus 1. And I've simply put the values of these endpoints in here-- namely, when x is 1, y is 2 square roots of 2, and when x is 3, y is 6 square roots of 10-- to give you sort of an orientation of this particular curve.
On the other hand, that other integral that was incorrect-- the integral from 2 to 10, et cetera-- is the area of the region S, where S is the region that's obtained by taking that interval from 1 to 3 along the x-axis and mapping it by u equals x squared plus 1; in other words, it goes from 2 to 10. And in fact, you don't even have to know that. All I'm saying is, if you just read this thing mechanically in the uy-plane, this would be the area of the region S where S is bounded vertically by the lines u equals 2 and u equals 10, below by the u-axis, and above by the curve y equals twice the square root of u minus 1 times the square root of u. And it should be clear by inspection that there is no reason to expect that the area of the region R is the same as the area of the region S, even though both R and S have areas.
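Here is a similar sketch of that warning-- again not from the lecture, SciPy assumed-- showing that the two areas really are different.

```python
# The area of R (the correct value) versus the area of S (what you get by
# replacing x with sqrt(u - 1) and simply appending du).
import numpy as np
from scipy.integrate import quad

area_R, _ = quad(lambda x: 2*x*np.sqrt(x**2 + 1), 1, 3)          # approximately 19.2
area_S, _ = quad(lambda u: 2*np.sqrt(u - 1)*np.sqrt(u), 2, 10)   # roughly 87.5

print(area_R, area_S)   # the two regions have very different areas
```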
Now, the whole geometric impact of this on techniques of integration is this. This is a difficult integral to evaluate to find the area. One would hope that we could find a way of scaling an element of area here to correspond to an element of area here which was easier to compute, and that, since the mapping is one to one, by adding up the appropriately scaled pieces here, we equivalently add up the pieces to find the area of the region R.
Now, because I know that sounds vague to you, I am going to do that in much more detail. For the time being let me point out, though, that if I want to view this as a mapping, the interesting thing is that any point on the x-axis maps into the corresponding point on the u-axis by the mapping what? u equals x squared plus 1. But it's important to point out that the values of x and u were not independent. They were chosen to obey the identity u equals x squared plus 1, or x is the positive square root of u minus 1. So what that means geometrically is that whatever height was above a point in the region R along the x-axis-- whatever height was here-- that height is the same when that point is moved to the region S.
Because that again sounds like a difficult mouthful, let me write that. All I'm saying is notice that for the u corresponding to a given x, 2x times the square root of x squared plus 1 is equal to 2 times the square root of u minus 1 times the square root of u.
How do I know that? Well, I know that because I picked x to be the square root of u minus 1. Or equivalently what? u equals x squared plus 1. This says that I can replace x by the square root of u minus 1. This says I can replace x squared plus 1 by u. Consequently, as long as the x matches with the image u, this number is the same as this number.
In other words again, in terms of a picture, if I start with the point on the x-axis x equals 2, and I'm looking at the point P being the point on the region R directly above x equals 2, since 2 gets mapped into 5 by the mapping u equals x squared plus 1-- see, 2 squared plus 1 is 5-- what was the height that went with the point 2 over here? The height that went with the point 2 over here was simply what? 4 square roots of 5. And I claim that that's the same as this, because when x was 2, u is 5: this part, 2 times the square root of u, is 2 square roots of 5; 5 minus 1 is 4, and the square root of 4 is 2; therefore the whole thing is-- 2 times 2-- 4 square roots of 5.
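Written out, the check at x equals 2 (so u equals 5) is:

```latex
\left. 2x\sqrt{x^{2}+1}\,\right|_{x=2} = 4\sqrt{5},
\qquad
\left. 2\sqrt{u-1}\,\sqrt{u}\,\right|_{u=5} = 2\sqrt{4}\,\sqrt{5} = 4\sqrt{5}.
```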
It makes no difference whether you're using the x or the u: the point keeps the same height. It is shifted laterally, but the height is not distorted. Which means, now, if we want to view this not as a mapping from the xy-plane into the uy-plane but more traditionally in terms of the xy-plane into the uv-plane-- that's what this v is in parentheses here-- it means that I might want to rename the y-axis the v-axis, just so that I can use the identification established in the previous block in our course, when we talked about mapping the xy-plane into the uv-plane.
That is not an accent mark over the v. That just happens to be an interruption of this arrow that connects 2 to 5.
But at any rate, in terms of mappings, notice that the region R is mapped onto the region S by the mapping u equals x squared plus 1 and v equals y. In other words, u equals x squared plus 1. But the y-coordinate is the same as the v-coordinate. Just changing the name of the axis here to correspond to the uv-plane.
And now, the idea is simply this. Since-- at least in the domain that we're interested in-- the mapping u equals x squared plus 1, v equals y maps R onto S in a one to one manner, each increment of area delta A sub S matches one and only one delta A sub R.
Let me give you an example of what I mean by this. Let's suppose I start with the region S, and let me arbitrarily divide the interval from 2 to 10 into a number of pieces here. Oh, I guess in the diagram that I've used here, I've divided this up into four pieces, all right. So I get these four little rectangles. An approximation for the area of the region S would be the sum of the areas of these four rectangles.
What I'm saying is that under our mapping, these four regions induce four mutually exclusive regions that cover all of R. In fact, since v equals y, the way we do this mechanically is-- for example, let's focus on just one of these little elements over here. Let's suppose I want to find how to match this shaded rectangle with a suitable rectangle of R. What I said is whatever the v-value is over here, it must be the same as the y-value of the point that mapped into this point on the axis. In other words, if I call this point here u0, that comes from some point here which I'll call x0. See u0 comes from x0.
What must the height above this point be? Since the transformation does not change the y-value at all-- since v equals y-- what this means is I can now draw a line parallel to the u-axis here, come over to here, and that locates the point on this curve that lies above x0. See, in other words, I just come across like this, either of these two ways. This is how I locate the x0 that matches the u0: I take its height, take that same height over to this curve, and project down.
Notice, of course, that the delta x that measures the difference between these two points on the x-axis is not the same as the delta u that measures the distance between these two points. But what I want you to see is that these four rectangles here induce four rectangles here. And what I would like to be able to do, hopefully, is to find out how to express a typical rectangle here as a scaled version of one of these rectangles, hoping that when I then take the sum, the resulting summation leads to an integral which is easy to evaluate.
And before I get into that, to show you what does happen here-- I think I'm making this longer than it may really seem-- let me just get on to the next step. What I want to do next is to blow up these two shaded areas so I can look at them in more detail. What I have is a region which I'll call delta A sub S in the uv-plane and a delta A sub R in the xy-plane, where the mapping is, again, what? u equals x squared plus 1, v equals y. This piece matches with this piece. All right.
Now, for small delta u, notice that by definition of derivative, delta x divided by delta u is approximately dx/du. Consequently, I can say that delta x is approximately dx/du times delta u.
Now, the thing that I really want is not the area of a piece of S; I want a portion of the area of R. I want delta A sub R. Notice that delta A sub R has as its height y sub 0 and as its width delta x. Notice also that since v sub 0 equals y sub 0-- see, y equals v in this transformation-- the area delta A sub R is v0 times delta x. OK. We also know that delta x is approximately dx/du times delta u. So making this substitution, I see that my little increment of area in the region R is precisely what? It's v0 times the replacement for delta x, dx/du times delta u.
Let me also notice that the region delta A sub S over here has as its height v0 and as its base delta u. So the area of this rectangle is v0 delta u. Let me, therefore, group these two factors together and rewrite this term in this fashion, noticing that v0 delta u is delta A sub S. And I now have the very interesting result that delta A sub R is not delta A sub S; the correction factor is what? It's dx/du multiplying delta A sub S, where, for the sake of argument, over a small enough strip here, let us assume that I've chosen dx/du to be evaluated at u equals u0, say.
At any rate, what we are saying is, to find all of the area of region R, we want to add up all of these delta A sub R's as the maximum delta x sub k goes to 0. But from what we've just seen, a typical delta A sub R is dx/du evaluated at u equals u0 times delta A sub S. And therefore, to find A sub R, this limit is precisely the same as this limit.
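Written out-- one way to express the two sums being compared-- the statement is:

```latex
A_R \;=\; \lim_{\max \Delta x_k \to 0} \sum_k \Delta A_{R,k}
\;=\; \lim_{\max \Delta u_k \to 0} \sum_k \left.\frac{dx}{du}\right|_{u = u_k} \Delta A_{S,k}.
```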
Now, the point is that delta A sub S is certainly just as messy as delta A sub R, in general. It may also happen that when I scale delta A sub S by multiplying it by dx/du, the result is even more messy than the original expression. But it's also possible that dx/du happens to be the factor that wipes out the nasty part of delta A sub S. You see, what I'm saying is, if there is a one to one correspondence-- which there is-- between the delta A sub S's and the delta A sub R's, and if this expression here happens to be convenient, I can find this sum simply by computing this sum.
And that's why, in techniques of integration, that's precisely what we look for. We look for the change of variable, the substitution, that makes this thing simplify. And that's precisely what happened in this particular example. Keep in mind that what I've written down over here is true for any substitution in which x is some function of u, not just for x equals the square root of u minus 1. I could've done this any time. But what I claim is that if x weren't equal to-- what was it-- the square root of u minus 1, this wouldn't have turned out to be a very nice expression.
And you see this is going to be called the one-dimensional Jacobian later on. This is the correction factor, the scaling factor, you see. Let's see how that worked out. Notice that delta A sub S was v0 times delta u. And notice that by definition of what the curve looked like in the uv-plane, v0 is twice the square root of u0 minus 1 times the square root of u0.
On the other hand, what is dx/du? Since u is equal to x squared plus 1, it's easy to show that dx/du is 1 over twice the square root of u minus 1. So in particular, when u equals u0, dx/du is 1 over twice the square root of u0 minus 1. Notice now, even though delta A sub S is quite messy, when I multiply it by this particular scaling factor, look at what that scaling factor wipes out. This 2 square root of u0 minus 1 gets wiped out. All I have left is the square root of u0 times delta u.
When I form the definite integral by summing this thing up, it's easy to see that it simply comes out to be what? The definite integral from 2 to 10 of the square root of u du. And since this particular sum was equal to A sub R, that is the mapping interpretation of why the area of the region R can be evaluated by this particular integral. Now, in a sense, all of this has been review, even though the pitch has been slightly different.
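The scaled Riemann sum can be checked numerically. The sketch below-- not from the lecture, NumPy assumed-- partitions the interval from 2 to 10, multiplies each element of area of S by the scaling factor dx/du, and watches the sum approach the value of the integral from 2 to 10 of the square root of u du.

```python
# A sketch of the scaled Riemann sum: each element of area of S is multiplied
# by the one-dimensional Jacobian dx/du, and the sum approaches the area of R.
import numpy as np

n = 10_000
u = np.linspace(2, 10, n + 1)
u0, du = u[:-1], np.diff(u)

v0 = 2*np.sqrt(u0 - 1)*np.sqrt(u0)       # height of the curve in the uv-plane
dxdu = 1/(2*np.sqrt(u0 - 1))             # scaling factor dx/du evaluated at u0

scaled_sum = np.sum(v0 * dxdu * du)      # note: v0*dxdu simplifies to sqrt(u0)
exact = (2/3)*(10**1.5 - 2**1.5)         # integral from 2 to 10 of sqrt(u) du

print(scaled_sum, exact)                 # both are approximately 19.2
```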
Let me now generalize this to a bona fide mapping of two-dimensional space into two-dimensional space. And the reason I use the words bona fide is that when you say let u equal x squared plus 1 and let v equal y, you really haven't got a general mapping of two-dimensional space into two-dimensional space. You've essentially let the y-axis remain fixed. So let me talk about something more general.
Let's suppose I have an arbitrary region R and an arbitrary function f bar which maps R onto S in a one to one manner. By the way, notice the whole idea is this. When I want the area of the region R, it's going to involve a dx dy inside the double integral. When I want the area of the region S, that's what's going to involve a delta u times delta v. Now, the reason that delta u times delta v is indeed a bona fide element of area when we're breaking up S lies in the fact that in the uv-plane, delta u times delta v is the actual area of an increment of area in S when we break up S by lines parallel to the u- and v-axes.
On the other hand, notice what happens if we see what the line u equals a constant comes from: back in the xy-plane, u is a function of x and y. And to say that u of x, y equals a constant does not mean that you have a straight line. You could have some pretty squiggly lines over here. In other words, it might be a very funny looking curve that maps into a straight line, a straight vertical line, in the uv-plane with respect to the mapping f bar.
And in a similar way, the lines v equals a constant in the xy-plane read what? v of x, y equals a constant. That's a general curve in the xy-plane. What we're saying is that since this mapping is one to one, when I break up this region into elementary rectangles, that will induce a breaking up of this region into little elements here.
But notice the resulting elements-- say we take a piece like this, and we see where that piece comes from. Let's say that particular piece happened to come from here; say that that was the one to one correspondence. You can take delta u times delta v here. But notice that delta u times delta v would mean multiplying two edges which over here aren't straight lines and aren't necessarily perpendicular, and hence it in no way should represent what the area of this little element here is.
The key point is, we do not want the area of the region S. We want the area of the region R. And what we're hoping is that by making the change of variables that maps the region R in the xy-plane into the region S in the uv-plane, we somehow find a convenient way of scaling an individual element of area here with respect to one here, and find the area of this region by adding up the appropriate pieces here.
And, again, let me show you what that means in terms of just an enlargement again. You see, in the same way as I did before, let me take this little piece over here and really blow it up. Let me take this little piece that is the backmap of this-- in other words, the piece that maps into this-- and let me blow that up.
And the idea is this. If I pick delta u and delta v sufficiently small, notice that the area of the backmap of delta A sub S is approximately the area of a parallelogram. You see, we'll come back to this statement after I've explained the picture.
What I'm saying is I start with an element of area delta A sub S in the uv-plane. You see I pick its vertices. I'll call them A bar, B bar, D bar, C bar. That's a little rectangle over here. Because the mapping is one to one, I know that there is one and only one point in the xy-plane that maps into A bar. See that? Let's call that point A. There is one and only one point in the xy-plane that maps into B bar. Let's call that B. One and only one point in the xy-plane that maps into C bar. Let's call that point C. And let me leave the point D out for a moment.
Now, the idea is, if we call the coordinates of A bar u0 comma v0, then because this is a line parallel to the axis-- call this dimension delta u-- B bar is u0 plus delta u comma v0. C bar-- call this dimension delta v-- is u0 comma v0 plus delta v. Now, the point is, there is no reason why the backmap of B bar-- namely, the point B-- has to be on the line through A parallel to the x-axis. In other words, B is up here someplace, C is up here someplace. In other words, again, there's no reason why these backmaps give me a rectangle over here.
The point is that B has some coordinates. It's x0 plus some increment involving x-- let me call that delta x1-- and its y-coordinate is y0 plus delta y1. C is the point x0 plus some increment delta x2 comma y0 plus delta y2. And what I'm saying is now, if I were to just look at the parallelogram which had AB and AC as adjacent sides, it's very easy for me to find the area of that parallelogram. Namely to find the area of a parallelogram in vector form, I just take the magnitude of the cross-product of the two vectors AB and AC.
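To make the parallelogram computation concrete, here is a tiny sketch; the increments used are hypothetical, chosen purely for illustration, and NumPy is assumed.

```python
# Area of the parallelogram with adjacent sides AB and AC, computed as the
# magnitude of the cross product.  The increments below are made-up numbers.
import numpy as np

A = np.array([1.0, 2.0, 0.0])                  # (x0, y0), embedded in 3-space with z = 0
B = A + np.array([0.3, 0.1, 0.0])              # (x0 + delta_x1, y0 + delta_y1)
C = A + np.array([0.05, 0.4, 0.0])             # (x0 + delta_x2, y0 + delta_y2)

area = np.linalg.norm(np.cross(B - A, C - A))  # |delta_x1*delta_y2 - delta_x2*delta_y1|
print(area)                                    # 0.3*0.4 - 0.05*0.1 = 0.115
```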
You see, what's wrong with this is that that particular parallelogram is not the exact backmap of delta A sub S. Sure, A bar maps exactly into A. C bar maps exactly into C. B bar maps exactly into B. But the point is that the points along A bar C bar do not map into the straight line from A to C, in general.
In other words, what characterizes this? This is characterized by delta u equals 0. And when u is written in terms of x and y, that doesn't say that delta x or delta y is 0. So the true image of this might be what I've represented with this dotted arc here. In other words, the true backmap of delta A sub S is this dotted region, where D is now this vertex here. You see, there's no guarantee that the backmap of D bar is going to be the fourth vertex of this parallelogram.
But the key point is-- and this is where that familiarity is so important-- that if the transformation is smooth enough, continuously differentiable, then what it does mean is that as long as delta u and delta v are sufficiently small, the true area of the region delta A sub R that we're looking for is approximately equal to the area of this parallelogram. And by approximately equal I mean what? That the error goes to 0 as we take the limit in forming the double sum.
In other words, again, the key point is this. The backmap of delta A sub S yields delta AR, but that delta AR is approximately this parallelogram. And the area of this parallelogram is exactly the magnitude of AB-- the vector AB-- crossed with AC. In other words, the approximation comes in because this is exactly the area of the parallelogram. But delta A sub R is only approximately equal to the area of the parallelogram.
At any rate, notice in terms of i and j components how easy it is to compute AB and AC. You see, what are the components of the vector from A to B? The i component is delta x1-- see, this minus this. The j component is this minus this, namely delta y1. And similarly, AC has what as its components? Its i component is this minus this, namely delta x2. This minus this is the y-component; that's delta y2.
In other words-- to write this out so you don't have to listen to how fast I'm talking-- AB is this particular vector, and AC is this particular vector. Remember that when we take a cross-product, i cross i is 0, j cross j is 0, i cross j is k, and j cross i is minus k. Remember, for the cross-product we can't change the order of the factors.
We then determine what? That AB cross AC is delta x1 delta y2 minus delta x2 delta y1 times the vector k, the unit vector in the z direction. Now, delta x1 is exactly-- see, remember, we're looking at the side of the parallelogram, which is a straight line. If you want to think of it in terms of the region R, by differentials delta x1 would be what? It's approximately the partial of x with respect to u times delta u plus the partial of x with respect to v times delta v. Similarly, delta y1 is y sub u times delta u plus y sub v times delta v.
But now keep in mind where delta x1 and delta y1 come from. Delta x1 and delta y1 come from the back mapping from A bar B bar back to AB. And along A bar B bar, notice that v is always equal to v0. That means that delta v is 0. That means, therefore, that because delta v is 0, delta x1 and delta y1 are simply this. See, this drops out.
Similarly, delta x2 and delta y2 come from the backmap of A bar C bar. Along A bar C bar, u is equal to u0, so delta u is zero. So writing this out, these drop out. And now, with these terms being 0 and with these terms being 0, I can very simply compute the product delta x1 delta y2 minus delta x2 delta y1. And if I do that, you see, right away what I obtain is what? It's this times this.
I now collect the terms here. In other words, I want the delta u and the delta v together. The multiplier out front here is x sub u times y sub v. In a similar way, delta x2 times delta y1 is x sub v y sub u times delta u delta v. Therefore, putting that into here, the magnitude of AB cross AC is simply this expression here, noticing that the k vector drops out because its magnitude is 1.
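Collected in one line, the computation just described is:

```latex
\left|\overrightarrow{AB} \times \overrightarrow{AC}\right|
= \left|\Delta x_1\,\Delta y_2 - \Delta x_2\,\Delta y_1\right|
\approx \left|(x_u\,\Delta u)(y_v\,\Delta v) - (x_v\,\Delta v)(y_u\,\Delta u)\right|
= \left|x_u y_v - x_v y_u\right|\,\Delta u\,\Delta v.
```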
Notice that delta u times delta v is precisely delta A sub S and that x sub u times y sub v minus x sub v times y sub u is precisely the determinant of our old friend the Jacobian matrix, the Jacobian matrix of x and y with respect to u and v. In other words, delta A sub R is approximately equal to the determinant of the Jacobian matrix of x and y with respect to u and v times delta AS.
And if I now perform this double sum, you see this becomes what? The area of the region R-- just write that in; that's really the area of the region R-- is the double integral over R of dx dy. And that's the same as what? The double integral over S of du dv multiplied by the scaling factor, the Jacobian determinant.
By the way, notice I dropped the determinant symbol over here. The reason for that is that many textbooks, including our own, use this notation not to name the Jacobian matrix but to name the Jacobian determinant. I have been using this to name the Jacobian matrix. From this point on, I will now switch to become uniform with the text. And unless otherwise specified, I will write this rather than put the determinant symbol in. From now on in our course, when I write this I am referring to the Jacobian determinant, OK.
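As a concrete instance of the formula-- not worked in the lecture-- the polar-coordinate mapping x equals r cosine theta, y equals r sine theta has Jacobian determinant r, and the sketch below uses that scaling factor to recover the area of the unit disk; it assumes SciPy is available.

```python
# Change of variables to polar coordinates: the area of the unit disk is the
# double integral, over the rectangle 0 <= r <= 1, 0 <= theta <= 2*pi, of the
# Jacobian determinant r (times dr dtheta).
import numpy as np
from scipy.integrate import dblquad

# dblquad integrates func(inner, outer): here the outer variable theta runs
# from 0 to 2*pi, the inner variable r from 0 to 1, and the integrand is r.
area, _ = dblquad(lambda r, theta: r, 0, 2*np.pi, lambda theta: 0, lambda theta: 1)

print(area, np.pi)   # both approximately 3.14159
```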
But the thing is this. Notice that the given mapping might straighten out the region R into a nicer looking region S. But to offset this, it may also turn out that the resulting integrand here is much worse than the integrand here. Here, the multiplier of dx dy was the simple number 1, wasn't it? Here, what's multiplying du dv, no matter how nice S is, is this expression here, which may be quite messy.
And therefore, in most practical applications where one solves multiple integrals by a change of variables, one wants a combination of two things. He would like a change of variables that straightens out the region into a nice looking one. And more importantly, even if he can't get a nicer looking region, at least if he gets a correction factor, a Jacobian determinant, that gives him something that's easy to integrate, he'll settle for that. And what that means, hopefully, will become clearer as we go through the exercises and the reading material.
At any rate, I think that's all I want to say by way of a supplement to Professor Thomas' treatment of polar coordinates at this time. And until next time, then, goodbye.
Funding for the publication of this video was provided by the Gabrielle and Paul Rosenbaum Foundation. Help OCW continue to provide free and open access to MIT courses by making a donation at ocw.mit.edu/donate.
Study Guide for Lecture 3: Multiple Integration and the Jacobian
- Chalkboard Photos, Reading Assignments, and Exercises (PDF)
- Solutions (PDF - 4.2MB)
To complete the reading assignments, see the Supplementary Notes in the Study Materials section.