Instructor: Prof. Gilbert Strang
Lecture 35: Convolution Equations
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR STRANG: Well, I hope you had a good Thanksgiving. So this is partly review today; Wednesday, even more review. Wednesday at 4 I'll be here for any questions. And then the exam is Thursday at 7:30 in Walker -- top floor of Walker this time, not 54-100. OK, and then no lectures after that. Holiday, whatever. Yes. Right, you get a chance to catch up with all those other courses that are being neglected in favor of 18.085. Right. OK, so here's a bit of review right away. We really had four cases. We started with Fourier series; that was periodic functions. And then discrete Fourier series, also periodic in a way, because w^N is one, so we have N numbers and then we could repeat them if we wanted. So those are the two that repeat. Then there's f(x) for all x; that's the Fourier integral transform that we did just last week. And the fourth case is discrete all the way. Oh, you can see these pair off, right? The 2pi-periodic function has Fourier coefficients for all k, so that's the pair that we started with, Section 4.1. The discrete one sort of pairs, I don't know whether to say, with itself. I mean, we start with N numbers and we end with N numbers: N numbers in physical space, and N numbers in frequency space. So those went to c_0 up to c_(N-1).
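[In symbols, the four settings on the board are these transform pairs -- a sketch of the conventions; the placement of the 2pi factors follows the course's usual choices and may differ elsewhere:]

```latex
% The four Fourier settings: periodic, cyclic (DFT), all of x, and discrete
\begin{aligned}
\text{Fourier series: } & F(x)=\sum_{k=-\infty}^{\infty} c_k e^{ikx},
  & c_k &= \frac{1}{2\pi}\int_{-\pi}^{\pi} F(x)\,e^{-ikx}\,dx \\
\text{DFT (period } N\text{): } & (c_0,\dots,c_{N-1}) \;\leftrightarrow\; (\hat c_0,\dots,\hat c_{N-1}),
  & \hat c_k &= \sum_{j=0}^{N-1} c_j\, w^{-jk},\quad w = e^{2\pi i/N} \\
\text{Fourier integral: } & f(x) \;\leftrightarrow\; \hat f(k),
  & \hat f(k) &= \int_{-\infty}^{\infty} f(x)\,e^{-ikx}\,dx \\
\text{Discrete, all } n\text{: } & (\dots,a_{-1},a_0,a_1,\dots) \;\leftrightarrow\; \hat a(\omega),
  & \hat a(\omega) &= \sum_{n=-\infty}^{\infty} a_n e^{-in\omega}
\end{aligned}
```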
And the all-x pair -- the function paired off with its transform. Well, maybe I used small f; I guess I did last week. So I called its Fourier transform f hat of k, for all k. So the discrete one is the pairing, kind of, inside N-dimensional space, with the Fourier matrix. And this one is the pairing of the formula for f with the similar formula for f hat; these are the guys that connect with each other. OK, so that's what we know. What we haven't done is anything in two dimensions. So I would like to include that today. I think my real message about 2-D -- and I'm not going to include it on the exam, but you might wonder -- OK, can I have a function of x and y? And will the whole setup work? And the answer is yes. So really, my message is not to be afraid in any way of 2-D. It's just the same formulas with x, y, or two indices, k, l. Yeah. You'll see that. OK, now for the new part. What's a convolution equation? That's my word for an equation where instead of doing a convolution and finding the right-hand side, we're given the right-hand side, and the unknown is inside the convolution. So let me write examples of convolution equations; every one of these settings would allow one. Here the convolution equation would be: the integral of the kernel, times the unknown u at x-t, equals the known right-hand side F. Shall I call the kernel K? Kernel is sometimes the word.
So what I'm saying is, equations come this way. This is really K convolved with u, equals F. You see, the only novelty is that the unknown is inside. So that's why the word deconvolution is up there, because that's what we have to do. We have to undo the convolution: this unknown function is convolved with a known kernel K -- some known kernel that tells us the point spread of the telescope, or whatever we're doing -- and gives us the output that we're looking at. And then we have to find the input. OK, can I write down the similar equations for the other three here? And then we'll just think, how would we find u, how would we solve them? So the equation here might be that some kernel, circularly convolved with the unknown u, is some known y. These are now vectors. The kernel is known, the right-hand side is known, and the N components of u are unknown. OK, so that would be the same problem here. What would be here? Same thing; the only difference is the integral will go from minus infinity to infinity: the integral of K(t)u(x-t) dt equals f(x). And finally, regular convolution of sequences. What am I going to call it? The kernel would be a sequence, maybe I should call it a, known, convolved with u, unknown, equals some c, known. Yeah. So those would be four equations. You might say, wait a minute, where does Professor Strang come up with these problems in the last week of the course? But these are exactly the type of problems that we know and love. These come from constant-coefficient, time-invariant, shift-invariant linear problems. LTI: linear time-invariant. And my lecture Wednesday, just before Thanksgiving, took a differential equation for u and put it in this form. I'll come back to that. So suddenly we're seeing, I mean, we're actually seeing some new things, but it also includes all the old ones.
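[Written out, the four deconvolution problems look like this; u is the unknown in every case, and the periodic one carries whatever 2pi normalization the convolution was defined with:]

```latex
% One convolution equation per setting; K, c, a, and the right-hand sides are known
\begin{aligned}
\text{periodic: } & \int_{-\pi}^{\pi} K(t)\,u(x-t)\,dt = F(x) \\
\text{cyclic: } & c \circledast u = b \quad (N \text{ equations, a circulant system}) \\
\text{all of } x\text{: } & \int_{-\infty}^{\infty} K(t)\,u(x-t)\,dt = f(x) \\
\text{discrete: } & (a * u)_n = \sum_k a_k\, u_{n-k} = c_n
\end{aligned}
```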
These are all of the best problems in the world, these linear constant-coefficient, time-invariant problems, of any of these types. This one was an integral from minus pi to pi, where this one went all the way. So this is not brand new stuff, but it sort of looks new. And now the question is -- my immediate question, before doing any example -- how would you solve such an equation? And I saw some of this sort on old exams, for example. Let me focus on this one. Instead of K there -- I'm not used to using K for a vector -- maybe I'll use c for the vector. So this is N equations, N unknowns; capital N is our usual letter for the number, N unknown u's. It's a matrix equation with a circulant matrix. So all these equations are of the special, best kind, because they're convolutions. And now tell me the main point. How do we solve equations like this? How do we do a deconvolution? The unknown is convolved with c here, it's convolved with K there, it's convolved with a there; how do we deconvolve to get u by itself? So what's the central idea here? Central idea: go into frequency space. Use the convolution rule. In frequency space, where these transform, we're looking at a multiplication. And multiplication we can undo. We can de-multiply. De-multiply is just a big word for divide, right? So that's the point. Get into that space. That's what we've been doing all the time.
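[Here is that central idea as a minimal sketch in NumPy for the cyclic case -- transform, de-multiply, transform back. The function name is my own illustration, and it assumes the circulant is invertible, i.e. no zero in fft(c):]

```python
import numpy as np

def deconvolve_cyclic(c, b):
    """Solve the cyclic convolution equation c (*) u = b for u.

    Step 1: go into frequency space (convolution becomes multiplication).
    Step 2: de-multiply, i.e. divide component by component.
    Step 3: transform back. Assumes every component of fft(c) is nonzero.
    """
    c_hat = np.fft.fft(c)        # kernel into frequency space
    b_hat = np.fft.fft(b)        # right-hand side into frequency space
    u_hat = b_hat / c_hat        # division undoes the multiplication
    return np.fft.ifft(u_hat)    # back to physical space
```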
I'd better get one example, the example from the problem Wednesday, just up here, just so you see it. This won't look like a convolution equation, but do you remember that it was -u'' plus a squared u equal some f(x)? So it's certainly constant-coefficient, linear, time-invariant. Right, OK. And how did we solve that? We took Fourier transforms. What is the rule for the Fourier transform of a derivative? Every derivative brings down an ik in the transform. So for the second derivative we get ik twice, so it's k squared; i squared cancels the minus one. So k squared times u hat is the transform of -u''. The transform of a squared u is just a squared times u hat. And the right side is f hat. So we've gotten into frequency space, where we are just seeing a multiplication: k squared plus a squared, times u hat of k, equals f hat of k, right? So we're in frequency space, where we just see a multiplication. Now we just de-multiply, just divide. And then we have the answer -- but we have its transform. So we have to transform back: the inverse Fourier transform, to get back to u(x), the answer. That's the model. And that's maybe the one we've seen; now we're able to think about all four of these topics. Right.
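[As a summary of that model computation, with the transform convention u-hat(k) = integral of u(x) e^(-ikx) dx:]

```latex
% Transform the equation, divide, transform back
-u'' + a^2 u = f(x)
\;\xrightarrow{\;\mathcal{F}\;}\;
(k^2 + a^2)\,\hat u(k) = \hat f(k)
\;\Longrightarrow\;
\hat u(k) = \frac{\hat f(k)}{k^2 + a^2}
\;\xrightarrow{\;\mathcal{F}^{-1}\;}\;
u(x).
```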
OK, so what was the key idea? Get into frequency space, and then the equation is just a multiplication, so the solution is just a division. So can I do that now with these four examples? This is like bringing the pieces together. OK, and deconvolution is a very key thing to do. So I'll take all four of those and bring them into frequency space. This will be maybe -- you'll let me use K hat of k? Oh, that's not too good, two k's. Well, stuck with it. What am I doing here, in this 2pi-periodic one? That's the one I started with, but now I'm using hats and so on; I didn't do that in Section 4.1, and I really didn't do convolution that much for functions. So let me jump ahead; I'll come back. It follows exactly the same pattern. So let me jump to this one. OK, so I have a convolution equation now. This is one where you could do everything; this could appear on the quiz because I can do all of it. So what is this convolution? I've got N equations, N unknowns. Let me write them in matrix form, just so you see it that way too. I'll make N equal four. The circulant matrix has c_0, c_1, c_2, c_3 going down its zeroth column, and then each column after that is the same four numbers shifted down cyclically -- I'm writing down all the right numbers in the right places:

  [ c_0  c_3  c_2  c_1 ] [ u_0 ]   [ b_0 ]
  [ c_1  c_0  c_3  c_2 ] [ u_1 ] = [ b_1 ]
  [ c_2  c_1  c_0  c_3 ] [ u_2 ]   [ b_2 ]
  [ c_3  c_2  c_1  c_0 ] [ u_3 ]   [ b_3 ]

So when I do that multiplication with the unknown [u_0, u_1, u_2, u_3], I get the known right-hand side. Maybe b would be a little better, because we're more used to b as a known. It's just an Ax=b problem, or an Au=b problem. It looks like a convolution, but now it's just a matrix multiplication. So the right side is just [b_0, b_1, b_2, b_3].
OK. That's our equation. Special type of matrix: a circulant matrix. So this is just literally the same as c circularly convolved with u equals b; I just wrote it out in matrix language. So you could call MATLAB with that matrix, and one way to answer it would be to get the inverse of the matrix. But if it was large, a better way would be to switch over to frequency space. Think, now: what happens when I switch these vectors to frequency space? It becomes a multiplication. Now, these are all in the space where it's a convolution; what am I going to call them in the space where it's a multiplication? I just need three new names. Maybe I'll use c hat, u hat, and b hat, just because there's no doubt in anybody's mind that when you see that hat, you've gone into frequency space. Now, what's the equation in frequency space? And then I'll do an example. It's a multiplication, but I don't usually see a vector and nothing there. What's the multiplication in frequency space? It's component by component: c hat 0 times u hat 0 equals b hat 0. c hat 1 times u hat 1 equals b hat 1. c hat 2 times u hat 2 equals b hat 2. And finally, c hat 3 times u hat 3 equals b hat 3. And there might be -- I don't swear that there isn't -- a 1/4 somewhere. Right? But the point is, we're in frequency space now. We just have it component by component: each component of c hat times each component of u hat gives us a component of b hat. Now we're ready for a deconvolution: just divide.
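[A numerical sketch of those component-by-component equations, using NumPy's FFT convention, which puts no 1/4 in this direction. The vectors are my own invertible example -- the circulant of [3, -1, 0, -1], which is C + I -- since the lecture's [2, -1, 0, -1] turns out to be singular:]

```python
import numpy as np
from scipy.linalg import circulant

c = np.array([3.0, -1.0, 0.0, -1.0])   # invertible: fft(c) = [1, 3, 5, 3], no zeros
b = np.array([1.0, 2.0, 3.0, 4.0])

c_hat = np.fft.fft(c)
b_hat = np.fft.fft(b)
u_hat = b_hat / c_hat                  # four trivial equations: c_hat_k * u_hat_k = b_hat_k
u = np.fft.ifft(u_hat).real            # back to physical space

C = circulant(c)                       # the same problem as a 4x4 matrix equation
print(np.allclose(C @ u, b))           # True: frequency-space division solved C u = b
```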
So now u hat 0 -- obviously I don't have to write all of these -- is b hat 0 over c hat 0. Right? I just do a division. So on down to u hat 3, which is the third component of b hat divided by the third component of c hat. OK, now don't forget: in going from here to here, I had to figure out what the c hats were, right? I had to apply the Fourier matrix, or the inverse Fourier matrix, to go from c to c hat and from b to b hat, so everything got Fourier transformed. But the object was to make the equation easy. And of course, now we've got four trivial equations that we just solved that way. Alright, let me pull this down and ask some questions. Here's a good question: when is a circulant matrix invertible? When will this method work? The circulant matrix could fail to be invertible. How would I know that? If I proceed this way, here I've got an answer; but if the matrix is singular, I'm not really expecting to get an answer. Let me lift the board a little. So where would this method have to stop? In solving those four equations, where would I learn that the matrix is singular? What could go wrong in this? Yes.
AUDIENCE: [INAUDIBLE]
PROFESSOR STRANG: That's right. Always in math, the question is: are you dividing by zero? So the question of whether the matrix is singular is the same as the question of whether c_0 hat, c_1 hat, c_2 hat, and c_3 hat are all nonzero. In fact, even better: those four numbers, those four c hats, are actually the eigenvalues of the matrix. What the Fourier transform did was switch over to the eigenvalues and eigenvectors. And that's the whole message of those guys: you follow each one separately, just the way we're doing here. So these are the components of b in the four eigenvector directions, those are the four eigenvalues, and I have to divide by them. You see, the idea is, we've diagonalized the matrix. We had that matrix, which is full. By taking Fourier transforms -- that's the same thing as putting in the eigenvectors -- we switched the matrix to a diagonal matrix, right? Our problem has become the diagonalized form: c_0 hat down to c_3 hat sitting on the diagonal, all zeroes elsewhere. When we did the Fourier transform, we were switching to eigenvectors. OK, so that's the message: the test for singularity is whether the transform of c hits zero. Then we're in trouble.
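[That claim -- the c hats are exactly the eigenvalues, and the Fourier vectors are the eigenvectors -- is easy to check numerically. A sketch: the columns of the matrix F below, built from w = e^(2 pi i/N), are the eigenvectors of every N by N circulant, and np.fft.fft of the first column gives the matching eigenvalues:]

```python
import numpy as np
from scipy.linalg import circulant

N = 8
rng = np.random.default_rng(0)
c = rng.standard_normal(N)
C = circulant(c)                     # circulant with first column c

k = np.arange(N)
F = np.exp(2j * np.pi * np.outer(k, k) / N)   # column m = eigenvector (1, w^m, w^2m, ...)
lam = np.fft.fft(c)                  # the c hats

# C F = F diag(lam): every column of F is an eigenvector, lam are the eigenvalues
print(np.allclose(C @ F, F * lam))   # True
```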
Let me do an example you know. OK, so finally now we get a numerical example. The example we really know is this one, right? As I start writing it, you may say in your mind, oh no, not again. But give me one more week with these matrices. It'll be the C matrix, so it's going to be the circulant, and it's got those minus ones in the corners too: the circulant of [2, -1, 0, -1]. Recognize this? OK, let's go back to day one. Is that matrix invertible, yes or no? No. Everybody knows that matrix is not invertible. And do you remember what's in the null space? What's the vector in the null space of that matrix? All ones. Now, just think: when I take the Fourier transform, that all-ones vector is going to transform to what? It's going to transform to the delta. It'll transform to the one that is like [1, 0, 0, 0] -- or maybe it's [4, 0, 0, 0], but it's that. OK, now I'm ready, so here's my c. What's my method now? I'm going to do this method, and I'm going to run into trouble: this thing is going to be zero. Because that's the eigenvalue that goes with the [1, 1, 1, 1] column, the constant, the zero frequency in frequency space. You'll see it happen. So let's take the Fourier transform of that. And then we would have to take the Fourier transform of the right-hand side b, whatever that happened to be. But it's always the left side that is singular or not. I believe we'll be singular here. So, OK, just remind me: how do I take the transform of this guy? Gosh, we have to be able to do that. That's Section 4.3, yeah. The DFT of that vector. What do I get? Yes.
How do I take the DFT of a vector? I multiply by the Fourier matrix, right? Yes. So I have to multiply that thing by the Fourier matrix. So to get c hat: this was big C for the matrix, little c for the vector that goes into it, into column zero, and c hat for its transform. OK, so now here comes the Fourier matrix that we know: the row of ones; then 1, i, i^2, i^3; then 1, i^2, i^4, i^6; then 1, i^3, i^6, i^9. So I want to multiply that Fourier matrix times c to find c hat. OK, and what do I get up there? What's the first component -- the zeroth component, I should say -- when I take this vector with four components and get back four frequency components? Ones times this: what am I getting? Zero. That's what I expected. That tells me the matrix is not going to be invertible. Because in a different language, I'm finding the eigenvalues, and that's one of them. And if an eigenvalue is zero, that means that eigenvector is getting knocked out completely, and there's no way a C inverse could recover when that eigenvector is gone. OK, let's do the other ones. The next row: two, then i times minus one is minus i, and i^3 times minus one is plus i -- those cancel, so I think it's just two. The next row: i^2 is minus one and i^6 is minus one, so that's two plus one plus one: four, I think. And then in the last row, i^3 is the same as minus i, and i^9 is the same as plus i, so again I'm getting two plus i, nothing, minus i: two. So what's my claim? My claim is that these are the four eigenvalues -- that Fourier diagonalizes these problems.
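[The computation just described, written out in full:]

```latex
% F c = c-hat: the DFT of the circulant's column gives the eigenvalues of C
\begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & i & i^2 & i^3 \\ 1 & i^2 & i^4 & i^6 \\ 1 & i^3 & i^6 & i^9 \end{pmatrix}
\begin{pmatrix} 2 \\ -1 \\ 0 \\ -1 \end{pmatrix}
=
\begin{pmatrix} 2-1+0-1 \\ 2-i+0+i \\ 2+1+0+1 \\ 2+i+0-i \end{pmatrix}
=
\begin{pmatrix} 0 \\ 2 \\ 4 \\ 2 \end{pmatrix}.
```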
That's what it comes to. Fourier diagonalizes all constant-coefficient, shift-invariant, linear problems, and tells us here are the eigenvalues: [0, 2, 4, 2]. How do I check the eigenvalues of a matrix? Let's just remember. If I give you four numbers and I say those are the eigenvalues, and you look at that matrix, what quick check does everybody do? Compute the trace. Add up the diagonal of the matrix, add up the proposed eigenvalues: they had better be the same. And they are; I get eight both ways. That doesn't prove, of course, that these four numbers are right, but I think they are. So those added up to eight, those numbers added up to eight. Yep. And these came out real -- how did I know that would happen from the matrix? What matrices am I certain to get real eigenvalues for? Symmetric. Right. Now, I heard the word positive. Of course, that's the other question I have to ask. Is this matrix positive definite? OK, everybody, this is the language we've learned in 18.085. Is that matrix positive definite, yes or no? No. What is it? It's positive semidefinite. What does that tell me about the eigenvalues? There, one is zero -- that's why it's not positive definite -- but the others are positive. So sure enough, for that matrix that came on day one, now we're seeing it on day N-1, and we're seeing it in a new way, because at that time we didn't know these four numbers were the eigenvalues of that matrix. But they are. And we're coming to the same conclusion we came to on day one: that the matrix is positive semidefinite, and we know its eigenvalues. And actually, let me take it one more step, just because this example is so perfect.
Some right-hand sides we could solve for, right? If I have a matrix that's singular -- way, way back, I think it was a worked example in Section 1.1 -- I could ask the question: when is Cx=b solvable? Because there are some right-hand sides that will work. If I just take an x and multiply by C, I'll get a right-hand side that works. But for which right-hand sides b will my method work? The ones that have which property? What do I need, with these c hats being 0, 2, 4, and 2, for my solution to be possible? I need b hat 0 equals zero. And what does that say? It means that b has no constant term in its Fourier expansion. It means that the vector b is orthogonal to the [1, 1, 1, 1] eigenvector. So this is a subtle point, but it's driving home the point that what Fourier does is diagonalize everything. It diagonalizes all the important problems -- all the simplest problems -- of differential equations. I mean, this is 18.03 looked at from Fourier's point of view. OK, what more could I do with that equation? I think you really are seeing all the good stuff here. You're seeing the matrix; we're recognizing it as a circulant; we're taking its Fourier transform; we get the eigenvalues; we're diagonalizing the matrix; the convolution becomes a multiplication; and the inversion becomes a division. I hope you see that. That's really a model problem for this course.
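[A sketch of that solvability condition in NumPy: with the singular circulant, the method still works for any b whose components sum to zero, i.e. b hat 0 = 0. The zero-frequency mode of u is then free, and setting it to zero gives one particular solution; the vector b here is my own example:]

```python
import numpy as np
from scipy.linalg import circulant

c = np.array([2.0, -1.0, 0.0, -1.0])     # the singular example: fft(c) = [0, 2, 4, 2]
C = circulant(c)

b = np.array([1.0, -1.0, 2.0, -2.0])     # components sum to zero, so b_hat[0] = 0
c_hat = np.fft.fft(c)
b_hat = np.fft.fft(b)

u_hat = np.zeros(4, dtype=complex)
u_hat[1:] = b_hat[1:] / c_hat[1:]        # divide only where c_hat is nonzero
u = np.fft.ifft(u_hat).real              # mode 0 left at zero: one particular solution

print(np.allclose(C @ u, b))             # True; adding any multiple of (1,1,1,1) also works
```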
OK, yeah. Questions? Good.
AUDIENCE: [INAUDIBLE]
PROFESSOR STRANG: Would I give you a six by six Fourier matrix on a test? Probably not. No. I just about could. I mean, six by six, those are pretty decent numbers -- the six roots of unity -- but not quite. Right, yeah. So four by four is nice; five by five would not be nice, certainly. Who knows the cosine of 72 degrees? Crazy. But 60 degrees we could do. So the six by six Fourier matrix would be full of square roots of three over two, and one over two, and i's, and so on. But it wouldn't be as nice, so really four by four is sort of the model. Yeah. Other questions? Because this is really a key example. Yeah. When I calculated the eigenvalues, yeah. Ah. Because this matrix -- I know everything about that matrix when I know its first column.
AUDIENCE: [INAUDIBLE]
PROFESSOR STRANG: Yeah, it's because it's a circulant matrix. It's because that matrix is expressing convolution with this vector, [2, -1, 0, -1]. That circulant matrix essentially is built from four numbers, right, and they go in the zeroth column. Right, yeah. So there is an example where we could do everything. And let me just remember that with this example up here, we could do everything too. So this is an example of, you could say, this type of problem, but with a very special kernel. It looks like an integral equation, but if that kernel involves delta functions and so on, then it can be just a differential equation. And that's what we got there. We took the same steps we did here. We took the Fourier transform, and -- just to remember Wednesday -- the right side was a delta function. When I took the Fourier transform I got a one, so u hat was one over k squared plus a squared. And I did the inverse transform and I got back to the function that I drew, which was e^(-a|x|) over 2a. An even function. So, yeah, that was the answer u(x). This step was easy, that step is easy, the division is easy. And then I just recognized this as the transform of this example that we had done, once I divided by 2a. So you should be able to do this. Those are two that you should really be able to do. Obviously I'm not going to ask you a 2-D problem on the exam, or even on a homework.
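[For reference, the transform pair used in that example checks out directly, with the same convention as before:]

```latex
% The even decaying pulse and its transform
u(x) = \frac{e^{-a|x|}}{2a}
\quad\Longrightarrow\quad
\hat u(k) = \frac{1}{2a}\int_{-\infty}^{\infty} e^{-a|x|}\,e^{-ikx}\,dx
          = \frac{1}{2a}\cdot\frac{2a}{a^2+k^2}
          = \frac{1}{k^2+a^2}.
```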
But now, if you'll allow me, I'd like to spend a few minutes to get into 2-D. Because really, you've got the main thoughts here: that Fourier is the same as finding eigenvectors and eigenvalues. That's the main thought for these LTI problems. OK, now suppose I have -- let's just get the formalities straight here -- a function of x and y, 2pi-periodic in x and in y. So if I bump x by 2pi, or if I bump y by 2pi, nothing changes: F(x+2pi, y) = F(x, y+2pi) = F(x, y). I'm using capital F for the periodic guys, so let me stay with capital F. OK, so I have a function. This is given, and it's in 2-D now. And I want to write its Fourier series. So I'm just asking the question: what does the Fourier series look like for a function of two variables? The point is, it's going to be a nice answer. What you know how to do in 1-D, you can do in 2-D. So let me write the complex form, the e^(ikx) stuff. How would I write this? I would write it as a sum -- a double sum; I'll write two sigmas just to emphasize that we're summing from k equal minus infinity to infinity, and from l equal minus infinity to infinity. We have coefficients c_kl; they depend on two indices, and this is the pattern to know: multiplying our e^(ikx) and our e^(ily). Right, good. So, alright, let me ask you: how would I find c_23? We could find all these coefficients, find formulas for them; we could do examples. How would I find c_23? So this is my F; I know F; I want to find c_23. What's the magic trick?
And I'm 2pi-periodic, so all the integrals -- and I'm giving you a hint, of course: I'm going to integrate -- will go from minus pi to pi in x and in y. They'll integrate over the period square. Here's the period square: there's the center, the x direction, the y direction; it goes out to pi and up to pi. So all integrals will be over dxdy. But what do I integrate to find c_23? Well, these exponentials are orthogonal -- that's what's making everything work; they're orthogonal and very special. So to use orthogonality, what do I do? I multiply by something and integrate. Just tell me what to multiply by, if I'm shooting for c_23. Is it e^(i2x)? Minus, right: I multiply by e^(-i2x) times e^(-i3y), and integrate. So when I multiply by that and integrate, everything will go except the c_23 term. Which will be multiplied by what? I'll just have c_23 times (2pi) squared -- a 2pi will come in from each integral. So the formula will be: c_kl is one over (2pi) squared, times the integral of my function times e^(-ikx) times e^(-ily) dxdy. I'll write it here and then forget it right away. That just makes the point that there's nothing new here; it's just up a dimension, and the formulas all look the same. And if F is a delta function -- if F is now a 2-D delta function; we haven't done delta functions in 2-D, so why don't we? Suppose F is the delta function in 2-D. Then what are the coefficients?
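[Collecting the 2-D formulas just derived, before plugging in the delta:]

```latex
% 2-D Fourier series on the period square and its coefficient formula
F(x,y) = \sum_{k=-\infty}^{\infty}\,\sum_{l=-\infty}^{\infty} c_{kl}\, e^{ikx}\, e^{ily},
\qquad
c_{kl} = \frac{1}{(2\pi)^2}\int_{-\pi}^{\pi}\!\int_{-\pi}^{\pi} F(x,y)\, e^{-ikx}\, e^{-ily}\,dx\,dy.
```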
What do you think this means, this delta function in 2-D? So I put in the delta here, and I integrate. And what do I get? If this guy is a delta, a two-dimensional delta function, the rule is that when I integrate over a region that includes the spike -- it's a spike sitting up above a plane now, instead of above a line -- then I get the value at the origin; this is the delta function at the origin. So what answer do I get? I get one out of the integral, and then I just have this constant in front, one over (2pi) squared. So it's constant again. The Fourier coefficients of the delta function are constant; all frequencies there are the same. What about a line of delta functions? And what does that mean? Let me try to draw delta(x), thought of as a function of x and y -- it's just worth imagining a line of delta functions. So I'm in the xy plane; let me look again at this thing. I have delta functions all along this line. Now, here is a crazy example, just to say, well, there is something new in 2-D. So previously my delta function was just at that point, and the old integrals just picked out the value at that point. But now think of a sort of line of spikes going up here -- and then of course it's periodic; everything's periodic, so that line continues, and this line appears here, and this line appears here. But I only have to focus on one period square. What's my answer now, if this function suddenly changes from a one-point delta function to a line of delta functions?
Now tell me what the coefficients are. What are the Fourier coefficients in 2-D for a line of delta functions -- a straight line of delta functions going up the y axis? Let's see. What do I do? I'm going to integrate. When do I get zero out of this integral, and when do I not? What am I doing here? Help me. I said 2-D was easy, and I've gotten in over my head here. So look: I can do the x integral, right? We all know how to do the x integral. If I integrate with respect to x, what do I get? I'll keep that one over (2pi) squared. The x part says: take the value at x=0, which is one. So the x integral is one, good. Then I'm down to one integral; just the y integral is left: the integral of e^(-ily) dy. Now what is that integral? Wait a minute -- it depends on l, doesn't it? It's going to depend on whether l is zero or not. Is that right? Yeah, that's sort of interesting. If l is zero, I'm integrating one, and I get a 2pi; that cancels one of the 2pi's, so I get c_k0 equal to one over 2pi. And otherwise, the other c_kl's, when l is not zero, are what? Just zero, I think. The integral of this periodic guy from minus pi to pi is zero. What am I-- I'm making a big deal out of something that shouldn't be a big deal. For this delta(x) function, the Fourier series is just the one we know: a sum of e^(ikx)'s.
Do you see what's happened here? It was supposed to be a double sum, but the terms with l not zero aren't there. So for a line of spikes, a line of deltas, I'm back to a series that only depends on x -- the Fourier series is just the one I already know. All constants, all 1/(2pi)'s, when l is zero; and no oscillation in the y direction, because there's no y dependence. OK, I don't know why I got into that example, because the conclusion was just: it's the Fourier series that we already know, and it doesn't depend on l, because the function didn't depend on y. OK, then we could imagine delta functions in other positions, or a general function.
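[The discrete analogue makes a quick check with a 2-D FFT. On an N by N grid -- my own illustration -- a line of spikes along the y direction is the image with ones in the x = 0 row; its 2-D DFT is nonzero only where l = 0, constant in k, exactly the pattern just found:]

```python
import numpy as np

N = 8
Z = np.zeros((N, N))          # index [x, y] on an N x N period square
Z[0, :] = 1.0                 # a line of spikes: delta at x = 0, for every y

Z_hat = np.fft.fft2(Z)        # Z_hat[k, l] = sum of Z[x, y] * exp(-2*pi*1j*(k*x + l*y)/N)
print(np.round(Z_hat.real))   # equals N where l = 0 (every k), and 0 elsewhere
print(np.count_nonzero(np.round(np.abs(Z_hat))))   # N nonzero coefficients, one per k
```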
OK, so that's 2-D. Would I want to tackle a 2-D discrete convolution? Ha, we've got two minutes -- that's one dimension a minute. Right, OK. What's a 2-D discrete convolution? Now, you might say, OK, why is Professor Strang inventing these problems? Because a 2-D discrete convolution is the core idea of image processing. If I have an image, what does image processing do? It separates the image into pixels, right -- that's all the image is, a bunch of pixels. Then many 2-D image processing algorithms -- old JPEG, for example -- would take an eight by eight set of pixels (eight by eight, 2-D, is the main point) and transform it. Do a 2-D transform. So what is a 2-D transform? What would be the 2-D transform that would correspond to this? First of all, how big is the matrix, just so we get an idea? I probably won't get to the end of this example. In 1-D, my matrix was four by four -- that was for four points on a line. Now I've got a square of points. So how big is my matrix? 16 by 16, right? Because it's operating on 16 pixels, where in 1-D it only had four to act on. So I'm going to end up with a 16 by 16 matrix here. Let me see, what do I need? Oh, wait a minute -- yeah, I think the time's up here. My c has to have 16 components, my u has to have 16, my right-hand side has 16 components. Yeah. So I'm up to 16, but a very special matrix: it'll be a circulant of circulants, somehow. OK, enough for 2-D. I'll see you Wednesday, and we're back to reality. OK.
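[And the 2-D claim itself -- that the 16 by 16 "circulant of circulants" is diagonalized by the 2-D transform -- can be sketched the same way: 2-D cyclic convolution on a 4 by 4 grid of pixels becomes pointwise multiplication under fft2, so deconvolution is again a division. The kernel and image here are my own random examples:]

```python
import numpy as np

N = 4                                   # a 4x4 grid: 16 pixels, so a 16x16 matrix implicitly
rng = np.random.default_rng(1)
K = rng.standard_normal((N, N))         # known 2-D kernel
U = rng.standard_normal((N, N))         # 4x4 "image"

# 2-D cyclic convolution, done directly from the definition...
B = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        for p in range(N):
            for q in range(N):
                B[i, j] += K[p, q] * U[(i - p) % N, (j - q) % N]

# ...equals componentwise multiplication after a 2-D FFT
B2 = np.fft.ifft2(np.fft.fft2(K) * np.fft.fft2(U)).real
print(np.allclose(B, B2))               # True

# So deconvolution in 2-D is again a division in frequency space
U2 = np.fft.ifft2(np.fft.fft2(B) / np.fft.fft2(K)).real
print(np.allclose(U, U2))               # True, provided fft2(K) has no zeros
```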