Lecture 17: Discrete-Time (DT) Frequency Representations

Instructor: Dennis Freeman

Description: As digital signal processing components have become cheaper, traditional design problems in audio and video systems have converted to discrete-time. This lecture compares system responses and Fourier representations in discrete- and continuous-time.

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

DENNIS FREEMAN: Hello, welcome. So as you might expect, there is yet another announcement. So this time, Exam 3, the last of the midterms, same rules-- everything is roughly the same. Walker, next Wednesday, 7:30 to 9:30.

No recitations on the day of the exam. Coverage through Lecture 18, which is next lecture, coverage through Recitation 16-- that's tomorrow-- homeworks 1 to 10. Homework 10 won't be graded-- there will be solutions posted. 3 pages, your old 2 plus 1.

No calculators, blah, blah, blah-- same as last time. Designed to be one hour, two hours to complete-- same as last time. There'll be a review session-- there'll be one on Monday at 3 PM in 36-112 and, of course, you can ask any review questions you'd like at any of the open office hours, but there'll be a formal review in 36-112 on Monday at 3:00. Prior term exams have already been posted and if you have a conflict, please tell me because I have to find rooms and a proctor.

OK, questions, comments-- completely routine at this point, right? No problems-- everybody knows what they're doing? It'll be fun, it'll be enjoyable-- smile. Questions, comments?

OK, so what I want to talk about today is DT signal processing. So formally, I'm going to talk about things like DT Fourier series. Next time we'll talk about the DT Fourier transform, but the context for that-- the reason we do this-- is because the discrete-time approaches are so useful for processing signals. And I want to motivate that by thinking about how signal processing has been viewed historically.

Over the past 20 or 30 years, there's been enormous interest in signal processing, and most of that interest evolved out of CT applications that we wanted to make better. We wanted to make our radio reception better, we wanted to make telephone reception better, we wanted to make hi-fis work better, we wanted to make television work better, photography, x-rays, blah, blah, blah. There were lots of reasons why we wanted to be able to alter the signals that were available to us. So for example, in radio-- how do you alter the signal so as to reduce static?

In the early radios, there was a big problem with automatic gain control. So as you went over the hills, the line of sight to the radio station changed and that changed the gain so there was an interest in controlling gain. So there were lots of applications where we wanted to account for deficiencies in the hardware and that gave rise to the notion of signal processing. Those signal processing applications were largely in continuous time, because the signals were largely in continuous time-- all that's changed.

Now when you say signal processing, you almost always mean discrete. And there's a very good reason for that, and it's digital electronics. Digital electronics are very inexpensive and they work very well. I want to give just one example chosen from my history.

So like most teenage people-- I was teenage once, right? It was a long time ago, but I was one once. And like most teenage types, even then we got afflicted with the audio hi-fi listen to music virus problem. The good news is I recovered-- I'm fully functional, it's gone, right?

So you have every reason to think that you'll recover, but that was a problem. I was interested in hi-fi and that was a hard problem. So if you think about how do you make a speaker system, one of the problems that you get when you try to design a speaker system is reproducing low frequencies. And my guess is that you're as interested in low frequencies as we were back then.

The problem with low frequencies is that if you have just a speaker, the speaker has an electromagnet. The electromagnet moves so that it pushes the cone. The cone pushes air this way, but if the cone is coming out, it also pulls air that way. So that's a problem because that means it can do that and the person who is over here hears nothing.

That's especially bad at low frequencies for reasons I won't go into because the wavelengths are long, but that's the problem. So what do you do? Well, you put it in a box. So instead of having your speaker in space, you put the speaker in a box.

Now when the cone goes that way, the air gets compressed toward the person-- that's our second person. And now the air that goes this way is constrained to the box, so it doesn't interfere, but what's the new problem? So putting it in the box solves one problem, but it introduces a new problem. What's the new problem?

AUDIENCE: Box vibrates.

DENNIS FREEMAN: The box will vibrate-- even worse.

AUDIENCE: Negative pressure.

DENNIS FREEMAN: Negative pressure. So how does that show up as a performance problem? That's exactly right. What's the problem that's generated because of the negative pressure in the box? Yes?

AUDIENCE: [INAUDIBLE]

DENNIS FREEMAN: So there's something to do with gain-- that's exactly right. Because of the negative pressure, what happens to the speaker cone?

AUDIENCE: Not be able to push.

DENNIS FREEMAN: Yes, it can't move-- you put a load on it. If in order to come out, you have to generate negative pressure, but the negative pressure's going to pull it back in. So the problem is that you put the speaker in the box, now you no longer have the short circuit of the acoustic path. However, you've now introduced a load, which makes the speaker much harder to move.

Everyone with me? So the problem was you put the speaker in the box and now the speaker, which was making a lot of air motion, makes very little air motion because of the box. So what do you do next? Make the box big.

OK, well that works for a while. Ideally if you were trying to build a speaker for this room, you would make a box this size. Well, that's kind of an architectural waste, so that's not really the right solution. So here's an acoustic solution that was very popular when I was your age.

So this is called a reflex port. So the idea is make a hole in the box-- so if you make a hole in the box, OK, now you're back to the same problem that if the speaker comes out this way, then it sucks air that way-- that's bad. So what you do instead is you put a tube and if you make the length of the tube exactly the right size, there is mass entrained in the tube. And then you have a system where the air is springy, but there's mass in here and you get a mass spring dashpot.

So now when this is going in and out, the air goes in and out of this tube, and if you adjust the length of the tube so you get the mass right, you can get them going in phase. Anyone have a clue what I just talked about? The idea was to use mass spring dashpot resonance theory to change the acoustics of a speaker box so you get more low frequencies out. That was the way we thought about speaker design back then.

The speakers that I bought were the Electro-Voice Interface A's, right? At that time, they cost about $1,500, and I was making $600 a month-- so times have changed slightly. So $1,500 was then a fortune, but I had to have them, right? There wasn't any question.

And the way these worked was like the reflex design. So in the reflex design, we had the port that was tuned so the mass resonated against the stiffness of the air. Here what you did-- this was not a speaker. This is the speaker--

AUDIENCE: [INAUDIBLE]

DENNIS FREEMAN: --and the idea was that you adjust the mass and stiffness of this big passive thing. There were no electrical wires hooked up to that thing-- that was called a passive resonator. It was tuned so that when the 8-inch speaker up here was trying to move, it would move in resonance so that it was out of phase with the 8-inch speaker up on the top so they were both going in and out the same time. Same idea as the reflex, but you could get a bigger motion from this passive resonator than you could with the acoustic reflex.

So the next big advance was by Amar Bose. Bose was here at the time-- he was a professor in electrical engineering at that time and he invented the Bose 901s, which is something everybody had to have. What Bose did that was extraordinarily clever was he used really chintzy speakers-- CTS speakers, 4-inch diameter. They had absolutely crummy frequency response-- flat up to a couple of hundred Hertz and then their response fell roughly with one pole-- so that by the time you get out to the highest frequencies that we can hear, the response is greatly attenuated-- like 40 decibels.

40 decibels is a factor of 100 in pressure. So the problem is that these speakers had exactly the same problems that we had in the acoustic reflex problem, except they were even worse. This was an 8-inch speaker, this is a 4-inch speaker, so they were even worse. But what Bose did-- he took advantage of electronics.

He put in an electronic signal-processing box implementing the inverse filter, so the box preprocessed the sound to boost the frequencies that he knew the speaker would attenuate. So this box was filled with op-amps, gains, buffers-- all the kinds of things you know about, which were then just barely available. That box cost $1,000 back then because that was very state-of-the-art. So the idea was that he switched the problem from being an acoustics problem to an electronics problem-- very clever.

Today we go one more step and we do the signal processing digitally. So any modern stereo system has a discrete time filter in it. The idea is that you take the signal that's coming off the source, like an MP3-- you take an MP3-type source, you run it through a digital filter-- this is a filter that works on numbers, not on volts-- and so you convert the source from the MP3 into a stream of numbers that gets crunched digitally, giving rise to a different stream of numbers, which you then convert back into an analog signal. This is the approach you find in every modern stereo system.

The reason is that it works so well. Here's a chip, the Texas Instruments TAS3004, which was specifically designed for car stereos. It's got amazing specs-- it has two channels, state-of-the-art conversion rates, state-of-the-art digitization. It has a processor internally that runs at 100 mega-instructions per second.

It implements all the common things-- treble, volume, loudness, everything you can imagine-- and it costs $9.63 off Digi-Key in units of one. And if you buy units of 500, they are $5.20 apiece-- a trivial amount of money by any standard. So the idea then is that it's just very effective to do number-crunching digitally. So that's what we want to talk about-- we want to talk about the same kinds of Fourier transforms and so forth that we've talked about with CT signals, but we want to know the new set of rules that apply when the signals are discrete in time.

OK, everybody with me? So this is just a little bit of review. You'll remember that we started with discrete time, we did examples in discrete time when we were thinking about feedback, then we went on to the CT and thought about feedback systems and so forth, then we did CT Fourier because I think the Fourier stuff is easier to see in CT. Now we're folding back to do DT. So it's been about six weeks since we talked about DT, so a little bit of a refresher.

We're going to be thinking about frequency responses, right? That was the basis of the Fourier transform, and that's going to be the basis of the Fourier signal processing in discrete time. And the idea of frequency analysis starts with the idea of an eigenfunction. The eigenfunctions for a discrete-time system are very similar to the eigenfunctions for a continuous-time system, but they're different, right?

The eigenfunctions for a CT system are complex exponentials, e to the st. Nod your head-- it'll make me feel good to think that you can remember that we did e to the st at one point. So the eigenfunctions for CT systems are complex exponentials, e to the st. Eigenfunctions for DT systems are complex geometrics.

Complex geometrics in DT play the same role as complex exponentials in CT, and they play that role for exactly the same reason. So when we wanted to prove that the complex exponentials were eigenfunctions of LTI systems, we took the complex exponential and we convolved it with the impulse response of an LTI system. All LTI systems can be represented by their impulse response. If you convolve any real-valued function with a complex exponential, you get back a complex exponential of the same shape, but possibly different complex amplitude.

The same thing happens in DT. So in order to see that that's true, we think about the input to an LTI system being z to the n-- a complex geometric, so z is some complex number, and z to the n is the evolution of the complex geometric as a function of time. We let the system be represented by the unit sample response, which is analogous to the unit impulse response. And we simply convolve the unit sample response with the complex geometric, and if you go through the convolution sum-- it's a sum now because it's DT-- you find that the output has the same base, z, that the input did, but now the amplitude has been modified by the value of the system function evaluated at the location z, OK?
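
That eigenfunction property is easy to check numerically. Here's a sketch in Python, using a hypothetical unit sample response h[n] = (1/2)^n for n >= 0 (truncated where it is negligibly small): convolving h with z^n gives back z^n scaled by H(z).

```python
import numpy as np

# Hypothetical unit sample response, chosen only for illustration:
# h[n] = (1/2)^n for n >= 0, truncated where it is negligibly small.
h = 0.5 ** np.arange(60)

# Complex geometric input x[n] = z^n for a complex base z.
z = 0.8 * np.exp(1j * np.pi / 5)
n = np.arange(200)
x = z ** n

# Convolution sum: y[n] = sum_k h[k] z^(n-k) = (sum_k h[k] z^(-k)) z^n
y = np.convolve(h, x)[: len(n)]

# System function evaluated at z: H(z) = sum_k h[k] z^(-k)
H = np.sum(h * z ** -np.arange(len(h)))

# Past the start-up transient (the input only begins at n = 0),
# the output is the same geometric scaled by H(z).
assert np.allclose(y[60:], H * x[60:])
```

For this choice of h, the truncated sum agrees with the closed form H(z) = 1/(1 - 0.5/z), which is why the base z was chosen with |z| > 0.5.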

We did this before-- this is review. Similarly, if you think about representing a discrete-time system in terms of a linear difference equation with constant coefficients, you can see that that type of system can always be represented by the ratio of polynomials in z. The same as with CT-- there we had the ratio of polynomials in s. Here it's polynomials in z.

And if we want to evaluate that H of z thing, the system function, there's a very easy way to do it since it's a polynomial. There's nothing different about polynomials in z and polynomials in s insofar as their being polynomials. The fact that they're polynomials means you can factor the numerator, you can factor the denominator, you can think about poles and zeros, and you can evaluate the system function. After you've factored the numerator and the denominator, think about each contribution.

So z0 minus q0-- so if I have a zero at q0 and I want to know how big the eigenvalue is at z0, I think about the arrow that connects the zero to the point of interest. And I can say something about the magnitude by the length of the arrow and the phase by the angle of the arrow-- same as CT. And I get the same divide-and-conquer idea. If I know how one zero works and if I know how one pole works, I can combine the responses to the numerous poles in a complicated system in order to figure out how the complicated system works.
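
As a sketch of that divide-and-conquer evaluation (with a hypothetical zero at z = -0.5 and pole at z = 0.5), the magnitude is the product of the zero-vector lengths divided by the product of the pole-vector lengths, and the angle is the sum of the zero angles minus the sum of the pole angles:

```python
import numpy as np

# Hypothetical pole-zero set, chosen only for illustration.
zeros = np.array([-0.5])   # one zero at z = -0.5
poles = np.array([0.5])    # one pole at z = 0.5

def H_from_vectors(z0):
    """Evaluate H(z0) from the vectors connecting each zero and pole to z0."""
    mag = np.prod(np.abs(z0 - zeros)) / np.prod(np.abs(z0 - poles))
    ang = np.sum(np.angle(z0 - zeros)) - np.sum(np.angle(z0 - poles))
    return mag, ang

# Compare against direct evaluation of H(z) = (z + 0.5) / (z - 0.5).
z0 = np.exp(2j)                       # a point on the unit circle
mag, ang = H_from_vectors(z0)
direct = (z0 + 0.5) / (z0 - 0.5)
assert np.isclose(mag, np.abs(direct))
assert np.isclose(mag * np.exp(1j * ang), direct)
```

The two assertions confirm that the vector construction and the direct polynomial evaluation agree in both magnitude and angle.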

I can think about the poles and zeros one at a time, just like I did for CT. So the magnitude is determined by the product and division of all the magnitude factors and the angle is determined by the sum and difference of all the angle factors. OK, I'm going over this quickly, but that's because I'm expecting that you know this, you just need to be reminded. If we want to think about the frequency response, we do the same thing we did in CT.

We like complex geometrics, so we try to think about the eternal cosine waves as being expressed in terms of complex geometrics. The difference is that now we think about this e to the j omega 0 n term. So we had the same form-- we had e to the j omega 0 t. We thought about that as a complex exponential with a frequency omega 0.

Now we're thinking about e to the j omega 0 n as z being this e to the j omega 0 thing, OK? It's the same thing-- it's still Euler's equation, we're just parsing it differently. We used to think of it as a complex exponential, now we think of it as a complex geometric-- that's the only difference. So now what we need to do if we're thinking about the response to the cosine of omega 0 n, we have to think about adding two complex exponentials, one at z0 and one at z1, where z0 is e to the j omega 0 and z1 is e to the minus j omega 0, OK?

Then the response, since z0 to the n is an eigenfunction, is just that same eigenfunction premultiplied by the system function evaluated at z0, and similarly here for z1. So we get an expression here that looks for all the world just the same as the CT example from before, except that this is the base of a geometric sequence. Just like CT, the system function has conjugate symmetry and, in fact, that's for the same reason too.

So you can see that there's conjugate symmetry just by thinking about the expansion in terms of the z transform. And if you think about the fact that the h of n's are all real, taking the complex conjugate of this side is the same as taking the complex conjugate of that. The complex conjugate of a sum is the sum of the complex conjugates. The only thing that's imaginary is that.

So the only thing that happens is that the sign changes. So there's the same complex conjugate relationship in DT as there is in CT, for exactly the same reason. And finally, we get our final answer, which is that if we want to know about the frequency response, we think about the magnitude and angle of the system function. We only need to look at one of the frequencies since we know the frequency response has conjugate symmetry, and we can figure out the response to cos omega n.

It's still cos omega n, except the magnitude is different by the magnitude of the system function and the phase is delayed by the angle of the system function-- everything looks exactly the same. The one thing that's different-- now we're getting something a little more interesting-- is that when we're interested in the frequency response, the thing we need to do is evaluate the system function on the unit circle. If we wanted to know the frequency response for a CT system, where does the frequency response for a CT system live?

AUDIENCE: [INAUDIBLE]

DENNIS FREEMAN: J omega-axis, right? Here, the frequency response lives on the unit circle. That's the only difference and that means that the vector diagrams work precisely the same way, except that we don't connect them up to the j omega-axis. We connect them up to the unit circle. So where is DC?

AUDIENCE: [INAUDIBLE]

DENNIS FREEMAN: DC means cos omega n where omega is 0. Where is that? Where was DC in CT? Frequency response lives on the j omega-axis. Where is DC?

AUDIENCE: 0.

DENNIS FREEMAN: 0. Where is DC in a DT system?

AUDIENCE: 1.

DENNIS FREEMAN: 1, OK? It's because we've shifted from thinking about complex exponentials to thinking about complex geometrics. So now DC lives on the unit circle at the place where the angle is 0, right? So now we have to think about things of this form.

If we want omega 0 to be 0, we need to have this number be e to the 0, which is 1. So the DC response-- the thing that we plot-- we're going to plot the frequency response just the same as we did in CT, and there's going to be the same trick as there was in CT. In CT, the system function has a non-zero answer throughout the entire s-plane.

In DT, the system function has a non-zero response in the entire z-plane. The frequency response in CT lives on the j omega-axis, the frequency response in DT lives on the unit circle, right? But the frequency response itself is simply a plot of the magnitude and angle as a function of omega. I should mention that I sneakily use a different symbol for omega.

I like to use a little omega for CT and a capital Omega for DT. The reason I like to do that is that the dimensions are different, right? When we think about little omega 0, we're thinking about cosine of omega 0 t. When we think about this one, we're thinking about cosine of capital Omega 0 n.

Can somebody tell me something different about little and capital Omega 0? What's different about them? Anything? Yes?

AUDIENCE: The little one goes to infinity.

DENNIS FREEMAN: The little one goes to infinity, that's correct. Yes?

AUDIENCE: The little one [INAUDIBLE].

DENNIS FREEMAN: So you can think about it this way-- little omega 0 has dimensions like radians per second, where capital Omega 0 has the dimensions of radians, right? So the dimensions are completely different-- that will be very important when we think about systems. So in the last two weeks of the course, we'll think about systems that convert CT to DT and back. When we're doing that conversion, it's very important to keep track of how the frequencies change, and this dimensional difference will be very handy to help us remember how to convert one into the other.

So that's the reason for keeping them separate-- so they're dimensionally distinct. So all we need to do then to think about the frequency response-- in CT, the frequency response would be the magnitude and angle plotted versus little omega 0. All we do that's different is we plot against capital Omega 0. Capital Omega 0 specifies the angle on the unit circle.

So if the angle is 0, we pick out 0 frequency. We think about that as having two pieces-- there's the piece associated with the pole, there's the piece that's associated with the zero. The zero is in the top, so we take the length of the top divided by the length of the bottom, and that tells us the magnitude at 0, which is a number bigger than 1, right? Because the pole's in the bottom-- the pole's the short one, and the pole's in the bottom.

And as you increase capital Omega, you sweep out the frequency response. So by the time you get over to pi, the length of the vector associated with the pole is bigger than the length of the vector associated with the zero, so now you're less than 1, OK? So everything's exactly the same as CT except that we're looking at the unit circle rather than looking at the j omega-axis. The same sort of thing happens for negative frequency, just like it did in CT, the only difference being that in CT, we were computing the frequency response by looking at the j omega-axis.

In DT, we're computing the frequency response by looking at the unit circle. There is one huge difference. OK, so summary: everything's the same except we look at the unit circle-- except frequency responses are now periodic. Someone previously said that one of the differences between little omega and big Omega is that little omega goes to infinity. That's absolutely true.

What happens when big Omega goes to infinity? Well, angles wrap. So if you think about this diagram-- if you start with capital Omega being 0, by the time you get over here capital Omega is pi. If you keep increasing, you'll come back here-- if you keep increasing, you go again.

The big difference between CT and DT is that the DT complex exponentials are periodic in 2 pi-- every 2 pi, they repeat themselves. Because of that, DT system functions repeat themselves. So if you think about evaluating the system function at some frequency Omega 2, which happens to be Omega 1 plus 2 pi k, you get the same thing as if you had evaluated the system function at Omega 1, because the frequency e to the j Omega 2 is the same as the frequency e to the j Omega 1 because of the periodicity of angles. Is that clear?
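
That periodicity is easy to verify numerically. Here's a sketch using a hypothetical one-pole, one-zero system H(z) = (z + 0.5)/(z - 0.5): evaluated on the unit circle, shifting Omega by 2 pi changes nothing, and the response also shows the conjugate symmetry from before.

```python
import numpy as np

# Hypothetical system, for illustration only: H(z) = (z + 0.5) / (z - 0.5)
def H(z):
    return (z + 0.5) / (z - 0.5)

Omega = np.linspace(-np.pi, np.pi, 1001)
resp = H(np.exp(1j * Omega))     # frequency response lives on the unit circle

# Periodic in 2 pi: Omega and Omega + 2 pi give the same response.
assert np.allclose(resp, H(np.exp(1j * (Omega + 2 * np.pi))))

# Conjugate symmetry: H(e^{-j Omega}) is the conjugate of H(e^{j Omega}).
assert np.allclose(H(np.exp(-1j * Omega)), np.conj(resp))

# Consistent with the sweep described above: the zero vector is long at DC
# and the pole vector is long at Omega = pi, so gain 3 at DC, 1/3 at pi.
assert np.isclose(abs(H(1.0)), 3) and np.isclose(abs(H(-1.0)), 1 / 3)
```
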

So we get something that's very different in DT. In DT, we only ever plot the frequency response up to pi, because above pi it repeats itself. By plotting it between minus pi and pi, I've shown a 2 pi range, and the frequency response is always periodic in 2 pi because discrete frequencies are always periodic in 2 pi. Yes?

AUDIENCE: How does that affect the filters if the number repeats itself?

DENNIS FREEMAN: How does that affect the filters? That's an excellent question and not a trivial one to answer. So one of the big important differences, which you will under-appreciate right now, is that because of this repetition, there's no equivalent to Bode. Now I know you all love Bode and because you like it so much-- I'm being slightly sarcastic.

I like it a lot because it's easy, right? You haven't quite got over the barrier yet and so I've heard some negative feedback about Bode. Bode's wonderful, but in case you haven't got that message, one of the biggest problems in DT-- which comes about exactly because of the thing you said, that the DT frequencies are periodic-- is that we don't have an analog of Bode, which means that when we try to sketch the magnitude and phase for a discrete-time filter, it's a much harder problem.

One for which we almost always use a computer because we don't know any good rules to let us think about it in our heads. So that's one of the implications, but there's a lot of others and, in fact, that's one of the main themes for the rest of the course. So the important difference then is two important differences, frequency response lives on the unit circle and discrete frequencies are periodic. OK, I've talked way too much, so finally I'm going to ask you to do something.

I want you to think about three CT frequencies-- cos 3,000t, cos 4,000t, cos 5,000t-- and think about sampling them with a sampling interval capital T of 0.001, and put them in order from lowest to highest DT frequency. So first, look at your neighbor, say hello.

AUDIENCE: Hello.

DENNIS FREEMAN: And now choose an answer between 0 and 5.

OK, so which list goes from lowest to highest DT frequency? So raise your hand with the number of fingers equal to the answer between 0 and 5. High, high.

OK, so I see people disagreeing with their partner. I think that's always good. OK, the correct answer is in the minority. So let's ask, according to the theory of lecturers, what's the right answer? If you were the lecturer asking this question, what would you make the answer be?

AUDIENCE: Probably none of them.

DENNIS FREEMAN: None of them, except I don't have none-- but that would be the kind of thing I would do, I guess. If you were the lecturer and you wanted to make a point about x1, x2, x3, 3,000, 4,000, 5,000-- what would you want the answer to be?

AUDIENCE: Number 5.

DENNIS FREEMAN: Sure, you'd like it to be number 5, completely backwards. OK, talk to your neighbor-- figure out why it's completely backwards. That is the right answer-- the theory of lecturers always works. So why is 5 the right answer?

Is 5 the right answer? 5 is the right answer-- why is 5 the right answer? OK, the corollary to the theory of lecturers-- always look at the last line. What was the last point?

AUDIENCE: [INAUDIBLE] wrap [INAUDIBLE]

DENNIS FREEMAN: Things wrap. So obviously these things must be wrapping, right, because according to the corollary to the theory of lecturers, it has to do with wrapping. According to the theory of lecturers, it's number 5. Why would it wrap that way?

Well, think about after you do the sampling. If you substitute capital T equals 0.001, you look at x1 of nT. So x1 of t is this, so substitute nT every place there was a t. So you get cos of 3,000nT. 3,000 times capital T is 3. So x1 of n is cos of 3n, right, and that's written here.

Similarly, the second one is 4n and the third one is 5n. So the discrete frequency, capital Omega, is 3, 4, or 5. Think about where 3, 4, or 5 are on the unit circle, OK? 3 is a number just less than pi, so that puts 3 here.

4 is a number that's bigger than pi, so you go around, you pass pi, and you go a little further. And 5 is a number that is slightly bigger than 3/4 of 2 pi, OK? So by that logic, right, this is low frequencies-- positive and negative frequencies are both necessary because of Euler's rule. In order to make a real-valued cosine function, right, we need e to the plus and we need e to the minus.

So we start at 0 frequency. As we increase frequency, we're thinking about conjugate frequencies, like so. So this is the highest frequency, then they cross-- so which is the highest frequency among 3, 4, and 5? Oh, that doesn't make any sense.

According to the rules I just said, the answer must be 3, right? 3 is closest to the high frequency, then the next closest is 4, and the lowest frequency must be 5. Why does that make any sense? The intuition comes if you think about what's happening when you're sampling.

Think about what would happen if capital Omega were a quarter. Then if we plotted cos omega t and compared the samples, cos 0.25n, you would see that at that discrete frequency, the samples are a very good representation of the unsampled signal. I'm trying to compare the CT signal to the DT sampled signal, OK? If you change the discrete frequency-- if you double it-- you still get a good representation of the CT signal.

If you double it again, you can still see the CT signal. I'm up to 1-- double it to 2, you can still sort of see it. Increase it to 3-- OK, now it's getting a little harder to see. Go a little higher to 4-- in this plot, the red line is showing the CT signal that was sampled.

Here, the red curve is still showing the CT signal that was sampled, but the green curve is showing an alternative way to think about that set of samples. That set of samples could alternatively have come from the frequency 2 pi minus 4n. If you go to an even higher frequency-- if you go to 5, the problem is even worse.

There's an alternative way of thinking about it that's a much lower frequency. If you go to 6, it's absurd to think about 6 as being the frequency, right? The thing that your eye sees is this low-frequency thing. You would never guess that I had sampled the high-frequency thing.

That's what we mean by the frequencies wrapping. There are two different waveforms that you can sample and get the same blue circles-- the same samples. You could sample cos of 6n or cos of 2 pi minus 6n and you would get the same thing. That's the reason we say that the discrete frequencies wrap, OK?

So that's the reasoning behind thinking about this. So there's a way of thinking about 5 as this whole distance around like that, but there's another way of thinking about it as just the negative frequency coming this way. And since they come in pairs, regardless of the way I think about it, I've always got the pair there to make sense out of either interpretation. That's the hardest part in DT-- it's not hard, it's just the hardest part, OK?
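
The whole argument can be checked numerically. A sketch, where the only assumption is defining the wrapped frequency as the distance from Omega to the nearest multiple of 2 pi:

```python
import numpy as np

# Sampling x(t) = cos(omega0 * t) with T = 0.001 gives x[n] = cos(Omega * n)
# with Omega = omega0 * T, so Omega = 3, 4, 5 for the three signals.
T = 0.001
Omegas = np.array([3000, 4000, 5000]) * T          # -> [3., 4., 5.]

# Because cos(Omega * n) = cos((2*pi - Omega) * n) at integer n, the apparent
# (wrapped) frequency is the distance to the nearest multiple of 2*pi.
wrapped = np.minimum(Omegas % (2 * np.pi), 2 * np.pi - Omegas % (2 * np.pi))

# wrapped is approximately [3.0, 2.28, 1.28], so from lowest to highest DT
# frequency the order is x3 (Omega = 5), x2 (Omega = 4), x1 (Omega = 3).
assert np.all(np.argsort(wrapped) == np.array([2, 1, 0]))

# Sanity check: the samples of cos(5n) and cos((2*pi - 5)n) are identical.
n = np.arange(50)
assert np.allclose(np.cos(5 * n), np.cos((2 * np.pi - 5) * n))
```
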

If you get that, that's the only trick in DT. So the answer then is number 5. So what kind of a filter is this? You remember, we want to have an intuition just like if we look at CT, right?

If I told you that I had a CT and I looked at the s-plane, and I told you that there was a pole there-- what kind of a system is that? High-pass, low-pass, band-pass, stop-band. High-pass, low-pass, band-pass, what? CT system, single pole.

Low-pass-- it's a low-pass because of Bode, right? Right, you like Bode, right? It's because if we think about frequencies near 0, we get one gain, and if we think about frequencies becoming very large, the gain falls off linearly after that. What kind of a filter is this one?

1, 2, 3, 4, 5? Yes, it's a high-pass filter, right? It's a high-pass filter because this pole is in the denominator, this vector is large when we're at DC, and it's short when we're at high frequencies-- it's in the denominator, so that makes the filter be big for high frequencies compared to low frequencies, OK? So that's the only thing that's different about DT.
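
A minimal check of that reasoning, assuming a hypothetical single pole near z = -1 (so the pole vector is long at DC and short at Omega = pi):

```python
import numpy as np

# Hypothetical single-pole DT system for illustration: H(z) = 1 / (z + 0.9).
# The pole sits near z = -1, so the pole vector is long at DC (z = 1) and
# short at Omega = pi (z = -1), which makes the gain larger at high frequency.
def H(z):
    return 1 / (z + 0.9)

dc = abs(H(np.exp(1j * 0)))        # gain at Omega = 0
hi = abs(H(np.exp(1j * np.pi)))    # gain at Omega = pi
assert hi > dc                     # high-pass behavior
```
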

OK, in the last 10 minutes what I want to do is introduce the idea of a Fourier series because that's where we're going with all this junk. We want to understand DT frequency responses so that we can generate a Fourier series. So the idea is exactly the same as what we do in CT. We want to represent a DT signal as a sum of complex geometrics, right?

We want to think of it as a sum of frequencies and the trick is to find out what that sum is. The thing that's different is that frequencies wrap. Because the frequencies wrap-- in CT the frequencies don't wrap. In CT, how many harmonics do we have to think about for an arbitrary CT signal?

Infinite, right? When we think about a CT Fourier series, you think about the fundamental, the second harmonic, the third harmonic, the fourth, fifth, sixth, seventh, and so on through infinity. In principle, they all matter. In DT, because of the wrapping, there is a finite number of them.

It's pretty easy to see that-- choose any frequency and assume the signal is periodic in capital N-- I'm just choosing the period to be capital N, just like we would say the period in CT is capital T. If the signal is periodic in capital N-- if it was originally a complex geometric, then, because of the factoring properties of the complex geometric, it must be the case that e to the j capital Omega N is 1. The only way that number could be 1 would be if capital Omega is a multiple of 2 pi over N.

That means that if we're thinking about signals periodic in 8, there are exactly eight frequencies we need to worry about and no more. The ninth one aliases back to the first, the 10th one aliases back to the second, et cetera. There are only eight of them. That's the big difference between CT and DT.
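The aliasing of harmonics is easy to verify directly: for period N = 8, harmonic k and harmonic k + N generate identical sample sequences.

```python
import numpy as np

# For signals periodic in N = 8 there are only eight distinct harmonics:
# e^{j*2*pi*(k+N)*n/N} = e^{j*2*pi*k*n/N} * e^{j*2*pi*n} and the last
# factor is 1 at every integer n.
N = 8
n = np.arange(N)
h = lambda k: np.exp(1j * 2 * np.pi * k * n / N)

print(np.allclose(h(9), h(1)))    # ninth harmonic aliases to the first
print(np.allclose(h(10), h(2)))   # tenth harmonic aliases to the second
```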

In DT, if we were thinking about sequences of length 3, there would be three frequencies we'd have to worry about. If we were thinking of sequences of length 4, there are four. The number of frequencies and the number of samples match. In fact, that's not too surprising because if you just think about it in terms of knowns and unknowns, if we want to represent an arbitrary signal of length 4 in terms of frequencies, we'd better have four of them-- here they are.

The idea is precisely the same as CT except that there is a finite number of harmonics that we have to worry about, and the rules for figuring out what the harmonics are are the same. There is also a convenient way to think about the DT Fourier series as a matrix. Since we have a finite number of frequencies and a finite number of times, there is some relation between the two-- you could have an arbitrary linear relationship between those sets of numbers expressed by a matrix. So a matrix turns out to be a very convenient way of thinking about the DT Fourier series.

And there's an orthogonality principle exactly like there was in CT. The orthogonality looks just like an inner product, just like it did in CT. In CT, it was the integral over any time of length capital T of the product of two frequencies is either 0 or capital T depending on whether the frequencies are the same or different-- same thing happens here. And because of that, there is a very simple way to sift out the k-th element of the series, just like there was an analogously simple way to sift out the k-th element of a CT series. All of this is precisely the same as what we did in CT.
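The DT orthogonality property can be checked by brute force: summing harmonic k times the conjugate of harmonic l over one period of length N gives N when k equals l and 0 otherwise.

```python
import numpy as np

# DT orthogonality: sum over one period of h_k[n] * conj(h_l[n]) is
# N if k == l and 0 otherwise (for k, l in 0..N-1), because the sum of
# e^{j*2*pi*(k-l)*n/N} over n is a geometric series that cancels
# unless k == l.
N = 8
n = np.arange(N)
h = lambda k: np.exp(1j * 2 * np.pi * k * n / N)

for k in range(N):
    for l in range(N):
        ip = np.sum(h(k) * np.conj(h(l)))
        expected = N if k == l else 0
        assert np.isclose(ip, expected)
print("orthogonality verified for N =", N)
```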

The result is analysis and synthesis formulas that look similar, except there are sums instead of integrals. They look just like the CT-- the interesting thing is that they both have a finite length. In CT, there was an infinite sum over k. In DT, there's a finite sum over k. And so that means that there is a very convenient matrix way of looking at how would you construct an arbitrary signal in time from frequency components and how would you compute the frequency components from the time components?

It's just that in DT, the convenient way to express that relationship-- you can still express it as an analysis and synthesis formula-- that's what these are. But a much more friendly way of thinking about it is as a matrix. Those two representations are absolutely identical. So the idea then for today was generalize CT to DT.
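As a sketch of the matrix view (sign and scale conventions vary; this one puts the 1/N in the analysis step), the synthesis formula is a matrix-vector product x = W a, and orthogonality makes the analysis step a = (1/N) W^H x, so the round trip recovers the signal exactly:

```python
import numpy as np

# Matrix form of the DT Fourier series for a length-4 signal.
# Synthesis: x[n] = sum_k a[k] * e^{j*2*pi*n*k/N}, i.e. x = W @ a
# Analysis:  a[k] = (1/N) * sum_n x[n] * e^{-j*2*pi*n*k/N},
#            i.e. a = (1/N) * W^H @ x, which follows from orthogonality.
N = 4
n = np.arange(N)
W = np.exp(1j * 2 * np.pi * np.outer(n, n) / N)   # N x N synthesis matrix

x = np.array([1.0, 2.0, 3.0, 4.0])                # arbitrary length-4 signal
a = (W.conj().T @ x) / N                          # analysis: coefficients
x_back = W @ a                                    # synthesis: reconstruct
print(np.allclose(x_back, x))                     # True: round trip recovers x
```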

DT is really useful when you're trying to do signal processing with digital electronics, and digital electronics is the way to go because it's so inexpensive and powerful. So: unit circle, wrapped, finite length-- and that finite-length property is really the key to signal processing, because we can represent a finite-length signal in time with a finite number of frequency components. There is a match, and that's the reason this is the method of choice when we do digital signal processing. We'll talk more about that next time.
