Lecture 11

Instructor: Abby Noyce

Lecture Topics:
Review, Working Memory, Modal Model for Memory, Short Term Memory, Memory Exercise, Chunking Information, Working Memory Capacity, Working Memory Model, The Phonological Loop, The Visuospatial Sketchpad, The Central Executive, Attention Paper Presentations, Attention Paper Conclusions

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR: We've been talking for two days about--

AUDIENCE: [INAUDIBLE]

PROFESSOR: --attention.

[LAUGHING]

AUDIENCE: Sorry.

PROFESSOR: Jen.

AUDIENCE: [? I don't see it. ?]

PROFESSOR: I would like you to direct your attention to me and not the ladybug.

AUDIENCE: There's a ladybug? Where?

AUDIENCE: [INAUDIBLE] pay attention.

PROFESSOR: Right. So we've been talking for two days about attention. So attention is this idea that you focus your processing resources on a subset of all the input that's coming in at one time. You can't deal with all the sensory information out there. You've got to select some portion of it to direct further.

And we talked yesterday about the idea that this is a competition between inputs coming in, that all of the different possible things you could be attending to compete. And whichever one gets deemed the most salient, the most important or relevant to what you're doing manages to lay claim to the processing level that it needs. So we're going to shift into a very related topic and a topic that seems to get more and more related as we look at modern models. And we're going to look at working memory. Working memory is another term for short-term memory.

So what is this working memory business? So this is this brief, short-term, immediate memory for data you're currently working with. Looking up a phone number, hanging onto it for the time it takes you to punch it into the phone, or doing mental arithmetic requires that you hang on to the numbers involved.

Or the store I used to work at, we had a credit card machine that was separate from the register. So if someone wanted to pay with a credit card, you took their card, and you swiped it, and you typed in the amount. You'd have to hang on to the total off the register long enough to punch it into the machine. All of these are examples of tasks that use working memory-- short-term memory.

Short-term memory is closely related to attention. Attending to something and having it in your working memory are often-- some people would say always, some people would not-- equivalent states.

AUDIENCE: What would make someone have a bad memory, as opposed to a good memory for that--

PROFESSOR: For short-- so when people talk about memory, memory researchers say there's two really different things. There's short-term memory, which is this immediate hanging on to the stuff you're working with right now. And then there's long-term memory, right? Like remembering what you did last week, or last year, or something that you learned in third grade. Long-term memory is really different.

And people's capacities for both of these things-- short term and long term-- vary among individuals. But exactly why is not terribly well understood. We'll talk about this some more when we talk about longer term memory next week. You guys might remember that the very first paper we read the first week of class talked about training people on this particular working memory task, and that it was closely related to fluid intelligence, that improving your working memory-- or improving on this particular working memory task-- seemed to improve people's fluid intelligence on a number of different tests.

So working memory is good stuff. It correlates with SAT scores. It correlates with scores on intelligence tests. It correlates reasonably well with success in school, about as well as SAT scores do.

OK, so Atkinson and Shiffrin in 1968, back in the proverbial day, came up with a model for how memory works that a lot of people call the modal model. It's the statistical use of the term modal. It's the one that gets cited the most often.

It's not as common now as it was. But it's still worth thinking about. So their model says that, OK, we've got all of this external input coming in. And it goes into an immediate short-term sensory storage.

You can think of this as being, like, the primary sensory portions of your brain-- like primary visual cortex that we talked about, or auditory cortex, or tactile cortex-- all of this. And that's very brief. And this is probably not within conscious awareness.

And some of it gets lost. But some of it goes into short-term memory. And there again, still working with this idea that short-term memory is information that's, like, immediately accessible. You can work on it there.

Atkinson and Shiffrin were really trying to understand how learning works-- how this longer-term, get a piece of information, and be able to recall it forever. What's the capital of Massachusetts?

AUDIENCE: Boston.

PROFESSOR: Boston. When did you learn this? Like yea big? Back in the day. So this is something that's just in your long-term memory. You can get it.

And they were interested in how information gets to long-term memory. And they said that this short-term processing store processes stuff that's going into long-term memory. You've got to put things into short-term memory first.

There's this sequential model. Stuff can get lost out of memory at any of these stages, although once something's really consolidated into long-term memory, you're not very likely to lose it. And they said, short-term memory can also pull information back from long-term memory to work with it at any given point in time. So there's connections both ways between those two storage spaces.

Atkinson and Shiffrin-- right. So sensory, short term, and long term-- you can lose stuff at every stage of it. Once something really makes it into long-term memory, whether or not it's ever really lost is a point of theoretical debate.

People will say, but of course stuff is lost. I can't remember what I did on June 2, 1994-- [INAUDIBLE] 1998. So I think you guys were, like, what, 2 in 1994-- ish? So young enough that you probably don't remember anything. But from, like, childhood-- random day. But where exactly in the memory process that drop happens is another thing we'll talk about next week.

So Atkinson and Shiffrin's model says that short-term memory is this key step in storage of long-term memories. And it can also pull information from long-term memory and use it. They thought of short-term memory as being more or less all one unit.

Modern theorists would say there's different kinds of short-term memory. Working memory is the more modern term. This model of how memory works was one of the very early pieces of theory in this shift that happened in the middle of the 20th century from behaviorist perspectives in psychology to cognitive perspectives. These guys are really talking about mental events-- what's happening inside your head-- in a way that psychologists 10 or 20 years earlier wouldn't really have wanted to do, which makes it cool.
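
As a rough sketch of the structure just described-- brief sensory storage feeding a limited short-term store, which both writes to and reads back from long-term memory-- here is a toy Python version. The capacities, names, and rules are placeholders for illustration, not values from Atkinson and Shiffrin's paper.

```python
# Illustrative sketch of the Atkinson-Shiffrin "modal model" as a pipeline of stores.
# Capacities and rules here are placeholders, not values from the 1968 paper.

from collections import deque

class ModalModel:
    def __init__(self, stm_capacity=7):
        self.sensory_store = []                       # brief, pre-attentive buffer
        self.short_term = deque(maxlen=stm_capacity)  # limited STM; old items drop out
        self.long_term = set()                        # durable, effectively unlimited store

    def sense(self, inputs):
        """External input lands in sensory storage; most of it is never attended."""
        self.sensory_store = list(inputs)

    def attend(self, item):
        """Only attended items move from sensory storage into short-term memory."""
        if item in self.sensory_store:
            self.short_term.append(item)

    def rehearse_into_ltm(self, item):
        """In this model, rehearsal in STM is what writes an item to long-term memory."""
        if item in self.short_term:
            self.long_term.add(item)

    def retrieve(self, item):
        """STM can also pull items back out of long-term memory to work with them."""
        if item in self.long_term:
            self.short_term.append(item)
            return item
        return None

model = ModalModel()
model.sense(["phone number", "ladybug", "lecture slide"])
model.attend("phone number")
model.rehearse_into_ltm("phone number")
print(model.retrieve("phone number"))  # -> phone number
```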

OK, while we're on working memory, you guys have also probably encountered this idea before, that your short-term memory can hold seven, plus or minus two, pieces of information. The guy who came up with this is George Miller. The paper that he wrote in '56 says something like, I have been pursued by an integer.

It's the opening sentence. It keeps slipping into my data. It keeps showing up.

This is a little bit before the Atkinson-Shiffrin model here. But Miller was one of the first guys to really pin down not only that short-term memory has a limited capacity but actually pin down what that capacity is. He said people can remember seven, plus or minus two, chunks of information. So what's a chunk?

If you are given a string of information, you're probably going to try and make some kind of organizational sense out of it. So a chunk is a well-learned cognitive unit made up of a number of components representing a frequently occurring and consistent perceptual pattern. So if, for example, you're trying to learn your friend's phone number, and the area code is 617, or 781, or one of those common Boston and Boston suburbs area codes, you probably don't need to remember that as 6, 1, 7. You probably remember the whole 617 part as a unit.

If you grew up in a small town, like I did, where everyone's phone number starts with 603-465, where both the area code and the exchange are the same, then again, those are individual chunks. You don't have to learn each digit individually. If I ask you to learn a list of words and then write them back down, you're probably not remembering individual letters. You're remembering the words. All of these are examples of our brains taking information and grouping it up to make it into fewer pieces to work with.

Chunking is something that your brain just does. You don't really have to think about it, although finding ways to chunk information helps to remember larger things. Anyone learn, like, a goofy mnemonic for something ever, like for, oh, kingdom, phylum-- kings play chess on fine green sand for the kingdom, phylum, class, order string of taxonomies or something like that? So that's almost an example of taking advantage of the way that your brain can grab a meaningful unit, like a sentence, much more easily than just a random string of words.
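
A toy example of what chunking buys you, using a phone-number-style grouping. The digits and group sizes below are arbitrary; the point is just that the same string costs fewer units once it's grouped.

```python
# Toy illustration of chunking: the same ten digits cost ten "slots" ungrouped,
# but only three slots once they're grouped the way a US phone number is written.

def chunk(digits, sizes):
    """Split a digit string into chunks of the given sizes (sizes are arbitrary here)."""
    chunks, i = [], 0
    for size in sizes:
        chunks.append(digits[i:i + size])
        i += size
    return chunks

number = "6174655309"
print(list(number))              # 10 separate items to hold on to
print(chunk(number, [3, 3, 4]))  # ['617', '465', '5309'] -- 3 familiar chunks
```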

AUDIENCE: [INAUDIBLE]

PROFESSOR: Back onto chunking. So I had a train of thought. I don't know what it was. Drat. I know what I was going to say.

So you know how, like, when you have a long number that you have to remember for some reason, like your Social Security number-- they break it up with the little dashes in the middle? Part of that is to, instead of just it being a random string of digits, to give you a piece of chunking, so you can think about it as three separate parts. And it's actually easier for people to remember information when it's broken up this way. Same thing with the little spaces on, like, a credit card number.

AUDIENCE: [INAUDIBLE] 1945.

PROFESSOR: End of World War II.

AUDIENCE: Yay. [INAUDIBLE].

PROFESSOR: All right.

AUDIENCE: [? Bet ?] [? that ?] [? was ?] [? going ?] [? to ?] bother you.

PROFESSOR: Moving right along. So when you chunk information, you can hold more of it in working memory, as we just saw. Did anyone do worse on the second round of that, with the numbers clumped up?

AUDIENCE: I did the same.

PROFESSOR: You did the same.

AUDIENCE: I think it's because I wasn't paying attention to the lecture [INAUDIBLE]

PROFESSOR: Possibly. So Miller said seven plus or minus two is working memory's capacity. There's some more modern numbers showing that it's closer to, like, three plus or minus one. And Miller's numbers didn't take into account things like chunking and rehearsal. And three plus or minus one-- well, that last one was four. So that's probably pushing the limits of a lot of people's unrehearsed working memory capabilities.

OK, so things that affect how much stuff you can stick into your working memory at one point in time-- this is pretty variable among people. It's definitely got a genetic component. But even for an individual, you can change how much stuff you can store by chunking, by how fast these things can actually be spoken out loud, which is an unintuitive result.

But people tend to recall about one and a half seconds' worth of material. So if I show you a list of simple nouns or, let's take, barn animals-- a cat, a dog, a cow, a chicken-- and have you remember a list of four or five of these, you'll probably be able to remember about as many of these words as you could pronounce in about a second and a half. And much more than that, you just can't hang on to.

And this has been pretty well documented, in part because it's such a weird result. People have tested it for color names, for shape names, for a whole bunch of different kinds of nouns, even just for nonsense words or nonsense syllables. Someone did a really interesting study looking at how many numbers people could recall, compared across English speakers-- most English digits have one syllable-- Spanish and Hebrew speakers, both of which tend to even out to around two syllables per number-- and Arabic speakers. And Arabic actually averages a little over two syllables per number. They just checked digit recall across people whose native language was each of these.

And they found that people had the longest recollection for numbers in English out of these four and the shortest for Arabic. And Spanish and Hebrew were somewhere in the middle. So even though it's the exact same semantic content-- I'm still asking you to think about a 7 or a 3 or something-- and I think they were all written in numerals, people's performance varied just by what language they were speaking, by how long it took to pronounce that, which is a nifty result.
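
The pronunciation-time idea lends itself to a quick back-of-the-envelope calculation. The sketch below assumes a roughly 1.5-second rehearsal window and uses invented per-item speaking times; it's only meant to show why shorter digit names would predict longer spans.

```python
# Back-of-the-envelope sketch of the "about 1.5 seconds of speech" span estimate.
# The per-item durations below are invented placeholders, not measured values.

REHEARSAL_WINDOW_S = 1.5  # roughly how much spoken material people can keep cycling

def estimated_span(seconds_per_item):
    """How many items fit in the rehearsal window if each takes this long to say."""
    return int(REHEARSAL_WINDOW_S // seconds_per_item)

print(estimated_span(0.25))  # short, one-syllable digit names -> more items recalled
print(estimated_span(0.50))  # longer, two-syllable digit names -> fewer items recalled
```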

AUDIENCE: [INAUDIBLE]

PROFESSOR: Shorter or longer?

AUDIENCE: [INAUDIBLE]. Well, Japanese numbers are pronounced shorter than [INAUDIBLE] numbers. [INAUDIBLE].

PROFESSOR: [COUNTING IN JAPANESE]

But it holds even, like, within single syllable. Syllables with short vowels, you can pack more in than syllables with long vowels. And you'll see that kind of difference, too.

AUDIENCE: There aren't many, like, long vowels [INAUDIBLE].

PROFESSOR: Yeah? Cool.

AUDIENCE: Well, what if someone was, like, bilingual? Would it be based on the language they're more comfortable with?

PROFESSOR: Yeah, or whichever language they're using when they encode the data in the first place. I don't know. I'm not bilingual, not even close. I don't have a good perspective on this.

AUDIENCE: There's people-- like, I know there's people [INAUDIBLE].

PROFESSOR: Everyone here, like, had to memorize your times tables in about third grade, right? You know 2 times 2 is 4, and 2 times 3 is 6, and 2 times 4 is 8, right? Anyone here still run through that little thing in their head when they're doing out a multiplication problem?

6 times 4. And 6 times 4 is 24. OK, so that's 4. And find yourself running through these same things.

There's some interesting research somebody did because in Chinese, the standard way of doing a times table-- and this is what I was told, OK, this is second-hand knowledge-- is instead of saying 1 times 1 is 1--

AUDIENCE: They just do [INAUDIBLE].

PROFESSOR: 1, 1, 1, for 1 times 1 is 1. I may be fibbing to you guys. And so it's faster. And so they're actually noticeably, measurably faster on a lot of these quick arithmetic tasks simply because the verbal encoding of the information is that much shorter, which is interesting. I don't think you're likely to see the way that we teach times tables in American schools changing just for that anytime soon, but it's a good example of just how tied we are to the language we speak.

OK, one more thing that affects your working memory capability is how similar the things that you're trying to put into your working memory are. If I ask you to remember a lot of words for parts of a car and then test you with words that were and were not in the original list, you're going to be prone to thinking-- if I ask you to memorize seat belt, hubcap, hood, trunk, and then ask you if engine was in the list, you're likely to think that engine was, even though it wasn't ever. Likewise, if I give you a set of four words that are all parts of a car and then throw in chocolate along with them, you're more likely to remember chocolate than any of the other ones. The fact that it is different causes it to stand out in some way, and you're more likely to recall it. So these are all things that affect what gets held onto well in working memory, how much stuff you can put there.

But what, anyways, is working memory? Remember, Atkinson and Shiffrin said short-term memory is just all one box. And this is a very linear system. Well, some guy in the '70s-- oh, backing up.

OK, one more thing I wanted to talk about this is what happens when you're losing stuff out of memory? And this is something that most people who study short-term memory-- working memory-- and people who study long-term memory argue about. Does information that is in storage just drop off over time? Does it just fall out of your brain somehow-- the representation naturally decays? Or does other information that's coming in interfere with it-- mess it up?

All right, so these guys were really trying to put together a model that is about how you learn, how you get information from your sensory inputs into long-term memory. And more recently, scientists have been like, hey look, working memory. Working memory is required for a whole bundle of different things that we do.

It's required for math. It's required for counting. It's required for reading or holding a conversation. It's required for following driving directions. All of these are things where you've got to hold information where it's accessible, where you can work with it and use it to influence other things you're doing.

So Baddeley and Hitch-- the current canonical model of short-term memory was come up with by two guys named Alan Baddeley and Graham Hitch, although Baddeley's the guy who's really still working on it. And these guys said, OK, so if we want to figure out how short-term memory works, we've got to be able to really define what it does. It's not just a box into which information is put before it can get [? passed ?] to long-term memory. Short-term memory-- working memory-- holds several pieces of information that may or may not be related in some way so they can be worked with and processed. So the stuff that you're actively thinking about, actively reasoning about, actively applying cognitive capabilities to is stuff that's in working memory. Working memory is like a workbench where you can work with information.

So Baddeley and Hitch came up with what's pretty much the canonical model of working memory. And they said that working memory-- and they came up with this name that is working, rather than short-term-- who's drawing stuff on my boards? Huh? Huh? Who got yellow chalk? I want yellow chalk.

OK, so they said there's three main parts to working memory, that there's this phonological part over here. And there's this visuospatial part-- and I'm writing that too high up-- go. And they refer to the visuospatial part as a sketch pad. Some people call it a scratch pad. Some people just call it a module.

And then there's a central executive component. And the central executive is connected to the other parts. It's also connected to long-term memory. It's also connected to sensory input. So these three parts, together, comprise a model for working memory that allows it to actively hold, change, modify, process information.
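
One way to make that three-part structure concrete is a toy sketch like the one below, where a central executive routes verbal material to a phonological loop and visual material to a visuospatial sketchpad. The routing rule, the "loop busy" flag, and the class names are simplifications for illustration, not part of Baddeley and Hitch's formal model.

```python
# Rough sketch of the three-part working memory model: two storage buffers plus a
# central executive that routes material between them. Details are illustrative only.

class PhonologicalLoop:
    def __init__(self):
        self.store = []          # sound-based store; decays unless rehearsed

    def rehearse(self, item):
        self.store.append(item)  # subvocal rehearsal keeps the trace alive

class VisuospatialSketchpad:
    def __init__(self):
        self.store = []          # image- and location-based store

    def hold(self, item):
        self.store.append(item)

class CentralExecutive:
    """Decides which buffer gets which input and suppresses what's irrelevant."""
    def __init__(self):
        self.loop = PhonologicalLoop()
        self.sketchpad = VisuospatialSketchpad()

    def encode(self, item, kind, loop_busy=False):
        # If the verbal channel is occupied (say, by repeating "la la la"),
        # fall back on the visual buffer instead.
        if kind == "verbal" and not loop_busy:
            self.loop.rehearse(item)
        else:
            self.sketchpad.hold(item)

wm = CentralExecutive()
wm.encode("3 1 7", kind="verbal")
wm.encode("letter D rotated 90 degrees", kind="visual")
wm.encode("candy", kind="verbal", loop_busy=True)   # loop suppressed -> stored visually
print(len(wm.loop.store), len(wm.sketchpad.store))  # 1 2
```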

So let's talk first about the phonological loop-- this verbal part. So phonological comes from phonology, which is a linguistics subfield-- the study of the sounds in a particular language. So phonological means, like, auditory, sound-based.

So why did they-- backing up. I keep getting ahead of myself with this stuff because I think it's cool. OK, so why did Baddeley and Hitch think that there's three parts to working memory?

Well, so for example, they had this 1974 paper where they had subjects memorize a string of digits. Between, I think, two and eight digits were presented. So you might have to memorize 3, 1, 7, or 8, 2, 4, 5, 0, or something.

And they had subjects rehearse them. So that would be like saying, you read 3, 1, 7. And you [INAUDIBLE] 3, 1, 7, 3, 1, 7, 3, 1, 7. I can do this. 3, 1, 7, because as long as you just keep saying it to yourself, you're not going to forget it, right?

And at the same time, they had them perform a spatial reasoning task, which was pretty basic. They just showed them two letters, like on a screen. And they asked them, does X follow Y? And they had to push either a button for Yes or a button for No. And they had them do this at the same time that they were still rehearsing the string of digits.

And what you would imagine is that the people who are hanging on to a longer string of digits-- an eight-digit string-- would be poorer at this than people who are hanging on to a shorter one. But what they found is that people in the short and in the long conditions were equally accurate on the spatial task. So no matter how long a digit string you're rehearsing, you can still do this "which letters in which order" task. And they were only a smidgen slower at answering yes or no to this task.

So we talked yesterday about divided attention. You can see some of a divided attention effect here. People who are holding onto longer digit strings are performing poorer on this spatial task, but not by as much as anybody expected. And Baddeley and Hitch used this data to argue that these two tasks are using different parts of your working memory, that the rehearsal of the digit string, where you're going 8, 4, 3, 5, 0, 8, 4, 3, 5, 0-- not even out loud, right, but in your head-- and this spatial task, where they're thinking about which order the letters are in are using different parts of your working memory. And so you're not seeing as much interference between them as you might otherwise expect.
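
The logic of that 1974 dual task is easier to see laid out concretely. Below is a minimal sketch of what one trial might look like-- a digit string to rehearse plus a letter-order statement to verify. The trial structure, letter set, and field names are invented for illustration; this is not Baddeley and Hitch's actual procedure or materials.

```python
# Illustrative sketch of a dual task in the style described above: hold a digit
# string while verifying letter-order statements. Not the original materials.

import random
import string

def make_trial(digit_load):
    """One trial: a digit string to rehearse plus a letter-order question to verify."""
    digits = [random.randint(0, 9) for _ in range(digit_load)]
    a, b = random.sample(string.ascii_uppercase[:6], 2)
    statement_is_true = random.random() < 0.5
    # Show the pair in an order that makes the statement true or false.
    pair = a + b if statement_is_true else b + a
    statement = f"{b} follows {a}"
    return {"digits": digits, "pair": pair, "statement": statement,
            "correct_answer": statement_is_true}

# Short memory load (2 digits) versus long memory load (8 digits).
for load in (2, 8):
    print(make_trial(load))
```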

OK, so the first part we're going to talk about is this phonological loop part. So this stores a limited number of sounds for a short period of time. If you are thinking about what remembering a digit string feels like, if you read a string of numbers, you probably feel like you hear yourself reading them aloud as you read them, even if you're reading silently. And as you rehearse it, you probably feel like you're saying-- "saying"-- them over and over subvocally in your head. That's this phonological loop going on.

So as long as you're rehearsing a digit string like that, your recall for it is, as far as anyone can tell, pretty much endless. You can hold this eight-digit string for about 20 minutes. And certain kinds of errors in recall can get traced back to confusions in the phonological loop. If you're remembering letters, you might get two letters that rhyme with each other mixed up because they sound the same. So certain kinds of errors can be traced to acoustical confusions in the phonological loop.

Take a look at your string of letters and numbers from 10 minutes ago. Did anyone have any mistakes in there where they got two letters that sounded the same mixed up or a letter that replaced it? One, yeah.

AUDIENCE: Nearly, but I switched it back because after I thought about it, I kind of wasn't sure. But I switched it back. So--

PROFESSOR: Which two? What two letters?

AUDIENCE: X and S.

PROFESSOR: X and S? You guys can hear the acoustical confusion there.

AUDIENCE: But I still switched it back.

PROFESSOR: You caught it.

AUDIENCE: I almost did B and V, but then I [INAUDIBLE].

PROFESSOR: Same thing, yeah.

AUDIENCE: I got one and nine--

PROFESSOR: One and nine?

AUDIENCE: [INAUDIBLE] I and Y.

PROFESSOR: I and Y? One and nine-- they've both got the N in there. They're both numbers.

AUDIENCE: [? I ?] [? nearly ?] [? got 1 and ?] [? 4s. ?]

PROFESSOR: Yeah, it's certainly not the only way that you can make working memory errors. But it's definitely something that happens. Your classic Freudian slip is sort of related to this. You've got two things that sound enough alike, and you've got both of them kind of in mind. And whoops, the wrong thing just came out.

OK, so we've got a phonological loop. Here we go. All right, read the digits below. Close your eyes, and try to remember them silently. So this is that silent rehearsal thing.

I'm getting all out of order today. I'm frazzled.

AUDIENCE: Wait, when should we close our eyes?

PROFESSOR: Once you've read them. So read them to yourself, and then close your eyes.

AUDIENCE: Wait. [INAUDIBLE] again.

PROFESSOR: OK. Anyone think they remember them? Sara?

AUDIENCE: [INAUDIBLE]

PROFESSOR: Sounds good. So you read them. People tend to report that they experienced hearing themselves say them, not aloud, but imagining that you hear yourself say them. And as you rehearse them, you hear yourself repeating them. You feel like you keep speaking them. This is what's commonly reported as what that experience feels like.

AUDIENCE: You can look at the patterns, and you'll find out that 5 is two less than 7, and 9 is two more than 7. And then you find out that 4, 1, 3, 2 is actually a pattern because they're all in the first four digits except it's switched around.

AUDIENCE: Yeah.

PROFESSOR: I do that with numbers when I know I'm not going to get to continually rehearse them like that. I do something very similar with, like, phone numbers that I want to learn, although I've totally gotten out of the habit of actually memorizing phone numbers anymore. Yeah. I don't know anybody's.

And it's so funny because I remember being in, like, middle school and not having one. And I probably knew 25 phone numbers [SNAPS] like that. I'm not sure I-- I know my parents'. I know my husband's.

AUDIENCE: [INAUDIBLE]

PROFESSOR: I know mine.

[INTERPOSING VOICES]

PROFESSOR: And so it's interesting. Anyway, so what you see in that is that there's two different subcomponents of this phonological storage idea. There's this phonological store, which is that reading it out loud to yourself effect. That's this input coming in.

And then there's this articulatory rehearsal process. And that second one is when you're sitting there trying to remember that string of digits. And you keep repeating it to yourself.

The best example of this I know of is if you're sitting, and you're concentrating heavily on something. And somebody says something to you, and you don't catch it. And you look up and you say, what?

And about the time you actually say what, that whatever it was that they just said to you reprocesses and makes itself into your brain. It's this kind of delayed hearing effect. Happens to me all the time.

That's so this auditory information is held in the phonological store there. The phonological store, as far as anyone can tell in this model, isn't semantic. It's holding stuff just based on sound. And so the auditory information coming in can be held for just a few seconds until it can get passed to the speech-processing portions of your mind.

The phonological loop is used heavily when you're counting. Look around and try and count the number of chairs in this room.

AUDIENCE: [INAUDIBLE]

PROFESSOR: Hear yourself saying, 1, 2, 3, 4, 5.

[INTERPOSING VOICES]

PROFESSOR: Yes, go ahead. Now, under your breath to yourself, say, 6, 8, 6, 8, 6, 8. Just keep saying the numbers, and try and count at the same time.

AUDIENCE: Oh.

PROFESSOR: All right, I won't make you do a number. Just say, like, the, the, the, the, the under your breath and try and count. Fast-- say it fast.

AUDIENCE: Ugh.

PROFESSOR: Is it harder?

AUDIENCE: Was it in the reading, this thing? Oh, I thought [? it was, like, ?] [? a law. ?]

[INTERPOSING VOICES]

PROFESSOR: Yeah, you can do it with laws. The 6, 8, 6, 8 is one that my friend who does attention and working memory research uses. With all of these, by having you say something, or even just say it under your breath, you're filling up this working memory phonological space.

You're demanding that it just work on what you're saying. And you can't let it do the counting or the rehearsal that are so important to it. So hard to count when your speech centers are doing other stuff.

Come on. Space-- there we go. OK, so the other big half here is this idea of a visuospatial sketchpad. What is a visuospatial sketchpad? It stores, again, a limited amount of visual information or spatial information.

So, for example, picture yourself in a room you know your way around very well-- maybe your room at home. And think about what's hanging on the walls. Just walk around your room and count them off. Can you do it?

AUDIENCE: Hey, I should really clean.

PROFESSOR: So in a task like this, where you're envisioning a space, you're using your visuospatial information to pull up what's basically a visual recollection. Remember that working memory not only takes sensory input coming in. It also pulls things from long-term storage. The appearance of your room is probably a long-term storage piece of information.

And you can pull it up. You can picture your room from different angles. You can stand at the door and think about what it looks like, and picture yourself walking across the room and looking at it from the other angle.

So the visuospatial sketchpad lets us have this data. We can modify it. We can move it around.

Another good example of this is, OK, so picture a letter D, right? Rotate it 90 degrees to the right.

AUDIENCE: You mean clockwise?

PROFESSOR: Top to the right, yeah, clockwise. Put a number 4 above it. Take away the horizontal segment of the 4 that's to the right of the vertical part. What object are you left with?

AUDIENCE: [INAUDIBLE]

PROFESSOR: Imagine a D-- capital D.

AUDIENCE: Wait, like a 4 that's [INAUDIBLE]?

PROFESSOR: Mm-hm.

AUDIENCE: Oh, a sailboat.

PROFESSOR: Yeah. Good.

AUDIENCE: Yay.

PROFESSOR: Picture a letter D.

AUDIENCE: Well, it depends on how you draw your 4.

PROFESSOR: It does.

AUDIENCE: Yeah, I don't draw my 4s like that.

[INTERPOSING VOICES]

PROFESSOR: A printed 4, yes.

AUDIENCE: Do you do the curly 2s or the regular 2s?

AUDIENCE: Regular 2s.

AUDIENCE: I used to do the curly 2s.

AUDIENCE: I did the curly 2s [INAUDIBLE].

PROFESSOR: All right, so this is another good example. This is another task that's using your visuospatial sketchpad to actively take a simple image-- a D-- probably from storage-- we all know what a D looks like-- and then do some operations on it. So you can take this information.

You can modify it. This is a really good example of this idea that Baddeley has that working memory is not just a storage box. It's actually an active work space.

Visual encoding seems to be not the preferred means of encoding for most material. Most people seem to encode stuff verbally when given the option. So there was a pretty classic study where they gave subjects a set of, like, six photos of simple nouns, like a piece of candy, or a pipe, or I don't remember what else was in it.

And they were asked to memorize the list. And when they would prove that they had memorized the list of objects by taking a set of cards of these objects and putting them in the correct order, then the experimenter said, so OK, I want you to visualize object number four. So if object-- I will not knock stuff over.

So if object number four was the piece of candy-- so we're thinking something that looks-- all right, now it's a piece of candy, right? And the experimenter would say, I want you to visualize object number four. And I want you to subtract the part of it that looks like this. What do you see?

AUDIENCE: Fish.

PROFESSOR: Right. So what they found was that--

[INTERPOSING VOICES]

AUDIENCE: I see a [INAUDIBLE].

AUDIENCE: Where the fish? If you drew the [INAUDIBLE].

PROFESSOR: Yeah, I think my candy wrappers are a little bit too enthusiastic. What do you think? Does that look more fish-like?

AUDIENCE: Yeah.

PROFESSOR: Yeah.

AUDIENCE: [INAUDIBLE]

PROFESSOR: So they were asked to do this. And what they found is that for one group, they just didn't tell them anything about how to memorize them. They just said, go ahead and memorize these things. For the other group, they said, while you're memorizing them, sit there and go, la la la la la la la la la, which suppresses the phonological loop and keeps the verbal portion of your memorization system from being able to use this.

And what they found is that people who'd been able to use their phonological loop to learn the list were really bad at finding these kinds of secondary objects hidden in the image by subtracting part of it, but people whose phonological loop had been suppressed managed to see them much better. And so their hypothesis is that verbally encoding these-- candy, pipe, house, cat-- was the preferred way of learning that list. And it wasn't unless you took that option away and forced people to learn it using their visual memory that they would do that. So visuospatial encoding, for most things, seems to be not as preferred as verbal encoding.

All right, so we are talking about Alan Baddeley's model of working memory, which has three key components. It's got this phonological component and this visuospatial component. And Baddeley was one of the first guys who really said, hey, look, working memory isn't just all one unit. Different kinds of working-memory tasks don't interfere with each other as much as you'd expect. They seem to be being handled by independent processes.

So he said that there's at least two kind of storage buffers-- this phonological auditory storage buffer, and this visuospatial storage buffer, and that these are controlled and managed by a central executive function. So this central executive has a lot of really important and really sophisticated roles. It determines how information is moved into and out of these phonological and visuospatial storage buffers. It assigns it to which one.

So for this task with memorizing the list of shapes that we were talking about right before the break, you can think of your central executive as looking at that and saying, OK, names of nouns-- nouns are easier to store as verbal information. I will put it in the phonological buffer. But in the case where subjects had to sit there and go, la la la la la la la la la, the central executive says, hey look, there isn't a phonological buffer available. All right, I feel pretty flimsy. Someone want to give me a hand with this?

AUDIENCE: I'll do it.

PROFESSOR: You want to get it? Rah. It doesn't have to be closed, closed. It's just loud. I wanted to get it down to about 6 inches open. I'm pathetic.

OK, so they had this idea that this central executive is what's putting things into and out of these storage buffers and is making that judgment of, hey look, the phonological buffer that I would like to use--

[INTERPOSING VOICES]

PROFESSOR: --is--

AUDIENCE: [INAUDIBLE]

PROFESSOR: --full, is busy, I can't have it. I'm going to store this information visually instead. So it's taking all of this information from both those short-term buffers that are part of working memory. It's taking information from sensory input. It's taking information from long-term memory, integrating it all, coordinating it, and it's also got this really important role of suppressing irrelevant information.

That should sound familiar. That's basically what your attentional system does, like we've been talking about for a few days. Attention is all about taking the flood of information that's available to you at any point in time, deciding which piece of it is most important, and getting the rest of it out of the way so it doesn't distract you, so it doesn't get in the way of what you're trying to do.

So the central executive seems to be closely related to this attention stuff, possibly even the thing that does attention, although our nice competitive model for how attention works doesn't fit quite so neatly with other things that the central executive does. It's involved in planning, not planning. It's involved in coordinating behavior.

So the central executive is this really sophisticated piece of the model. And a lot of people really don't like the central executive part of this, except that something needs to be doing it. The classic problem with thinking about how your brain handles any complicated processing task is it's tempting to believe that there's a mini Abby inside my head that's doing all of this decision-making.

But that doesn't really solve the problem. It just nudges it down a level, because how does that mini Abby make any kind of decision? How does it handle information?

Well, maybe there's a mini-mini Abby inside that mini Abby's head. And this is a really dangerous idea. It's not a very functional idea.

And it's something that's really hard to keep yourself from falling into when you're trying to solve these sorts of problems. But nonetheless, somehow, our brains do need to be able to do all of these what are called executive functions. And so just saying, hey look, there's some portion of this that does that is important.

AUDIENCE: Oh, OK. Are we going?

PROFESSOR: Are you the introduction?

AUDIENCE: We are the introduction.

PROFESSOR: Then you are going first.

AUDIENCE: OK. So, well, the introduction was, like, talking about the entire study and stuff. And it said that there were, like, two ways that spatial attention is directed to [? peripheral ?] [? vision ?] events. So there's, like, overt shifts of attention and covert shifts of attention. And overt shifts of attention are like [? where you ?] [? head and ?] [? eye movements ?] and stuff. So, like, you're deliberately moving your head and stuff to focus on something else.

PROFESSOR: Mm-hm.

AUDIENCE: And then covert shifts of attention are like when you're still looking at something, but you're focusing on something else, right? Yeah. OK, and then there are also, like, two different shifts of attention stimuli. So there's exogenous-- is that how you say it?

PROFESSOR: Exogenous.

AUDIENCE: OK, exogenous, which is we have, like, externally driven shifts of attention. So that means, like, if you see a crash or something, like, something can be outside that makes you pay attention to that. And then endogenous shifts of attention are like strategic shifts of attention. So it's like when you, like, purposely try and make yourself pay attention to something.

AUDIENCE: [CHUCKLES]

AUDIENCE: What?

PROFESSOR: Yep, no, that's great.

AUDIENCE: [CHUCKLES]

AUDIENCE: Stop laughing at me.

AUDIENCE: I'm not.

AUDIENCE: OK, yeah, and then both covert and overt shifts of attention can be either exogenous or endogenous. And in previous studies, there's been, like, three possible relationship theory things between covert and overt shifts of attention. So one is that, like, covert and overt shifts of attention are completely independent of one another, which means that they happen in, like, different parts of the brain and stuff. And they just aren't even connected at all.

PROFESSOR: Mm-hm.

AUDIENCE: And then another one is covert and overt shifts of attention are completely interdependent, which means, like, they use the same part of the brain, basically.

[INAUDIBLE].

AUDIENCE: It's three. One is they're independent.

[INTERPOSING VOICES]

AUDIENCE: Oh, OK. [INAUDIBLE], they talk about, like--

AUDIENCE: One is [INAUDIBLE].

AUDIENCE: Like, it talked about the middle one [INAUDIBLE].

AUDIENCE: [INAUDIBLE]

AUDIENCE: OK, whatever.

PROFESSOR: Order doesn't matter. There are three. I've heard two. They were both ones that were listed. What's the third case?

AUDIENCE: OK, so one is like when they're interdependent, which means they, like, use the exact same parts of the brain and stuff and like, yeah. There's a name for that, but it's not here. OK, so anyways, and then there's another one where--

AUDIENCE: That's premotor.

AUDIENCE: OK, that one was premotor. Yeah, OK, and then there's another one where it's like, it's in between. So, like, some parts are like the same, and some parts are different.

PROFESSOR: Do they kind of have--

AUDIENCE: So they use some of the same things but not all of the same things.

PROFESSOR: Mm-hm.

AUDIENCE: [INAUDIBLE]

AUDIENCE: Obviously, if the whole purpose of the study was just to check all the other studies' results, they wouldn't get much funding, because who really goes around [INAUDIBLE] looking to see if you can replicate on somebody else's work? It's kind of silly. It is.

PROFESSOR: It is important but unglamorous, like many things.

AUDIENCE: Yeah, you usually want to be the person who discovers a new thing. So what they added was ways to improve the study from previous ones. For example, they mentioned that in previous studies, the tasks that aimed to produce covert or overt shifts of attention had different demands. Like, I believe there were different points in the vision, where there were different distances. And that could cause different levels of activation in the brain.

PROFESSOR: Mm-hm.

AUDIENCE: Also, the previous studies disagreed on whether greater neuroactivity is caused by covert or overt shifts. Some argue that it's one. Some argue that it's the other. So they want to clarify that.

PROFESSOR: Mm-hm.

AUDIENCE: Thirdly, the studies differed on whether they were exogenously or endogenously driven. Like, for the majority of the previous studies, they were endogenous.

AUDIENCE: Yeah, [INAUDIBLE].

AUDIENCE: Endogenous tests.

PROFESSOR: Mm-hm.

AUDIENCE: But then there was this one study that found that covert and overt shifts in attention produced similar results. And that was with exogenous [INAUDIBLE]. So they wanted to figure out if it's possible that this is a confounding variable, that just the difference between endogenous and exogenous were resulting in different [INAUDIBLE].

PROFESSOR: OK.

AUDIENCE: The study itself-- each subject was presented with two [INAUDIBLE] tasks. And the MRI [INAUDIBLE] measured. One task was to perform an overt shift of attention. And the other was to [INAUDIBLE] covert.

And both of the tasks had one peripheral shifting of attention and one, like, maintaining central attention, that you're looking at the same thing.

PROFESSOR: Right, which would be a non-shift. So they're comparing the case where you shift your attention from central to periphery to one where you're not shifting it.

AUDIENCE: Mm-hm.

PROFESSOR: So which of those would you say is the control condition, Danny?

AUDIENCE: The [INAUDIBLE]?

PROFESSOR: Right, yeah.

AUDIENCE: Yeah, OK, and then [INAUDIBLE] was that, like, the other ones, they only took into account within-subject variability and not between-subject variability. So the results couldn't be applied to larger populations.

So in this one, the statistical analysis controlled for both within-subject and between-subject variability. And there were identical visual stimuli in both the attention shift tasks. So the task demands were very similar. And all of the shifts of attention were endogenous [INAUDIBLE].

PROFESSOR: Good. Anyone have any questions about any of the background information for this? If you do have questions, holler out. All right, who's talking about the experimental procedures? That you guys?

AUDIENCE: Yeah.

PROFESSOR: OK, experimental procedures, which means we're reading this out of order because I think it makes more sense to read about the methods before you read about what they found out, because otherwise you don't know what they did. And it doesn't make any sense.

AUDIENCE: So--

PROFESSOR: So, Natasha, start us off.

AUDIENCE: Yeah, so there were 12 subjects-- 9 guys and 3 girls. And they were from this [INAUDIBLE] population. And they were all healthy, with no neurological problems. And they had normal or corrected-to-normal vision.

They had to sign a form to consent, and they got money for their participation. So for the task design procedure, there were two tasks that they did while lying in the bore of the scanner. What's a bore of the--

PROFESSOR: It's like the hole down the middle-- so while they're in the fMRI scanner.

AUDIENCE: And they were the covert shifts of attentions and the overt shifts of attention. So they had an LCD projector illuminate the projected screen. And the order of the tasks was [? counter-balanced. ?]

The stimuli were identical except for the instructions. Both had a blocked fMRI design with two experimental conditions. One was a peripheral attention condition, and the other was a central attention condition. And the blocks were 12.8 seconds with a 12.8-second rest between them.

And they had a fixed A, B, A, B pattern. So there was like one was peripheral attention, and the next was central attention and [INAUDIBLE]. And there were 49 blocks, including the rests.

And the peripheral attention condition and central attention condition were each presented 12 times. And the covert and overt shifts were run in separate sessions.
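
For concreteness, here is a small sketch of the block structure just described: 12.8-second task blocks alternating between the peripheral and central attention conditions, separated by 12.8-second rests, with each condition presented 12 times. The exact ordering and the trailing rest are assumptions made so the count comes out to the 49 blocks mentioned; the real run layout may differ.

```python
# Sketch of an alternating block design: peripheral/central attention blocks of
# 12.8 s separated by 12.8 s rests. Ordering details are assumptions, not the paper's.

BLOCK_S = 12.8

def build_timeline(n_per_condition=12):
    timeline, t = [], 0.0
    conditions = ["peripheral", "central"] * n_per_condition  # fixed A, B, A, B order
    for cond in conditions:
        timeline.append(("rest", t, t + BLOCK_S)); t += BLOCK_S
        timeline.append((cond, t, t + BLOCK_S)); t += BLOCK_S
    timeline.append(("rest", t, t + BLOCK_S))  # trailing rest -> 49 blocks total
    return timeline

timeline = build_timeline()
print(len(timeline))   # 49 blocks, counting the rests
print(timeline[:4])    # first few (label, start_s, end_s) entries
```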

PROFESSOR: Anyone have questions?

AUDIENCE: Oh, there's more.

PROFESSOR: Oh. [INAUDIBLE] at a time.

AUDIENCE: In both the covert and the overt shifts of attention, test subjects were presented with pictures displaying a step-wise rotating [INAUDIBLE] overlaid with a central white circle surrounded by 8 white circles. You can see the picture on page 109. Stimuli were shown in blocks of 12.8 [INAUDIBLE], with each block composed of 16 trials, and each trial lasted for 0.8 seconds.

The 9 white circles were briefly displayed for each trial. And a single one of the 9 white circles was randomly chosen to be slightly smaller than normal. During blocks, the central cross was rotated 45 degrees every 3.2 seconds. During the task, the only difference between the peripheral and central attention condition blocks was that one arm of the central cross was [INAUDIBLE] during the peripheral attention condition.

In the peripheral overt attention task, subjects were told to glance at the peripheral circle that the red arm of the central cross was pointing toward. During the peripheral covert attention task, subjects were required to press a button with their right index finger if the red arm pointed to the smaller circle. In the central covert attention task, subjects were similarly staring at a central white circle. During the central covert attention task, subjects were asked to press a button with their right index finger if the centrally presented circle was smaller than the normal ones.

PROFESSOR: Good.

AUDIENCE: And then the researchers tried to measure covert and overt shifts of attention with just the attention results. I'm not sure how they measured it, but it was probably in seconds. And they measured the covert shifts of attention by subtracting the covert central attention condition from the covert peripheral attention condition. And all of this is referring to the neuroactivation [INAUDIBLE].

PROFESSOR: Right.

AUDIENCE: And they did the same with the overt condition. And they performed this experiment with pretty advanced imaging and information technology [INAUDIBLE], including a [INAUDIBLE] imaging and an fMRI imaging system. And that's it.

PROFESSOR: Yeah, OK, so the subtraction paradigm they're using is pretty typical for these kinds of imaging studies. If you want to know which portion of the brain is activated by a particular task-- well, let's face it, most of the time, most of your brain is busy. It's doing something or other. The visual portion of your brain is busy.

And if you want to pin down what portion of your brain is busy on some very small task, the model that's used is to give you two tasks that are reasonably similar but differ in the one thing you're interested in. So in this case, they're comparing an attention-maintenance task to an attention-shifting task, looking at what the blood flow to the brain is for each of these, and subtracting the maintenance-attention blood flow from the shifting blood flow. So only the places where blood flow in the brain is different in the shifting task from in the maintaining-attention task will show up once you do that subtraction.

So the goal here is that your visual system is going to be doing all of this stuff, just processing this white cross and white circles. But hopefully, that's going to be the same in both the maintaining-attention and the shifting-attention conditions. And so you can just subtract it out. And it'll go away because that's not really relevant to the shifting attention that they're interested in. So they get a good map for both types of tasks and subtract one from the other to see what's left over or what's different between them, if that makes sense.
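
A minimal numerical sketch of that subtraction logic, with made-up arrays standing in for the two activation maps (nothing here is from the paper's actual analysis pipeline):

```python
# Subtraction-paradigm sketch: shared activity cancels out, and only the region
# that works harder in the shifting condition survives. The data are fake.

import numpy as np

rng = np.random.default_rng(0)

# Pretend activation maps (one value per voxel) for the two conditions.
maintain_map = rng.normal(size=(4, 4, 4))
shift_map = maintain_map.copy()
shift_map[0, 0, 0] += 2.0            # a spot that is busier only when shifting

difference = shift_map - maintain_map       # everything shared cancels to zero
shift_specific = np.abs(difference) > 1.0   # crude cutoff; real studies use statistics

print(np.argwhere(shift_specific))  # -> [[0 0 0]]
```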

Does that make sense?

AUDIENCE: A little.

PROFESSOR: So it's like if I am an fMRI researcher. And here's your brain. And I ask you to do something, and maybe if I ask you to maintain attention, and maybe there's lighting up here and some down here and maybe some over here, this is the maintaining attention, that central condition. And then I ask you to do an attentional shift condition.

And we find that, hey look, all of these same areas still light up, but this one gets even brighter right up here on the top. And oh hey, a little bit down here gets extra bright, too. And then if I subtract them, what's left over are the things that are different between the two cases, which is this little--

AUDIENCE: [INAUDIBLE]

PROFESSOR: I don't care-- extra bit. You guys can figure it out. There's this little extra bit up here and this little bit down here. And therefore, you conclude that these are regions that are used in shifting but not in maintenance.

AUDIENCE: You have to take the opposite [? of the whole ?] [? back ?] [? side. ?]

AUDIENCE: Shh.

PROFESSOR: Yeah, yeah, yeah, it's the wrong direction. You get the point, right? So that's what they're doing. They're comparing the level of activation in these two similar tasks and looking at where it's different in the case that they're most interested in, which in this case is the shift.

AUDIENCE: Or you can take the absolute [? value ?] [INAUDIBLE].

PROFESSOR: Yes, except not because really what we're interested in is both increases and decreases. OK, good, yes. And then they have a lot of babble-- they have a lot of information about the fMRI methodology that they use.

So they used a good fMRI. And they're scanning at about a 3-millimeter-cubed resolution. And they're doing that neuroactivation subtraction. And they're looking for which areas of activation are significantly different between the covert and overt attention-shifting tasks.

Good. Anyone have questions about the methodology? Everyone feel like they understand what the tasks they used were and how they worked? If you don't, this would be the time to say something.

All right, moving along then. Results-- what did they find out?

AUDIENCE: OK, so for the results, after they did the differences in the highlighted areas, they wanted to find the areas of the brain that were activated during covert or overt shifts of attention. And they did that using two one-sample t-tests. I don't know [INAUDIBLE].

PROFESSOR: A t-test is a test of significance between two groups.

AUDIENCE: It's when you don't know the population standard deviation, and so you take the [INAUDIBLE].

PROFESSOR: Yeah.

AUDIENCE: And--

PROFESSOR: You've taken this more recently than me, haven't you?

AUDIENCE: [INAUDIBLE]

PROFESSOR: Stats is good. Everybody should take stats. I need to take more stats.

AUDIENCE: And--

AUDIENCE: Is that like math?

AUDIENCE: Yes.

AUDIENCE: In a very practical manner.

AUDIENCE: OK, bless you. And these one-sample t-tests showed that there was a large amount of activation in the frontoparietal system in the brain resulting from both covert and overt shifts of attention.
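
As a hedged illustration of what a one-sample t-test is doing here, the sketch below tests whether one voxel's per-subject condition differences are reliably different from zero. The twelve difference values are invented, not data from the study.

```python
# One-sample t-test sketch for a single voxel: are the subjects' peripheral-minus-
# central activation differences reliably different from zero? Values are made up.

import numpy as np
from scipy import stats

diffs = np.array([0.8, 1.1, 0.6, 0.9, 1.3, 0.7, 1.0, 0.5, 1.2, 0.9, 0.8, 1.1])

t_stat, p_value = stats.ttest_1samp(diffs, popmean=0.0)
print(round(float(t_stat), 2), round(float(p_value), 5))  # small p -> this voxel "lights up"
```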

AUDIENCE: Right. And then they also took [INAUDIBLE]. So it looks like that, except they [? add the ?] intersection instead of the subtraction. And--

PROFESSOR: That's this diagram on the bottom of page 104, right?

AUDIENCE: Yeah. And then they found that the overlap was very extensive for that. And that meant that the covert and overt shifts of attention used a lot of the same areas of the brain.
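
The overlap analysis behind the red/green/yellow figure can be sketched the same way-- toy boolean maps marking which voxels came out significant in each task, with the intersection ("yellow") computed directly. The arrays below are illustrative only.

```python
# Overlap-map sketch: which voxels are significant in the covert task, the overt
# task, or both. The boolean maps here are toy examples, not the study's data.

import numpy as np

covert_sig = np.array([[True, True, False],
                       [True, True, False]])
overt_sig = np.array([[True, True, True],
                      [False, True, True]])

both = covert_sig & overt_sig           # "yellow": significant in both tasks
covert_only = covert_sig & ~overt_sig   # "red": covert but not overt
overt_only = overt_sig & ~covert_sig    # "green": overt but not covert

print(both.sum(), covert_only.sum(), overt_only.sum())  # more yellow and green than red
```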

AUDIENCE: [INAUDIBLE] inspection of [INAUDIBLE] suggests that [INAUDIBLE] to be in areas [INAUDIBLE] shifts of attention tests is more widespread during the overt shifts of attention test than the covert shifts of attention test. And the results of these statistical tests showed no significant differences between the amount of neuroactivation of central conditions or the overt [INAUDIBLE] shifts of attention.

PROFESSOR: OK. So, yeah, look at the pretty picture. So tell me about this pretty picture, somebody in that group. What do the different colors mean?

AUDIENCE: Wait, which one

PROFESSOR: Page 104, figure 1.

AUDIENCE: OK, so covert is red. And overt is green. And both is yellow.

AUDIENCE: Because red and green [INAUDIBLE].

AUDIENCE: [INAUDIBLE]

PROFESSOR: So yellow is everything that was involved in both tasks. Green is everything that was involved in the overt but not the covert task. And red is vice versa, the covert but not the overt task.

AUDIENCE: Yes.

PROFESSOR: OK.

AUDIENCE: And there's more yellow than red.

PROFESSOR: There's more yellow than red. There's more green than red, too, which I think is the thing that they think is more relevant here. So what do these results mean? What do they find out in this study?

So the Discussion section is the section in which the researchers basically are saying, so this is what we found. And this is why it's relevant and what we think it means. So any interesting things that people found reading through it?

AUDIENCE: [INAUDIBLE]

PROFESSOR: Yeah. So what is this premotor theory of attention?

AUDIENCE: It's the fact that, like, [INAUDIBLE] shifts of attention activate [INAUDIBLE] the brain and that, like, [INAUDIBLE] overt shift is the fact that, like, the eyes don't move, but it's [INAUDIBLE].

PROFESSOR: Right, which is how it gets that premotor name, that all of the important stuff is happening earlier in processing than the actual muscle command to move your eyes.

AUDIENCE: Didn't they want a test on monkeys to not support it because they tested the individual neurons, and they found that there are separate neurons-- the different neuron group-- for--

PROFESSOR: That's the study cited in the intro, right? No, or was that in here? Where was it? I'm getting my pieces mixed up. No, there it is.

AUDIENCE: It's on page--

PROFESSOR: Page 106.

AUDIENCE: 106, yeah, which it says it undermines the premotor theory of [INAUDIBLE].

PROFESSOR: Right. So that was a study looking in particularly at neurons in the frontal eye fields. And they said that, yeah, it looks like this area does have one set of neurons that responds to overt and one that responds to covert shifts of attention. They also point out that the superior colliculus has neurons that respond to both.

So there's probably at least a little bit of both going on here. But the evidence for premotor seems to be stronger than the evidence otherwise, that if not everything, then at least most of what's involved in an intentional shift is the same between the two. What else?

Also, on page 106 they talk about the study by Beauchamp et al. in 2001, who were comparing whether overt or covert shifts in attention get more neural activity going on. And if you look at the table on page 105, you can get a pretty good guess as to which one seems to be activating more regions. What would you guys say, overt or covert? What do you think, Sara?

AUDIENCE: Overt.

PROFESSOR: Yeah, so they found that overt shifts of attention result in more activation than in covert shifts. And there's one other earlier study that had found that. But most of the studies so far had actually found the opposite, that you get more activation in covert shifts than in overt shifts.

And until they did this one, one of the things that was different is that the Beauchamp study in 2001 that they're comparing themselves to had used exogenously driven shifts of attention. So they'd actually blinked some kind of stimulus out where they wanted you to shift your attention to, rather than just directing it from where you were already fixated. And it was possible that when you're doing endogenous shifts, then covert gets more activation, and when you're doing exogenous shifts, overt gets more activation.

But then these guys were using endogenous shifts, right? It was centrally controlled. It was where the arm of the cross is pointing, direct your attention in that direction. And they still found that overt used more neural activation than covert. And this is interesting, and they don't have a really good answer for it that I could find.

AUDIENCE: Well, wouldn't it just disprove the theory that it has nothing to do with either endogenous or exogenous? Because before, weren't they doubting [INAUDIBLE]? Maybe it's because they found opposite results for the [INAUDIBLE] exogenous and endogenous. They thought it might have been the mechanism that caused each stimulus, that [? they think ?] that's why the results are [INAUDIBLE].

PROFESSOR: Or at least it's a piece of evidence against it. Very rarely will you see people saying, this one study proves or disproves something. So it makes it a lot less likely that that is what causes the difference. Yeah, but they're still trying to reconcile their study, in which they're using an endogenous means of shifting attention and finding more activation in overt shifts, with all of these other studies that were also using endogenous shifts of attention and were finding more activation in covert shifts.

They consider the idea that it could be because they were doing them on two different fMRI runs. And for each one, they were doing that activation subtraction with the central condition from that task so that there may have been some fundamental difference in the central attention condition for the covert versus the overt. And so they would have subtracted more in the covert condition than in the overt, and thus were left with less activation in total.

And they consider this, and decide that it is unlikely and improbable for a lot of reasons, and just shrug and move on. They don't have a good answer for this. This is one of the things that's interesting about this study that they don't really have a good explanation for.
