Instructor: Abby Noyce
Lecture Topics:
Levels of thinking, Broca's aphasia, Wernicke's aphasia, Historical Timeline, Right Versus Left Brain, McGurk Effect, Prepositional ambiguity, How Language is Learned
Lecture 18
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
ABBY NOYCE: We're going to talk today a little bit about what we know about language and the brain. Particularly, today we're going to talk about what's involved in understanding language, what we know about how meaning is represented, all of that good stuff. Or some of it. And then tomorrow, we'll talk more about what's involved in production and when you actually speak.
We're going to see some patient descriptions today that have patients who have deficits in comprehension or deficits in production, or both. They don't always break out evenly. But the fact that you'll see patients who can understand but not produce, or vice versa, seems to imply that there is at least some distinction there. Good job. And of course, not on the one I want. There we go.
The first thing I want to talk about-- hey, look, that's where we were-- is levels of-- OK. When we're talking about language, you can think about language on a lot of different levels. You guys are listening to me, and you're hopefully parsing it and getting something out of it. And this level of thinking about the actual meaning that you're extracting from a group of sentences is called the discourse level. So the fact that you are hopefully getting ideas triggered by the sweet sound of my voice. [INAUDIBLE]
Working further down, if we look at a specific sentence-- so for example-- let's pick a sentence. Peter-- all right. There's a moderately complex sentence. It doesn't have any subordinate clauses, but it's got this lovely prepositional phrase at the end there.
So how is this thing put together? When people first started trying to make sense out of how language structure works, for a while, people thought it was all about word order. If you've got a noun, then you're most likely to have your next thing be a verb. If your next thing is a verb, then you're most likely to get another noun, or maybe a preposition. But word order falls apart pretty hard when you start trying to structure things like, that boy you saw yesterday put the poodle in the closet, or, that boy you saw yesterday who was wearing a blue shirt put the poodle in the closet. And the word order stops being so useful a tool for figuring out what the different parts of the sentence are.
Syntacticians break this down by saying that-- let's see. We've got a sentence here which is composed of a noun phrase and a big honking verb phrase. So this is our noun phrase. All of this is a verb phrase. Our noun phrase is a really basic one. It's just got one noun in it, Peter. And this verb phrase has a couple of parts. If you were going to break put the poodle in the closet down into the important parts, where would you break it up? You probably wouldn't break it up between the and poodle, for example. That would be pretty silly. Where might you break it up?
AUDIENCE: The verb place, the noun, and the--
ABBY NOYCE: Preposition.
AUDIENCE: Yeah.
ABBY NOYCE: Yeah. So you'd say that this phrase has a verb-- quite right-- and a noun phrase, and this big, long prepositional phrase. So put. This noun phrase has a determiner and a noun, the poodle. And then this, in turn, has the preposition in, and another noun phrase. In the closet.
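Here is a minimal sketch, in Python, of the constituent tree just described. The nested-tuple encoding and the helper function are illustrative choices, not anything from the lecture; the labels follow the S/NP/VP/PP convention above.

```python
# A constituent tree as nested tuples: (label, child, child, ...).
# Leaves are plain strings. S = sentence, NP = noun phrase,
# VP = verb phrase, PP = prepositional phrase, Det = determiner,
# V = verb, N = noun, P = preposition.
sentence = (
    "S",
    ("NP", ("N", "Peter")),
    ("VP",
        ("V", "put"),
        ("NP", ("Det", "the"), ("N", "poodle")),
        ("PP",
            ("P", "in"),
            ("NP", ("Det", "the"), ("N", "closet")))),
)

def show(node, depth=0):
    """Print the tree with indentation reflecting constituent depth."""
    if isinstance(node, str):          # a leaf: an actual word
        print("  " * depth + node)
    else:                              # an internal node: (label, *children)
        label, *children = node
        print("  " * depth + label)
        for child in children:
            show(child, depth + 1)

show(sentence)
```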
OK.
People who study syntax-- syntacticians-- think that what's going on when you parse sentences is that you do this transition from the word order to putting together some model of the underlying structure-- the structure of which pieces go together, which pieces are modifying which other pieces-- in order to extract meaning from it. On lower levels, you can talk about the individual words. You can talk about the morphemes that make up the words. So in a sentence like-- oh, let's say-- Josh. Picking names at random. To [INAUDIBLE].
AUDIENCE: Squeaking.
ABBY NOYCE: Yeah. There's apparently furniture on the move. CDs. All right. This is a sentence that has five words, but it's got more morphemes than that. So Josh is-- a name like that is a single-morpheme word. It doesn't break down into smaller components. But for example, this one has a root word and then this -ed ending that means past tense.
And of course, that's an ending that's written the same way in all kinds of different words, but there are a lot of different pronunciations of that past tense marker, depending on the word. The past tense marker in washed is different from the past tense marker in listened, which is different from the past tense marker in hunted, in terms of what they actually sound like.
But that's one word that's made up of two morphemes. There's another one here. There's Joe, and then there's this apostrophe S that's marking possessive. There's CDs, and this S that marks the plural.
Morphemes can be smaller than words, but they still carry meaning. Sometimes morphemes can't stand alone. They can only exist tagged on to something, like those past-tense markers or those plurality markers.
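The past-tense allomorphy just described can be sketched as a toy rule. The Python fragment below is a rough approximation that works from spelling rather than real phonetics; the letter classes are a simplifying assumption, not a serious phonological model.

```python
# A toy version of the past-tense allomorphy rule described above.
# Real phonology works on sounds, not spelling; this sketch uses the
# final letter of the stem as a rough stand-in for the final phoneme.

VOICELESS = set("pkfsh")  # crude proxy for voiceless final consonants

def past_tense_sound(stem: str) -> str:
    """Return the (approximate) pronunciation of the -ed marker."""
    last = stem[-1].lower()
    if last in ("t", "d"):
        return "/ɪd/"   # hunted, needed: gets an extra syllable
    if last in VOICELESS:
        return "/t/"    # washed, kicked: voiceless
    return "/d/"        # listened, played: voiced

for word in ("wash", "listen", "hunt"):
    print(word + "ed", "->", past_tense_sound(word))
```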
And then morphemes are, of course, themselves made up of phonemes, which are the sounds that make up a language, which you guys talked about with Lisa yesterday, right? Different languages have different phonemes. English has a phoneme set of about 40 sounds. Hawaiian has a lot less. Some languages have more. English is kind of on the high end. Cool.
These are the things that go into it that you have to be able to-- when you are parsing language, when you're hearing something that people say and make sense out of it, you've got to be dealing with it on all of these different levels. You've got to be trying to make sense out of things. And if your ability to pull out the phonemes fails, you're not going to get anything higher up.
It's like listening to somebody with a really thick foreign accent. When you first start listening to them, you have a really hard time understanding them, and then if you listen to them talk for a while, you start being better at following it and better at figuring out what they're saying. And this is partly because you're kind of resetting where you think the phonemes-- what you think the space for the phonemes can be. Your idea of what sounds can fit a particular English phoneme expands to include what this person is saying, and once you can pull out the phonemes, then you can build up the rest of it.
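One very loose way to picture that recalibration: treat each phoneme category as a prototype position on an acoustic dimension, and nudge the prototype toward what you actually hear. The numbers and update rule below are invented for illustration, not a model from the lecture.

```python
# A toy illustration of the "resetting phoneme space" idea: classify an
# incoming sound by its distance to each category's prototype, then nudge
# the winning prototype toward what was actually heard. Numbers here are
# made-up positions on a single acoustic dimension.

prototypes = {"b": 0.0, "d": 1.0}   # hypothetical category centers

def hear(sound: float, rate: float = 0.2) -> str:
    """Classify a sound, then adapt the winning category toward it."""
    category = min(prototypes, key=lambda c: abs(prototypes[c] - sound))
    prototypes[category] += rate * (sound - prototypes[category])
    return category

# A speaker whose /b/ sounds sit at 0.4 is hard to follow at first, but
# repeated exposure drags the /b/ prototype toward their pronunciation.
for _ in range(10):
    hear(0.4)
print(prototypes)   # the "b" center has moved toward 0.4
```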
All right. A couple of classic examples of people whose language processing skills are broken. Paul Broca, in the 1860s in Europe, because all the cool stuff here happened in Europe. We got Phineas Gage, but they got the language kids.
AUDIENCE: Who's Phineas Gage?
ABBY NOYCE: Phineas Gage had an iron tamping rod go through his frontal lobe in a railroad construction accident.
AUDIENCE: Oh, wow.
ABBY NOYCE: Yes.
AUDIENCE: And then he became all nasty.
ABBY NOYCE: Yeah, and then he became a really not nice person. All right. Broca presented to the [FRENCH] an account of a patient at the hospital there in Paris, who had been there for about 15 years after having had an attack. This was before terms like stroke were really in common use.
And this was a patient, Monsieur Leborgne, who seemed to display more or less normal intelligence. If you asked him to do things, he could do them. If you asked him to point to something-- he could tell you which of these pictures was a banana, or which one was a shoe, or what have you. But he couldn't speak. All he would ever say was tan tan tan, which is just a nonsense syllable.
And what Broca discovered in 1861, after Monsieur Leborgne passed on, is that-- he autopsied him, looked at his brain, and said, OK, can we point to a spot that's what's wrong with this guy? And he discovered a lesion in the left frontal lobe. The neurons in that area had died, so there was just a fluid-filled space in his brain about the size of a chicken egg. That is a big hole in this guy's frontal lobe. On the left-hand side, right in here. You can see Broca's area marked on the diagram there.
And Broca looked at this and said, OK, so we've got this patient-- and he started collecting cases. He pretty rapidly started getting other accounts of patients who were similarly afflicted, who were neither paralyzed nor idiots. And he decided they needed to have their own term for this. He coined the term aphasia. Remember, we've had our agnosias, and we will talk about alexias and agraphias. This corner of the field is all about the Greek roots. So aphasia: this inability to speak, to handle language-- spoken language in particular-- correctly.
Broca started collecting cases of these patients, and one of the things he found was that all of his cases had lesions on the left-hand side. All of these were patients with lesions on the left. None of them were patients with lesions on the right. And Broca said, well, everything we know about the brain so far seems to point to it being really symmetrical, so this is probably just coincidence. I'm not convinced that the left-hand side of this means anything. At this point, Broca probably had about eight or 10 of these cases that he was documenting.
Over the next five or so years, Broca kept collecting cases; he got up to about 20. The [FRENCH] acquired a document from a physician working in southern France, who had also been collecting such cases and had documented 40 of them, also all with lesions on the left-hand side. And Broca finally found his clinching piece of evidence that this really is a left-hemisphere phenomenon when he got a case of a patient with an equivalent lesion on the right-hand side who didn't show any linguistic deficit.
So Broca said, OK, clearly this is something in which the brain is not symmetric. The left-hand side is more important for language and for speech than the right-hand side is. And what Broca didn't really notice about this-- nobody really noticed until probably close to 100 years after Broca was working on it-- is that patients with Broca's aphasia speak slowly and hesitantly. They'll have a hard time finding words. But also, the words they use are mostly nouns, with the occasional verb, and they don't have any of the syntax markers thrown in there-- all of the stuff that signposts a sentence, tells you how the words are related to each other, what's a subordinate clause, any of that. They don't have any of this stuff.
Here's an example of an interview with a patient with Broca's aphasia, a former Coast Guard guy who had a stroke that seems to have wiped out Broca's area. You can see the hesitance, the searching for words, the difficulty in apparently finding the words he wants, but also that the sentence structure you'd expect to see in a fluent English speaker-- it's just not there. So you just get things like, were you in the Coast Guard? No. Yes, yes. Ship. Massachusetts Coast Guard. Years. That is not a fluent English sentence.
So Broca was documenting this stuff. Broca was determining that we speak-- that speech depends on the left hemisphere, especially the left frontal lobes. And about 15 years later, along comes this other guy, Carl Wernicke. Wernicke was also documenting aphasic patients, but he had managed-- he was collecting cases of patients whose aphasia was very different from Broca's. These patients spoke very fluently, unlike Broca's patients, who would stammer and pause and search for words. Words flowed freely with these patients. They could just go on and talk. But the words were not the right words, or they weren't even words, and so patients made no sense.
One of the things Wernicke points out is that these patients often were diagnosed as confused, and it took some doing to show that they had this specific deficit in language versus simply just being out of touch with the world. Wernicke's patients also have difficulty with comprehension. Remember that Broca's aphasia patients can comprehend at least basic language. Wernicke's aphasia patients can't.
Wernicke showed that these patients, instead of having lesions up here in Broca's area, in the lower lateral frontal lobe, had lesions in what's now called Wernicke's area, which is somewhere-- depending on who you ask-- down here in the temporal lobe. Remember, this is the kind of area that does auditory processing. And what Wernicke also pointed out is that Broca's area up here is right in with this strip of motor cortex. The primary motor cortex is, of course, what controls, plans, and structures actions.
So Wernicke theorized that what's happening in Broca's patients is that the motor representations of words-- the kind of connection between your idea of a word and the lip and tongue and mouth movements that are necessary to produce it-- that that motor representation was broken in some way, so patients had a hard time producing words. And he said that for patients with Wernicke's aphasia, the auditory representation of words was broken, and so words weren't really connected to meaning anymore. It was just the sounds.
All right. Want to see a patient with it? Here's another example. This is a patient with Wernicke's aphasia. Both these are from the middle of the 20th century. Howard Gardner collected-- wrote a lovely book on language processing and collected examples. You can see this is a patient who just kind of starts talking. And at first-- what brings you to the hospital? Boy, I'm sweating. I'm awful nervous. At first, this sounds like it might be leading into a reasonable answer.
And then it just goes. I'm awful nervous, you know. Once in a while, I get caught up. I can't mention the terror poy. A month ago, quite a little. I've done a lot well. I impose a lot well. On the other hand, you know what I mean? I have to run around, look at over Trevin and all that sort of stuff.
And what Gardner says is-- at this point, he physically interrupted the patient, put a hand on his shoulder and said, thank you, because he seems to have had the impression that this patient would have just kept talking if he hadn't interfered.
Again, there's some stuff in here that I'm pretty sure is not actually real words. The whole thing doesn't really make sense. But clearly, production and syntax-- stringing words together into sophisticated sentences-- don't seem to be difficult for these patients in the way that they are for Broca's patients.
AUDIENCE: [INAUDIBLE] recorded this?
ABBY NOYCE: I assume he recorded this.
AUDIENCE: That would be [INAUDIBLE] shorthand.
ABBY NOYCE: Some extraordinarily good shorthand.
AUDIENCE: [INAUDIBLE]
ABBY NOYCE: Yes. All right. This was in the middle of the 1800s, and this all went out of fashion in the early 20th century. This was the behaviorism era, and everything just stalled for all of these guys. And there was some reasonable, some not reasonable criticism of what the diagram makers of the 19th century had done. The most valid criticism of Wernicke's idea that there were two speech centers-- disrupting one leads to one aphasia, and disrupting the other leads to the other aphasia-- was that it didn't fit well with the symptoms that patients were actually suffering.

For example, remember that Broca's patients have difficulty with syntax. They don't have syntactical words. And what people showed in the '70s is that this also means they have difficulty with oddly constructed sentences. Their word comprehension is good. Their syntactical comprehension is not so good. So if you said to a Broca's aphasia patient-- oh, I don't know-- the cat caught the mouse versus the cat was caught by the mouse-- these are two very different situations, but Broca's aphasia patients won't parse that. They'll be able to pull out, it looks like, the two nouns and the verb, and they'll guess what the most likely construct is. So they hear cat and mouse and catch. OK, clearly this should be the cat caught the mouse. It is much less likely that a mouse caught a cat in any kind of useful way. So they get tripped up by things like passive voice, things like complicated sentence structures.
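That guessing strategy can be sketched in a few lines of Python: ignore the syntax entirely and pick the most plausible pairing of the content words. The plausibility scores below are made up for the demo; this is an illustration of the described behavior, not a model of aphasia.

```python
# A sketch of the comprehension strategy described above: ignore syntax
# (word order, passive markers) and just guess the most plausible pairing
# of the content words. The plausibility table is invented for the demo.

PLAUSIBLE = {("cat", "catch", "mouse"): 0.95,
             ("mouse", "catch", "cat"): 0.05}

def agrammatic_parse(nouns, verb):
    """Pick whichever agent/patient assignment is more plausible."""
    a, b = nouns
    return max([(a, verb, b), (b, verb, a)],
               key=lambda triple: PLAUSIBLE.get(triple, 0.5))

# "The cat caught the mouse" -> right answer, by luck of plausibility:
print(agrammatic_parse(["cat", "mouse"], "catch"))
# "The cat was caught by the mouse" -> same content words, so the
# passive gets misread as the plausible event, just as the patients do:
print(agrammatic_parse(["cat", "mouse"], "catch"))
```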
Another criticism of these diagram makers was that very few cases fit neatly into categories where patients would have just one set of deficits and no others. On the other hand, since most of these cases come from trauma of some sort, or stroke, or abscesses, very rarely are you going to have one little brain center knocked out. You're likely to see damage spreading into adjacent areas. And there was a lot of arguing about whether the anatomical evidence was as good as the diagram makers-- as Broca and Wernicke and their colleagues-- thought it was. So for a while this was out of fashion.
And then, in the mid-20th century-- come on. No? Did my PowerPoint just die? Apparently. So then in the mid-20th century, this-- in the mid 20th century, this started coming back in for a number of reasons. This is the information-- processing. OK, good. We are responding to keyboard again. Maybe, sort of, I think. Yes. Now we've gone back to having a window. No, and it's not where I can reach it. Is it F7? No, that was not what I wanted. What's the keyboard command for it? Anyone know it? No. F12 is doing something. Nope.
AUDIENCE: Just click on the bottom--
ABBY NOYCE: The problem is that it's a piece that's off the screen, so now it's like waaah. Come on. Oh, you fail. Fail utterly. We'll go with that. F5. OK. I can remember. Skip, skip, skip. There we go.
So this whole modularity idea comes back in in the 20th century. This is partly because of this information processing approach being respectable again, and it's also-- as people got started thinking about this again, they started looking for these cases, finding these cases, documenting patients with specific deficits. And one of the things they started finding in the '50s and '60s is, you'd see patients who had difficulty, for example, reading words for abstract nouns but not for concrete nouns, or defining words for abstract nouns versus concrete nouns.
But even finer than that, you'd see patients who could not identify fruits and vegetables, but could identify other kinds of things, or patients who could not name colors but could name other kinds of things. And this is getting us into this idea of, OK, how does this work?
One possibility is that you've got different tiny regions of your brain. And you've got one portion of your brain that stores color words, and one portion of your brain that stores shape words, and one portion of your brain that stores names, and one portion of your brain that stores animals, and so on and so forth.
That's possible, but as the fineness of categories goes up, it seems unlikely, especially given what we already know and what we've already talked about how the brain seems to store information like memory information-- as this distributed pattern of activation throughout different parts of the cortex.
So how is lexical information stored in the brain? Jumping around. Linguists think of lexical entries as having three important features. They say that you've got the phonetic information about a word, and the orthographic information about a word-- that's how to write it, how to spell it-- and you've got the meaning.
It's pretty clear there's got to be-- you've got to be able to go from a word's meaning and come up with its pronunciation, or from a word's pronunciation and come up with its meaning, fairly easily. Spoken language works. In normally functioning human beings, there is a two-way connection between the phonetic information and the meaning information.
There's some evidence showing there's also a direct connection between the orthographic information and the phonetic information. If I give you a word, you can write it down. If you see a word, you can read it out loud. Interestingly, there have been a few cases of patients who, for example, can still read words aloud, including words with nonstandard spelling-- because English has some of those-- but wouldn't be able to tell you what they meant. So the link between the phonetic and the orthographic representation is still good, but the link between either of them and the meaning is broken.
There have been cases of patients who, if you ask them to define a word by speaking to them-- telling them the word-- will say, I don't know what it means, but if I write it down and then look at it, I can tell you what it means. All of these cases seem to show that there are links in all three directions between these three pieces of the representation of a word. And this is a model that's built from the linguistics point of view. Linguists are really interested in the things that make it a word, something that can be combined into a sentence, versus what a knowledge and memory person might ask, which is, how do you connect that with all the other things you know about a concept?
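A minimal sketch of that three-part lexical entry, with the links made explicit, might look like this in Python. The class and its names are our own illustration; "lesioning" a single link reproduces the dissociations in these patient reports.

```python
# A minimal sketch of the three-part lexical entry described above, with
# explicit links between the pieces. Damaging one link reproduces the
# dissociations in the patient reports (all names here are invented).

class LexicalEntry:
    def __init__(self, phonetic, orthographic, meaning):
        self.store = {"phonetic": phonetic,
                      "orthographic": orthographic,
                      "meaning": meaning}
        # every pair of components is connected, in both directions
        self.links = {(a, b) for a in self.store for b in self.store
                      if a != b}

    def retrieve(self, source, target):
        """Follow a link from one component of the word to another."""
        if (source, target) in self.links:
            return self.store[target]
        return None   # the link is damaged: "I can't tell you"

yacht = LexicalEntry("/jɒt/", "yacht", "a sailing vessel")
print(yacht.retrieve("orthographic", "phonetic"))  # can still read aloud

# A patient whose sound-to-meaning link is broken, but whose
# spelling-to-meaning link survives:
yacht.links.discard(("phonetic", "meaning"))
print(yacht.retrieve("phonetic", "meaning"))       # None: heard, no meaning
print(yacht.retrieve("orthographic", "meaning"))   # works once written down
```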
Thinking a little bit more abstractly, and thinking a little bit more about what we know about how the brain stores other kinds of information, it's probable that when you think of a word-- think of the word hyena. You've got the phonetic component of it. You probably know how to spell it. And when you think about hyenas, you probably bring up all sorts of other associations, like Africa or savannas or The Lion King, or something that connects to your ideas of what hyenas are like.
If you think about apples, you probably come up with the really salient things. How do you know when you're looking at an-- if you think about apples in the abstract sense, you probably get all sorts of stuff. You get color. You get the shape. You get the texture of biting into an apple. You get the flavor of an apple. All of these are associations with the pronunciation apple that you can bring up.
There's a theory that says that for some of these categories-- and particularly for categories that tend to show up as specific deficits-- the kind of information that's relevant to identifying things in that category is very specific. For identifying fruits and vegetables, color and shape are probably really salient. Information about how an object moves is probably not so relevant. If we're identifying man-made objects, usually function is really important, but color is not so much. For example, you can probably identify a chair no matter what color it is, but you might have difficulty identifying a purple banana.

And so this theory says that patients whose ability to represent color in their brains is decreased are then going to have difficulty making the connection between, for example, the word banana and a picture of a banana, because one of the pieces is not strong enough to bring up the other ones-- such a big piece of the meaning is missing. So I show one of these patients a picture of a banana, but their yellow representation maybe isn't very strong. Their color module is damaged in some way. Then the pattern of activation isn't going to be able to bring up the word banana the way it would if the color representation was working. The leftover bits aren't strong enough to bring in that activation. Does that make sense? Do you want me to back up and go over it again?
AUDIENCE: Yes.
ABBY NOYCE: OK. I'm still wrapping my head around this. This was a new theory for me, so I'm not sure how clear it is. This theory says that-- we know that there are patients who have difficulty in naming, or difficulty identifying, particular categories of things.
So if I have a stack of cards with pictures on them, and I just show them to a patient and I say, what is this? I'll show them a house. They'll say, it's a house. I'll show them an alarm clock. They'll say, it's a clock. I'll show them a cat. They'll say, it's a cat. I'll show them an apple. They'll say, I've got no idea. So they'll be good in most categories and fail at one specific category.
You'll see the same thing for animals. You'll see the same thing for man-made objects. You'll see the same thing for shapes or color names. There's very specific places in which this breaks. People will be good at most things, and unable to deal with one very specific category.
What we know about how you hold information is that information is stored as a pattern of synapses in the brain of some sort, where one piece of a representation can bring up others. Yes, Sarah?
AUDIENCE: Can they still see the color? Or were they just not able to identify it at all?
ABBY NOYCE: For patients who can't name colors?
AUDIENCE: Yeah. Would they still actually see it?
ABBY NOYCE: Usually-- how do you test it? I think that people have done it by testing with matching things. And I know that at least some of these patients can definitely still do color matching. They can say, this color is the same as that color, but I have no idea what the name for it is. Or the same thing with a banana. They can say that this picture matches that picture, and not this piece of broccoli over here, but I don't know what the names for them are. So they're still seeing them.
And you'll see things like patients who can, for example, not identify simple objects but can mime what they can do with them. So if you give them a key, they say, I don't know what this is, but this is what you do with it. Or a hairbrush. I don't know what this is, but this is what you do with it. And they might not be able to say what you do, but they can show it. So these patients aren't necessarily losing their knowledge about the world. They're losing, in some way, the connection between the linguistic, the phonetic information about a concept with everything else they know about the concept.
The theory says that these categories where you'll see these very specific inabilities to name things-- usually that category is defined by some property or group of properties. So for fruit, color and shape are important properties. For animals, probably motion patterns and some amount of structural detail. Think about what makes a deer look different from an antelope. It's mostly that the antlers are different. Functioning adult English speakers make those distinctions all the time. But you'll see patients who can't make those distinctions anymore.
And these guys' theory about why this happens says that-- I'm going to stick with fruit, because if I keep jumping categories I'll get confused. If you can't identify fruit, then it's likely that the portions of your brain that hold shape information and color information are damaged in some way. If you're looking at the picture of the apple, what should happen is that the round shape and the red color and the stem on the top-- your representation of these in your visual cortex-- should be able to trigger other information: your representation of the sound apple, your representation of the spelling A-P-P-L-E, or your memory of going apple picking in first grade, whatever. And if your ability to represent color is flawed in some way-- maybe you can still see the roundness and the stem, but you can't represent the red color the way that you used to be able to, maybe you had a stroke or something-- then the representation of the roundness and the stem without the color isn't enough to bring up the representations of the phonetic information and the spelling, and all these other aspects of the word apple. Is that better? OK.
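Here is one way to sketch that account in code: each concept is a pattern over feature dimensions, and naming succeeds only if the incoming pattern overlaps enough with a stored one. The features, numbers, and threshold below are invented for illustration.

```python
# A sketch of the distributed-representation account: each concept is a
# pattern over feature dimensions, and retrieval succeeds when the
# incoming pattern is similar enough to a stored one.

CONCEPTS = {                  # [red, yellow, round, long, has_stem]
    "apple":  [1.0, 0.0, 1.0, 0.0, 1.0],
    "banana": [0.0, 1.0, 0.0, 1.0, 1.0],
}
THRESHOLD = 0.8

def name_object(percept, color_intact=True):
    if not color_intact:                     # lesion: color channels silent
        percept = [0.0, 0.0] + percept[2:]
    best, score = None, 0.0
    for word, stored in CONCEPTS.items():
        overlap = sum(p * s for p, s in zip(percept, stored))
        match = overlap / sum(s * s for s in stored)   # normalized overlap
        if match > score:
            best, score = word, match
    return best if score >= THRESHOLD else "no idea"

banana_percept = [0.0, 1.0, 0.0, 1.0, 1.0]
print(name_object(banana_percept))                      # "banana"
print(name_object(banana_percept, color_intact=False))  # "no idea"
```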
This is really tying into this idea that we've been looking at for a couple of weeks, where information in the brain is a pattern of activation of some sort, a pattern of neurons for different concepts. And this gets really hard to think about, especially when you start trying to think about multiple things in mind at the same time, and it's all on top of each other. And nobody's really come up with a good explanation for how that works. Clearly it does work. We know we can do this. You can think about apples and bananas at the same time, right? I hope.
So you get patients with these startlingly precise deficits. This is another patient from the 20th century, who had a small tumor in his left frontal lobe-- not quite in Broca's area. A little bit further forwards, kind of prefrontal. And he had surgery to have it removed.
And they were testing him afterwards. If you do surgery on someone's brain, you run a bunch of neurological tests to make sure they came out of it OK. If they have deficits, you want to know about it before you let them get behind the wheel of a car, for example.
And so they were testing him, and they found that, in general, if you asked him to do activities with his right hand-- remember, right hand controlled by left frontal lobe-- the right side of his body was a little weak, which is to be expected. They were futzing around in the left frontal lobe.
And they were asking him to compare writing with both hands, and he's a righty. So he writes with his right hand, and they had him write stuff with his left hand. You'd expect a righty who's writing left-handed to be sloppy, but they also found that he was making strange mistakes. They asked him to write yesterday, and he wrote Y-O-N-T-I, and very weird substitutions like that.
And they started testing this further, and what they ended up finding-- and this is weird. Left hand controlled by right brain. We didn't do anything to the right hemisphere of this guy's brain. We didn't go there. And they started testing him further, and what they found was that if he could see items, he could identify them just fine. But if you had him put his hand into a box where he couldn't see it, and had him identify an item by touch, he could do it with his right hand just fine, but if you asked him to do it with his left hand, he couldn't name what these objects were.
One possibility is, he's just not recognizing them when they come in with the left hand. But if they watched what his hands were doing, what he would be feeling for were the salient features of the object, the things that would tell you what it was. So if he was holding a key, he'd run his fingers over the little bumpy edges of the key. If it was a toothbrush, he'd feel the long handle and the bristles. He knew what the important parts of this were, but he couldn't name it.
And they said, OK, put it back down. Take your hand out. Can you draw it? Couldn't draw it with the right hand. Could draw it with the left hand, the hand that had felt it. They said, OK, can you mime what you would do with it? And he could. He could mime turning a key or combing his hair or brushing his teeth.
And if they asked him to say what he was doing, he'd mime turning a key and say, I'm erasing a chalkboard with this. Or he'd brush his teeth and say, I'm combing my hair with this. So clearly the action is right. He knows what this object is. He knows what you would do with it. But he can't say it.
AUDIENCE: Is this for his left hand?
ABBY NOYCE: Yes. This is tasks with his left hand, which is controlled by his right brain, or right motor cortex, which they didn't do an operation to. And the hypothesis that these researchers came up with, and which was eventually proven to be more or less right when they did an autopsy on this patient many years later--
AUDIENCE: [INAUDIBLE]
ABBY NOYCE: Yes. Just because you have a hypothesis does not mean you get to do autopsies on people-- was that during the operation, the patient's corpus callosum had been damaged. You guys know-- remember, the corpus callosum is that big tract of nerve fibers that goes between the two halves of your brain, so that the left brain and the right brain can talk to each other and be in sync and figure out what they're doing.
And not all of his corpus callosum had been damaged. But what it looks like is that the part that was carrying verbal and linguistic information was damaged. If his left hand is feeling this object, then it goes into the right sensory cortex, right? But the right cortex does not handle language-- the left cortex handles language for most of the population. So the information goes to the right cortex, but in order to say what the object is, you've got to get the left hemisphere involved. The corpus callosum is broken. The information can't get to the parts of your brain that produce language.
So this patient-- and the other thing you'll see that happens with patients like this-- if you say, what are you doing? Rather than saying, I don't know, they'll come up with some story about what they're doing. You heard him. He'd mime turning a key and say, I'm erasing a chalkboard. Because the brain will fill in something if it doesn't have good information to work with.
You'll see that in other cases, too. Who was talking about macular degeneration at some point? So you'll see that people with macular degeneration will often hallucinate, because they don't have any information coming into that central part of their visual field, and the brain fills it in. You see the same thing happening here. The brain likes to have something that it can do.
So this is another piece of evidence for this idea that linguistic capabilities are located, really, in the left hemisphere, for most people. For 97% of righties, linguistic ability is controlled by the left hemisphere. It's not quite so high for lefties. How many lefties? Anyone in this room? Two? Not bad.
So for lefties, there was-- originally, when this hemispheric dominance thing came out, everybody said, well, maybe it's just hand dominance. So if you're a righty, then you would expect to have left hemisphere dominance, so left hemisphere does language. For lefties, therefore, you should see that the right hemisphere does language.
And people started actually studying this once you started getting good brain imaging, where you ask people to do linguistic stuff and see which part of their brain is more active. And what they found is that in 70% of left-handed people, it's still the left hemisphere that controls language and linguistic ability. And the other 30% is about evenly split between right-hemisphere control and language being really split between the hemispheres, where both of them are involved.
We will probably keep saying, this happens in the left hemisphere, but there's this big caveat here that says it's not necessarily like that for everybody. It's just like that for a big majority of the population, and we get sloppy in our terminology.
AUDIENCE: So if you're ambidextrous, [INAUDIBLE]?
ABBY NOYCE: It might be. You'll see often if-- I don't know if you guys have noticed. There's usually study flyers posted along the hallways here, because it's summer, and college students are looking for cash, and being a study participant is usually easy money. They're like, 28-day sleep trial, $5,000. And college students are like, OK, I'll go for that.
But anyway, one of the things you'll see is that people doing imaging studies of language-- where they want to look at what parts of your brain are involved in linguistic tasks-- will ask for native English speakers, which makes some amount of sense. You want to take the native-language confound out of there. They'll also ask for people who are right-handed. They don't want to throw lefties in there, because it will mess up their data. They want it to be as close as possible to a really clean data set. If I'm trying to average over 10 people, I want them all to be coming in with more or less the same configuration. Did someone else have a hand up? Yeah.
AUDIENCE: OK. Well, this isn't really [INAUDIBLE], but in some way it kind of is. I was at my optometrist's office, and my right eye is worse than my left eye. And I was asking my optometrist why that is. And he's like, are you right-handed? I'm like, yeah. He goes, that's why. But do you know how that connects?
ABBY NOYCE: Your left eye is better than your right eye, or the other way around?
AUDIENCE: Yeah.
ABBY NOYCE: Left eye's better. I don't know. I don't have a good explanation for that, because-- also because, remember, that each eye's input gets split, because it gets split down the middle of the visual field. So each eye's input goes half to one side and half to the other. But the muscles that control it are all controlled from the opposite side. So your right eye is controlled from the left side of your brain, and vice versa.
I don't know-- I'm one of those lucky kids with perfect vision who doesn't have glasses and stuff, so I don't know a lot about what affects it or makes it worse. But-- I was going somewhere with this. There may be a connection between how good your control of the muscles in each eye is versus how bad your vision in that eye is. I don't know.
It also has to do with how the eyeballs are shaped and stuff. But the reason people get far-sighted as they get older is that you've got a lens, and you basically change the thickness of it in order to focus. So there are muscles that contract it and that stretch it out. And over time, those muscles stop having as much range as they do in younger people. So this is why-- hit 40 or 50, and everybody starts needing reading glasses.
Almost all middle-aged on up adults do. I remember when my dad had to get them. I get my perfect vision from my father, who was always-- for his entire life had been very smug about, oh, I don't need glasses. My mom is nearsighted. And of course, he just refused to admit that he needed reading glasses, until it got to the point where he was like-- we were going out to dinner, and he's trying to read a menu way out there, because he couldn't focus on it any closer to his face. Nothing like being laughed at by your children to get you to go get reading glasses.
OK. Back on topic. Looking at what kind of deficit patients have tells us something, gives us some clue about how language is stored in the brain. We were thinking already about episodic memory, about memory for events. You can think similarly about memory for word meanings, especially for concrete word meanings-- words that refer to categories of things or names. The meaning can in turn bring up representations of the sensory pieces that go together with that word, but also more abstract things like the pronunciation of the word, or other words that mean roughly the same thing-- synonyms-- or the spelling of the word. And all of this is still hinging on this idea that one part of a representation can activate others and bring them online.
All right. Oh, hey. Let's see if we get-- Do I have internet yet? Go, Firefox, go. This is what happens when I build this on one computer and play it on another.
Have you guys seen this before, the McGurk effect? Has anyone here seen this? This is a classic-- oh, shut up. That's what I wanted. Go. Can we go? Yeah. OK. This is a classic effect of how different things that are happening when you're perceiving language have an effect on it. What I want you guys to do is-- we're going to play it a bunch of times. We're going to pause. Shh. Are we done? Let me get you some sound here. And you're only going to, I think, get the sound off of this. But-- oh, come on. Unmute.
AUDIENCE: Is there a volume?
ABBY NOYCE: There is a volume thing over there, but my computer is not plugged into it. Let me see if this is good enough to work. Sound. All right. I want you guys to close your eyes and just listen to the soundtrack of this for a minute.
[AUDIO PLAYBACK]
- Ba ba, ba ba, ba ba.
[END PLAYBACK]
ABBY NOYCE: All right. What did he say?
AUDIENCE: [INAUDIBLE]
ABBY NOYCE: Not loud enough to hear it? Do I have an audio cable thing? I do.
AUDIENCE: I thought he said ba ba.
ABBY NOYCE: Yeah? Where's the sound out on this baby? Come on, baby. Where is it? [INAUDIBLE] No. I know you have a sound out. It's in the front.
AUDIENCE: His hair is definitely really nice.
ABBY NOYCE: Was his hair better than yours?
AUDIENCE: [INAUDIBLE]
ABBY NOYCE: All right. Listen to it again. Now we need the volume control. Not into the red zone. Nope. I don't know. We're not going to go with that. OK, we'll go back to doing it on the laptop.
AUDIENCE: [INAUDIBLE] bus yesterday when--
ABBY NOYCE: [? Nisa? ?]
AUDIENCE: Yeah. When she [INAUDIBLE] we had to put it on full blast, too.
ABBY NOYCE: You had to pull it all the way up? All right. We'll try it.
AUDIENCE: Oh, yeah.
ABBY NOYCE: My life as a techie makes me not want to do that. All right. That's all the way up. Way up. That's bad.
[AUDIO PLAYBACK]
- Ba ba, ba ba, ba ba.
[END PLAYBACK]
ABBY NOYCE: All right. What syllable did-- what did it sound like he was saying? What syllable?
AUDIENCE: Ba.
AUDIENCE: It sounded like bye.
ABBY NOYCE: How many people think they heard a B as in balloon as the first consonant? How many think they heard a D as in donkey?
AUDIENCE: Wait, does [INAUDIBLE]? Because I could [INAUDIBLE].
ABBY NOYCE: It's not as well-- it's not the best dubbed version of this. Listen to it with your eyes closed one time, and it should be pretty straightforward what he's saying. And then it's different--
[AUDIO PLAYBACK]
- Ba ba, ba ba, ba ba.
[END PLAYBACK]
ABBY NOYCE: All right. What was that?
AUDIENCE: Ba ba.
ABBY NOYCE: Ba ba? All right. What most people hear with this is that if they listen to it with their eyes closed, if they just get the soundtrack, it sounds like he's saying ba ba, ba ba, ba ba. If you watch it-- and it's better on the second two rounds than on the first one. The first one is dubbed a little weird.
[AUDIO PLAYBACK]
- Ba ba, ba ba, ba ba.
[END PLAYBACK]
ABBY NOYCE: Then what people tend to hear is da da, da da, da da. You heard L's?
AUDIENCE: I heard G's.
ABBY NOYCE: You heard G? Here's the secret: the soundtrack and the video were not recorded at the same time. The soundtrack is of this guy saying ba ba, with B's. The video is of him saying ga ga, with G's.
AUDIENCE: I like the D's.
ABBY NOYCE: Huh?
AUDIENCE: I like the D's--
ABBY NOYCE: Because if you combine these two, what most people hear is a sound that's midway in between the guh and the buh, which is the da da, da da, da da sound. This is called the McGurk effect. It was documented, oh, geez, I don't know. 20th century some time. And this is an example of-- what's affecting what you hear is not just the sound coming in. Just watching this guy's lips moving with something that's inconsistent with the sound causes you to perceive something different. Back around to our big theme of what you perceive is not necessarily what's actually out there in the real world.
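A toy cue-combination picture of this: treat place of articulation as one number and average the auditory and visual estimates. The weights below are invented, and real accounts of the McGurk effect are much richer, but the fused /da/ falls out the same way.

```python
# A toy cue-combination sketch of the McGurk effect. Place of
# articulation is treated as one number (0 = lips /b/, 1 = alveolar /d/,
# 2 = back of mouth /g/), and the percept is a weighted average of the
# auditory and visual estimates.

PLACES = {0: "ba", 1: "da", 2: "ga"}

def perceive(audio_place, visual_place, w_audio=0.5, w_visual=0.5):
    fused = w_audio * audio_place + w_visual * visual_place
    return PLACES[min(PLACES, key=lambda p: abs(p - fused))]

print(perceive(0, 0))                # matching audio + video: "ba"
print(perceive(0, 2))                # audio "ba" dubbed on video "ga": "da"
print(perceive(0, 2, w_visual=0.0))  # eyes closed: back to "ba"
```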
[AUDIO PLAYBACK]
- Ba ba, ba ba, ba ba.
[END PLAYBACK]
ABBY NOYCE: I think the middle one is the strongest D effect. I hear the G on the first one.
AUDIENCE: If I just look to the corner of the picture instead, I hear the ba.
ABBY NOYCE: Yeah, because you're not--
AUDIENCE: If you don't focus--
ABBY NOYCE: Right, because-- remember that you've only got good visual perception right in the very center of your visual field. If it's off the center, you can't see it so well. Anyway, that's a cool effect. It's one of the classics. I just wanted to show it to you guys. Oh! Skip, skip, skip. Come on. Skip one more. OK.
Shifting gears a little bit to talking about--
AUDIENCE: You can also say outside of a dog, a book is a man's best friend--
ABBY NOYCE: Friend, and inside--
AUDIENCE: A dog, it's too dark to read.
ABBY NOYCE: That's either him or Mark Twain, depending on whose book of quotations you look at.
This is what's called a garden path sentence. And this one is actually two sentences, where the first leads you down one interpretation of what the sentence says. If you just hear, I shot an elephant in my pajamas, you envision Monsieur Marx there in his pajamas shooting an elephant. You think the prepositional phrase is modifying the subject, not the object. And the second sentence causes you to re-evaluate that.
AUDIENCE: I think of the sentence-- I kept picturing an elephant actually in his pajamas.
ABBY NOYCE: For some reason or another, you bring up the different interpretation of it, which isn't like a problem or anything. It's just perhaps a minority case, which is one of the things that makes this funny, is it causes you to do that double take.
Garden path sentences are an example of what's called structural ambiguity. There's two kinds of ambiguity you can encounter in parsing language. There's word ambiguity-- like, a word like watch can be either a verb meaning to look at something, or a noun meaning a little clock that you carry around with you in some way. Word ambiguity is only moderately interesting, I think. I think structural ambiguity is cooler.
Sentences with structural ambiguity-- or, for example, if I start out a sentence that says-- let's see. Oh, I know-- I have a headline. What is it?
AUDIENCE: [INAUDIBLE]
ABBY NOYCE: Where do you think this sentence is going? This is a newspaper headline that my Linguistics 101 teacher brought us, which is, "85-year-old Fed secretary to--" and where it actually goes is, retire. Pause. Re-analyze. That's Fed as in short for Federal something or other. "85-year-old Fed secretary to retire." That's another garden path sentence there, where it sets you up to parse a bunch of earlier stuff one way-- at first you're thinking of this as subject, verb, direct object. And then you stop, and you have to go back and re-parse all of the earlier stuff. This is now adjective, noun, verb. And it takes a minute to do it.
AUDIENCE: Basically, those sentences have multiple syntactic maps.
ABBY NOYCE: Yeah. Exactly right. If you were breaking it down in a syntax tree, like we were doing with the poodle in the closet before, then there's two possible ways you could do it, and which of those you follow affects the outcome. So there's two-- and clearly, you don't hold the whole thing in your mind until you've gotten the whole sentence and then analyze the structure. Otherwise these things wouldn't be so jarring when they throw you the other direction.
There's two hypotheses for how this works. The parser hypothesis, the simpler hypothesis, says that you go with what you've got, and you pick whichever syntactic structure would be simpler-- or, more common, you've got weighted preferences for different syntactic structures-- and then that structure is checked against meaning as the rest of the sentence comes in. And when it's wrong, you go back and re-analyze. But this parser hypothesis says that as each new word comes in, you pick the best available syntactic structure at the time and just go with it.
And then there's this slightly more sophisticated ambiguity resolution hypothesis, which most people are leaning in favor of nowadays. And that might just be because it has this currently trendy idea of bottom-up and top-down processes in competition. A lot of our current understanding of how this works seems to be similar across different areas of knowledge, which is a good thing. It's reassuring to see the same processes in different parts of the brain. That seems like it should be what we see, but it might also lead someone to be concerned that we're getting caught up in what's currently fashionable in science.
The ambiguity resolution hypothesis says that all the way through the parsing process, all the way through hearing a sentence, the bottom-up information that's coming in can activate these different structural representations, and so can top-down processes about context. If you read this on the business page, for example, you might already be biased away from the "fed secretary to the lions" interpretation, because you've got the context. So all of these different things can activate different representations, and whichever representation is activated the strongest is the one that you use at any given point in time.
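As a sketch, that idea might look like this: keep every candidate structure active, let each incoming word (bottom-up) and the context (top-down) add activation, and report whichever structure is strongest at each moment. The evidence scores below are invented for the Fed-secretary headline.

```python
# A sketch of the ambiguity-resolution idea: candidate parses stay
# active, evidence from each word and from context shifts their
# activation, and the strongest parse wins at every moment.

def read_headline(words, context_bias):
    # Two competing parses of "fed": main verb vs. adjective ("Federal")
    activation = {"fed-as-verb": 0.0, "fed-as-adjective": 0.0}
    activation["fed-as-adjective"] += context_bias   # e.g. business page
    evidence = {   # bottom-up support contributed by each word (made up)
        "fed":       {"fed-as-verb": 0.6, "fed-as-adjective": 0.4},
        "secretary": {"fed-as-verb": 0.2, "fed-as-adjective": 0.2},
        "to":        {"fed-as-verb": 0.1, "fed-as-adjective": 0.1},
        "retire":    {"fed-as-verb": -1.0, "fed-as-adjective": 0.8},
    }
    for word in words:
        for parse, boost in evidence.get(word, {}).items():
            activation[parse] += boost
        print(word, "->", max(activation, key=activation.get))

read_headline(["fed", "secretary", "to", "retire"], context_bias=0.0)
# With no context, "fed-as-verb" leads until "retire" forces a re-parse.
# A business-page reader (context_bias=0.5) never goes down the garden path.
```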
AUDIENCE: Who said dogs?
ABBY NOYCE: Someone down there.
AUDIENCE: What?
AUDIENCE: [INAUDIBLE]
AUDIENCE: Oh, I thought it said dots.
AUDIENCE: I like dots. [INAUDIBLE]
ABBY NOYCE: I've never met any ferocious, meat-eating dots. I don't know about the rest of y'all. All right. So the other thing I wanted to talk about was a little bit about what we know about how kids learn language. Because clearly, just about everybody does learn language. And up until probably the middle of the 20th century, just how difficult this was wasn't quite appreciated. So language is clearly learned, right?
Kids learn whatever language they're exposed to. If you are-- if you grow up in a household with people who speak English, you speak English. If you grow up in a household with people who speak Spanish, you speak Spanish. If your parents spoke Cherokee, and you are adopted into a household with people who speak French, you will grow up speaking French.
It's not like there's a genetic restriction on what language you will speak. It's totally dependent on what you grow up hearing. There are lots of languages out there. They've got a wide variety of vocabulary. They've got a wide variety of grammatical structures.
And kids who are not exposed to language during this kind of critical development period don't learn it. Did I tell you guys about Genie on Monday? California, '70s? Yeah.
AUDIENCE: [INAUDIBLE]
ABBY NOYCE: Yeah.
AUDIENCE: [INAUDIBLE]
ABBY NOYCE: So there's been a couple of cases of kids who were raised in more or less abusive situations and were not exposed to language for some period of time. So the case-- one of the better known cases is a girl named Genie who was from California whose parents basically locked her in a closet for 12 years, didn't talk to her, brought her food and water from time to time. Very abusive situation. Bad situation.
She was eventually taken out of that home and moved into foster care, but never learned language in the way that any kind of normal person does. I think she developed a vocabulary of like 150 words. Most adults have somewhere between 50,000 and 100,000 words. She never developed syntax or complicated sentence structure or any of that.
So there's a critical development window for being exposed to language. No. She's too old.
AUDIENCE: Is that why like, foreigners from America like, are like, in like their teens or younger, their accent usually [INAUDIBLE]?
ABBY NOYCE: Yeah, or people who come to America when they're, like, in elementary school will just learn English really well and really fast. Yeah, there's a critical window for it. Yes?
AUDIENCE: I was reading about language evolution. And--
ABBY NOYCE: Ooh! A fun topic!
AUDIENCE: Yeah. And there was this thing about, like, about [INAUDIBLE] I think that we have in like, ancient times, I think this [INAUDIBLE]. But there was like some kind of ruler who thought that all languages were like, evolved from like, one original language. So you [INAUDIBLE] say a similar thing, like, made someone stay like, away from anyone, like, an d to see if they would develop--
ABBY NOYCE: Some kind of like proto, super original language. Yeah. Did it work?
AUDIENCE: No.
ABBY NOYCE: No. Yeah. So there's been a couple of-- yeah, so evidence like that was what made people say, OK, clearly language is learned. And for a long time, people thought it was just like learning anything else-- learning to cook or be a farmer or any of the things that young people learned to do pre-Industrial Revolution, for the vast majority of human history. OK.
The things that young people learn to do post-Industrial Revolution are very different from the things young people learned to do 200, 300 years ago. Right? OK.
AUDIENCE: [INAUDIBLE]
ABBY NOYCE: So in the '50s, Chomsky-- this guy Chomsky, you all might've heard of him-- looked at this and said, that can't be right. And he pointed out two things. He said that, OK, if you speak a language, then you can produce basically an infinite number of different sentences using the vocabulary in that language, recombining it in different ways. Every day, you probably produce sentences that you've never heard before in your life. Every day, you hear sentences that you've never heard before in your life. Every day, you probably produce sentences that it's reasonably likely nobody has ever produced before.
So he said, OK, so this is, like, infinite. And little kids-- language-learning-age kids, two- or three-year-olds-- learn to do this. And they learn to do it really, really, really fast.
So most kids start talking between a year and 18 months old. Somewhere in there. And language development is one of these things that ranges really widely in when it first starts showing up, and everybody pretty much evens out by first grade. So if you've got a two-year-old cousin who isn't talking yet, don't freak out.
But most kids start talking between about a year and 18 months of age. And they'll have, like, single-word utterances, right? Anyone here ever hang out with kids who are just starting to talk? Siblings, cousins, neighbors. And they'll say things like, mama, and bottle, and up. And mine.
Little kids will say stuff, single-word utterances, to get something they want. And kids usually, around six months after they start doing that, will kind of graduate to two- or maybe three-word utterances. So they'll say things like, juice good, or want ball, or mama mine, or no. They get no really early on, a lot of the time.
And somewhere right after that, like between two and three, they go from these two-word structures to sentences. Like, yesterday we went to the park. Or, I want the thing. There's this shift. And once they produce sentences, within the next year, they're producing sentences with subordinate clauses.
And so Chomsky said that the way kids learn this is just too good. It's not possible or reasonable that kids who, like, still can't manage to operate a spoon without dumping things all over themselves can master language. And so Chomsky said that in some way, humans have to have a universal grammar-- some kind of genetically determined module in your brain that is wired to look for language, to look through the different kinds of input coming in and pick up the linguistic bits and slot them into certain preexisting patterns and learn it.
This is one of the things that's been kind of an ongoing kerfuffle in evolutionary psych for the last 40, 50 years. Kerfuffle, like a squabble. Like lots of people making lots of noise. Sometimes by writing nasty papers at each other. This is academics. It doesn't usually get into actual yelling matches. I've heard some very pointed questions at conferences, though.
AUDIENCE: [INAUDIBLE]
ABBY NOYCE: Yeah. He's also, like, a liberal political guy who does a lot of stuff in that world. It took me a while to figure out that that guy and the linguistics guy were the same guy. They are. He's actually here at MIT, Chomsky is.
AUDIENCE: For real?
ABBY NOYCE: Yeah. I mean probably not now. It's 7:00 in the evening. But yeah, he works here. There's a lot of cool linguistics folks here and at Harvard.
AUDIENCE: [INAUDIBLE] Can we question you? Please.
ABBY NOYCE: So Chomsky said that when you speak a language, you have to know its vocabulary. And learning vocabulary is probably pretty straightforward. You learn a word, you learn its meaning. This is just a matter of sticking this stuff into your brain.
And it's not hugely improbable to think that the sort of-- humans are good at all sorts of ways of finding patterns, finding meaning, building kind of representations of category knowledge and all of this other stuff.
And it's not too far off to say that we're probably wired to be good at matching word phonetics to meaning information, and that there might be a critical development period where you can just soak this stuff up. Kids learn, like, 10 words a day between the ages of three and six. Their vocabulary just skyrockets.
More interestingly, if you're speaking a language, you also have what Chomsky calls its generative grammar. A generative grammar is all of the rules for how you can put kinds of words together in English. So English grammar says that subjects should go before verbs, verbs should go before objects. Prepositions should go after the verb and before the noun that is the object of the preposition.
All of these are part of the generative grammar for how you build English sentences. And of course, nobody thinks about this when they're building an English sentence. Because if you are a native English speaker, you can do this by, this sounds right. This does not sound right. If I give you a sentence that says something like, Johnny cat sat mat, you go, what? It doesn't make sense. It doesn't sound right. You can tell.
So kids learn both the vocabulary and the generative grammar. And they're clearly not just doing it by imitating things that they have heard. They're learning the rules. So you'll hear little kids saying things like, we goed to the park. They probably did not hear that as an exemplar from one of the adults in their life. They've learned to add an -ed to things to mark the past tense, and they're over-applying it.
They'll say things like, we went on an airplane and it flied. They over-regularize. Or they'll talk about-- if I have one mouse, I have one mouse. I have two mouses. They know how to make plurals. They just haven't always learned the special cases. So English is full of things that aren't--
AUDIENCE: Yeah, what do people say besides mouses?
ABBY NOYCE: Mice.
AUDIENCE: Oh.
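The pattern here, a regular rule plus a list of memorized exceptions, is easy to make concrete. Here's a minimal Python sketch; the word lists are made-up examples for illustration, not a real model of a child's lexicon.

IRREGULAR_PAST = {"go": "went", "fly": "flew"}
IRREGULAR_PLURAL = {"mouse": "mice", "foot": "feet"}

def past_tense(verb, knows_exceptions=True):
    # Use the memorized irregular form if it has been learned...
    if knows_exceptions and verb in IRREGULAR_PAST:
        return IRREGULAR_PAST[verb]
    # ...otherwise over-apply the regular -ed rule.
    return verb + "ed"

def plural(noun, knows_exceptions=True):
    if knows_exceptions and noun in IRREGULAR_PLURAL:
        return IRREGULAR_PLURAL[noun]
    return noun + "s"

# A child who has the rule but not the exceptions over-regularizes:
print(past_tense("fly", knows_exceptions=False))  # flyed (cf. the child's "flied")
print(plural("mouse", knows_exceptions=False))    # mouses
print(past_tense("fly"))                          # flew

The order of the checks is the point: exception first, regular rule as the fallback, which is roughly the behavior kids eventually converge on.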
ABBY NOYCE: So what's going on here? So Chomsky says that humans have a universal grammar. That we have a language ability in our brain that limits things, because the space of all imaginable grammars is just too big. So there are limits to what sorts of grammatical structure a language can have. And so kids who are trying to learn a language, kids who are trying to take the input of their environment and figure out the structure that's underlying it, have a head start by knowing what some of the limits are.
And so Chomsky proposes things like-- so, English has a lot of rules. Subjects go before verbs, objects go after verbs, and you have prepositions that go before the noun that they're describing.
In Japanese, all of those things are the other way around, right? Verbs go before subjects, verbs go before objects. Japanese has postpositions. And Chomsky proposes--
AUDIENCE: Actually verbs generally go-- subjects then object then verb.
ABBY NOYCE: Then verbs. It's subject, object, verb. Yeah. And it has, like, postpositions instead of prepositions, where it's the noun and then the thing that's describing its relationship to the rest of the sentence.
AUDIENCE: [INAUDIBLE]
AUDIENCE: A bit. I know a bit of a lot of languages.
ABBY NOYCE: Yeah, when I was being a bit more of a linguistics geek than I have been lately, I had a much better sense of which languages tended to have which word order.
But anyway, one of the things that Chomsky points out is you tend to see patterns. So languages that are like English-- subject, verb, object-- tend to also have prepositions, not postpositions. Languages that are subject, object, verb, like Japanese, tend to have postpositions. So you can kind of get one big master switch that can go to one or the other of a few basic word orders, and that then tends to direct all of these other things.
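One way to picture that master switch is as a single parameter that fixes several orderings at once. Here's a toy Python sketch; the head_initial flag and the placeholder sentence parts are this sketch's own inventions, not Chomsky's formalism.

def linearize(subject, verb, obj, adposition, noun, head_initial=True):
    # One switch sets both the clause order and the adposition order.
    if head_initial:
        # English-like: subject-verb-object, preposition before its noun.
        return f"{subject} {verb} {obj} {adposition} {noun}"
    # Japanese-like: subject-object-verb, postposition after its noun.
    return f"{subject} {noun} {adposition} {obj} {verb}"

print(linearize("the girl", "saw", "the dog", "near", "the park"))
# -> the girl saw the dog near the park
print(linearize("the girl", "saw", "the dog", "near", "the park",
                head_initial=False))
# -> the girl the park near the dog saw

Note the flag's default value: on the Bickerton story later in the lecture, a built-in default setting like that is one way to think about what surfaces when a creole forms.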
How can this be genetically coded? Well, we're not 100% sure. There aren't any really solid answers to this. But the best one is that this is one of those critical development things, where you've got to have input of a certain kind at a certain stage in order to allow the brain to wire itself up properly. Remember we talked about what happens with vision if you don't get input from one eye for a period of time? All of the cortex that might originally have been getting input from that eye switches allegiance and starts responding to the other eye. Or we talked about how if you only see edges in one orientation, sensitivity to the other orientations doesn't develop.
So there's some kind of innate mechanism-- and this is not by any means a 100% agreed-upon idea. This is a point of controversy, a point of ongoing dispute in the linguistics and language acquisition world.
Oh I know what else [INAUDIBLE]. OK so-- what do we have for time? Christ. OK, we're good.
So Hawaii. So there's this idea that universal grammar gives you some kind of structure in which to slot the input you're getting. And this makes up for the fact that the input coming in doesn't, on its own, give kids enough information to figure out the grammar of their language. There's got to be some extra help coming from people's innate abilities.
So this should be even more extreme when the linguistic input has no consistent grammar at all. When the linguistic input is very poor-- not just the ordinary level of poor that most kids get.
So Hawaii was-- let's see. Hawaii was annexed to the United States in 1898. And about 20 years earlier than that, Hawaii's economy had really taken off. Free trade had opened up between the US and Hawaii. And so, as you often do when you see an economy take off, you got a pool of immigrants from all over the Pacific coming in.
So you had Americans, English-speaking people. You had folks from China, folks from the Philippines, folks from Japan. You had native Hawaiian islanders. And so you got all of these people with different linguistic backgrounds. And what you tend to see happening is that they develop, not quite a real language, but what's called a pidgin for communicating in.
So pidgins tend to have really restricted vocabularies, very few words. They tend to have very simple syntactic rules. They often don't have, for example, a fixed word order the way a lot of, not all, but a lot of languages do. They don't have any structures for recursion, for subordinate clauses. And in a pidgin, there often aren't rules about which words are obligatory and which are not. So you can leave out parts of a sentence. It's not like English.
So for example, in English, if you want to talk about the weather, you've got to say, it's raining. It in that sentence doesn't really refer to anything. It's just that you can't have raining without the subject there before it. English requires this. Anyone here take Spanish in school? How do you say it's raining in Spanish? Huh?
AUDIENCE: Está lloviendo.
AUDIENCE: Lloviendo.
ABBY NOYCE: Yeah but you don't need the-- you don't need the esta, right? You can just say the verb part. Isn't Spanish one of those languages?
AUDIENCE: Yeah, you don't need a subject.
ABBY NOYCE: Yeah you don't need the subject part. You can just say whatever that verb is.
AUDIENCE: [INAUDIBLE] like, [INAUDIBLE]
ABBY NOYCE: Yeah.
AUDIENCE: [INAUDIBLE]
ABBY NOYCE: Yeah, OK. So Spanish doesn't require subjects the way English does. The rules for these are different. English requires more pieces of the sentence. So pidgins often don't have this sort of this-part-is-obligatory rule. You can leave out whatever pieces you think might already be clear to the person you're talking to.
So if I was coming into a store, trying to buy bananas, in English you might go up and say, I want bananas, right? If you spoke a pidgin, you could see people just saying, bananas. If you walked into Laverde's at the student center and said, bananas, they'd probably look at you a little bit funny. They might get you your banana. But this would not be a normal conversation in this language.
AUDIENCE: [INAUDIBLE]
ABBY NOYCE: Yeah?
AUDIENCE: People tend to leave out as much as possible.
ABBY NOYCE: Yeah. I don't know what the obligatory word rules are in Japanese at all. You probably have a better sense of this than I do.
AUDIENCE: They can leave out as much as possible. They leave out the subjects, objects. And sometimes they don't have a verb if the verb is obvious. And--
ABBY NOYCE: Cool.
AUDIENCE: [INAUDIBLE]
ABBY NOYCE: All right. So when you get a mixed pool of immigrants like this, they tend to develop a kind of common pidgin, sort of a mini language. It's not a full language. It doesn't have the capabilities that a full language does. But you can get by in it. So what happens then is you see kids who are brought up in this polyglot community, where there's lots of languages being spoken. But the only common language, the only language they might share with their neighbors or with the kids at school, is this pidgin. Probably not at school if it's Hawaii in 1898, let's be honest.
So what happens when kids develop in this is that they take this pidgin language and they expand the vocabulary. So you get something up to what a real language, a language that's been around for a few hundred years, even, looks like. And it'll start having real syntax. And so this is called a creole. A creole is a language that develops from a pidgin, but has all of these markers of real languages and real syntax. Where are my examples?
So yeah. So creole gets used in some other contexts too. But in this case, it's for this kind of language. And you'll see creoles in a lot of places, based on different languages. So the Hawaiian creole is based on English. There are creoles based on French in a lot of the parts of the world that France once colonized.
There are a lot of creole languages spoken in the Caribbean. If you look at the history of these former sugar plantation islands, at the slave trade, you see a bunch of people who didn't have a common language, all dumped together in one place and forced to work together and communicate. You'll see these pidgins develop. Kids grow up in this community, and then they'll take that pidgin and expand it. Go. Go back. All right.
AUDIENCE: So if you took French, if you could speak French, would you be able to understand a French creole?
ABBY NOYCE: Probably not. What I find is that with English-based creoles, if I'm reading them, like somebody's gloss of one, I can usually figure it out. If I'm trying to follow somebody who's speaking one, it's really hard.
So somewhere between high school and college, I worked on a farm stand. And the guys who are the seasonal labor for all of the farms in my part of southern New Hampshire are Jamaican. There's this whole pool of Jamaican workers who come here for the summer, work on the farms, and go home in, like, November.
Talking to the Jamaican guys, they're speaking something that is based on the same English that my language is based on. But it's not the same. And it's hard to follow if you're not paying attention. So we'd all slow way down for each other.
AUDIENCE: I think their creole is-- cause it's more [INAUDIBLE] on structural, like the way they structure their words, like, they'll kind of stop halfway.
ABBY NOYCE: Right. And also like, the phonemes are different. Like their phonemes have shifted somewhat. Anyway.
AUDIENCE: [INAUDIBLE]
ABBY NOYCE: Yeah, I don't know. It's been a while. I don't know if I could pull out what I think are the defining features. But I'd recognize it if I heard it. But it was definitely like-- we just kind of would be like, wait stop. Can you say that again?
AUDIENCE: [INAUDIBLE] but like, after you hear [INAUDIBLE]
ABBY NOYCE: To some extent.
AUDIENCE: That's how [INAUDIBLE]
ABBY NOYCE: Yeah.
AUDIENCE: I have a lot of Jamaican friends, and I speak like that. And I do kind of understand them most of the time.
ABBY NOYCE: You might do more of it than I did. For me, it was just these quick, couple-sentence interchanges. Because there was the store crew, which was mostly high school kids from town, and then there was the farm crew, which was all these Jamaican guys. And these were, like, separate castes who didn't interact a whole lot. It was a little weird. But that's kind of my immediate example of interacting closely with somebody who's speaking something that is based on English, but is not.
It's a functional language. It's got its own rules. It's not like it's English that's wrong. It's a different language, and it sounds just enough like English to kind of trip you up.
All right. So where does this syntax come from? We know that these kids start out speaking this pidgin, this language that doesn't have much syntax, that doesn't have a big vocabulary. And if you get kids growing up in this community, they have a language that's regularized, that has a fixed word order, that has syntax. What happened?
So one possibility is that it comes from the various donor languages. These kids are probably interacting with their parents enough to grow up speaking whatever language their parents speak. Are they just bringing those rules over to the creole? But a lot of people have done a lot of analysis of this, looking at the creole syntax versus the donor languages, the languages spoken in the community.
And what they found is that they're really not the same. The rules that govern the syntax in the creole and the rules that govern the syntax in the donor languages are different. Derek Bickerton is a linguistics guy who did a lot of this work on Hawaii in particular, because the creole in Hawaii developed recently enough that you could find people in that first generation who grew up speaking the creole, alongside people who immigrated at about the same time, and actually compare their language use.
He did this in the '70s, so he was looking at people who were probably in their 70s then. And what Bickerton claims is going on here is that if you compare creoles around the world in terms of their syntax, they're more like each other than they are like any other group of languages. And he claims that what you're seeing when a creole develops is kind of the default switches for grammatical structure. The default settings.
So without any kind of consistent grammatical input coming in for this language, these kids default to this particular set of rules and go by that. If you think of the grammar switches-- universal grammar is taking the input and trying to figure out, more or less, which option it maps onto-- there's got to be a default setting for each switch. And that default is the structure you get when a creole forms.
So, how the brain separates sounds. These guys are interested in both what's called auditory streaming and in concurrent sound segregation. In the world, there are usually lots of noises going on at the same time. We know this in class, because we usually have the window open and traffic on Mass Ave. is noisy, right?
So you guys have a real task in segregating these-- you've got two streams of sound coming in. You've got me babbling, which hopefully you want to listen to, and you've also got all of the noise out there. So you're trying to pick out one of these things from the other one. And these guys are interested, in particular, not just in what kinds of cues people use, but in the neuroscience underlying this ability. What parts of the brain seem to be involved, what kinds of things they do.
So we're doing the usual thing where some folks presented. I also asked everybody to read the whole thing and be prepared to ask questions of your classmates. Sarah and Wayne, I guess you guys get to follow along. And if you have questions, feel free to sing out. I'm not quite evil enough to give you a section and put you on the spot with it. But you should ask-- try and ask questions.
So first up, what cues do listeners use to segregate sounds? That's Zachariah and Jess. And you guys also get that big box on page 467.
AUDIENCE: I'll start.
ABBY NOYCE: OK.
AUDIENCE: So auditory streaming is the ability to follow one speaker over a period of time [INAUDIBLE] other sounds. And to study it, they play two tones with different frequencies.
ABBY NOYCE: Ooh, ooh, ooh. Want to hear one of these? I found the website that they use for this. So, bonk, come on. I know you switched. We're very proud of you. So here's a galloping one. So these are models of the two frequencies. So when the frequencies are close together, you get a rhythm that sounds like this. Whoa, what is it doing to my like, video app? Anyway. But you hear how it's like, da-da-dot, da-da-dot. OK.
And if you compare that to-- it's the same timing. But the frequencies are further apart. All right, don't look at it. So the [INAUDIBLE].
AUDIENCE: [INAUDIBLE] they don't mean the rate at which [INAUDIBLE]?
ABBY NOYCE: No. They mean the frequencies of the two independent tones. So it's not nearly-- yeah, the pitch of the two tones. So the further apart you split them, the less strong that galloping effect is. If you put them even further apart, and that one isn't terribly far apart, you'll even hear-- I want to see what these guys have. You might-- you'll almost hear it as just two concurrent streams. And I want to see if this one is further apart than the previous one. So that was an integrated one versus the same.
So here the-- so that one, you can parse either way, as either be-de-deep, be-de-deep, or one tone going beep, beep, beep, and one going bip, bip, bip. And you can pull it out into two separate streams. All right. Sorry Jess. Go ahead.
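The galloping stimulus being demoed here is simple to generate. A rough numpy sketch, with guessed tone lengths and frequencies rather than the demo's actual values:

import numpy as np

SR = 44100  # sample rate, Hz

def tone(freq, dur=0.08):
    t = np.arange(int(SR * dur)) / SR
    return np.sin(2 * np.pi * freq * t)

def rest(dur=0.08):
    return np.zeros(int(SR * dur))

def aba_triplet(f_a, f_b):
    # A B A _ : the silent slot is what makes the rhythm gallop
    # when the two tones fuse into a single stream.
    return np.concatenate([tone(f_a), tone(f_b), tone(f_a), rest()])

# Small frequency separation: tends to be heard as one galloping stream.
galloping = np.concatenate([aba_triplet(500, 600) for _ in range(10)])
# Large separation: tends to split into two steady streams.
split = np.concatenate([aba_triplet(500, 1200) for _ in range(10)])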
AUDIENCE: In another study, they found that two sounds with the same frequency can be listened to separately. But when they start at the same time and stop at the same time, then you can't tell. Um, also, the pitch also helps people to differentiate between the two sounds. [INAUDIBLE]
ABBY NOYCE: So, pulling out some of the cues that people use to split sounds apart-- possibly the kinds of cues that the neural stuff we'll talk about later in the paper should be looking for. Cool. Zachariah.
AUDIENCE: [INAUDIBLE] summarize the box.
ABBY NOYCE: You're on the box. All right. Tell me about the box.
AUDIENCE: It mentioned that one way that a listener can kind of determine if a person's right in front of you versus, like, [INAUDIBLE] is by whether the energy reaching the two ears is equal or different. So if the person's right in front of you, then both ears should get the sound with the same amount of volume. If the person's over to, say, the left, then your left ear gets it [INAUDIBLE]. So they did an experiment where they had a listener come in with headphones. And they played, I think, the exact same sort of pattern into each ear. And the person couldn't really-- I guess couldn't detect if the sound was coming from any particular direction.
ABBY NOYCE: OK.
AUDIENCE: And then, in part C, it shows that one of the waves is inverted, so that where-- we'll say where the left ear perceives a peak in energy, the right ear perceives a trough. And then the person's able to tell, like, it's coming from a certain direction.
ABBY NOYCE: OK. Tell me about these two graphs.
AUDIENCE: Um, so, the first graph, [INAUDIBLE] I believe the blue represents-- sorry, I [INAUDIBLE].
ABBY NOYCE: It's OK. Take a sec.
AUDIENCE: OK, so, for the [INAUDIBLE], the horizontal dashed lines represent the point at which they-- the range away from the mean [INAUDIBLE] outside of those two lines, they'll say, OK, the subject is now responding to the signal. And it shows that when the sounds are inverted, so one's playing one way, and the other's playing a different way, the subject will tend to respond to the signal earlier at a less frequency than in the other case when--
ABBY NOYCE: At less volume, right? Signal level and decibel somethings. Yeah.
AUDIENCE: Volume. And then, whereas in the curves, the subject has to hear it at a higher volume for when the two waves are [INAUDIBLE] energy levels.
ABBY NOYCE: Right. So they're playing a tone, like a single frequency tone, right, like a beep. And they're also playing, like, a white noise over it. And what they're graphing is the response of neurons in the inferior colliculus. When these two things are in sync, when they are in phase at both ears, so that the beep is on the upside of a sine wave at the same time in both ears, that's the blue line. And the red line is the out-of-phase condition. So if you look at graph D, you can see that in response to the out-of-phase condition, the red crosses that line sooner, at a lower volume, than the blue. You've got to make the tone much louder relative to the noise in order to distinguish it when they're in phase. And then E is the same thing for a neuron that responds the other way-- that neuron increases its firing rate for both out-of-phase and in-phase signals. Yeah, that's done in honest-to-goodness guinea pigs. You can't stick electrodes into people. But the evidence seems to be that this is pretty much what happens with human neurons too. Good, cool. So there are neurons that respond to pulling a signal out of a noise situation.
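That in-phase versus phase-inverted tone-in-noise stimulus can be sketched the same way; the 500 Hz frequency, the levels, and the one-second duration here are illustrative guesses, not the study's parameters.

import numpy as np

SR = 44100
t = np.arange(SR) / SR  # 1 second

noise = 0.1 * np.random.randn(len(t))        # identical noise at both ears
signal = 0.05 * np.sin(2 * np.pi * 500 * t)  # target tone

# In phase: the same tone added at both ears (harder to pull out of the noise).
left_in, right_in = noise + signal, noise + signal

# Out of phase: the tone inverted at one ear, so where one ear gets a peak,
# the other gets a trough (detectable at a lower tone level).
left_out, right_out = noise + signal, noise - signal

stereo_in = np.stack([left_in, right_in], axis=1)
stereo_out = np.stack([left_out, right_out], axis=1)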
Who's up? Neural and cognitive bases of auditory streaming. And that's Naaman and Natasha, and Vladamir if he was here, but he's not. So you guys are on it.
AUDIENCE: OK. So basically, most neurons respond to different frequencies of sound. And so it basically refers to the [INAUDIBLE] on the first page, 1A, where there's two tones that are played. It's like the same type of [INAUDIBLE]. So when the difference between the tones is small, then the neurons that respond to one tone will respond quite a lot to the other tone as well. When they're really far apart, it will respond weakly. And if they're in the middle, it will respond a bit. And it said, if you speed the sequence up for an intermediate difference, then your neurons will tune to one of the tones. So it would reduce, like, the response to one of the tones-- it would respond more to A and respond less to B, in my example. And, OK. [INAUDIBLE] explain the [INAUDIBLE].
ABBY NOYCE: No, no, no, no, no, no. Yeah.
AUDIENCE: So that's [INAUDIBLE] continuity illusion, which is, basically, if a tone is played in-- or say a glide is being played. And then the glide stops and is replaced by some other noise, but there's no gap between the original sound and the interrupting one. Then even if the original sound isn't being [? played ?] behind the interrupting one, your brain will kind of just fill it in for you. And-- [INAUDIBLE] understand, like, the other part.
ABBY NOYCE: All right, so you've got a tone glide, right? And a noise that is masking a gap. So a tone glide is like, a sound that goes whoop! Or woo. Or something like that. It just slides right down. And if you take out the middle of it and put just like a burst of white noise, like [STATIC SOUND] over it, then what you'll hear usually, depending on a few things about it, is that you'll hear that the tone glide keeps going kind of behind the white noise. You'll hear that you're hearing both. And so they were looking for neural correlates of the continuity illusion which is neurons that would keep-- that would respond to the tone glide and keep responding behind the noise masker.
And they say there isn't a lot of evidence for this. And the thing with the behavioral response is that hearing that tone continuity depends on the masking noise having energy in the frequency region that the glide would have passed through-- the part you think you keep hearing. So the white noise mask has got to include energy at those frequencies. And if it doesn't, if it's missing those frequencies, then you won't hear the continuous tone behind it. So it depends on properties of the masking sound.
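That recipe-- a glide with a gap, where the gap is filled by noise that does or doesn't cover the glide's missing frequencies-- can also be sketched in numpy; all the parameters here are made up for illustration.

import numpy as np

SR = 44100

def glide(f_start, f_end, dur):
    t = np.arange(int(SR * dur)) / SR
    # Phase is the running integral of the instantaneous frequency.
    freq = np.linspace(f_start, f_end, len(t))
    phase = 2 * np.pi * np.cumsum(freq) / SR
    return np.sin(phase)

# A 1.3-second glide from 2000 Hz down to 500 Hz, with 0.3 s cut from the middle.
full = glide(2000, 500, 1.3)
gap_start, gap_end = int(SR * 0.5), int(SR * 0.8)

# Broadband noise covers the frequencies the missing piece of the glide
# would have had, so listeners tend to hear the glide continue behind it.
stimulus = full.copy()
stimulus[gap_start:gap_end] = 0.3 * np.random.randn(gap_end - gap_start)

Band-limiting that noise away from the glide's frequency region should, per the description above, make the continuity percept go away.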
And so they talk about one study-- Sugita, I'm not sure how it's pronounced-- looking for neurons that respond to this. And that response does not depend on the noise having the right frequency content. So it may or may not be a neural correlate of this illusion. And there's another group who think they have found it, because their neurons' response does depend on that frequency content.
AUDIENCE: What does it mean by [INAUDIBLE]?
ABBY NOYCE: All set? All right. Is it your turn to talk?
AUDIENCE: Yes.
ABBY NOYCE: OK.
AUDIENCE: So they were trying to see if the streaming was occurring. So they decided to use an [INAUDIBLE] that measures neural responses to sound sequences. And they're looking for the mismatch [? activity ?], which-- so if you hear [INAUDIBLE] sounds, which is a short sequence of sounds, then [INAUDIBLE] oh yeah, they presented it in the sequence of other [INAUDIBLE].
ABBY NOYCE: Right. So what would an example of like, a sequence of standards with one deviant sound be then?
AUDIENCE: They're [INAUDIBLE]. So is that like--
ABBY NOYCE: That's the-- so what they're using for a stimulus here is, like, beep beep beep boop beep beep beep beep. You've got a string of sounds that are all the same, and then one is different. And what they're looking at-- remember, EEG is measuring electrical activity in the brain, right? So this mismatch negativity is a negative wave. The electrical activity in the brain has a distinct negative pattern at a certain point in time after that deviant tone is presented.
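The oddball sequence being described-- mostly identical standards with an occasional deviant-- follows the same pattern; the tone lengths, gap, and 10% deviant rate below are assumptions, not the study's values.

import numpy as np

SR = 44100

def tone(freq, dur=0.1):
    t = np.arange(int(SR * dur)) / SR
    return np.sin(2 * np.pi * freq * t)

rng = np.random.default_rng(0)
pieces = []
for _ in range(40):
    # Mostly 500 Hz "beeps", with roughly 10% higher-pitched "boop" deviants;
    # the mismatch negativity is time-locked to those deviants.
    f = 700 if rng.random() < 0.1 else 500
    pieces.append(tone(f))
    pieces.append(np.zeros(int(SR * 0.4)))  # silent gap between tones
sequence = np.concatenate(pieces)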
AUDIENCE: All right. And then, they asked the patient, or, yeah, [INAUDIBLE] to identify a deviant. And they thought that if they measured the MMN to a deviant, they could see if there's streaming. So they found that the MMN one [INAUDIBLE] successive tone, so the [INAUDIBLE] like, continues.
ABBY NOYCE: Right.
AUDIENCE: Suggested that there was streaming when-- what was that, 100 ms?
ABBY NOYCE: Milliseconds.
AUDIENCE: Milliseconds. But when the SOA was 750 milliseconds, they thought that the [INAUDIBLE] should be weaker, and they found there's no MMN. So they thought that these differences in the MMN reflected differences in streaming. So they thought that this was occurring outside the focus of attention.
ABBY NOYCE: OK.
AUDIENCE: And then more recently, they thought that the MMN could be affected by attention. So they're not sure that people weren't, like, [INAUDIBLE] actually ignoring their instructions and actually listening to the sounds. But [INAUDIBLE].
ABBY NOYCE: Yeah. Fair enough. Questions? OK. Ladies?
AUDIENCE: [INAUDIBLE] OK. So they talked about, like, effects of streaming on [INAUDIBLE]. So this guy Jones and his colleagues had a bunch of people try to recall, like, visually presented [INAUDIBLE] while listening to a stream of tones. So the [INAUDIBLE] was disrupted only by tones that changed over time. Like, if they listened to one stream that had, like, two different beeps alternating, that would be distracting. But if it was just one tone repeated over and over again, they could just kind of tune it out. And it wouldn't really bother them. But also, if they listened to two different streams, each with its own repeated tone, it still doesn't bother them, even though [INAUDIBLE].
ABBY NOYCE: OK.
AUDIENCE: And OK. And [INAUDIBLE]. OK. And then [INAUDIBLE]
AUDIENCE: Then, for manipulating attention during the [INAUDIBLE] of streaming, the main focus was that a tone sequence starts as a single stream, like, beep, boop, beep, like what we said. After several seconds, it could eventually turn into two streams. So you could hear it separating, and it would be like, beep beep beep, and then boop, boop, boop-- like two frequencies. And in one condition, or test, there was a 20 second sequence of repeating ABA triplets, so two different frequencies were heard. And this was presented to subjects' left ears. And they had to tell the researchers how many streams they heard. And then in the second test, the subjects performed a series of tasks on a series of noise bursts in their right ear during the first 10 seconds--
ABBY NOYCE: So what kind of stimulus do they mean by a noise burst?
AUDIENCE: Like a sudden?
ABBY NOYCE: Yeah, like white noise. So it's a random assortment of-- it's like [STATIC SOUND]. A static noise is usually what they mean by that. So this isn't just noise in the casual sense of any sound. It's a particular kind of thing.
AUDIENCE: So this was presented in their right ear during the first 10 seconds of the sequence. So then they switched the tones to the left ear. And they had to make their own opinion on what they heard. And the researchers found that the streaming was as if the-- it was the start of the attended sequence, like, I don't know what that really meant.
ABBY NOYCE: Did you guys notice when we listened to the galloping streams, if you listen to the A and B one. I'll play it again. And I'm going to apologize for the way the screen's going to get all wonky. It starts out, and you can almost hear--
So sometimes with stuff like this, it will start out sounding like a gallop, like beep beep beep. And as you keep listening to it, it will eventually start to separate, so that later you hear it as two separate streams, a low stream and a high stream. And I don't know if anyone perceives that. But it's one pattern that people tend to get with these kinds of tones. And so that's what they're looking at. When you first hear it, you hear it as one stream, and you later split it into two streams. So if they play just the tones and then ask people what they perceived, people say, OK, we hear this as one stream at first and later as two. And then they played them the tones, asked them to spend 10 seconds attending to noise bursts in the other ear, and then shift their attention to the tones. And the tones had been there all along. But they weren't attending to them, because they had to do this other noise discrimination task.
And then they wanted to see if streaming-- their theory was, if streaming happens without attention, then it should sound like it would've 10 seconds into the control condition. But it didn't. It sounded like when they first started hearing the tones. So they're saying that this means streaming is not happening when you're not paying attention to it. Does that make sense? OK.
AUDIENCE: And then in a later study, researchers found that streaming can be disrupted even by non-auditory competing tasks. And the results show that attention can have a strong influence on auditory streaming. And some streaming can occur without full attention. But it can be strongly affected when subjects have to perform a demanding task.
ABBY NOYCE: OK.
AUDIENCE: Yup.
ABBY NOYCE: Can you paraphrase that in your own words Jen?
AUDIENCE: So even though auditory streaming can be affected by attention, some streaming can occur without your full attention. But the streaming can be affected while you're doing like, a really-- a demanding task that's competing with your senses, I guess.
ABBY NOYCE: Good, yeah. Cool. Non-auditory areas and auditory streaming.
AUDIENCE: Well, this part basically said that, like, [INAUDIBLE] is not [INAUDIBLE] like auditory processing could still play a role in streaming. Some people who had [INAUDIBLE] can't see things on their left side so well. Like, they also had reduced streaming for the sounds in their left ear. And the same thing happened when, like, different [INAUDIBLE] one specific area. And it also said, like, the intraparietal sulcus, the IPS, [INAUDIBLE] with perception of two streams. So yeah. So the [INAUDIBLE] IPS, like, changed depending on-- OK, so like, they would have beeps. And then sometimes, they would play, I guess, one stream of the same beep or, like, two streams of the same beep.
ABBY NOYCE: It was-- I think it was all the same sequence. But your perception of whether it was one stream or two separate streams would change. So you'd either hear it as a gallop or as a high stream and a low stream. And this sometimes can be, you know, like looking at a Necker cube, and it'll pop back and forth. A Necker cube is that wireframe cube, right, that can look either way, depending on how you're looking at it. You know what I'm talking about?
AUDIENCE: Oh like the boxing--
ABBY NOYCE: The boxing cube that's just the outline, and it can pop either this way or that way. Right. So this can be like that, where you can either hear it as just one stream or as two streams. And sometimes, it'll spontaneously shift back and forth. And they're saying that activation in the intraparietal sulcus, the IPS, depends on which perception people are having.
So they found a piece of the brain whose activation actually matches with, like, self-reported perception of one or two streams. Which is cool. Does that make sense? Sort of.
So they're just playing beep boop beep boop, probably as a gallop. So like, beep beep beep, beep beep beep, beep beep beep. And people can hear that either as one galloping stream-- da-duh-da. Or as two separate streams, one going beep, beep, and one going beep-beep-beep-beep-beep. Even though it's the same input, right?
And so what you might expect to see is that there are some neurons that would respond only to a single stream but not to two streams, or vice versa-- something in your brain that actually changes depending on which way you are perceiving it. And they found one.
So that the pattern of firing in this particular part of your brain changes, not because the input changed, but because how you're perceiving it changed. Or probably causes how you're perceiving it to change, might be a better way of thinking about that. That it's picking out a different kind of pattern. Yeah. Brains are like pattern recognition machines. It's what we do. OK.