12: Intelligence: How Do We Know You Are Smart?



Related Resources

Handout (PDF)

The following content is provided by MIT OpenCourseWare under a Creative Commons license. Additional information about our license at MIT OpenCourseWare in general is available at ocw.mit.edu.

PROFESSOR: The midterm marks sort of an interesting dividing point in the subject material of the course. It's not an absolute division by any stretch of the imagination, but it's a division. Up to this point we've been talking about things that we could measure of various varieties, like your memory span. You can remember 7 plus or minus 2 color names, you can see these colors, you can do this, you can do that-- in these cases the you has been a sort of a plural you and the data point of interest has been the average data point, the mean data point, what's normal, what's standard. We measure people's ability to memorize color names, and the average of that we've decided is going to be 7. There'll be some variation around that, but it's the 7 that's been interesting. That turns out not to be the case in the same way when you're talking about something like intelligence.

Particularly, intelligence testing-- announcing that on average people have average intelligence is kind of boring. What's interesting about intelligence and what is interesting about things like personality is the variation around the mean and the explanation of that variation. And that's what we need to start talking about now. Oh by the way, while it is of course, by definition the case that on average people have average intelligence, they don't believe it. If you ask people, do you think that you are more intelligent, less intelligent or about average intelligence compared to the rest of the population you'll get some distribution around that, too. But you'll find out that the average perception of intelligence is that we're all above average. This goes for a variety of other questions like, how good looking are you? Above average, below average, or average? Well, we're all a little above average there, too. It turns out that there is one group that gets the answer correct, that if you ask this group of people on average-- you brighter or dumber than average?-- they'll come out in the middle. You ask them, are you cuter or uglier than average-- they'll come out average. Anybody know who that group is? [UNINTELLIGIBLE] Hand, we need a hand. No hands, nobody who cares to speculate-- yeah, yeah. Yes, you with the computer there.

AUDIENCE: Kids.

PROFESSOR: No, I don't think so. I don't know what the answer is. Certainly it's not true for parents of kids-- all of whom know their children are above average.

AUDIENCE: Depressed people?

PROFESSOR: Yes, it's the depressed. Depressed people have an accurate assessment of their own intelligence and good looks. In fact, it has been seriously argued that part of what keeps us undepressed is an unrealistic assessment of the world. We're smart, we're good looking, we're going places. If you knew the truth we'd all be depressed. But that's the topic for another day. Like when you get the midterm back. Oh, I shouldn't have said that. But the midterm doesn't depress us actually, it provides a certain amount of lighthearted merriment for us you'll be happy to know because people do come up with some great answers to stuff.

So, as I say, we've been interested mostly in the mean value of measurements to this point. Now we're interested in the variation, the distribution of those points for a wide range of measures. If you measure a whole population of people and you count the number of people-- whoops-- who fall into each bin, you know, have a bunch of bins here-- different scores on a test, let's say-- you'll get one of these bell-shaped or normal-curve distributions. The questions that we're interested in here are questions about where that variability comes from. The variance of a distribution is the sum of the squares of the difference between the mean and the data points. So, got the mean, got a data point-- take that distance, square it, sum all those up and divide by the number of observations. That's the variance, and the square root of that is one standard deviation away from the mean, so this would be-- and those units of standard deviation are sort of the yardstick for how far away you are from the mean of a distribution. So if you take something like the SAT, for example, the SAT is scaled in such a way that the mean is intended to be at about 500. At least that's where they started, and each hundred points is one standard deviation away.
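The arithmetic the professor just walked through-- squared deviations from the mean, their average, and its square root-- can be sketched in a few lines of Python (a sketch; the function names and example scores are mine, not from the lecture):

```python
# Variance: average squared distance of the scores from their mean.
# Its square root is the standard deviation, the "yardstick" for
# distance from the mean of a distribution.
def variance(scores):
    m = sum(scores) / len(scores)                      # the mean
    return sum((x - m) ** 2 for x in scores) / len(scores)

def std_dev(scores):
    return variance(scores) ** 0.5

# SAT-style scaling: mean about 500, each 100 points one standard
# deviation, so a score's distance from the mean in SD units is:
def sd_units(score, mean=500.0, sd=100.0):
    return (score - mean) / sd

print(variance([2, 4, 4, 4, 5, 5, 7, 9]))   # mean 5, variance 4.0
print(sd_units(700))                          # 2.0 SDs above the mean
```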

What that means, for example, is that if you've got 700 or above, the number of people getting 700 or above on a properly normed SAT type test is about 2.5% of the population-- actually, that line would be at 1.96 as it shows on the handout if you want the 2.5% point. But roughly speaking two standard deviations above the mean gives you about 2.5% of the population. Three standard deviations above the mean, so that 800 score, gives you-- I can't remember-- it's a very small percentage of a normal population above the mean. That's at least the ideal for something like an SAT test. Works well for the SAT 1 Math and Verbal kinds of things. They are more or less normally distributed. The place where it's a disaster is things like the SAT 2. How many of you took the SAT 2? There are two versions of it, right? There's the hard version, the easy version. How many of you took the hard version of the SAT 2?

[INTERPOSING VOICES]

PROFESSOR: What? Oh, sorry, the math. There are two versions of the math. I got my jargon wrong because they used to be called achievement tests, they're now SAT 2s, right? Is it still AB and BC or something?

[INTERPOSING VOICES]

PROFESSOR: Well, whatever it is-- I don't care what it is. This is-- whoa, I'm losing my glasses now. Getting too excited here. The problem is I took the easy version. I took the easy version of it because I looked at this distribution and I realized that the whatever it is-- the fancy version of the SAT 2 actually has a distribution that looks like this, which is to say everybody who takes it gets an 800 on it and those of us, mathematical incompetents who were going to score-- I don't know, 760 or something, we were going to come out in the fifth percentile and not from the good side of the distribution. So I took what was then the achievement test because I knew that-- the math 1 achievement test-- because I knew that I could score in the upper end of the distribution. The other people taking that were people who couldn't add and subtract. So anyway, it worked for me.
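The tail fractions quoted earlier-- about 2.5% beyond 1.96 standard deviations, and the very small fraction beyond three-- can be checked against the normal curve directly. Python's math.erf gives the cumulative normal; the wrapper function here is my own sketch, not from the lecture:

```python
import math

def fraction_above(z):
    """Fraction of a normal population more than z standard deviations above the mean."""
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

print(fraction_above(1.96))  # about 0.025: the 2.5% point on the handout
print(fraction_above(2.0))   # about 0.023: roughly an SAT score of 700
print(fraction_above(3.0))   # about 0.0013: roughly an 800, the tiny tail
```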

Where does variability on tests like this come from? There are lots and lots of potential sources of variability. One important point to make at the moment is to note that things that cause variability across groups are not necessarily the same things that cause variation within groups. Let's give a silly example: if you grow a bunch of tomato plants and you vary the amount of fertilizer you put on them-- more fertilizer, bigger plants, right? So you can have something that shows you that you can account for some of the variability in the size of the plants from the fertilizer. Now if you grow some tomato plants and some redwood trees you get very different heights too, but the source of variability between tomatoes and redwoods is different than the source of variability within tomato plants. This becomes relevant in more subtle and interesting ways when you start talking about variation within population groups and across population groups on a measure like IQ or intelligence. So IQ is just a number on a test, but it's supposed to be a measure of intelligence. What is it actually measuring? Well, there's a certain circularity that's popular in the field, which is to say that intelligence is what IQ tests measure and what do IQ tests measure-- well, they measure intelligence. But it'd be nice to be a little more interesting than that. A little more interesting than that is to note that you can divide up this sort of intuitive idea of intelligence in various ways. One of the interesting ways to divide it up is into so-called fluid and crystallized intelligence, with IQ tests being a putative measure of fluid intelligence. Fluid intelligence is the sort of set of reasoning abilities that lets you deal with something like abstract relations. It is said to have reached its mature state, its adult state, by about age 16 or so. So you guys have pretty much leveled out on that.
Crystallized intelligence is more the application of knowledge to particular tasks and can continue growing throughout the life span. A particular example would be something like vocabulary. Your vocabulary is not fixed and with luck will continue to grow as you age. The idea is that IQ-like tests are more tests of fluid intelligence than they are of this crystallized intelligence, at least that's the intent. What is it that is this fluid intelligence?

One possibility is that when you talk about somebody being mentally quick that that's literally what you're talking about. That what differs between people who score high on these sort of tests and people who don't is simple speed of response. And it is in fact the case that simple reaction time is related to measures of IQ. People who bang a response key more quickly in a reaction time experiment are also people who score higher on IQ tests. Not a perfect relationship, not even a hugely strong relationship. And in fact, we probably want to look for something a little more subtle than that in understanding what intelligence might be. More interesting are claims that it has something to do with the constellation of operations we were talking about in the context of working memory earlier in the course. These sort of executive function-- the desktop of the computer of your mind kind of functions. How much stuff can you have up there and how effectively can you manipulate it? So one possible correlate would be something as simple as digit span or in this class, color span. How many color names can you name? I go red, green, blue. You say, red, green, blue, that sort of thing. Again, related. It tracks along with intelligence, but not all that well.

People like Randy Engle in Georgia have worked on creating tasks that they think capture this working memory executive function aspect of it better and that's-- I think I put on the handout this notion of active span tasks. These are tasks that are like the color name task, but a little more complicated. So the sort of thing that you might do if you were in Randy Engle's lab would be something like this-- what I'm going to get you to do is I'm going to read you a list of words and you're going to spit them back to me in order. And the measure that's going to relate to something like an IQ test score is going to be how many you can get back in order. But I'm not just going to read you the names. What I'm going to do is give you a pair-- an equation and a word on each presentation. Either on a computer screen or I can do this orally. So I might ask you to verify, is it true that 2 plus 3 minus 1 equals 4? And at the same time you see the word uncle. And so your job at the time is store uncle and verify the equation.

OK, next one would be 4 minus 3 plus 5 equals 6 and the word would be fish. So-- no, that one doesn't sound right. OK, now what were the two words? Uncle and fish. That's good. But if I was to do this without the long song and dance and go up to 4, 5, 6 of these, you would discover that you started losing them. And the number that you can hold while doing these calculations at the same time turns out to be a more powerful predictor of things like IQ scores. Again, not perfect, but getting closer, perhaps, to what the underlying substrate of what we mean by intelligence might be. That it might have something to do with how well you move things around in that mental space that we were calling working memory.
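A toy version of one of these active span trials is easy to sketch. This is only an illustration of the equation-plus-word structure described above-- the word list and the scheme for generating wrong totals are mine, not Randy Engle's actual task code:

```python
import random

WORDS = ["uncle", "fish", "chair", "cloud", "spoon", "river"]

def make_trial(rng):
    """One trial: an equation to verify plus a word to store for later recall."""
    a, b, c = rng.randint(1, 9), rng.randint(1, 9), rng.randint(1, 9)
    true_total = a + b - c
    # Show a wrong total half the time, so the equation really must be checked.
    shown = true_total if rng.random() < 0.5 else true_total + rng.choice([-2, -1, 1, 2])
    equation = f"{a} + {b} - {c} = {shown}"
    return equation, shown == true_total, rng.choice(WORDS)

def make_span_list(n_items, seed=0):
    """Build a list of trials; the score is how many stored words come back in order."""
    rng = random.Random(seed)
    trials = [make_trial(rng) for _ in range(n_items)]
    recall_targets = [word for (_, _, word) in trials]
    return trials, recall_targets
```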

Another possibility, not necessarily unrelated, but another possibility is that it has to do with the degree to which your brain is plastic-- not in the plastic kind of sense, but plastic in the modifiable sense. That maybe the ability to change and modify the structure of your brain is the neural substrate of intelligence. In any case, whatever it is, it is a useful predictor of a variety of things. It's a usable predictor of performance in school, which is in fact where it started. Binet in France-- that's B-I-N-E-T, but it's French, so it's Binet, in France-- started intelligence testing as a way of seeing which kids might need help in school. It is a predictor of a variety of things. So more IQ points, higher lifetime salary. More IQ points, less of a chance of a criminal conviction, less of a chance of teen pregnancy. So it's related to stuff that you'd like to know something about and you'd like to know as a result where it is that the variability comes from. Part of the class of statistical tests for where variances come from-- the effort to parcel it out-- you will have seen in a variety of the papers that you read, probably: a statistical test called ANOVA. It stands for Analysis of Variance. It's part of the statistical armamentarium that allows you to take the variance apart and say, some of it's due to this and some of it's due to that.

Now this is made simpler by the fact that variances are additive. So if you have two sources of variability, they add in a nice direct way. So if we've got the total variation and we are in a psychology course, one way to divide up the variance, at least theoretically, is into variance that's due to genetic factors-- our nice nativist component-- and variance that's due to environmental factors. In principle, if this is a good way to think about the variance, we should be able to go in and do experiments that allow us to see the genetic variance and the environmental variance and see how they add up to make the total variance. What you will often see in papers on intelligence, and actually widely in the popular literature on the genetic component of pick-your-favorite-function-- chance [UNINTELLIGIBLE], what is the genetic component to male fidelity or something like that-- you'll often see discussions of so-called heritability, usually written with a big H. Heritability is simply the-- whoops-- the genetic variance over the total variance, which in this story would be the environmental variance plus the genetic variance. There are difficulties with drawing conclusions from that that we will come to shortly, but the first thing I want to do before going on to that is to talk about the sort of data that are brought to bear to understand variability. Because a lot of this is so-called correlational data and it is important to turn you into educated consumers of correlational data, so that's why you have this lovely second page of the handout.
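Written out as code, the heritability ratio on the board is just the following (a sketch of the simple additive two-component model, with no interaction term; the example numbers are invented):

```python
def heritability(genetic_var, environmental_var):
    """H = genetic variance / total variance, where the total is
    genetic + environmental (variances add in this simple model)."""
    return genetic_var / (genetic_var + environmental_var)

print(heritability(3.0, 1.0))  # 0.75: three quarters of the variance "genetic"
```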

Correlation is simply a way of describing the relationship between two variables measured on the same subject. So let's take the silly example in the upper left there. If I took everybody here and measured your height in inches and measured your height in centimeters and for each person-- now each person generates a data point-- those data points had better lie on a straight line, right? Those are two measures that are really seriously correlated, one with the other. If I know inches, I know centimeters. Correlation coefficients are calculated like this: you calculate the line that best fits the data, and then you calculate how strong your correlation is by looking at the distances of data points away from that line. In this case, all the data points lie on the line and that leads to a correlation coefficient-- usually written as little r-- of 1.

A correlation coefficient of 1 means if I know this variable then I know this variable perfectly. Now those are really boring data. Nobody spends a lot of time working out correlation coefficients for that. You work out correlation coefficients for data where there's some variability. That's the whole point here. So the second one is height and weight-- actually, taken from a subset of a 900 class some years ago. If you measure height and weight you now get data points that are clustered-- whoops, broken hunk of chalk-- around some line. If you calculate the correlation coefficient now it's going to be something less than 1, but still positive. So greater than zero, less than 1. What is it actually on the handout? Like 0.78 or something? The relationship between height and weight is pretty strong. So if I know that you're 6 foot 5 I don't know your weight exactly, but I can make a nonrandom guess about that weight. On the other hand, if I know your height-- let's plot, OK, we still have height there. So let's plot last two digits of social security number against height. It is my firm belief-- I don't know this to be true not having collected the data, but it is my firm belief that those data are just a random cloud of spots, right? I don't think there's anything about social security number that's related to your height, I hope not.

So, if I know your height I know squat about your social security number. That is a correlation of zero. Correlation can go below zero, but below zero is not worse than zero, it just tells you the direction of the correlation. So let's go back to the inches-- OK, we'll stick with height. So this time I'm going to measure-- get everybody here-- I'm going to measure their height and I'm going to measure the distance from the top of their head to the ceiling, very exciting data collection. I'm going to get orderly looking data, but this time it's going to look like this, right? I've just changed the slope. That's going to give me a correlation in this silly example of minus 1. It's a perfect correlation, but the direction of relationship is the other way. If I know distance from the floor I absolutely know distance from the ceiling. It's just that as one goes up the other one goes down. Again, that's a really boring example. A more interesting example is on the handout, which is if you come into my lab and we do a reaction time experiment and I plot your average reaction time in whatever the task is and I plot your error rate, what I will find across individuals is data that are noisy, but look like this. The faster you go the more errors you make. It's known as a speed-accuracy trade-off in the literature, which you will notice has an interesting set of initials, so SAT gets written about in my trade all the time, but has nothing to do with standardized tests; it's a speed-accuracy trade-off. But it will produce a negative correlation. I think I put one of those on the handout, too. What's the correlation there?

AUDIENCE: Negative 0.52.

PROFESSOR: OK, so negative 0.52, meaning a pretty good relationship, but going in this negative direction. You will often see in papers r squared rather than r. r squared of course is always positive. So r squared here would be what, about 0.26 or something? The reason people use r squared is it turns out that that gives you the percentage of the variance that you're explaining. So if you've got a correlation of 0.5 you can explain a quarter of the variability. If you know one, you know about where a quarter of the variability would come from in the other variable in this case. Now it should say on the handout-- oh actually, I think I put the answer on the handout this time too. I'm not sure it's really the most important thing to know about correlation, but the thing that gets lost in discussions in psychology all the time, certainly in discussions of intelligence, is that correlation does not tell you about causality. It is extremely tempting-- well, it's not extremely tempting in these examples, right? Do inches tell you about centimeters? Yeah, but not because inches cause centimeters-- that's stupid. But it's really tempting to infer causality directly from nothing but correlational data.
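The correlation coefficients in these examples can be computed directly. Here is a minimal Pearson r in Python (a sketch; the height numbers are invented to mirror the inches/centimeters and head-to-ceiling examples):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired lists of measurements."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

heights_in = [60.0, 65.0, 70.0, 75.0]
heights_cm = [h * 2.54 for h in heights_in]   # r = 1: inches determine centimeters
to_ceiling = [120.0 - h for h in heights_in]  # r = -1: as height goes up, this goes down

# r squared is the fraction of variance explained: r = -0.52 explains
# (-0.52) ** 2, about 27%, of the variability in the other variable.
```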

I put fertilizer on the field, the tomato plants grow bigger, so if I plot amount of fertilizer in yellow now-- cool-- against height of tomato plants I presumably get data that look like this, some nice positive correlation. I may have a notion, I may even have a correct notion, that the fertilizer is the cause of this change in the height of the tomato plants, but just these data don't tell me that. I would need more. I'd get exactly the same data if I flipped the axes, right? Height of tomato plants, fertilizer. Get exactly the same data. The height of the tomato plant causes how much fertilizer I put on it? No, that would be stupid. You have to impose a theory on your correlational data. The correlation's just math. It's not by itself a theory. It can be used to support causal theories, but it is not by itself a causal theory. I'll try to point out later where this runs into trouble.

OK, so we can get correlational data. Correlational data are very important data in the study of variability in intelligence, so you might get data like the following: let's plot parents' IQ against child's IQ. What you'll get is another one of these clouds-- I think it actually says on the handout that the r value is about 0.5 for that. So it is the case that if you know the parents' IQ you know something about the child's IQ. And the goal is to figure out why, where's that relationship coming from? The obvious temptation of course is to think that the answer is, well, the parents have good genes or bad genes-- they have some amount of intelligence coded into their genes, they pass it on to their kids. To understand why it's not completely trivial, recognize that parental wealth is correlated with child's wealth not because there's money coded into the genes. It is true that the richer your parents are the richer you're likely to be-- you do inherit that, but in a different kind of non-genetic sort of way. As I say, the simple story has been to try to partition the variance into a genetic component and an environmental component. The simple version of that leaves out an important piece of it, which is that if you do an analysis of variance-- everybody should do an analysis of variance sometime. The computer will do it for you on your data now, but everybody should really do one by hand-- back when we were young we had to do them by hand, which took a long time.

But anyway, when you do that, suppose you've got two variables, like a genetic component, an environmental component. An analysis of variance will tell you-- this statistical test, think this much of the variance is due to this and this much is due to this. But it'll also give you an interaction term to say that the genetics and the environment might interact in some fashion. Interestingly that term never gets introduced into these calculations-- doesn't quite get introduced in these calculations of heritability, but let me try to give you a feeling for why that's important. Let's move off of intelligence and ask about a sort of a personality variable. How anxious do tests make you? You've now taken a bunch of MIT tests, how anxious do you get? Well, let's develop a little variance partitioning theory here. Let's not develop that theory, let's develop this theory. Whoops-- oh this looks like a great theory. Get rid of that theory, that's got too much fancy stuff in it. We will have a simpler theory.

So the simple theory might be that the total variance, the total anxiety, is a function of the variance due to you-- to your personality-- are you an anxious person or an unanxious person?-- and the variance due to the class. You know, if you're taking introduction to clay ashtrays, that doesn't make you that anxious, and if you're taking advanced thermonuclear chemical integral thermodynamical bio-something or other and you skipped all the prereqs-- yeah, it makes you a little anxious. And you could try partitioning your variance into these two components. But it turns out that's not where the action is. The action in anxiety about tests is all in the interaction term, which we can call the you-crossed-with-class interaction term. Because how anxious you are depends on how you are doing in this class. You know-- I'm sure it matters if you're basically a nervous person or not, but you're going to be nervous in intro psych if the midterm's coming up and you never cracked the book. You might be less nervous in calculus because you're good. Your neighbor might have memorized Gleitman and forgotten to show up in calculus and have the flipped anxiety; the anxiety is dependent on the interaction. And that interaction term-- I mean, you will almost always get a nice nod to the notion that of course we know in something like intelligence that the environment and genetics interact, but we will typically not get much more than a nod at that. You then get simple minded statements about just how heritable something is. Let me give you an example from the intelligence literature from a new and I gather somewhat controversial study-- controversial in the sense that there are other people who claim that the methodology is flawed and that they don't believe the results. But this is new enough that we don't know how it shakes out, but it makes the point about the importance of interaction terms here.

Suppose you calculate how heritable intelligence is, but we're going to do it separately as a function of, oh here, let's introduce a little more jargon. SES stands for socioeconomic status. It's the fancy way of asking how much money and other resources you've got. If you look at high SES kids and ask, how much of a genetic component is there to IQ? You get estimates, as I recall, it's something like 0.6, 0.7 something like that. A relatively high number for that H, that heritability number. If you look at low SES kids, in this particular study it was dramatically lower, about 0.1. What's going on there? Could it possibly be the case that the genetics are operating differently at the low end of the economic scale than at the high end of the economic scale? I mean, this implies that if you plot something like parent's IQ against kid's IQ-- one of the ways of looking for a genetic contribution-- that at the low end of the economic scale that the data look like a cloud and at the high end they look closer to the line kind of data. That's very odd.

What really seems to be going on is an interaction between environment and genetics influencing this heritability thing. What's going on? Well, here, look at this equation and ask yourself what happens if you drive the environmental variance to zero? Well, you're going to drive this up towards 1. It's just going to be the genetic variance over the genetic variance. It may be that from the point of view of intelligence, or what's measured on IQ tests, that by the time you get into the middle class the environment is largely homogeneous. Everybody gets fed, everybody goes to school, everybody has books, and everybody has access to medical care. Most people come from family situations that are at least not vastly problematic. At the lowest end of the socioeconomic scale that's not true. It's not true that everybody has the same access to resources, and this environmental variance term is much larger, with the result that this measure of the apparent heritability, the genetic component if you like, comes out smaller. So you see how the interaction can end up being important.
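Numerically, the point about driving the environmental variance to zero looks like this (same ratio as on the board; the variance numbers are invented to mirror the 0.1-versus-0.6 estimates):

```python
def heritability(genetic_var, environmental_var):
    # H = genetic variance over total variance, simple additive model.
    return genetic_var / (genetic_var + environmental_var)

# Hold the genetic variance fixed and progressively homogenize the
# environment: H climbs toward 1 even though the genetics never change.
genetic_var = 1.0
for environmental_var in (9.0, 3.0, 1.0, 0.1):
    print(environmental_var, round(heritability(genetic_var, environmental_var), 2))
```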

All right, let's go back to this example of parents and kids. Across the population as a whole that relationship gives you an r value of about 0.5. What does that mean? Well, we don't know really, because while you share some genetic material with your parents-- assuming you're not an adopted child, we'll come back to that in a minute-- if you share genetic material with your biological parents, if you were raised with your biological parents you also share a lot of environment with them. The two factors co-vary. In order to sort of separate this out what you want to do is get the two factors to vary at least somewhat independently. Now the little table on page 3 of the handout gives you one effort to do that. Let's look at sibling pairs who vary systematically in the amount of genetic material they share.

So identical twins are genetically identical and they have very high correlations between their IQs, a correlation of around 0.9. Fraternal twins-- same sex fraternal twins, I should've added-- have a correlation that drops to about 0.6; well, they are only as genetically related as standard siblings, and so you reduce that genetic component, you reduce the correlation. Siblings-- age-different siblings, standard brother-sister, brother-brother kinds of pairs-- have a correlation of 0.5, and if you have unrelated children raised in the same family that correlation drops to about 0.2. So that certainly indicates a genetic contribution to IQ, but it's not entirely clear what's going on because again, environment and genetics are co-varying, one with the other. Identical twins get treated more similarly; they have a more similar environment than fraternal twins. Fraternal twins by virtue of being the same age are treated more similarly than age separated siblings. And typically, adopted children are treated somewhat differently. Children who have been adopted into a family have experiences that kids born within the same family don't have and vice versa. So again, there's more of a difference there. So environment and genetics are co-varying, so it's not a clean experiment.

One way to do the clean experiment is to take a jumbo jet full of identical twins at birth, put little parachutes on them, fly around the world and push them out at random. Come back 18 years later after fluid intelligence has reached its asymptotic level, collect all your twins and see what the correlation is. This is, for a variety of reasons, a difficult experiment to actually do and so we don't have the data on that. But there is a literature on identical twins reared apart. It's not common, but it does happen. It happens when either both twins or one of a pair of twins gets put up for adoption. The one of a pair of twins thing may sound a little strange, but this happens for instance, typically at the lower end of the socioeconomic scale when you're thinking, oh my goodness. I think we can just barely manage this kid when he or she is born and there's two of them. And so one of them gets put up for adoption. It's rare, these things are rare. But it does happen, and a variety of efforts have been made over the years to collect data on such. Some of this has less to do with IQ than with the sort of general personality variables. There's a whole lot of amusing, though god knows what to make of it, literature on identical twins who only meet their twin-- didn't realize they had a twin until they're adults and then they meet and oh my god, they both married women named Gladys and they both have dogs named Gerald Ford or something like that. That's a little strange. It's a little hard to believe that [? Gladys-ness ?] was wired into the genes, but weird coincidental things happen.

The identical twins reared apart stuff is, again, not perfect data. Because for instance, twins that are separated at birth are sometimes separated by going off to live with aunt so and so, who lives in the next valley or something like that. Is that really separated? Hard to know. Oh, the place where you do these studies-- well, there's one thing on the handout about the Minnesota twins study because those guys are at Minnesota, but the place you want to do these studies is Scandinavia. Not because they take your twins apart a lot in Scandinavia, but because boy, do those guys keep records. You get yourself some idea that you want to know where all the twins who were separated at birth are in Denmark and there's somebody in the Danish bureaucracy who could get that answer for you. I mean, in the U.S. bureaucracy you're lucky if they can figure out-- oh, I don't know-- where your 300 tons of the Iraqi explosives are or something like that, but in Denmark they'll know where your stuff is. The previous remark should not be construed as being paid for by any particular political campaign. It's just what bubbled into my fevered brain here.

But in any case, when you do the identical twin reared apart study-- people yell and holler about these, but the best of these data look like a correlation of around 0.7 or so. Significantly lower than the 0.9 for identical twins reared in the same family, but clearly indicating some sort of a genetic component to what is being measured by IQ. So I think it's been a very hot topic, for a variety of reasons, many of them not well formed, to ask how much of a genetic contribution there is to IQ. And it's been a question where the answer that you're looking for is heavily driven by your politics as much as by your science. In part-- well, look on the handout. There are a bunch of pitfalls to this idea of heritability. Jump down to number 3. Number 3 is perhaps the reason that it's been most politically loaded. There are people and there are groups whose IQs, on average, are lower than other groups' or other people's. If you believe that there's a strong genetic component to IQ, and you believe this item 3 here-- that things that have strong heritable components are largely unmodifiable-- then what you're saying is that if you've got a low IQ, that's sort of your tough luck. There's nothing much that we as a society could do about it. This was the thesis of-- actually, it's been a thesis of a repeated series of books.

The most famous recent one is a book called The Bell Curve by Herrnstein and Murray and the basic argument ran like this, there's clear evidence for heritability of IQ. IQ is not very modifiable. IQ is correlated positively with good stuff and correlated negatively with bad stuff. And that this country is an IQ stratified country. Another way of putting it is a meritocracy. It's not that you get born the Duke of Cambridge or something like that, you get to rise to the top because you've got this great IQ that gets you to MIT, that gets you the good job and then you become President of the United States or something like that. Interesting. Or you get this great IQ and you come to MIT and you hang around in the infinite corridor making [? boingy ?] noises. That's different.

Anyway, if you take-- I don't know, that 4 points or something-- you take those 4 points and you add them up, what you get to-- somebody over there take a little walk down the hall and ask [? boingy ?] to cool it. Thank you. What you get to is the notion that there are some people who are going to be at the bottom of the stack in America. And that's just the way it's going to be, you know? They're going to be the criminals, they're going to get pregnant, they're not going to make any money and that's just tough. You know, there's nothing much to be done about it. I'm oversimplifying the book. It ought to be fairly obvious that I disagree strongly with that thesis. But I should say, it's not a stupid book. If you're interested in these issues it's worth reading the book in the way that you know, if you're on the left politically it's worth listening to right-wing talk radio to kind of sharpen your brain and if you're on the right politically it's worth listening to left-wing talk radio to sharpen your brain as long as it-- well actually talk radio's probably a wrong example because it really is pretty stupid; most of it. Herrnstein and Murray aren't stupid. I think they're wrong, but they're not stupid.

In any case, let me give a non-intelligence example of why it's a pitfall to think that these things are fixed and unalterable just because they're inherited. Did it work? Oh, you weren't the person looking for-- it's not the same guy. Oh, this is a lovely change blindness thing, right? It's perfect. A guy went out, the [? boingy ?] stopped, the guy came back. Unfortunately I guess, the [? boingy ?] guy has killed the guy who went out. This is really sad. We could send another one out, but where was I? OK, so things can be inherited, things related to intelligence can be inherited, and nevertheless be quite changeable. PKU is a disorder where you are unable to metabolize one of the basic amino acids, phenylalanine, and the result is that it produces neurotoxins. It munches up your brain-- kids with this disorder, it's a genetic birth defect type of disorder-- kids with this disorder, before it was understood, were condemned to severe mental retardation. Once it was figured out what the problem was, you could prevent the severe mental retardation by controlling diet. Basically, by controlling an environmental factor. You take the precursor out of the diet, you don't produce the neurotoxins, you don't produce the mental retardation. It doesn't mean that the disorder was any less heritable. It's a birth defect. Are you the guy? Is he the guy? Thank you. What was it?

AUDIENCE: It was just someone walking through the hall making noise. He left.

PROFESSOR: OK, he's not going to need medical care or nothing, right?

AUDIENCE: He's OK.

PROFESSOR: OK, good. No, I was worried that you might have worked him over or something. Thank you for taking care of that in any case. So you can have something that's clearly genetic in origin, where an environmental change might make a difference. PKU is simply a dramatic example of that. Let me just take a look. OK, so I've already hit the pitfall 2, the notion that the interaction term might be important. The evidence for that might be this low socioeconomic status, high socioeconomic status influence on the apparent heritability of it. And I really also already hit that first pitfall there. High heritability might just mean that there wasn't much variation in the environment in your particular experiment. If you grow all the tomato plants in exactly the same field with the same water, the same sun and the same fertilizer, there will still be some variability of course. You'll get a heritability score that will be near 1, but that's because you've driven this term down to zero or near zero. And that doesn't prove that the environment is unimportant. So the fact that there is a significant genetic component doesn't mean there is not a significant environmental component.
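The tomato-plant point-- that a heritability score near 1 can just mean you removed the environmental variation from your experiment-- can be sketched in a few lines. This is a toy simulation, not any particular study's model; all the numbers are invented for illustration.

```python
import random

random.seed(0)

def heritability_estimate(env_sd, n=10_000):
    """Simulate phenotype = genetic value + environmental deviation,
    then estimate heritability as Var(genes) / Var(phenotype)."""
    genes = [random.gauss(0, 15) for _ in range(n)]          # genetic values
    pheno = [g + random.gauss(0, env_sd) for g in genes]     # phenotypes

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    return var(genes) / var(pheno)

# Uniform environment (all tomatoes in the same field): heritability near 1
print(round(heritability_estimate(env_sd=1), 2))    # ~0.99
# Variable environment: the same genes now explain only about half
print(round(heritability_estimate(env_sd=15), 2))   # ~0.5
```

The genes never change between the two runs; only the spread of environments does, and the heritability number moves anyway. That is the sense in which a high heritability doesn't prove the environment is unimportant.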

OK, take a quick stretch, make [? boingy ?] noises.

AUDIENCE: [UNINTELLIGIBLE]

PROFESSOR: That's good. Thank you.

AUDIENCE: Hi, we actually decided that we were going to go over the midterms in class.

PROFESSOR: Yes, of course. Go ahead.

AUDIENCE: OK, but my class is right after this.

PROFESSOR: No, go ahead. Absolutely. No, this is in case-- there are all those Society for Neuroscience TAs. No, by all means. I want to cover for them, but not to discourage you.

Looking at the time and looking at my notes, I realize I'd better get cooking here. So let me cook here. What I want to do is to tell you a little bit-- well, let me frame this, since I've already introduced the Herrnstein and Murray book. The most controversial piece of the Herrnstein and Murray kind of story is when you start getting to group differences and you say, look, in America at the present time the average African American IQ score is about 10 points lower than the average white American score. That's data. And what you care to make of those data is very important from a policy point of view. Herrnstein and Murray were making the argument that since genetics has this big role, and these are things that are unalterable, the fact that blacks in the U.S. also, on average, make less money and so on-- you know, hey, man, that's just science and genetics. And the fact that we're a meritocracy and good things like that-- live with it. What I want to do first is to describe a couple of what are really sort of amusing historical anecdotes about the history of intelligence testing, drawn from Stephen Jay Gould's book The Mismeasure of Man, and I will answer the question on the handout already: what's the point of these amusing stories?

The point is not to make fun of folks operating 100 or 150 years ago. The point is to make us take with caution our understanding of similar answers that we're getting today. So for example, who has a higher average IQ: men or women? How many vote that it's men? How many vote that it's women? How many vote that it's equal? How many vote that I ain't touching this because I smell a political question when I can-- how many vote, I don't vote? That's the wrong answer. At least, next Tuesday. This ad paid for by the League of Women Voters.

Just stepping aside here, it's sort of a pain often to vote if you're an undergraduate, right? Because you're way away from whatever district you're supposed to vote in, but go and vote. If for no other reason than nobody polled you, and you can screw up all those polls that claimed they knew the answer to the election one way or the other-- which they don't know-- by voting, because when they called your parents you weren't home. So anyway, you should vote. Oh, yeah, so men and women-- the answer is that men and women have the same average IQ. But the interesting question is why that's the case. Why is it the case?

Well, back at the beginning of the 20th century, when IQ tests came from France to America, how do you build an IQ test? What you do is you make up some questions that you think might be reasonable measures of intelligence, you give them to a bunch of kids-- because this was originally a school testing kind of thing-- and then you sort of ask, are these sensible questions? Like, do the kids whom the teacher thinks are smart do well on them? And you work from that. And so they're working on standardizing the original tests, and on the original drafts of the test girls were scoring about 10 points higher than boys. The guys who made up the test-- literally, the guys-- knew that this was wrong. Now it is to their credit in the early 20th century that they knew a priori that the correct answer was that men and women were equal. It would have been no big surprise at that point to have them figure out that men should have been scoring higher, but they knew that the answer was that men and women were equal. And here's what they did: you do what's known as an item analysis on your test. You look at all the questions and you say, oh look, guys did much better on this question than girls did. Oh look, girls did much better on this question than guys did. You know what we're going to do with this question? We're going to throw it off the exam. You know, we're making up the exam as we go along. And so they tooled the exam to get rid of that 10 point difference, and subsequent tests are standardized against the older tests. That's why men and women have the same IQ. Yes?
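The item-analysis move being described-- compute each question's pass rate by group, then cut the questions with big gaps-- is mechanical enough to sketch. Everything here (the item names, the pass rates, the gap threshold) is invented for illustration; none of it comes from the actual test standardizations.

```python
# Hypothetical per-item pass rates (fraction answering correctly) by group.
items = {
    "vocabulary_1": {"girls": 0.81, "boys": 0.62},   # big gap -> dropped
    "arithmetic_3": {"girls": 0.55, "boys": 0.57},   # small gap -> kept
    "mazes_2":      {"girls": 0.48, "boys": 0.70},   # big gap -> dropped
    "memory_4":     {"girls": 0.66, "boys": 0.64},   # small gap -> kept
}

def balanced_items(items, max_gap=0.10):
    """Keep only the items whose between-group gap is below max_gap."""
    return [name for name, scores in items.items()
            if abs(scores["girls"] - scores["boys"]) < max_gap]

print(balanced_items(items))  # ['arithmetic_3', 'memory_4']
```

Run the surviving items as the test and the group difference is gone by construction-- which is exactly the sense in which "men and women have the same IQ" is a design decision rather than a discovery.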

AUDIENCE: How do you factor motivation into this? Aren't little girls more focused in general--

PROFESSOR: Oh, there's a whole lot of-- you know, I don't know when it is that guys finally get focused. And I don't have much direct experience with this, having had 3 unfocused guys myself in my family. I always wanted one of those focused girls who just-- anyway. There's all sorts of stuff like that. That's a whole other course. But what they did was they made the difference go away by manipulating the test. Now these tests were hardly the first efforts to study intelligence. Actually, in an interesting circular movement, these days there's great interest again in looking at the brain and saying, is there something about smart brains that's different from less smart brains? We can do that now; the renewed interest comes from the fact that you can now go and look at brains in walking, talking people. The previous boom in looking at brains was in the 19th century. The reason Binet developed his test was that looking at brains was only good at autopsy. You're not going to figure out if your kid needs help in school by cutting his head open and looking at his brain. In the 19th century it's just not a really practical solution. But people like Paul Broca of Broca's area-- famous French neurologist-- spent a lot of time looking at the brains of their deceased colleagues and others to see what made people talented. Broca's doctrine, translated from the French but quoting, is that all other things being equal, there is a remarkable relationship between the development of intelligence and the volume of the brain. He was interested in the size of the brain. To a first approximation, he's got to be right.

One of the reasons there are no chickens enrolled as MIT undergraduates is because little chicken brains just don't cut it when it comes to surviving at MIT. So having quantity of brain really does make some difference. So what he did was to look at the sizes of brains. And his conclusion was that in general, the brain is larger in mature adults than in the elderly, in men than in women, in eminent men than in men of mediocre talent, and in superior races than in inferior races. This reflects very much the biases of a 19th century European, but in fact that's sort of the point. It's not to say ooh, Broca was an evil, nasty man, but you should be a little suspicious when your science places you at the pinnacle of creation. That should be a little warning bell that goes off. The inferior races, by the way, for Broca were the Chinese, the Hindus-- that's subcontinental India-- and blacks. I can't remember if Jews made it onto his list or not. But in any case, it was sort of the 19th century collection of not white males. But how did Broca get to this conclusion? Well, he weighed a lot of brains, and what he discovered was, in fact, true: female brains are on average a little lighter than male brains. All right, so? Chicken brains are lighter than your brain, chickens don't go to MIT; female brains are lighter than male brains. But he also found that French brains were lighter than German brains. Broca was French. So he wrote,

"Germans ingest a quantity of solid food and drink far greater than that which satisfies us. This, joined with his consumption of beer makes the German so much more fleshy than the Frenchman. So much more so that the relation of brain size to total mass far from being superior to ours seems to me, on the contrary, to be inferior."

Well, what he's doing is not stupid. He's correcting for body size, and there's a very interesting graph that you may have seen someplace or other. If you plot body weight against brain weight-- I can't remember if it works just for mammals or for sort of everybody, but anyway, let's claim it's mammals-- you get a very tight correlation, all the way up from mouse to whale. Except that humans are up here somewhere. They're off the curve. A lot of people think whales must be these philosophers of the deep-- they've got a huge brain. Nobody quite understands what they're doing with that great big brain, but they do lie on this function. Humans have a very big brain relative to their body size. So this is the point at which some-- typically, a woman-- in the class is supposed to raise their hand and say?
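That brain-versus-body graph is a straight line on log-log axes, and being "off the curve" just means having a positive residual from that line. Here's a rough sketch: fit the line on the non-human species, then ask how far above it the human point sits. The masses are approximate round figures for illustration, not precise measurements.

```python
import math

# Approximate adult body and brain masses in grams (rounded figures).
species = {
    "mouse":    (2.0e1, 0.4),
    "cat":      (3.3e3, 30.0),
    "sheep":    (5.5e4, 140.0),
    "elephant": (5.0e6, 4_700.0),
    "whale":    (5.0e7, 7_000.0),   # sperm whale, roughly
}
human = (6.5e4, 1_350.0)

# Least-squares fit of log(brain) on log(body) for the non-human species.
xs = [math.log10(body) for body, _ in species.values()]
ys = [math.log10(brain) for _, brain in species.values()]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

# Residual: how far above the mammalian line the human point sits.
predicted = intercept + slope * math.log10(human[0])
residual = math.log10(human[1]) - predicted
print(f"slope ~ {slope:.2f}, human residual ~ {residual:.2f} log units")
```

With these rough numbers the human point comes out the better part of a log unit above the line-- several times the brain the mammalian trend predicts for a body our size-- which is the quantitative version of "humans are off the curve."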

AUDIENCE: Females are smaller.

PROFESSOR: Yeah. Females are smaller than males, so if we correct for body weight-- well, Broca wasn't stupid, he knew that. And he wrote, "we might ask if the small size of the female brain depends exclusively on the small size of her body. We must not forget," says Broca, "that women are, on average, a little less intelligent than men. We are therefore permitted to suppose that the relatively small size of the female brain depends in part on her physical inferiority and in part on her intellectual inferiority." Now those 2 quotes from Broca are taken from completely different publications. Even Broca might have noticed, if you put them right next to each other, that there's a certain conflict there. Again, the point isn't to say Broca was a mean, nasty, stupid man. There's no evidence for any of that, but there's clear evidence that what he thought he knew ahead of time was influencing how he was reading his data. The brain weighing endeavor sort of fell apart because it didn't work all that well. You know, people like Gauss-- the scientist who gives his name to that unit of magnetic whatever it is, and Gaussian curves and stuff like that-- turned out to have a small brain. It had a lot of wrinkles in it, though. Lenin-- the Communists were always big on carving up the brains of their ex-leaders. Lenin was reputed to have a cortex-- we all have a cortex, it's got 6 cell layers in it-- Lenin's allegedly had 7. Again, the point of these stories is not to say that we're smart and they were all stupid and silly people. The point is to say that people tend to impose their ideas on their data as well as imposing their data on their ideas, and you've got to keep an eye open for that.

In the remaining time, let's ask a bit about this question: suppose we really do think that more IQ is better. Is there any evidence that we can get more? You know, is there a way to get more IQ? The answer appears to be yes, though some of it's kind of mysterious. The mysterious bit is on the handout-- on the back page, I see; well, on page 4 at least-- as the Flynn effect. James Flynn is a political scientist sitting down at the University of Otago at the bottom of New Zealand, which nobody had ever heard of until the "Lord of the Rings" movies, but you don't get further away unless you go to Antarctica. What he did was he took a look just at raw IQ statistics, initially in the Pacific Rim countries, in the period shortly before and since World War 2-- a couple of generations' worth. And what he found-- well, it's summed up in the title of his paper-- was massive IQ gains in these countries.

So for example, back a decade or two ago when the Japanese economy was booming, people liked to point out-- people given to these group IQ arguments liked to point out-- that the average Japanese IQ was about 10 points higher than the average American IQ, and the reason they're eating our lunch is because they've been breeding with each other for years, you know? And they're just smarter people than we are and we're all doomed to serve the Japanese forever. Well, that argument has kind of gone away since they went into a 10-year recession or something. But the more interesting point, from our point of view, is that apparently they got smart really fast, because before World War 2 the average Japanese IQ was about 10 points lower than the U.S. IQ. There is no genetic story that makes that work. And this happens in Pacific Rim country after Pacific Rim country: after World War 2, IQs just go way up. And subsequently it turns out this is happening all over the world. In fact, it had been mapped in the U.S., because IQ tests keep getting renormed. The average IQ in the U.S. keeps drifting higher, and then, because average IQ is defined as 100, you renorm the test so that the average is 100 again.
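The renorming step just described is simple to make concrete: each new standardization sample is rescaled so that its own mean lands at 100, which is exactly how a real raw-score gain disappears from the reported scale. The cohorts and scores below are invented for illustration.

```python
from statistics import mean, stdev

def renorm(raw_scores, target_mean=100, target_sd=15):
    """Rescale raw test scores so the norming sample averages 100 (sd 15)."""
    m, s = mean(raw_scores), stdev(raw_scores)
    return [target_mean + target_sd * (x - m) / s for x in raw_scores]

# Hypothetical raw scores from two norming cohorts a generation apart;
# the later cohort answers more items correctly on the same test.
cohort_1950 = [38, 42, 45, 47, 50, 53, 55, 58, 62]
cohort_1980 = [x + 6 for x in cohort_1950]   # raw gain of 6 points

# Each cohort is normed against itself, so both come out averaging 100
print(round(mean(renorm(cohort_1950))))  # 100
print(round(mean(renorm(cohort_1980))))  # 100
```

The raw gain is real, but the published scale hides it by construction-- which is why the Flynn effect only shows up when someone like Flynn goes back and compares raw scores across normings.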

Flynn wrote a wonderful paper, which I didn't cite on here-- but if anybody wants it, send me an e-mail-- where he was having fun with these statistics, proving that if you took these statistics literally and just extrapolated back, you would conclude that our founding fathers were all congenital idiots who were probably too dumb to walk across the street. Something very odd is going on. It's not clear what it is. One of the thoughts had been that it was simply nutrition. Pacific Rim nutrition got a lot better after World War 2, and maybe better food makes better brains. That seemed plausible, but there's counterevidence. For example, there's no dip in the Dutch average IQ for the cohort that was in the vulnerable part of early childhood during World War 2, when the Germans systematically starved the Netherlands. There's no dip in the IQ, so it's not clear that the food story works. There are other stories that say that what's going on is something about a change in the culture. That the world culture has become this much more information intensive culture that supports and nourishes the sorts of things that are measured by IQ. That the sorts of talents that you needed when you were a subsistence farmer-- you needed some smarts to make yourself survive, but they weren't the sort of thing that was getting picked up on IQ tests-- and that now your life builds IQ points. Something-- it's not clear what-- is going on. What is clear is that it's not genetic. It's just too fast to be a genetic change. Now that's at the level of groups; can you change individual IQs? And the evidence there is, yeah, you can do that, too. And the classic experiments come from the 60s. Oh, what is he today? He's a rat.

All right, so we're going to take rats-- sort of a homogeneous population of rats-- and we're going to randomize them into 3 populations. Population 1 goes into-- well, let's stick with socioeconomic status, because that's what they were trying to model. This is the low SES group, which meant in the rat case that they lived-- looking like ducks or something-- they lived all alone in an isolated cage with nothing to play with. The medium group lived in what was sort of the standard lab cage: a couple of rats, not much action. And then there was the high SES rat group, or the enriched group, who lived in a sort of big group rat daycare center. You know, with lots of cool toys to play with and the Habitrail thingy and stuff. A lot of good, cool stuff there. Let them grow up in these environments, test them on little rat IQ tests-- cognitive tests for rats-- and these guys do not do as well as these guys. Look at them at the end of the experiment-- I mean, the real end of the experiment, when the rat is now dead-- and you discover these guys have brains that on all sorts of measures look better than these brains. Thicker cortex, more synapses, bigger brain. Clearly, the enriched environment was having some sort of effect. These data, back in the 60s, were part of what motivated Head Start. The notion was that kids in low socioeconomic environments had lower IQs than kids in high socioeconomic environments, and they could be brought along to the same higher level by putting them in school earlier-- putting them in a preschool enrichment situation. And it worked. That if you went-- [UNINTELLIGIBLE] I need a spot-- if you took low SES kids who plateaued out at this level and you took them and put them in Head Start, they went up to this higher level that was the same level as the high SES kids.

Well, what are Herrnstein and Murray talking about then when they say you can't change it? Well, Herrnstein and Murray said, yeah, that was very nice. Look what happened when these kids hit middle school and high school. And the answer is [UNINTELLIGIBLE], they went right back down while the high SES kids stayed up. Herrnstein and Murray concluded that it's like a rubber band: yeah, you can stretch it a bit, but when you let go it just snaps back to where it was. The alternative view is, yeah, it's exactly like a rubber band-- but why did you let go? When you let go you dropped these kids back into a lousy inner city school rather than this enriched state, and they fell back to where they were. The effects were transient. You want to-- sort of a silly example-- think about diabetes. So diabetes has a similar sort of graph, except it's a little more stark: no insulin, you're dead by your 20s. With insulin, you're not dead. OK, so here we're in the not dead state. Let's say, OK, well, you're done with that insulin stuff now, we stopped giving you insulin. Oh, look, they're dead now. There wasn't any point to giving insulin. Well, that's a stupid conclusion. In diabetes, obviously, the conclusion is you'd better keep giving insulin. And you could argue that the same thing is going on here. We know that if you want more IQ points, putting people in better environments works, but you gotta keep them there.

I will tell you one more factoid and leave it at that. In adoption studies-- typically, adoption moves you from a lower to a higher socioeconomic status, because that's just the typical reason why kids get put up for adoption; it doesn't go the other direction, typically. So most kids are moving from low to high when they're adopted. Who does their IQ correlate with? The answer is it correlates with their biological relatives, because there is a genetic component to IQ. What is their average IQ, however? Their average IQ is the IQ of the environment that they are now in. So their IQ is, on average, indistinguishable from the biological kids in the same family. How can that be? That doesn't sound like it ought to work. But let me just draw you a quick picture and then you can go off and think about it.

So here are these pairs of siblings. And their IQs-- let's take pairs of siblings-- their IQs are correlated with each other. Now we're going to take 1 of each of those pairs, and that kid's going to get adopted out. And that kid's going to move, as a result, to a higher socioeconomic status. That doesn't change the kid's genetics any. And so the kid still has an IQ that's related to his brother's or sister's, but it shifts all of-- so this is the kid who's in a low socioeconomic state and stays there. This is a kid going from low to high. What happens is that the low to high move pushes everybody up. So the cloud of spots just moves up. The correlation is the same. The average IQ here is higher, because going from a low to a high socioeconomic status gives you IQ points. If you really believed that more IQ points mean more good stuff, and that's what you wanted as a society, there's a clear enough way to do it. It becomes a social policy issue about how you end up doing this, but it seems perfectly clear that it is possible to do that.

I'm covered in chalk.