Lecture 8 video


Video Index

  • Review of last class
    Review of Gödel's incompleteness theorem. Justin answers a student's question.
  • Emergent Properties
    Layers of meaning, example of computers, from the physical transistor level to operating systems. Example of neurons in the brain.
  • Human Consciousness
    Evolution and human consciousness. Excessive self-reflection may not be evolutionarily beneficial. Maslow's hierarchy of needs. Justin answers student questions.
  • Class Wrap-up and Discussion
    More on emergent properties. Modeling particles using a few simple physics rules. The robustness of the human brain. Justin and Curran field questions from students, and show more computer simulations.


The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

JUSTIN CURRY: All right, welcome back everybody. First of all, I want to apologize to the virtual world out there that we didn't get last lecture on tape. So I want to do a quick recap. Honestly, I mean, no more than two or three minutes on what happened. And that's why I put all the effort into putting it up on the board first.

Remember, we're dealing with a formal system here. And here, I'm just going to denote this formal system as F. Recently, we've been interested in typographical number theory, TNT. And we carried out this process of encoding symbols of F through this Gödel numbering process, essentially giving equals a number, 555, things like this.

So we now have symbols of F corresponding to a small subset of all natural numbers. We then have this correspondence, this actually exact, mathematically precise isomorphism, between strings of F and the subset of all numbers which are the Gödel number of some string. The strings that we have over there.
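
To make this correspondence concrete, here is a minimal sketch in Python. The codon table is illustrative: 555 for equals matches the lecture, but the other values are invented here rather than taken from Hofstadter's actual TNT assignments.

    # A toy Gödel numbering: each symbol of the formal system gets a
    # three-digit codon, and a string's Gödel number is the concatenation
    # of its symbols' codons.
    CODONS = {"0": "666", "S": "123", "=": "555", "+": "112", "a": "262"}

    def godel_number(string):
        """Map a string of the formal system to one natural number."""
        return int("".join(CODONS[sym] for sym in string))

    # The string "S0=S0" (i.e., 1 = 1) becomes one big natural number:
    print(godel_number("S0=S0"))  # 123666555123666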

We then take our axioms and rules of F-- remember, these were formal, recursive operations on strings in our formal system. And we arithmetize them. We said, OK, we're going to let the process of induction be equivalent to taking 10 times blah, blah, blah, blah, blah, this number, blah, blah, blah, blah, divided by blah, blah, blah, Chinese Remainder Theorem, blah, blah, blah.

And we're going to actually be able to do the same simple shunting we would do with the MIU system or with the typographical number theory system, and just do it in the form of numbers. And this is why, with Gödel's Incompleteness Theorem, we need formal systems strong enough to encompass number theory in order to do this process.
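
To see what this shunting in the form of numbers looks like, here is a minimal Python sketch of one MIU rule done both ways, using the numbering M=3, I=1, U=0 from the book; the function names are invented for illustration.

    # The MIU rule "xI -> xIU", done typographically and arithmetically.
    # With M=3, I=1, U=0, the string MI is the number 31, and appending
    # a U to a string ending in I is just multiplying its number by 10.

    def rule1_string(s):
        """Typographical version: if the string ends in I, append U."""
        return s + "U" if s.endswith("I") else s

    def rule1_number(n):
        """Arithmetized version: if the number ends in 1, multiply by 10."""
        return n * 10 if n % 10 == 1 else n

    print(rule1_string("MI"))  # MIU
    print(rule1_number(31))    # 310 -- the same move, done on numbers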

So we then have strings which are statements of F, which correspond to numbers that are F-producible. We then kind of do this leap outside of the system. We have meta-F, right? And we're taking statements about strings of our formal system. And these correspond to-- this corresponds to number theory, taking statements about numbers which are F-producible, which are F-numbers.

An example of this was in number theory, say, 641 is prime or 6 is a perfect number, and we just denote this property, perfect number. And we can give it-- we can make it a property with a free variable. And it's like, is 5 a perfect number? No. Is 6 a perfect number? Yes.

We can create a new property called primness, which corresponds to provability. And then we essentially have this problem: determining whether a particular string is a theorem of F is equivalent to establishing whether or not a number is prim. So I wanted to quickly recap, going back over here to the essential steps in Gödel's proof.

So we arithmetize-- just take that as it is. Arithmetize-- we give symbols numbers, and we give rules for inference and deduction operations on numbers. So then we take this property of provability, and we equate it, through this exact precise isomorphism which Gödel discovered, to a property of primness.

We then turn the operation of quining-- so quining: a phrase, when preceded by itself in quotations, yields a full sentence. Of course, we could play around with this. Like, "snow is white" snow is white didn't mean anything. But when we fed that operation, that function itself, into that variable-- "when preceded by itself in quotations, yields a full sentence" when preceded by itself in quotations, yields a full sentence-- it created a full sentence, which was itself self-referential.

And it was a fixed point, which, remember, is when you have a function-- which was in our case the operation of quining, or it could have been just simply multiplying by 2, f(x) = 2x. A fixed point is something which, when you feed it into your function, you get the same thing back out. So we turned quining into an operation on numbers.

Our fixed point of quining gave us an inherently self-referential sentence. And then we were able to describe-- not directly spell out, but describe-- a formula which felt like this statement here: "when fed its own Gödel number, yields a non-prim number" when fed its own Gödel number, yields a non-prim number. Which describes this big thing called G.
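
To play with the quining trick itself, here is a minimal Python sketch; the helper name quine_of is invented for illustration.

    def quine_of(phrase):
        """Quining: precede a phrase by itself in quotation marks."""
        return '"' + phrase + '" ' + phrase

    # An ordinary phrase quines into nonsense:
    print(quine_of("snow is white"))
    # "snow is white" snow is white

    # But quining the phrase that talks about quining yields a sentence
    # describing exactly how it was built -- a self-referential fixed
    # point of the quining operation:
    phrase = "when preceded by itself in quotations, yields a full sentence"
    print(quine_of(phrase))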

And G is essentially the reason why number theory is incomplete. It's what breaks the mathematician's credo, this idea of true if and only if provable. And with the production of G, we were able to establish that although all provable things are true, not necessarily all true things are provable. And this was a very important idea. It's the take-home message of the course.

We connected this briefly to the halting problem, which is just this idea that you can't have a magical machine which can take in a program and an input, and tell you whether or not it'll terminate. Just like you can't have a magical machine which says, give me the Gödel number for some statement in number theory, and I'll tell you whether it's true or false just by detecting whether or not it's prim. These things are inherently impossible and undecidable, and this was kind of the shaking of the foundations of number theory, which we wanted to do.
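
The diagonal argument behind the halting problem can be sketched in a few lines of Python. The oracle halts below is hypothetical -- the whole point is that no correct implementation of it can exist.

    # Suppose, for contradiction, that this magical machine existed:
    def halts(program, input):
        """Return True iff program(input) eventually terminates."""
        raise NotImplementedError("no correct implementation can exist")

    def g(program):
        # Do the opposite of whatever the oracle predicts for a
        # program fed its own text:
        if halts(program, program):
            while True:
                pass  # loop forever
        return  # otherwise halt immediately

    # Now ask: does g(g) halt?
    # If halts(g, g) returned True, g(g) would loop forever -- oracle wrong.
    # If it returned False, g(g) would halt at once -- wrong again.
    # Either way the oracle is mistaken, so no such machine can exist.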

But that's all I wanted to say about this. I want us to move on, and now think about more enjoyable things. Because number theory, great. I wanted more just the take home message of this idea of having things which are true, but not within our system. Right? And the idea that we have to jump outside the system.

Yes, Atif?

AUDIENCE: Do you feel like something where we can have a formal system, then we get all the Gödel statements, then get formal systems in which those [? hold? ?] And so by doing that, we would generate more formal systems and then get the true statements about them that are forcing that system to be consistent. Do this for all of the systems you can come up with. Can you, like, find out whether they create all the mathematical theories?

JUSTIN CURRY: Right, so essentially this idea of going ahead and giving Gödel numbers for our axioms and our rules of inference, and then just being able to essentially generate all of mathematics by feeding it to a computer and letting it carry out these operations on numbers. Right? So we could produce a computer which would produce provable things in that direct fashion, but it'd be incredibly inefficient. Right?

Like, one of the remarkable things about humans and the human intellect is that we're able to essentially jump several nodes down the tree, and not work-- we don't think on the level of formal deductions, on operations of symbols. Right? And also, once again, that's the whole point of Gödel's Incompleteness Theorem, is that we can't produce all mathematics.

AUDIENCE: But you can certainly try to derive statements that are almost similar to what he says. And then get systems in which those statements, which are causing some of the [? system ?] code for that system. And then try to explore that other system to see which mathematics it gives you.

JUSTIN CURRY: OK, so is the idea then that we essentially take our G here, and we let that be essentially an axiom of a new system, and then--

AUDIENCE: Just like get a system, in which that thing can be derived to be true. So just speaking about another system, by another system which--

JUSTIN CURRY: Right, exactly. So essentially, we would have to-- it's almost trying to make our system complete by considering a system where G is derivable. Right, exactly. No, I mean, that's totally key. And mathematicians made exactly the same point after Gödel did this.

But what Gödel showed is that even if you were to tack on G as part of your new system, you could then create an analogous G prime, which was undecidable in that system.

AUDIENCE: I know, I know that's the problem, but since this [INAUDIBLE] is [INAUDIBLE] all of the mathematics. If we can just find them, and create new systems that sort of support them, then we can have other mathematics that are not covered by the other system. So it's just like a way of exploring the mathematical landscape.

JUSTIN CURRY: Well, no, I mean, yeah. We're doing that all the time. Right? But it's not necessarily that we can just start out and somehow get the complement of the provable statements, the not-provable statements, very easily.

I mean, that's essentially what mathematicians do. That's why mathematicians never go out of a job: we fundamentally need to think about and create new concepts and make new definitions. And then explore the deductions of our new assumptions and axioms, which weren't covered by the old system involving number theory. We need to add new stuff on there that can't be produced by a mechanical process, that requires human intelligence.

But, I mean, that's good. If you want to talk about this a little more, we should do it after class because we've got a lot to pack in today. But let me know if I haven't answered your question fully. We'll do it afterwards.

So I wanted to quickly digress. I kind of need to take a step back and say, OK, well, Gödel found this really nice thing. He found this isomorphism, essentially, between PM-- Principia Mathematica-- and the things it was describing. So he was able to get the system to talk about itself.

But I think that always begs the question, what if he hadn't discovered this isomorphism? What if he hadn't discovered this kind of link, this analogy? And this actually relates to a story that Curran and I had recently.

And Curran was coming up to Boston. And he says, all right, I'm coming to Kendall soon. All right, great. So I kind of jokingly texted back. I said, OK, have space suit, will travel. Right?

So Curran texts back. He goes, ha, ha, ha, excellent. So I'm like, OK, great, he got my joke. And then I went and I was like-- so we met up and I was like, how did you like the joke? And he was like, yeah, I thought it was funny.

And I was like, well, you got the reference, right? And he was like, no. What are you talking about?

Well, I'm like, well, Have Space Suit--Will Travel is a book by Robert Heinlein. And he was like, no, I've never heard of it. I just thought you were like putting on this metaphorical space suit. And that you were going to go meet me at the T, and that's why you could travel.

And I was like oh! Ha, ha, ha. I think that's funny that you thought I was funny, even though you didn't get my joke. And then so we kind of talked about this for a little while. And he's like, well, so what's the book about?

And I said, actually, I don't know. I haven't read it. So then I Wikipedia-ed it today.

And I was thinking, well, actually at the end of the book, the main character gets a full scholarship to MIT. So there is like a third layer of meaning, which neither of us knew. And it was just this level of isomorphism which we didn't know. But everything seemed to be operating just fine.

So I think it's kind of interesting, you have this idea that, what if we had just gone straight forward with PM, without knowing any kind of analogies-- this ability for the system to self-talk? Like, what would have happened? Would mathematics have just marched on fine, thinking it was complete and everything? I just think that's kind of an interesting idea.

But I want to give a quick signpost of what's happening today. How many of you guys read chapter 10, Levels of Description, and Computer Systems? Or at least started to read? Great, excellent, fantastic. Nobody read the wrong chapter, which was chapter 16.

That's a good chapter, but it's a little advanced in that it uses a lot of concepts from chapters 13 and 14, which really help hit home the idea of Gödel's proof. And you should read it in your own time. What I want to talk about today is this concept of emergence and emergent properties coming out of simple descriptions. Right?

We've been kind of hitting this home all the time, but today's kind of our last day to just really try to show complexity and show how it emerges out of simple building items. And, really, what this chapter here starts with, and why we-- I mean, why do you think we talk about computer systems? Does anyone have an idea? Why do you-- did anyone find it interesting? Sandra.

AUDIENCE: [INAUDIBLE] there just seems [INAUDIBLE] and break them down.

JUSTIN CURRY: So we use computers in the same way that we think, in terms of breaking down--

AUDIENCE: Yeah, and that's how we keep [? coming around ?] and fix it [? logically. ?]

JUSTIN CURRY: Right, exactly. So go ahead.

AUDIENCE: So to base it off relating to your system [INAUDIBLE] programs. Is it based on the way we think, or is it in [? alphabet? ?] [INAUDIBLE].

JUSTIN CURRY: All right. Let me make sure I understand what you're saying. So are you asking, are the programs that we program just based on how we think? Like, essentially it's just our idea-- hey, we want the computer to do this.

AUDIENCE: Yeah, and how it's universal.

JUSTIN CURRY: And how its--

AUDIENCE: [INAUDIBLE]

JUSTIN CURRY: So how its computation is kind of universal. Is that what you're getting at? OK, yeah. I mean, so that kind of leads back to a fundamental idea of a Turing machine. Right? Although we subjectively have our own approach to how we'd solve the problem, right?

It can ultimately be reduced to an algorithm, a set of instructions, which is universal. It's as universal as mathematics, right? So pretty much anybody in any language should understand it. Right?

Is that kind of what you wanted to say? OK, Navine-- oh, sorry. Atif.

AUDIENCE: Could we just be an implementation of a Turing machine?

JUSTIN CURRY: Say again?

AUDIENCE: Could we be just an implementation of a Turing machine?

JUSTIN CURRY: Could we just be?

AUDIENCE: Yeah.

JUSTIN CURRY: Like, us as humans be implementations?

AUDIENCE: Yeah, and the motions are like different ways of talking about some part of a change.

JUSTIN CURRY: OK, all right. So you're getting at a really interesting idea, here. This idea that humans, in our thoughts, are actually best modeled by a Turing machine, right? And I don't know if you've done this reading, and maybe that's why you're being clever, and asking this question, but have you heard of someone named Roger Penrose?

AUDIENCE: Yeah.

JUSTIN CURRY: OK, so Roger Penrose is a very prominent mathematical physicist at Oxford, right? And he also wrote a pretty interesting book called Shadows of the Mind, in which he suggests that Gödel's Incompleteness Theorem and the halting problem have application to human intelligence. But what he argues is that humans are fundamentally not Turing machines, but that computers are. Right?

And that's why artificial intelligence, in his mind, is impossible: a machine is always limited to these kinds of constraints, which we've cooked up here today and through the halting problem. But humans, obviously, aren't. We can meta-think. We can meta-meta-think, and we can meta-meta-meta-think, always jumping outside the system. Right?

But we're never ever bound to the same constraints of a machine. Now, I'll warn you. Penrose's ideas are--

AUDIENCE: Who constrains the machine?

JUSTIN CURRY: Who constrains the machine?

AUDIENCE: The ones who--

JUSTIN CURRY: Logic. Logic, right? The fact that all a machine can do is NOT, AND, copy, jump-- basic assembly code, right? It is just a Turing machine.

AUDIENCE: But what can a human do?

JUSTIN CURRY: What can a human do that's not a Turing machine? Good question. We can think emotionally. We can believe-- we can double think.

Humans, all the time, believe in both P and not P.

AUDIENCE: Yeah, but it's like-- humans aren't like [INAUDIBLE] systems. I think we think we're one thing, when we're really many things. That can also be contradictory. It's like a bunch of little machines going together, like communicating with each other. They can contradict, but they can agree.

JUSTIN CURRY: OK, so you've got essentially this kind of Society of Mind viewpoint-- you know, Marvin Minsky's book-- where human intelligence is best modeled by a bunch of little actors, right, that almost vote on certain beliefs. Right? And I mean, it's a good question.

You also would have to figure out, one, how do the actors work and how do the actors make decisions? But two, do you really think it's just majority rules inside your brain? Like little guys, little homunculi up in your head, voting on, no, I think today we do believe in God. And no, I don't believe in God.

And then, all right, everyone raise your hand if you do. If you don't. OK. Today, we don't believe in God. Right? I mean, is that really how human intelligence works? We don't know.

But that's the important thing about studying layers of systems, layers of description. And that's what this chapter is all about. So I want to give someone else an opportunity to explain: essentially, why do we study computer systems when trying to understand intelligence, trying to understand the mind? Does anyone have an idea?

AUDIENCE: Well, it's so we can-- from things that we think we know we can do, you can try to make comparisons to similar things in some way, just to maybe suggest this is how we are sort of the same thing.

JUSTIN CURRY: OK. So possibly, right? I'm not sure. But here's what I kind of came up with. And it's my motivation for why I think people might want to study computer science. This will kind of launch into a whole little lecture. A what do you want to do with your life kind of thing.

I'm not going to try to solve that problem for you, but we can try. So what I think is cool about computers is that back in the day-- if you want to go all the way back to Babbage and these guys in the late 1800s-- they were just playing around with gears and things like this. So I mean, they were working at just the physical level. Right?

So we had this physics governing everything. And he was playing with gears and wheels. But what really makes up today's machines is transistors and capacitors.

We have electronic things for doing really small computations. Like Babbage was thrilled when he could just get a simple calculating machine. Like a-ha, wow! Now you can do logarithms. This is amazing.

But he was programming at this level, right? His levels of description are almost completely based on gears, wheels, and then what became, in the 1900s-- early 1900s-- transistors and capacitors. But then we had this, I would suppose, this kind of revolution in how we do things. And we got up to the level of, to speak in modern terms, motherboards and video cards, and things like this.

And we went ahead and started abstracting entire chunks of hardware to do very specific, highly designed things. And we just started playing around with those chunks. Like, a-ha! Now I don't have as much lag in my video games. I can have a new video card. Pop that in.

So we no longer have-- I mean, how many people, at least everyday computer users, ever worry about the transistors and capacitors in their computer? Like, oh, no. It doesn't happen.

And then we have this kind of next step up of-- and this is really where the realm of computer science lives is software and operating systems, which is going to live on top of this, even. And this is amazing. Suddenly, we're dealing with things which don't really exist. Right?

We're kind of executing spells. When Curran gets up later today and he hacks away on his computer-- type, type, type, type. He just puts in these little images that show up on the screen, and then he hits an enter button. And then it suddenly executes magic, and it exists in a different realm.

But it all boils down to what's happening on this level of transistors and capacitors. And the ability to actually tell you-- if you went to somebody and you said, "Hi, my name's x. Can you tell me how Windows works, in terms of capacitors and transistors?" They would just kind of go, [GROAN] do you want to enroll in school for the next four years? Like, sure.

And that's really the project here. But what's interesting is that there's a conceptual framework we have for thinking here, which is very close to how we hope to crack the problem of the mind. And that's the idea that we have neurons and neurotransmitters-- things ultimately governed by neurotransmitters. Sorry, I ran out of space, here.

We describe things in terms of cable theory, like we're just sending messages down an insulated cable. We have this kind of level of description for the brain at this very small level-- and this is already research worthy. But then we have a bunch of biologists and such, and people who are interested in neuroanatomy, talking about, oh, what about the amygdala?

I don't know if I'm spelling any of this right. The hippocampus. I mean, your frontal cortex, your cerebral cortex. And I'll just denote these by cortices. I don't know if that's the correct pluralization.

And then somehow, out of these larger structures, where you abstract away the details of what's going on with the transcription in this cell, and instead you're just thinking about, how is my hippocampus doing today? We then enter essentially an isomorphic level to software and [? OS, ?] which is the psyche, the mind. And I'll go and put it up here-- soul.

And this is fundamentally why artificial intelligence researchers are found in the computer science department. It's because we've had a lot of guys who have gone from this level of dealing with very clunky physics to doing really souped up stuff on larger physical entities-- motherboards, video cards-- and then getting up to the level of software systems and operating systems. But I wanted to take a quick poll.

Where do your guys' interests lie? On this kind of stack, in both camps here, but especially focusing on the brain and going into the mind. What do you guys find most interesting?

This is going to really reveal a lot about who you are and what you want to do. What do you find most interesting in these three boxes? If you had to vote for one box, what would it be?

AUDIENCE: [INAUDIBLE]

JUSTIN CURRY: You want two boxes? I can't give you two boxes. You've got to decide. I mean, otherwise, you're going to be in school forever, right? All right, two boxes, but that means you automatically have to go to school for seven years after undergrad. All right?

OK, go ahead. Give me some votes. I want to see some hands. Who likes studying molecules, neurons, neurotransmitters, maybe even basic physics?

OK, Sandra, Ders, Maya, Rishi. OK. Who's kind of the inherent biologist here, who likes amygdalas, hippocampi, sets of cortices? All right, I got Atif.

All right, now who's interested in kind of the psyche, the mind, the soul? Who wants to actually sit in an office and deal with people's problems rather than-- or maybe not. I warn you. You don't want to deal-- OK, so got Navine, I'm sorry, I forget your name, again.

AUDIENCE: Vivian.

JUSTIN CURRY: Say again?

AUDIENCE: Vivian.

JUSTIN CURRY: Vivian. Vivian, Maya, Atif, and Felix. OK, cool.

AUDIENCE: I have another one to add.

JUSTIN CURRY: OK, go for it.

AUDIENCE: Who wants to understand the commonalities between all of these levels?

JUSTIN CURRY: And we're getting a lot of hands, there. All right, so who wants to understand the commonality between all of these? All right, and this is-- Curran and I put together this list. And he'll show you when he comes up and gets the projector rolling.

But it's this idea that we've found ourselves working with problems and systems which are so complex that we can only really occupy ourselves with certain layers of abstraction. But the interesting thing is that we can abstract, right? Just like when we're modding out our computer, we don't ever have to really care about transistors and capacitors, right?

And similarly, when I'm playing around on MATLAB, I don't ever have to worry about-- well, sometimes I have to worry about memory use. But I don't usually have to worry about any physical details. Yes?

AUDIENCE: You can't have one without the other, right?

JUSTIN CURRY: Right, exactly. As Sandra says, you can't have one without the other. Very important, right? Very, very important is you do have a tower here, right?

But what's scary maybe-- maybe scary-- is the idea that some of these things are universal and are independent. Atif?

AUDIENCE: [INAUDIBLE] you can have the physical things that the soul organizes [INAUDIBLE].

JUSTIN CURRY: OK, so definitely. These arrows only go one direction. You can have the physical thing without the mental, right? We have plenty of trees and things that don't have Windows Vista running on them. Go ahead.

AUDIENCE: What do the arrows represent?

JUSTIN CURRY: What do the arrows represent? Do you have to ask me that question?

AUDIENCE: No, but-- well, I don't know if it's true with the arrow. But there was maybe [INAUDIBLE].

JUSTIN CURRY: All right, so that's actually a very good question. That's a very good question. So what do the arrows mean?

So the simple answer would be they indicate, I guess, derivability or dependence, except the dependency is kind of going the other way. Right? Motherboards and video cards depend upon their underlying transistors, capacitors, but the arrow is kind of pointing up towards them.

It's this idea that you can build things out of it, right? And this is an important idea, is that once you've got a basic amount of tools available to you, you can kind of start building upwards. Right?

Like once you've got-- and this is, I think, very interesting, and it's a good concept of study for languages. So when you're learning Spanish or any other language, what's perhaps the most useful phrase that you will ever learn in Spanish?

AUDIENCE: Hola.

JUSTIN CURRY: OK, hola. Why? Why is that useful? You can say hi. Big deal.

What if you want to learn more Spanish? What's the best phrase you can learn? And I'm sorry, I don't know many other languages.

AUDIENCE: [INAUDIBLE].

JUSTIN CURRY: OK. OK. Cómo se dice. Excellent. How do you say this? Right, in some way, with those three words, you've opened yourself up to an entire world of a language. Right?

Cómo se dice-- oh, libro. OK. Cómo se dice-- bolígrafo. Oh, wonderful. Fantastic.

So suddenly, you have this prompting for the outside world just by you learning three words. And you could kind of learn the entire language. And that's an important thing.

Once you've got some basic elements, some building blocks of words and vocabulary, you can immediately then consider this lofty layer of what happens when you put this word-- and so let's go to A, B, C. So what happens when we have just A by itself, B by itself, or we could have AB, or we could have AC, and then BC.

We already have this idea that once you've got some things, you can take the power set of them, and get-- oh, I'm sorry. I'm forgetting something crucial. And you suddenly have a whole layer of complexity just by considering the meta level beyond your first layer here.
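
That meta level is just the power set, which is easy to compute; a minimal Python sketch:

    from itertools import combinations

    def power_set(elements):
        """All subsets of a collection: the meta level built on top of it."""
        return [set(combo)
                for r in range(len(elements) + 1)
                for combo in combinations(elements, r)]

    # Three building blocks already give 2**3 = 8 combinations, from the
    # empty set up to the full set {A, B, C}:
    print(power_set(["A", "B", "C"]))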

So you learn a handful of Spanish words, right? And then just by considering their combinations and permutations, and their ability to prompt the outside world, you can suddenly feed into a whole other layer of abstraction, another step up. It's the same thing here.

So say you've got one transistor. OK, good for you. But suppose you have a hundred of them, right? You can start doing stuff, suddenly. And you can start doing stuff and have things which emerge, which are greater than just the individual parts.

I mentioned briefly this idea of ant societies, of ant societies exhibiting these kinds of emergent properties, which are greater than any single ant. And this is an important idea, especially when you even think about humans. Like, humans always do this. One person has an idea and another person has another idea, and then you can kind of consider, what's the interface of these two ideas? And you've got another idea.

Like, you can suddenly build things out of these simple pieces. And I suppose the merit of studying this is not only do you understand when you can forget about the details-- abstraction-- but what can you do, given the pieces that you have? So I think this is kind of an important piece.

Tied into this, though, is this idea of levels of description. How do you describe something based on what level-- even of ourselves? I'm sure many of you ran across the part in chapter 10, where Hofstadter talks about the paranoid and the operating system, and he talks about this program, Parry.

And he talks about the situation. And he kind of plays out-- and I'll go ahead and elaborate stuff that wasn't in the book. But I think it's an interesting thought experiment.

So how many of you would identify your bodies as part of yourself? Sandra, Navine, Ders, Atif, Felix, Rishi. OK. So wait, you guys didn't raise your hand, right? So you don't identify your bodies with yourself.

OK, so Atif changed his mind. So for those of you who did--

AUDIENCE: Well, is the brain part of my body?

JUSTIN CURRY: Is the brain part of your body? Yeah.

AUDIENCE: OK, then yeah.

JUSTIN CURRY: OK, so the brain's part of your body. Anyone want to change the votes? OK, fine.

If I were to ask you, how's your leg feeling today? How many people would think that's a normal question? OK. So we've got plenty of normal questions.

AUDIENCE: [INAUDIBLE]

JUSTIN CURRY: All right. So suppose I then asked the question which Hofstadter asked, is like, so why are you making so few red blood cells today? Is that a normal question? Yeah, go ahead, Sandra.

AUDIENCE: Yeah.

JUSTIN CURRY: You think it's a normal question?

AUDIENCE: Yeah.

JUSTIN CURRY: Do you have control over how many red blood cells?

AUDIENCE: No.

JUSTIN CURRY: But they're a part of your self.

AUDIENCE: It's really a part of me.

JUSTIN CURRY: It's part of you. So why can't I address your red blood cell count as part of you?

AUDIENCE: Well, it's inside of you.

JUSTIN CURRY: It's inside of you, right? But--

AUDIENCE: [INAUDIBLE] very [INAUDIBLE]

JUSTIN CURRY: OK, so you're accepting that as part of the logical consequence of the things you've said. All right. Well, good for you, because you're being consistent. Does anyone else suddenly feel like I've said something awkward?

AUDIENCE: Maybe we shouldn't even say, how does your leg feel today.

JUSTIN CURRY: Maybe we shouldn't even say, how does your leg-- so if we never address--

AUDIENCE: [INAUDIBLE] deciding which is which?

JUSTIN CURRY: Right. So I suppose, I don't know, Curran comes in and he's just got two black eyes. And I'm like, Curran, you look terrible! And he's like, that's just not an acceptable question, because I'm referring to his physical appearance. Right?

AUDIENCE: And it's respecting his physical feelings, his physical things, so he's almost internalizing that picture and saying, oh, yeah, that's me.

JUSTIN CURRY: OK, so you're saying Curran's self-model is inherently dependent on his body and his interaction with his environments. Right? So then when I say, you look terrible, he used his own internal model and looks at himself as being terrible. So then that does make sense. But then what about the red blood cell question?

AUDIENCE: Red blood cells-- I mean, usually you do not have any kind of internal model of your own red blood cells.

JUSTIN CURRY: Yeah, right. You don't normally have an internal model of your red blood cells, right? But suppose you did. And suppose that-- and this is kind of going to be one of the take-home interesting consequences of this thought experiment-- what if you had internal access to everything that was going on?

AUDIENCE: That would be terrible.

JUSTIN CURRY: OK so everyone's going, oh, god, it would be terrible. I mean, why would it be terrible? Like, I'd be trying to walk across this room--

AUDIENCE: Because some choices we make are not necessarily the right or appropriate ones.

JUSTIN CURRY: Yes, some--

AUDIENCE: [INAUDIBLE] red blood cells stop me [INAUDIBLE].

The first chapter identified what's making the decisions-- that perhaps the conscious thing doesn't make decisions. But some of the decisions it's aware are being made, it ascribes to itself.

JUSTIN CURRY: OK, repeat that last line one more time.

AUDIENCE: The decisions that it's aware are being made, it ascribes as being made by itself.

JUSTIN CURRY: OK, so you make certain decisions and you think that you're making those decisions.

AUDIENCE: I always ask, who's making the decisions? My neurons or myself.

JUSTIN CURRY: All right. And I really wish I could have passed out the dialogue, Who Shoves Whom Around Inside the Careenium?, one of Hofstadter's other dialogues. And you're right.

And I wanted to read this dialogue, but I don't know if we'll have time for it today. But the Prelude and Ant Fugue gets at this idea of symbols, of these kinds of larger, emerging groups of neurons and little internal representations of the outside world. And to a certain extent, they make up who we are.

Our internal models are the ones that then collectively make decisions. But on the other hand, we then have times where the emergent thing manipulates our internal models of the world. So suppose you're sitting in this class one day, and I'm telling you, well, the universe actually isn't deterministic. It's actually got this probabilistic framework of quantum mechanics.

So then you go, wow. And then you change your basic little mental model of the world. So I mean, there's this kind of weird interaction with hierarchies. So this is an interesting point, because here, there is very little interplay going down this way.

But you know what? It does happen, right? I think one of the best examples is your cell phone. Because you have this SIM card as part of your cell phone, which actually-- yeah, exactly. I mean, you take it out and it might not be operational.

But you could also try putting it into another cell phone and it might not work. And that's, in part, ascribed to this higher service, which is getting beaten back down. There is a weird interplay of levels, of software, and hardware going on in here.

AUDIENCE: So suppose we have like-- the brain really had perhaps your lower level neurons, and neurons talk about those neurons that talk about those [? sort. ?] You're not having this crazy software view. Maybe the consciousness that's not even the physical, is actually attracting that [INAUDIBLE], like making changes to the physical world. It's like, the physical world is making changes to the physical world.

But those higher level neurons are sort of the ones representing the abstractions. The abstractions don't really exist, but it's like these neurons are taking something from the lower level neurons. And they're seeing the essential processes of them, and then they're [? controlling ?] them back. So you are mostly your higher level neurons.

JUSTIN CURRY: All right, so maybe you are mostly your higher level neurons. But I think an interesting example-- and we're getting into details of the brain, which I'm not sure how it really works. But for example, with the ant societies, no ant is higher than another ant. All ants are pretty much created equal.

Yeah, say again? Except for the queen. But still, there is this myth. There is this myth that the queen ant was sitting there with her hyper brain, and just doing-- all right, tell forces A through B to go to sector G and attack, and then bring over the leaves here.

No. The queen ant doesn't do that. I mean, it's just like all the other ants, and it's probably got three neurons and can't do much with them. But it's the interaction of these very simple things which can produce complex behavior. Yes, Atif.

AUDIENCE: I just realized something. The thing is you're not even aware of your own thinking. And we often think that we're very self-reflective even though we're not.

JUSTIN CURRY: All right, yes. We are-- we do think we're self-reflective.

AUDIENCE: And even reason is beyond us. I mean, think, how many times have you ever reasoned ever since you were a [? child? ?] Most of the time, you've felt. You don't even know where the words are coming from. You don't know where your concepts are coming from. So how can we say we are highly self-reflective if we don't even, most of the time, reflect towards our own thoughts, our own feelings, and whatever? You know?

JUSTIN CURRY: No, I mean-- right, exactly. Like, we tend to just kind of assimilate data about our outside world. And especially the mental models passed on to us from our parents, who immediately start telling us and training us and imprinting us, really, like [? geese ?] from our first day to do certain things.

And we can rarely break out of those loops, out of those systems. But that's an interesting idea, right? It's this idea that what happens when we can't really scrape away at the boxes anymore? Like, we try to-- I mean, really, what happens-- I mean, why is psychology so difficult?

Why is solving consciousness so hard? Because we're kind of trying to take out an eyeball and peer back at ourselves. And we're like, all right, what am I? And I remember having this discussion with one of you after class.

It's this idea that-- so evolution really designed us to do certain things. And I suppose one reason why humans took off with these abnormally large forebrains was because it was a good method for survival. We suddenly started getting together in little groups and planting things in the ground. And suddenly, we had a really stable source of food.

Or we would trick mammoths and run them off cliffs, and have them fall on spikes that we designed and sharpened ourselves. And suddenly, problem solving was a really, really important thing for evolution by natural selection. People who had these problem-solving capabilities were selected favorably. So it's really nice to be able to run mammoths off cliffs and plant things in dirt.

But what about the guy who is at the bottom of the pack, thinking, what am I? And then the cheetah [GROWLS] attacks, as he's caught in these thought loops about what am I?

AUDIENCE: [INAUDIBLE] because it's always like a crazy philosopher.

JUSTIN CURRY: Right, I mean, the philosopher was--

AUDIENCE: I mean, you first get problem solving, average, maybe perhaps above average a little bit. And then you get these monsters like [INAUDIBLE] that come along [INAUDIBLE]. Or something like all the way over there. We jump way too fast. We don't have this infrastructure to keep yourself from all of these enemies. You've got to slow down a little bit.

JUSTIN CURRY: I mean, I think that's exactly-- you do have these kinds of figures which just seem-- I mean, Archimedes, for example. If you want to talk about a guy who-- the way one of my lecturers at Cambridge, Dr. Piers Bursill-Hall, used to put it, is he would describe these people as nine-dimensional, hyper-intelligent beings. And their bodies were just-- they're a four-dimensional protrusion into our world.

So I'm like, that's because they were practically aliens in terms of their intelligence. And we just had no idea what they were doing. And that's almost dangerous, right? I mean, we tend not to look favorably on people who seem really out there.

And we kind of have a sad history of persecution. Humans aren't always very nice to the above average. I was bringing up this idea of having intelligence and having these problem-solving capabilities, but this ability to look back at ourselves and be reflective as not being very good for evolution.

And I've actually heard it suggested that our brains actually evolved not to really solve the problem of consciousness-- because it wasn't favorable, in terms of evolution, to actually be able to consider deeply and understand what's going on internally, having an internal model of all your thoughts. We don't have a transcript of why did I make that decision. Like, well, in sector A, this happened and this neuron fired on that and this one. And that is this, right?

We don't have access to that. That's, in part, because evolution said, we don't want to give you access to that. We're going to put this together in a nice little rat brain, and you're not going to have access to those thoughts. You're just going to have their consequences. I mean, it's a troubling idea. Sandra, you were going to say something.

AUDIENCE: Is there any advancement in human evolvement besides logical thinking [INAUDIBLE]?

JUSTIN CURRY: Is there any way to advance logical thinking, to essentially advance humans?

AUDIENCE: Human advancement through thinking. Maybe there's [INAUDIBLE] better. I mean, we did evolve from [INAUDIBLE] simple [INAUDIBLE].

JUSTIN CURRY: Right. To an extent, we are right now, in this classroom. And the fact is that most of us, including myself, have bad vision. Like, if I were really stuck out in the field, I wouldn't have a chance to sit and look over books. I'd be futilely hunting and not succeeding, and dying.

But our society has reached a level of technology and health care, where people can go on and we don't have to worry about base survival needs. And I don't know how many of you ever heard of this, but Maslow's Hierarchy of Needs. Anyone heard of Maslow? Have any of you guys done debate in high school? Do you even have debate teams or classes?

They don't have these programs here? Gasp! All right. So Maslow said we have this hierarchy of needs before we can do anything. And at the top here, he had essentially transcendence. I'm actually blanking on the exact word which he used. This is embarrassing.

But either way, I want you to Google this. Abraham Maslow. We have Hierarchy of Needs. I can't spell either. Is there an "a" here? No. It's just R-- H-I-E-R, Hierarchy of Needs. Thank you.

And we have this-- self-actualization! Ha-ha! Ha-ha! Here we go. And we have food, water, shelter. And this is really what humanity spent most of our early development, just staying at this level.

CURRAN KELLEHER: Here it is.

JUSTIN CURRY: Ah, fantastic. And he roughly describes this as physiological needs. And then we have safety. So we have security of body.

I'm just going to go ahead and write safety in here. Pull that out there. Safety. Love, belonging. Esteem. We have to at least think somewhat positively of ourselves.

And then we get to self-actualization, which Wikipedia describes as morality, creativity, spontaneity, problem solving, lack of prejudice, acceptance of facts. And really the argument-- and a lot of people use this in a scary, slippery-slope way-- is, well, look, we're going to protect you by constant 24-hour surveillance. Because before you guys can even think of doing anything up here, you just need to be safe in your homes. You need to be safe from terrorists.

So before you can even do any of this, you've got to have this. Which is funny, because we tend to do things which would infringe on inherent properties of self-actualization, like freedom of speech, freedom of thought, et cetera, et cetera. You can, in some ways, say that humans have been trying to climb up this ladder.

And that even us as individuals in our lives-- our parents provided us with food and shelter, and security and safety. And good parents should give us a feeling of love and belonging. And this then enables us to do things like go to college and think about Immanuel Kant and other philosophers, and stuff. Yes, Atif.

AUDIENCE: Couldn't, like, being under 24-hour surveillance sort of take away our feeling of safety?

JUSTIN CURRY: Right, so the argument goes both ways. We could also not feel safe by being surveilled. But this is a debate for policy makers, not necessarily for me right now. But you're right.

I mean, Sandra, I'm glad you brought up this point of this idea of humans trying to advance. And then it's kind of sad to say, but most of us here in this classroom are really occupied with-- we're all up here in this triangle. And we're worried, well, what about logic? Well, is computer science going to lead us to-- and we're worried about this very tippy peak, which is enlightenment.

Whereas there are plenty of people out in the streets today who are worrying about these things; they're not worried about whether Gödel's Incompleteness Theorem is going to give them a final understanding of consciousness in the brain. I mean, just think about how far of a level that is, and even think of how far of a level we have historically come as people in escaping the brutish and harsh natural selection on this level. And we've now been engaging in a selection on this level.

And as you guys watch Waking Life next lecture, you'll actually get a really pumped-up philosopher who will talk, in this film, about this idea of a neoevolution and the neohuman. What happens when evolution starts no longer acting in terms of genes and security and safety, here? Because we as humans, we don't usually worry about this anymore. Like, now evolution is happening on a cultural level, on a memetic level, for those of you who know what memes are.

But we're going tangential right now, and I want to pass things over to Curran, who's got lots of goodies for you, once again. And go through this idea of climbing up pyramids, and this interplay of levels and descriptions. As such, you guys get your five minute break while we reorganize things. And see you all back here.

AUDIENCE: So what are some consequences if those [INAUDIBLE]? Unbalanced?

JUSTIN CURRY: So what are some consequences if those things aren't balanced, right? So what happens if you suddenly don't feel safe? OK, well, I think some clear consequences, based at least on this model, is the idea that if we don't feel safe, we're not going to want to pursue anything higher. We're definitely not going to feel loved if we don't feel safe. And we're not going to feel-- we're not going to have any kind of self-esteem if we don't feel loved.

AUDIENCE: How about [? Hitler? ?] I don't think he felt [? sad ?] when he was doing it.

JUSTIN CURRY: OK. So you're right. There are these few kind of crazy characters in history which have been ultra-paranoid, not felt loved, and just been essentially schizophrenic, right? Look at Isaac Newton. If you want to talk about a not-nice man-- he was a very, very evil man.

In fact, in his, I think, 32 years as head of the Mint, where he was responsible for giving people their lives back-- because in those days, clipping coins was punishable by hanging. And he would never, never clear anyone's sentence. I mean, everybody who went up for coin clipping got hanged under Newton's--

AUDIENCE: What is clipping?

JUSTIN CURRY: So back in the day, coins were actually silver because the metal itself had inherent value. So what people used to do is they would clip little chips of silver from it every time they came into possession of a coin, and they'd have a pile of silver. And you could turn it into another coin. So you just had another coin spring from nothing.

Of course, you can imagine, you would have to write letters to the warden of the Mint, saying things like, oh, I'm so sorry, Mr. Newton, sir. I clipped only two coins, and it was so I could buy an extra loaf of bread for my family. The Mrs. has another bun in the oven. I've got three kids I need to feed.

And this guy is probably illiterate too and struggling with writing this letter, right? And Newton's like, [LAUGHING] hang him. So he wasn't a very nice person, but he was brilliant.

AUDIENCE: He was very [? extra ?] [? apathetic. ?]

JUSTIN CURRY: And yeah, he kind of went higher up this pyramid than any of us can really hope to. So yeah, there are some defects in this model.

AUDIENCE: [INAUDIBLE] didn't you actually say morality was one of them?

JUSTIN CURRY: Yeah.

AUDIENCE: So you can sort of get rid of it?

JUSTIN CURRY: [INAUDIBLE] you don't need morality. But I mean, ideally, we'd have this kind of-- now, you have to also remember Maslow was a philosopher of the '60s and '70s, and believed that enlightened people would be loving and happy, all the time. You want this, Curran?

CURRAN KELLEHER: Yeah.

AUDIENCE: But can you actually say Newton was enlightened?

JUSTIN CURRY: No, I don't think you can. But if you want to talk about the qualities of intelligence and problem solving-- even when he was old and people were trying to solve the problem of the brachistochrone, the shortest-- what shape of slide would enable a ball, when you drop it down, to go from point A to point B in the least amount of time?

Like, this was being circulated through European and British mathematical journals for years. Nobody really came up with a good solution. So somebody finally passed it off to Newton.

He comes in at like 65, and he just kind of goes, [SCOFFING] don't insult me with these easy problems. Stayed up that night and solved it in a couple of hours. I mean, that's just the kind of guy that he was.

AUDIENCE: So would you say that he let anyone off?

JUSTIN CURRY: I don't think so. I don't think he ever let anyone off in his 32 years of heading the Mint. So Curran, do you want to talk about this?

CURRAN KELLEHER: You can start talking about it.

JUSTIN CURRY: OK. All right, so this is actually-- this gives you an idea of how Curran and I plan for lectures. Actually, this is the only time we've ever used a whiteboard. So we had this idea of-- I wrote this question on the board for Curran.

I said, if he had to administer a test to decide whether or not-- and it kind of runs off the edge-- someone was destined to be a computer scientist, what would it be?

AUDIENCE: [INAUDIBLE] computer.

JUSTIN CURRY: It had to be a computer. [LAUGHING] Yeah, I guess it got clipped somehow. But either way, just by forcing you guys to vote by those boxes, I went ahead and had this-- I was trying to get at: what levels are you guys most interested in?

And I talked about if you're a physicist, and you're sitting in high school biology, someone might tell you about organs and things like this. And if you were a physicist, you would be like, OK, great, whatever. Tell me more.

And they'd be like, OK. Well, these organisms are made out of tissues and cells. And the physicist would say, ah, fine. Yeah, what else? Well, those are made out of these big proteins and these organic molecules. The physicist would be like, OK.

And those are made of chemicals, which are then made of atoms. And then they'd be like, ah, now you're starting to make me interested. And so then a physicist really occupies themselves with pushing downwards on this level as far as possible-- particles, forces, quarks, strings. So you can almost characterize it as a process of reductionism. And this is even true when you're thinking about large things like supernovae, galaxies.

You're trying to reduce these large-scale phenomena in terms of explaining them in fundamental forces-- F equals ma, things like this. But then what about-- I forget what the statistic is, but I want to say something like 50% plus of all people who-- for undergraduate majors, do humanities, things like law, history, creative writing, things like this. If you're one of these people, you don't ever even look below this line.

You're really caring about what happens here, right? What happens when you take your basic building blocks of say fear, hunger, desire, and selfishness, and you take linear combinations of them. And you're like, well, I'm going to give like 75% fear with a little bit of desire. And that's going to be a new emotion called apprehension.

And then as you kind of build up here, you start going with levels of description, starting on the level of emotions and then building up to things. Especially if you're a literature major, interested in what your language can do. So then we keep asking, so, like, what's an engineer? I mean, who here thinks they like engineering?

OK, great. So you guys see, or you might see, everyday materials. You're kind of like your own little MacGyver. Like, give me a toothpick, a rubber band, and a piece of dough, and I can pick somebody's lock. I mean, that's just what MacGyver does.

And you're always-- or I hate to generalize, but you're interested in how you can take base, ready, efficient materials around you and then create practical, effective solutions. And you're always kind of working in this domain here, maybe even going from small projects to city wide ones. So then still we ask, so what makes a computer scientist?

And I say that really, fundamentally, a computer scientist is a lot like a mathematician, who doesn't really explore the physical world and its layers of abstraction, but instead lives in kind of a Platonic world. And their atoms are sequences and series and logics and grammars and calculus and algebra and geometry. And you get to play around with these things. But I would say that inherently what makes a mathematician a computer scientist is looking at just patterns, and kind of abstracting away any details.

I was really sad when I stopped really being interested in biology, because I used to love biology. I'm like, oh, isn't this great? Look what evolution produces.

But really, once I learned about the theory of evolution, I realized, well, that pretty much explains it. You have a couple of rolls of the die, you have genetic mutation, phenotypes, selection, repeat. Right?

Once the theory was there in front of me-- well, I'm glad Darwin solved that problem. I don't have to worry about it. And suddenly, biology became less interesting to me, just because what, I guess, ultimately boils--

AUDIENCE: [INAUDIBLE]

JUSTIN CURRY: Say again, Sandra?

AUDIENCE: [INAUDIBLE].

JUSTIN CURRY: Right. I mean, the theory was there. And that kind of, I guess, puts me as this theorist and philosopher, in that I don't care about the details. And people spend their entire lives studying the action of this protein on thing x and y, right? But I don't really care about the detail, just the conceptual process.

But I don't want to bias any of you guys. I mean, everybody has their own take at things. And even Curran and I had differing opinions about what was interesting and what you guys find interesting. So I want to turn things over to him.

CURRAN KELLEHER: So something that was really cool that we noticed about these is the universality of things being on lower levels, and things emerging out of those lower levels into higher levels of things. Like electrical engineering-- like he said, transistors and stuff. You have all these different layers of things that can happen, with software, with neurons, and the brain, and thoughts.

And with biology, DNA and RNA and replication, and whatnot. Reproduction, and then that leads to evolution, which is an emergent property. So I'm going to show you some programs that I wrote, that exhibit some emergent properties.

So this is sort of a physics simulation, but it's discrete. It's not continuous. And physics is modeled as continuous integrals and stuff like that. So here, I'm sort of doing integration, but it's discrete.

So you have this sort of weird error that makes it not exactly like physics, but it's still really cool. The blue and the red things have different charges. And there are coulombic forces acting between them. So I'll just play with it.

Coulombic forces are like plus and minus: opposites attract. And when they're the same, they repel each other. And so we can change all these properties. And the color reflects how charged they are.

So these things, it's really weird. Like, this would never happen in real physics. Like, where's this strange energy coming from?

And I think it's because it's discrete. It's discretized and not continuous, but it's still pretty cool, pretty interesting. Yeah, repulsion. And now it's stable.

What I want you to keep in mind in looking at all these is that in every one of these examples, there are pretty simple rules that govern the behavior of each individual one. And the rules that govern one of these particles, one of these balls, are no different than the rules that govern all the other ones. So these simple rules that are local lead to global phenomena, emergent behavior.

This is what emergence is all about-- things at higher levels emerging from simpler things on lower levels. So for example, crystallization in nature is an example of an emergent property. So here is a set of presets, a set of settings that leads to crystallization. It's tuned to crystallize.

JUSTIN CURRY: And see, that's an interesting thing to point out, is that we have this level of description, which you described. We call this a crystal. We don't say, it's the arrangement in which particle i acts on particle j, and has the following charge. Like, that level of description is too fundamental. But this higher level description of a crystal is much more convenient, right?

CURRAN KELLEHER: Yeah. Yeah, Atif.

AUDIENCE: Even if we [? do say ?] crystal, we may mean exactly the same thing. It's like we're just dodging around exactly what the thing is.

CURRAN KELLEHER: You said that when we say crystal, we're not really saying a thing. We're saying-- we're sort of beating around the bush, right? Saying--

AUDIENCE: Yeah, it's like, OK, so we have these crystals. Their composition is of different particles that usually attract each other to make certain easily discernible shapes.

CURRAN KELLEHER: Sure.

AUDIENCE: That's the definition of crystal. Once you see it as it is, it's like the set of all these different configurations of matter, such that there's probably holes. You just like-- you're just giving it a name [? for some description. ?]

CURRAN KELLEHER: Exactly. You're exactly right. So we say crystal, we mean a very high level thing. And this is what I mean when I say low level and high level. Low level means describing the exact way that the particles interact and different things that they do.

But crystal is a higher level. And so the lower level details could be different. Crystals form out of all kinds of different substances in nature. And this-- we can call it a crystal, because it sort of resembles the crystals in nature. That it has this regular structure.

But yeah, you're right. It's sort of glossing over all the details. It's existing at a higher level. It's a higher level of description.

AUDIENCE: So it's like is this mathematical [INAUDIBLE] category theory. So you can say, OK, we have these different fields of everything. But what happens over here can be mapped to exactly what happens over there. When you're describing things as crystals or whatever, you're just like [INAUDIBLE] speaking category theory language that is just like normal speech.

CURRAN KELLEHER: Justin knows more about category theory. I have no idea about category theory.

JUSTIN CURRY: Oh, I mean-- one, I wanted just to tell you to be careful. But two, yes. You're kind of right. And you're just looking for general features in systems, and then ascribing universality to that.

Exactly. I wanted to point out really quickly that what I wrote up there on the board-- F_ij = k q_i q_j / r^2-- I mean, that's just Coulomb's rule of attraction. [INAUDIBLE] And this is how-- and we have this level of description for the interaction between two particles, but what happens when we have n particles?

Suddenly, the equations become really hard to solve. But you start getting really interesting geometric behavior, which is easier to describe up top. But we're going to get to your idea later, because you're going to see examples where we see a similar concept to the force.

But there are no forces going on. We'll talk about that with traffic flow, things like that. We don't actually have cars ramming into each other, but we still get the same behavior. But we'll save that.
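For reference, the rule on the board, and the n-particle sum it turns into, can be written out explicitly (this is just the standard statement of Coulomb's law, added here for clarity):

```latex
% Pairwise Coulomb force, and the net force on particle i among n particles.
\[
  F_{ij} = k \, \frac{q_i q_j}{r_{ij}^2},
  \qquad
  \vec{F}_i = \sum_{j \neq i} k \, \frac{q_i q_j}{r_{ij}^2} \, \hat{r}_{ij}
\]
% Two particles give a solvable pair; n particles give n(n-1)/2 coupled
% pairs, and for n >= 3 there is no general closed-form solution, which
% is why the geometric, higher-level description becomes the useful one.
```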

CURRAN KELLEHER: This is another set of parameters that leads to this droplet forming. It's pretty cool. And here's the n-body problem being simulated. Oh, there it is. So it's just things orbiting around each other, based on pretty much that equation that he wrote on the board.

JUSTIN CURRY: Yeah, except you could replace charges with masses, since q_i and q_j represent charges [INAUDIBLE].

CURRAN KELLEHER: Something like that, yeah. But that's it.

AUDIENCE: They have some that's unpredictable, right?

CURRAN KELLEHER: Yeah, this is--

AUDIENCE: Actually, it is predictable, but you have to [INAUDIBLE].

JUSTIN CURRY: It's deterministic, not predictable.

CURRAN KELLEHER: It's deterministic and not predictable. It's chaotic, because it becomes nonlinear. Is that correct, Justin?

AUDIENCE: So the only way to know what's going to happen is [INAUDIBLE].

CURRAN KELLEHER: Yeah. And even then, you're approximating. It's not continuous. So you're going to get an approximation of what might happen. Yeah, you can't really solve the n-body problem.
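A minimal illustration of "deterministic but not predictable" (a sketch, not from the class; it swaps the n-body system for the logistic map, a standard chaotic example): the rule is exact, yet two starting values that differ by one part in ten billion soon disagree completely.

```python
# The logistic map at r = 4 is a standard chaotic system: a fixed,
# deterministic rule whose trajectories from nearly identical starting
# points diverge completely. "Deterministic" does not imply "predictable."
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-10  # two starting points differing by 1e-10
for i in range(1, 61):
    a, b = logistic(a), logistic(b)
    if i % 15 == 0:
        print(f"step {i}: a={a:.6f}  b={b:.6f}  |a-b|={abs(a - b):.2e}")
```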

AUDIENCE: So what if you've got your mind, right? So what if-- if it is in different bunch of problems, even in that thing and you're only talking about particles. Those are simple stuff. But when you have minds, you have a bunch of these steps going off together.

CURRAN KELLEHER: When you have what?

AUDIENCE: When you have minds.

CURRAN KELLEHER: Minds?

AUDIENCE: Yeah.

CURRAN KELLEHER: Yeah. So when you consider a mind to be one of these particles in society, interacting with all the other minds around it, this is called agent based modeling. Flocking is when you have--

AUDIENCE: But the problem is, for the particles, you have definite rules. But societies, like the minds themselves, are hard to predict.

CURRAN KELLEHER: Yeah, exactly. So you're saying if we model society as this agent based model, where each agent consists of a mind, it's even more impossible to predict because the mind itself is not really predictable. The mind itself is emergent out of the things that comprise it. So yeah, I mean, that's--

JUSTIN CURRY: It adds an interesting idea of going backwards, starting with behavior and then trying to figure out how the [? world governs ?] it. So it's kind of like, how did Newton figure out his law of attraction? He started from the behavior and tried to deduce, or infer, better yet, what the law between two things is. And [INAUDIBLE] but with cars, and I guess even with people, is we don't know the interaction between two people, but we see the overall [? behave. ?]

CURRAN KELLEHER: So here's another version of that program, where you can get these molecules to form, which is really fascinating, these stringing molecules. So it's another kind of emergent behavior. See, there they go. It's totally a molecule. Check it out.

And the rules are pretty much the same. I don't know exactly what particular rules they are. And here it is in 3D, also.

So if we wait for a minute, these molecules are going to form in 3D. So I mean, emergence is really a cool concept. So here is this molecule in 3D.

AUDIENCE: Could it be that the world is constantly describing itself in different levels of description?

CURRAN KELLEHER: Could it be that the world is describing itself in levels, different levels of description?

AUDIENCE: Yeah, because you do have that, and then you also have something higher that [INAUDIBLE] something like that. But you can also describe it as a bunch of different particles [INAUDIBLE]. You can also describe it as somebody actually looked at that thing itself.

JUSTIN CURRY: Yeah, it was doing the describing.

AUDIENCE: Yeah, it was.

CURRAN KELLEHER: Yeah, I mean, it's what Douglas Hofstadter calls a tangled hierarchy, where it's not really a hierarchy. Because things on the lower levels are related to things on higher levels, and can have influence back and forth. So just like you said, the world is constantly describing itself, interacting with itself between different layers, different levels. And it's just like--

AUDIENCE: Can we say that the higher level also-- can it point us to the lower level, or can it-- it was just lower to [INAUDIBLE].

CURRAN KELLEHER: So you said, can we say that the higher level influences the lower level? Or does it always go up? Well, no, it definitely does not go only upward, because think of software. If I run this program, this next program-- which we'll just watch for a while-- this program itself is controlling the transistors and whatnot that's operating on--

AUDIENCE: But it's not really controlling the transistors.

CURRAN KELLEHER: It's not really controlling it?

AUDIENCE: No, no, no, no. Look, look! You've got the code. You put it into the RAM. There's something that's represented by different electrical activities.

And those electrical activities are computed and they're different. Like, hardware is [INAUDIBLE] much like different hardware they had here. And then it had [INAUDIBLE]. There's no higher thing [INAUDIBLE].

CURRAN KELLEHER: Yeah, it's all one. It's all one thing. It's just, we think about it in terms of high level and low level. So you're right. I mean, the hardware is controlling itself.

But it's based on what we put into it, and what we say-- tell it to do at a higher level. So I mean, the software that I wrote, which came from my mind, it's at a higher level, in a sense, than the hardware. Like this projector that's actually projecting pixels.

So that's what I mean when I say the higher level things influence, affect, control, even, in this case, the lower level things. It goes both ways.

And if a transistor were to crap out right now, it would control the higher level things because it would just stop working.

JUSTIN CURRY: It's constant causality, I guess.

CURRAN KELLEHER: Causality goes--

AUDIENCE: That doesn't make any sense. OK, how can a software program control hardware?

CURRAN KELLEHER: So this software program is running on hardware. And it's controlling this projector. It's controlling the hardware, physical--

AUDIENCE: OK, is it telling you what to do or is the hardware telling itself what to do?

CURRAN KELLEHER: The hardware is telling itself what to do, but only after I've told it what to do.

JUSTIN CURRY: Right, but even if we were to remove Curran, and we just had the software executing by itself, you're right in the sense that it's always just the hardware. It's just the hardware, right?

The software doesn't really exist in this ethereal realm. Right? Software is still just what's going on on the level of transistors and stuff.

It's just that we as humans use these levels of description. And we talk about the software as its own entity, even though it's fundamentally still just caused by the interactions of electrons and transistors, and such.

CURRAN KELLEHER: So here's another example with some traffic flow. So traffic flow is a perfect example of emergence, and what it means. So like these little bars, it's just like one lane of traffic that's just going. And it repeats itself.

Each one of these cars is like an agent. It's a lower level of description than the waves that we'll see right here. So say there's a red light, right. So the traffic gets backed up a little bit, and then the light turns green again.

And then they go. And so you see this thing on a higher level. It's this wave, which is propagating back. So we say the wave is a thing, and we can describe it as an entity. But it's at a higher level than the cars themselves, even though it's comprised of the cars, it affects the cars, and the cars affect it. It's the same thing with software and hardware.
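A sketch of the same phenomenon in code, assuming a Nagel-Schreckenberg-style cellular automaton rather than whatever rules the on-screen program actually uses: each car looks only at the gap ahead of it, yet jams appear and drift backward as waves.

```python
# One-lane Nagel-Schreckenberg-style traffic on a ring road. Each car
# sees only the gap to the car ahead, yet backward-moving jam waves
# emerge: a higher-level entity made of, and acting on, the cars.
import random

ROAD, VMAX, P_SLOW = 100, 5, 0.3

def step(cars, speed):
    cars = sorted(cars)
    new_pos, new_speed = [], {}
    for i, x in enumerate(cars):
        gap = (cars[(i + 1) % len(cars)] - x - 1) % ROAD  # empty cells ahead
        v = min(speed[x] + 1, VMAX, gap)                  # accelerate, don't hit
        if v > 0 and random.random() < P_SLOW:            # random slowdown
            v -= 1
        nx = (x + v) % ROAD
        new_pos.append(nx)
        new_speed[nx] = v
    return new_pos, new_speed

cars = sorted(random.sample(range(ROAD), 30))  # 30 cars on a 100-cell ring
speed = {c: 0 for c in cars}
for _ in range(40):
    cars, speed = step(cars, speed)
    row = ["."] * ROAD
    for c in cars:
        row[c] = "o"
    print("".join(row))  # clusters of "o" are jams drifting backward
```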

JUSTIN CURRY: I still don't get it.

AUDIENCE: You still don't get it?

JUSTIN CURRY: I guess it's really just an issue of reductionism versus holism, right?

CURRAN KELLEHER: Yeah, which is exactly-- So you feel like everything can be understood as reductionist thinking, or sometimes we have to--

AUDIENCE: Well, at times, you just have to think, OK, how does, like, me, for instance, how do I come from interactions of particles? You have to do synthesis and [INAUDIBLE].

JUSTIN CURRY: But that doesn't-- does that explain you? And Hofstadter asks another question in the same chapter, which is: the guy who runs the 100 meters in 9.3 seconds, where is the 9.3 stored? It's not, right?

AUDIENCE: It's encoded in his brain. And what he calls those brain signals is like 9.3. That's what it says.

JUSTIN CURRY: Right, no, but 9.3-- the fact that he ran the 100 meters in 9.3 seconds was the result of training, him getting good traction at the start. It was really an emergent thing. It's an epiphenomenon, as Hofstadter calls it.

CURRAN KELLEHER: Here's a good question to ask yourself regarding things that are emergent. Do you exist? Because, what are you? Have you ever asked yourself, like, what am I?

It's an area that's easily confusing, because you're an emergent property of the things that you're made of.

AUDIENCE: The problem is not to say, do I exist. The answer to that question is almost the same as the answer to the question of, what am I? If I describe myself as-- there's like almost two different parts of myself.

Now, [INAUDIBLE] myself as described by prim processes, maybe like some part of my brain has a description of what I've done, I've always done, and also my beliefs about myself. And also, another part of the brain has almost a way of going back, and also maybe having this self conscious awareness of like seeing the world, leaving my description of myself.

CURRAN KELLEHER: Yeah.

AUDIENCE: So which one am I? Well, probably I am the thing that the brain describes. I am the self.

I am not the thing that's aware of the self. Because the thing that's aware of the self only exists to be aware of that self. So I cannot be the awareness. I must be the one that's being aware, if that makes sense.

CURRAN KELLEHER: So you're basically dividing yourself into the observer and the observed. Right? The thing that--

AUDIENCE: I'm not the observer. I'm the one that's being observed.

CURRAN KELLEHER: So you are the one that's being observed? So what is it that's observing that? It's not you?

AUDIENCE: What it is is like this added structure of self awareness.

CURRAN KELLEHER: So that's a-- these are the issues that Buddhism grapples with. And--

AUDIENCE: How does [INAUDIBLE]?

CURRAN KELLEHER: Well, I don't know. I'm not enlightened. I don't know the answers. So I'm still as confused as you are.

AUDIENCE: I really think one of the main problems is that you think too much. If humans just stopped thinking, everything will just be fine.

CURRAN KELLEHER: So Atif just said if humans just stop thinking, everything will be fine. And that's what the zen masters say, also.

JUSTIN CURRY: Well, it's also saying we should just be [? featuring it. ?] Which also just says, let's give everybody mandatory frontal lobotomies, so no one can think any abstract thoughts or get worried about anything. And we'll just be reduced to basic hunting and surviving, if we can and--

AUDIENCE: It's saying what Godel says. Once you start talking to yourself, you're speaking nonsense basically.

JUSTIN CURRY: If you start talking to yourself or about yourself?

AUDIENCE: Talking about yourself. It's like, OK, here's me, if that makes any sense, because who's speaking? I'm talking about myself.

JUSTIN CURRY: Yes.

AUDIENCE: My self-reflective processes, the ones where I'm talking about myself-- those are some of the most abstract processes of my mind. That means they don't know the details; therefore they should not really be talking about myself.

JUSTIN CURRY: So you're essentially arguing that self-reference isn't a well formed thought?

AUDIENCE: No, no.

JUSTIN CURRY: No?

AUDIENCE: Yeah.

JUSTIN CURRY: OK.

CURRAN KELLEHER: Yeah, I mean that's true. When you talk about yourself, the thing that's talking is not well-informed about what it's actually talking about. Right? So--

AUDIENCE: [? Respect ?] the complexity of the brain because the brain is so complex that it can't even talk about itself without talking about the nonsense. It's just almost like looking around, like on a weird [INAUDIBLE], on the weird patterns. Like, oh, this may be, and this may be, this may be. It's like recognizing [INAUDIBLE].

CURRAN KELLEHER: Yeah, so essentially you're saying talking about yourself is just rambling nonsense?

AUDIENCE: It's sort of like intuition, like a mathematical intuition. You see some structure here, some structure there. You say, oh, yeah. That [? may ?] [? replace ?] some mathematicals or something. But it's just a hypothesis, and it's not really-- it doesn't have to be true.

CURRAN KELLEHER: Yeah, so you're getting at some really deep questions that we all have to face, if we choose to face them. So I mean, I encourage you to keep--

JUSTIN CURRY: Or as Sandra says, stop thinking about them.

CURRAN KELLEHER: Yeah, I mean, that's totally it. Because if we keep thinking about these things--

AUDIENCE: We'll get stuck.

CURRAN KELLEHER: --either we'll get stuck and go insane, and not be able to function, or we'll become enlightened maybe. I mean, I don't know what that is, even.

JUSTIN CURRY: Yeah, I mean, be bold, but not too bold.

CURRAN KELLEHER: And this gets back to what Justin was saying earlier. Introspecting and asking these questions is not favorable to our survival. So in a sense, we've been programmed to just ignore them, and just live our lives.

JUSTIN CURRY: But we're hoping to evolve past that.

CURRAN KELLEHER: But, yeah, we're hoping to transcend that.

AUDIENCE: Perhaps when we build a conscious machine or something like that, we shouldn't give it the ability to analyze this stuff. But maybe the worst thing any person can ever do is to actually know how they think, who they actually really are. I think HP Lovecraft once said the same thing about that. The best thing that we, I think, ever have is that at any given time, we don't ever have a complete model of the world. Because if we ever did, that's so frightening.

JUSTIN CURRY: I mean-- but exactly. On the other hand, though, we're inherently curious and we inherently want to understand bits of things. And I don't know if we can ever have a complete model of the world, as it is. But we can sure as heck try. And the bottom line is you should just do whatever makes you feel good. If understanding part of the brain does that, then go for it.

AUDIENCE: Well, you can never fully understand the brain.

JUSTIN CURRY: Don't let that stop you from trying.

AUDIENCE: Come on, look. It's like one of those existentialists. You've got a man [? holding ?] this stone up a mountain, or something, who wants to get the stone to the top; it drops, and then he goes back again. You know, the goal is the point [INAUDIBLE] doing the process.

JUSTIN CURRY: So, I mean, you're essentially arguing that understanding the brain is a pointless goal?

AUDIENCE: Why do something that you can never achieve?

JUSTIN CURRY: Yeah.

AUDIENCE: So even if you achieve it, what's the next step?

Oh, you'll make yourself better.

JUSTIN CURRY: OK, so yeah. I don't know. I'm going with Sandra's idea.

CURRAN KELLEHER: What was Sandra's idea?

AUDIENCE: Like, there's puzzles, right? If you finish the puzzle, you're done. Right? So what else is next?

AUDIENCE: You make yourself smarter so you can be harder to understand your own self.

AUDIENCE: It's like essentially skirting the next [INAUDIBLE].

CURRAN KELLEHER: So what you're getting at, actually-- because what you're saying is sort of like this. Once you understand a certain amount of yourself, you've added another part of yourself which you don't understand.

AUDIENCE: You mean the part that actually understands itself is the part that you actually don't understand?

CURRAN KELLEHER: Yeah. So once you've come to terms with yourself, or so you think, that part of you, which has just come to terms with itself, you don't understand. That is analogous to adding G to number theory, because it can be Godel-ized again and be proved incomplete yet again. And then you can say, oh, well, it's not incomplete now.

So I can just add that in. This is a big part of some chapter in Godel, Escher, Bach. I forget which one. But it's essentially, you can never understand yourself.

I mean, it's a dangerous use of Godel's Incompleteness Theorem. And it's probably not well founded. But it's something to think about.

AUDIENCE: [INAUDIBLE] something like the complexity. I think it was some guy that did some calculations. And it came out that, once a system becomes complicated enough, it reaches a point where it can't understand itself.

JUSTIN CURRY: Eventually-- if you could pull that reference, that would be good to see. But, yeah, [INAUDIBLE].

AUDIENCE: I think somebody already told me. I don't know who that is.

CURRAN KELLEHER: Yeah, I mean once a system gets to a certain point of complexity--

AUDIENCE: Maybe if somebody else can't understand it happening and [? understand itself. ?]

CURRAN KELLEHER: Yeah. I mean, we can't understand ourselves in the sense of everything that's going on, because we can't introspect on our own neurons. Right? Our neurons, in the case of agent based models, like we're talking about-- like, check this one out. This is pretty cool.

The balls on top are being pulled upwards, and the ones here are being pulled downwards. So there's a sort of stability. And I think there are the same number of balls. I'm not sure, but somehow-- I mean, it's so they equal out.

And this is just agent based modeling, applying these force equations but only between the balls that are connected to each other. So I mean, it's pretty close to physics and it's pretty cool. What was I going to say about agents?

OK, can someone remind me what we were talking about?

JUSTIN CURRY: I wanted to comment about this idea of modeling the universe. If you're really interested, I would recommend a book called Programming the Universe by Seth Lloyd. And essentially, it says, well, the universe, at the bottom, uses quantum mechanics.

We can't model quantum mechanics effectively on a classical computer. In fact, we need a quantum computer. But the universe itself is a quantum computer. And the only thing which could model the universe is the universe itself.

So we can't build-- I mean, we'd have to build another universe to model this one. But why do that when we already have the universe, which is computing itself? So I mean, it's a very interesting book, and Seth Lloyd's a very prominent physicist. And he's a mechanical engineer here at MIT, who works with the Santa Fe Institute of Complexity Science.

It's a very good book. But I think it gets kind of at the heart of the problem-- that really the tools that we need to describe the universe are really the bits of the universe itself.

AUDIENCE: But don't you have some of those with properties of infinities? You know, one like-- the whole thing [INAUDIBLE]. Even in that thing, there's another thing that's also infinite. So you can have a quantum mechanical computer inside of the universe that's also sort of emulating the universe.

JUSTIN CURRY: Yeah, but it can only emulate, pretty much, a part of the universe of equal size. Right? A quantum computer with a given set of quantum bits can only essentially model something that big, right? But the information content of the universe, really, is equal to the amount of information the universe can compute with. So you need a computer as big as the universe to model the universe.

CURRAN KELLEHER: Down to every detail.

JUSTIN CURRY: Down to every detail.

CURRAN KELLEHER: So all we can do is approximate on higher levels.

JUSTIN CURRY: And [? do those ?] [? chunking ?] descriptions, abstract [INAUDIBLE] details. Try to pull out salient features.

CURRAN KELLEHER: So I remember what I was going to say before. So this is an agent based model, where each one of these is considered an agent. And this emergent behavior happens where the structure comes together and just moves around.

I remember I was going to say, if you think of your brain as an agent based complex system, in a sense-- which is really what it is. Neurons are just interacting with each other and outside stimulus. So roughly speaking, it could be considered this agent based system.

And the neurons are analogous to the balls here. And the formations and the actions of what's going on at a higher level, that are emergent, are analogous to your thoughts. So when your thoughts try to introspect on-- well, I don't know. I can't really go much further.

AUDIENCE: Sort of like a [INAUDIBLE] you can't get back-- you can't get to the [? other island? ?]

CURRAN KELLEHER: So I mean, you can't understand each of your neurons because you-- I don't know. This structure can't introspect on what it's made out of, because then it wouldn't be itself anymore.

AUDIENCE: Oh, quantum mechanics, again.

CURRAN KELLEHER: Yeah, quantum mechanics again. I don't know. So one more example, which is pretty cool, which exhibits optimization. I can click and add balls, and they connect to each other.

And each one of them has this-- each pair, every pair, has this sort of optimal distance away from one another. And it sort of evolves to this optimal distance. So nowhere in my program do I say, if you have three of these, form a triangle. It's an emergent property due to these various local forces trying to achieve local optimal solutions.

And this happens in chemistry a lot. Particles and molecules are always trying to find the lowest energy state. And that's how molecules exist. But it's not as simple.

I mean, so here's another one. With four, this is the optimal configuration. And it just evolves into it. That's how physics works. And with five, it makes this star shape.

But this program is not as simple as I said it is. There are some strange rules that say when and when not to make edges between them. Edges are connections between the balls. So I can do something like this.

I can just drag and make a ton of things. I don't remember the rules exactly, but I tried to make it so that it was only local. And things that are far apart don't interact with one another. They only go through intermediaries.
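Here is a rough sketch of the "optimal distance" mechanism as described (the rest length, step size, and relaxation scheme are assumptions, not the actual program's rules). Nothing in it mentions triangles; the shape emerges from each pair adjusting locally.

```python
# Each connected pair of balls nudges itself toward a preferred
# separation. No rule mentions triangles, yet with three mutually
# connected balls, an equilateral triangle is what emerges.
import math, random

REST, STEP = 1.0, 0.05  # preferred pair distance, relaxation rate

def relax(points, edges, iterations=500):
    for _ in range(iterations):
        for i, j in edges:
            (x1, y1), (x2, y2) = points[i], points[j]
            dx, dy = x2 - x1, y2 - y1
            d = math.hypot(dx, dy) or 1e-9
            err = (d - REST) / d  # > 0 means too far apart, < 0 too close
            points[i] = (x1 + STEP * err * dx, y1 + STEP * err * dy)
            points[j] = (x2 - STEP * err * dx, y2 - STEP * err * dy)
    return points

pts = [(random.random(), random.random()) for _ in range(3)]
pts = relax(pts, [(0, 1), (1, 2), (0, 2)])
for i, j in [(0, 1), (1, 2), (0, 2)]:
    print(math.dist(pts[i], pts[j]))  # each distance converges toward REST
```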

JUSTIN CURRY: Unless it's in resonance, like it's forming [? breaking ?] [? laws. ?]

CURRAN KELLEHER: Yeah, it's like resonant, sort of. Resonance. I don't remember exactly what resonance is, but it's--

JUSTIN CURRY: Resonance is just when you're oscillating between kind of two superimposed-- two possible states. And, I mean, really, that's what's going on here, is that you have these different possible stable arrangements. And there's this equilibrium that keeps shifting back and forth, but doing--

CURRAN KELLEHER: Right. So there are these two stable arrangements that bleed into one another. So it just oscillates between them. I mean, it's sort of roughly molecules in the world, and stuff.

If I can make it all the way around, I can make a cell-- I've done this before-- where you have this sort of membrane which forms, which is exactly like what happens in real biology.

JUSTIN CURRY: Oh, yes.

CURRAN KELLEHER: There it is, more or less. So in biology, you have these hydrophilic and hydrophobic kinds of lipids. And the hydrophobic ones go on the inside, and they all pair up with one another. And the hydrophilic-- wait-- hydrophilic ones go on the outside.

Hydrophilic means they can interact with water; they're soluble in water. The hydrophobic ones-- phobic as in fear-- have a fear of water. So they try to go toward one another.

And so you get this structure. I mean, it's not exactly analogous to this, but it's sort of close. And it just assumes this optimal configuration in our cells. If it weren't for this emergent-- this is a fundamentally low level, lower than the level of cells. This is membranes.

This-- without this emergent property of lipids, we wouldn't exist. It's a fundamental thing that holds us together all the time-- our cells. I mean, us as humans and all of biology is just built up of these layers and layers of emergent properties. So I mean, layers-- so these are cells and-- no, no, these are lipids. And one level above that is these stable, spherical membranes.

And then a level above that is the interaction of cells with one another to form organs. And then a level above that is the interactions of organs with one another in the bloodstream. And a level above that is the brain interacting with that in a positive way. So we just built up all these crazy layers everywhere.

Everywhere in biology, you'll find this sort of thing. So it's emergence. It's really cool to grok emergence.

So any thoughts from anybody? Questions? We're just about out of time.

AUDIENCE: Why is it that almost every single little thing can be put down to the smallest, basic, physical processes? The only problem, I think, we have is to develop a theory of organization, like invariance of properties. Like, say, even though almost like 1,000 of my brain cells have been destroyed in maybe the past day or something. I've been drinking or something.

I'm still me. What about my brain organization still says, OK, this is still Atif? He's still the same thing.

JUSTIN CURRY: I mean, robustness of systems, right?

CURRAN KELLEHER: Yeah, that's totally what you're hinting at-- robustness. Robustness means you can change parts of it, like for example with a wireless mesh network or something. It's robust because if you knock out a bunch of nodes here and there, it still exists as a network. And so what you're saying is, in your brain, even if you get completely wasted, you still know that you're yourself. So it's robust to these changes.

AUDIENCE: [INAUDIBLE]. Suppose, OK, some brain cells are destroyed. Are the new ones-- do I [INAUDIBLE] it up to make sure my self-concept is to preserve that. Between those brain cells being destroyed and neurons being made, created, where am I?

CURRAN KELLEHER: Well, OK, so you're saying-- say, some brain cells get destroyed. And your concept of yourself is temporarily dissolved. And then new pathways or whatnot are formed, that get your sense of self back.

AUDIENCE: So at some times, I don't even exist anymore.

CURRAN KELLEHER: This is, I mean, hinting at a very important point, which is: what are you? You are an emergent property. Your sense of self is not the actual neurons; it's this thing which exists above, on a higher level.

AUDIENCE: Yeah, but it has to be supported by the neurons in there, you know? You have to create that structure of it.

CURRAN KELLEHER: Yeah. It's a supporting structure. But, yeah, you're saying-- so if you temporarily dissolve, then you just don't exist. I mean, I don't think there's anything more to it. And you just reappear. You just disappear, reappear, and things do that all the time.

AUDIENCE: That could be the case if it was data. I could be knocked out, and then I could be rebuilt and, OK, I'll be the same. But is that [INAUDIBLE]?

Do I have to have continuity to be really there? Even like-- you can't even have much continuity because [INAUDIBLE] neurons like [INAUDIBLE]. Given the fact that a second continuous thought, maybe that's even an illusion that signals up the trouble.

JUSTIN CURRY: Right, so fundamentally you have-- the cells in your body are replaced about every seven years. So you, seven years ago, were not made of the same cells you're made of now, right? So there's just this idea that what constitutes you has to be stored at a higher level of description. That you can take entire chunks out at a time and still have things OK.

And what makes us different from a computer is that we can keep running while it happens-- which is like, we would like to run a computer and be able to, I don't know, swap out a few transistors and maybe even a stick of memory, and still have everything running smoothly up top. Because that's what happens all the time with us.

AUDIENCE: But given the fact that you can also knock out your memory, you're not going to be you.

JUSTIN CURRY: Yeah, so then there do seem to be certain points which are really critical. There are certain key nodes, that if you knock those out-- and this is really almost a problem in graph theory. And this is really something that we've used in understanding terrorist networks: sure, you can knock out a few lower guys, but you're not going to destroy the whole network. But if you knock out that guy, the whole thing is going to come to the ground.

It's just like, if you get a pole to the head or something, suddenly you could be done. But if someone cracks you in the leg, you're going to be OK. It's almost like a graph problem, fundamentally.
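As a sketch of that graph-theory framing (illustrative only, not from the class materials): the network below survives losing a peripheral node but shatters when its one cut vertex, the "key node," is removed.

```python
# Robustness as a graph problem: check whether the network stays
# connected after removing a given set of nodes.
def connected(adj, removed=frozenset()):
    nodes = [n for n in adj if n not in removed]
    if not nodes:
        return True
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        for m in adj[stack.pop()]:
            if m not in removed and m not in seen:
                seen.add(m)
                stack.append(m)
    return len(seen) == len(nodes)

# Hub-and-spoke: node 0 is the critical hub.
adj = {0: {1, 2, 3}, 1: {0, 4}, 2: {0}, 3: {0}, 4: {1}}
print(connected(adj))                # True
print(connected(adj, removed={4}))   # True: a "lower guy," network survives
print(connected(adj, removed={0}))   # False: the key node, network falls apart
```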

CURRAN KELLEHER: So a lot of the things that we're talking about are problems of complex systems. And you hinted at this before; you said, well, maybe what we should do is try to come up with a general understanding of these kinds of systems and theories about them. And this is complex systems. So this is what the New England Complex Systems Institute does, or the Santa Fe Institute of California.

JUSTIN CURRY: New Mexico.

CURRAN KELLEHER: Oh, New Mexico.

JUSTIN CURRY: Sorry. Well, yeah, there's a lot of interesting stuff to go on. And whatever interests you, I hope we can plug that interest.

And please talk with us after class, and we can point you in the right direction. And we completely open-- we hope that's open-- ah. You're attached to me.

CURRAN KELLEHER: I am.

JUSTIN CURRY: No coulombic forces here. But yes, please feel free to come talk to us after class and we can try to suit you up with your particular interests. And even if you don't know what you're interested in-- or you have an idea of, well, that was really boring, but this was OK-- we can maybe cater to your interests. But this was really kind of the last lecture.

And I wanted to thank all of you guys for coming here, for bringing your questions, for bringing your open minds. And I think we've had a really good semester so far. And I hope you guys enjoy Waking Life in the next lecture. But other than that, any last questions before I say goodbye and bon voyage? Atif.

AUDIENCE: I've got a paradox that I've been struggling with the whole day. So you have zero to one, right? And you have zero to two, right? Both of them have an infinite number of elements, but it can be proven that there are as many points from 0 to 1 as from 0 to 2. So why is two greater than one if there's as much stuff in both?

JUSTIN CURRY: Because we don't use, as a well-ordering operation, the number of points between this and that. We just start off with: here are the integers, here's a well-ordering on them. And that's how we go.

AUDIENCE: So it's just an abstraction. It doesn't matter how much [INAUDIBLE].

JUSTIN CURRY: Right, exactly. I mean, there's an uncountable amount of stuff in the unit interval. And there's also an uncountable-- the same uncountable amount of stuff in the entire real line. These are the paradoxes of infinities, but they're not really paradoxes. It's just that we're not used to thinking about infinities.

AUDIENCE: Maybe it's just for convenience.

JUSTIN CURRY: So maybe it's just for convenience? Well, I mean, partly. But the other thing is mathematicians are pretty retentive creatures.

And we make these not just for convention, but we hope they're rigorously defined. And you can actually create the real numbers just out of rational numbers by using this process of completion. And there, I mean, you have to go do an undergraduate level course in analysis, really.

AUDIENCE: [INAUDIBLE] do a square on one of them [INAUDIBLE], square on another. And so this has a bigger [INAUDIBLE] than the other one.

JUSTIN CURRY: Yeah, but you're still kind of getting at the idea. Because just as you said, there are just as many points in R2, right? The two-dimensional plane, as there are on the line. You could have space-filling curves that visit everything, which can be put into one-to-one correspondence with just the real line. It's the same cardinality.

AUDIENCE: So then why is two greater than one?

JUSTIN CURRY: Why is two greater than one? It's because we started out that way, basically.

AUDIENCE: But then we're getting something that's conflicting.

JUSTIN CURRY: No, it's not conflicting. It's just-- you can still-- if you give me a number and another number, I can tell you which one is bigger than the other. That's not a problem.

AUDIENCE: How do you define bigger? The amount of stuff that's in it, or like the--

JUSTIN CURRY: What does that mean? What does the amount of stuff in it mean?

AUDIENCE: OK, like there's as many points between zero and one and zero and two.

JUSTIN CURRY: All right, well, see, that's not a-- yes. I mean, that's a true statement, but it's also true that there are as many odd numbers as there are natural numbers. There are as many even numbers as there are integers.

CURRAN KELLEHER: But it's not a definite number. It's an infinity.

JUSTIN CURRY: Right. Just as you've said, it can be put into one-to-one correspondence with a subset of itself.

CURRAN KELLEHER: And like that, [? a picture. ?]

AUDIENCE: But then, I just don't know how you can get from the things in [INAUDIBLE] then you saying, OK, this is two greater than that one.

JUSTIN CURRY: Well, yeah. You just don't use that, counting the number of points, as your metric, because it's not a well defined idea. I mean, it's well defined, but it doesn't give you any info. What it would tell you is that the reals are bigger than the integers. But that's not what you want-- you want to know that two is bigger than one.
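Written out, the correspondences under discussion are explicit bijections, just the standard Cantor-style bookkeeping:

```latex
% [0,1] and [0,2] have the same cardinality: f is a bijection.
\[
  f \colon [0,1] \to [0,2], \qquad f(x) = 2x
\]
% A set can match up with a proper subset of itself: impossible for
% finite sets, routine for infinite ones.
\[
  g \colon \mathbb{N} \to \{1, 3, 5, \dots\}, \qquad g(n) = 2n + 1
\]
```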

AUDIENCE: So you just want this number one to pull a metric that's almost the same as the integers on it?

JUSTIN CURRY: I'm not sure, but if you want to know more about metrics, and things, we should talk about it after class. Just real quickly, can I field any other questions that people have burning deep down inside of them? All right, well then-- yes. Let's call it a wrap and then feel free to stick around and hang out after class.

AUDIENCE: If [INAUDIBLE] feel like explaining the self as almost this data thing, how are you going to explain the observer of that data?

CURRAN KELLEHER: Yeah, it's a tangled hierarchy.

AUDIENCE: And even then, you have the observer, where the observer just observed being also [INAUDIBLE].

CURRAN KELLEHER: They're not different from each other.

AUDIENCE: They're different or they're the same?

CURRAN KELLEHER: They're the same. They're not different.

AUDIENCE: So then, OK, you can say [? the dead ?] is observing itself, was recording things about itself.

JUSTIN CURRY: Because, I mean, even in quantum mechanics, we need an observer. The observer can really just be a measurement that happens by the interaction of two particles. So particles can be observers themselves.

CURRAN KELLEHER: [INAUDIBLE] mechanics.

JUSTIN CURRY: Did I say that?

CURRAN KELLEHER: No.

JUSTIN CURRY: Quantum.

CURRAN KELLEHER: I was just thinking of quantum mechanics.