Robustness and Bacterial Chemotaxis

Description: In this lecture, Prof. Jeff Gore continues his discussion of bacterial chemotaxis, or how bacteria find food. The principle is a biased random walk of runs and tumbles, and is shown to display perfect adaptation.

Instructor: Prof. Jeff Gore

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

JEFF GORE: Today, what we want to do is focus more explicitly on bacterial chemotaxis. Of course, the discussion that we had on Thursday about "Life at Low Reynolds Number" is certainly very relevant to this question of how bacteria are able to find food, and what constraints they face in trying to solve this problem. Today, we're going to discuss in more depth this idea of the biased random walk of runs and tumbles that we talked about on Thursday, which allows bacteria to swim towards attractants and away from repellents. But in particular, there's something that is rather subtle about the particular biased random walk that bacteria implement.

Now, the way that you might kind of naively imagine that this thing would work is that you would just have this tumbling frequency be a function of the concentration of the attractant, for example. And that indeed would allow you to swim or to execute a biased random walk towards, say, food sources, towards attractants, but it would not be effective over a very wide range of concentrations. However, if you experimentally ask, how well can bacteria swim towards attractants, it turns out they can respond over five orders of magnitude of concentration of these attractants, which is really quite incredible, if you think about the engineering challenge that these little one-micron cells are able to overcome.

And the basic way that they do this is they implement what's essentially equivalent to integral feedback in the context of engineering, where it turns out that the steady-state tumbling frequency, so the frequency at which they execute one of these tumbles that randomizes their motion, displays what's known as perfect adaptation. If you have a constant concentration of an attractant, the steady-state tumbling frequency is not a function of that concentration. Of course, it responds to changes in concentration, but somehow, E. coli and many other microorganisms are able to implement this very clever thing where the steady-state frequency of tumbling, of changing direction, is somehow not a function of the overall attractant concentration.

Now, that's already an interesting phenomenon, I think. And in some ways, you can think of it as a kind of robustness, because there is some way in which this tumbling frequency is robust against constant changes in the level of attractants. And it's that aspect, I think, that makes this example both rather subtle and quite confusing, because the phenomenon of perfect adaptation already has some aspects of robustness, but it's this phenomenon of perfect adaptation that is robust against changes in concentrations of proteins, for example, the concentration of the protein CheR that we're going to talk about.

So I think that this is in some ways maybe the prettiest example that we have of this principle of robustness, but I think it's also in some ways the most tricky to wrap your head around, because it's sort of robustness of a robustness in some ways.

All right. So our goal for today is going to be to make sure that we understand the challenge that E. coli are facing and then to try to understand the genetic circuit that they use in order to overcome this challenge. So what I'm going to do for the next hour and 15 minutes is we're going to leave this network up on the board so that any time that you're confused about what is R, B, Z, Y, so forth, you can kind of look up here and remind yourself what this thing is. But hopefully, the reading from last night will help you in following what's going on. There are a fair number of letters, I will admit it.

So first, I just want to make sure that we're all kind of remembering the basic phenomenon that we're trying to study, which is this idea of consecutive runs and tumbles. So this random walk is really composed of what you might call, or what we do call, runs, where the bacterium goes sort of straight, and then tumbles, in which it randomizes its motion. So this is runs and tumbles, where bacteria go semi-straight for of order a second-- order one second-- and this is the run. Then, they have this tumbling that might last about a tenth of a second, so a 0.1-second tumble, over which the motion is sort of random in the orientation. And then, they kind of go in a new direction and then tumble again and then a new direction and so forth. So it's runs followed by tumbles.
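The run-and-tumble picture above can be sketched in a few lines of code. This is an illustrative toy, not the lecture's model: the function name, the form of the bias, and all parameters are my own choices. Runs go at 30 microns per second with a mean duration of about a second, tumbles pick a fresh random direction, and the bias is put in by hand by lengthening runs that point up the gradient (here, the +x direction):

```python
import math, random

# Toy biased random walk of runs and tumbles (illustration only; the
# function name, bias form, and parameters are assumptions, not the
# lecture's model). Speed 30 um/s, mean run ~1 s, tumbles randomize
# direction; runs pointing up the gradient (+x) last longer on average.
def biased_walk(n_runs=5000, speed=30.0, bias=0.5, seed=1):
    rng = random.Random(seed)
    x = 0.0
    for _ in range(n_runs):
        theta = rng.uniform(0.0, 2.0 * math.pi)   # tumble: fresh direction
        mean_run = 1.0 + bias * max(0.0, math.cos(theta))
        t = rng.expovariate(1.0 / mean_run)       # run duration, seconds
        x += speed * t * math.cos(theta)          # progress along gradient
    return x

print(f"drift with bias:    {biased_walk(bias=0.5):9.0f} um")
print(f"drift without bias: {biased_walk(bias=0.0):9.0f} um")
```

Even a modest bias in run length produces a steady drift up the gradient, while the unbiased walk only diffuses; that is the whole trick of chemotaxis at low Reynolds number.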

Now, the thing that changes depending on whether the cells are sensing that they're moving up or down, say, an attractant gradient is the frequency of those tumbles, i.e. how long are the runs? I said that they're around one second, but this will vary depending upon whether bacteria sense that things are getting better or things are getting worse. Can somebody remind us how the bacteria actually execute one of these tumble motions? Yes?

AUDIENCE: They have this other motors-- they have mini-motors in their flagella and they usually all spin one way, which I think is counterclockwise.

JEFF GORE: That's right. I always have trouble-- so the runs are indeed when these things are rotating counterclockwise.

AUDIENCE: Yeah. And one of them can decide to switch and start turning clockwise. And what it does, it just sort of-- I don't know. I just imagine it throws the whole motor off.

JEFF GORE: Exactly. And the flagella in the context of-- so we have our E. coli. The velocity is of order, say, 30 microns per second. Now, they have these flagella that are actually distributed-- the motors are actually distributed across the entire cell. So it's not just on the back. But then, these individual filaments kind of come together towards the back and then they have this corkscrew shape.

When these things are all rotating in the counterclockwise direction, that corresponds to a run, when there's directed motion. But then if one of them goes clockwise, the bundle falls apart. And indeed, there are some very nice movies online where you can see that when they're going clockwise, you see that the flagella are kind of doing crazy things and that causes this thing to kind of tumble in a random orientation. So it randomizes its direction.

Now, one thing that we did not talk about in the context of this low Reynolds number motion was the question of how hard you would have to pull in order to get a cell to go 30 microns per second. Now, remember, we're in the low Reynolds number regime, where the force you have to apply is in general proportional to the velocity. And this thing is very much not a sphere, but if it were a sphere, the proportionality constant is given by the Stokes drag, F = 6 pi eta a v, where a here is again the radius.

Now, regardless of the precise shape, you'll always get in the low Reynolds regime something that looks vaguely like this, where this is going to be the longest linear dimension, and this here depends on the precise geometry of the object. And so it's useful to try to get a sense of the order of magnitude. How large are these forces that we're talking about? In particular, how hard would you have to pull a cell in order to get it to go 30 microns per second?

Now, of course then, there's a question. What's roughly the scale that we should be thinking about? Now, a newton is the scale for macroscopic objects. So you'd say, OK, probably not going to be that large. Then, on the other end of the scale, we can think about the forces that can be applied or exerted by individual molecular motors. Has anybody studied this at all?

AUDIENCE: [INAUDIBLE]

JEFF GORE: Right, order of piconewtons. So force for a molecular motor is going to be of order piconewtons. Now, there's a simple way to kind of get at why this might actually be, because what is it that powers many of these molecular motors?

AUDIENCE: [INAUDIBLE]

JEFF GORE: Yeah, so it's often-- you're thinking of maybe the motors that operate in a membrane. And then, there might be some difference in a gradient, so a difference in concentration across that membrane and that can actually power, for example, rotation. We're going to talk a little bit more about this in the context of the flagella motor, because this is one of only a few known rotary motors. I'm just going to write this down so that I don't forget to say something about it. But in this case--

AUDIENCE: ATP?

JEFF GORE: ATP, right? So in many cases, what happens is that you have, for example, kinesin or myosin. These are motors that walk along some track and in many cases, each step corresponds to a single ATP being hydrolyzed. And so given that, there's some maximal force that you can imagine such a motor applying. So for example-- and what might it depend upon?

AUDIENCE: The energy of ATP hydrolysis [INAUDIBLE]

JEFF GORE: Ah, very good. So we have delta G for say, ATP. And there's going to be a length scale, right? And what's going to be the relevant length scale in the case of, for example, a motor like kinesin?

AUDIENCE: The flagella, I guess?

JEFF GORE: Well, kinesin is not walking along the flagella; in this case, kinesin is walking along microtubules, for example. So you're right that we need a length scale, because this is going to have units of, say, piconewton nanometers, and we want to get a piconewton, so we're going to need a length scale to divide by, right?

AUDIENCE: So how long you move each time?

JEFF GORE: That's right. It's kind of the distance that this thing's moving, right? So indeed, what happens in the case of kinesin is that they take steps with what are called "heads," but we can think of them as "feet" if we want. So each step is in this case eight nanometers. For example, delta L for kinesin is equal to eight nanometers. You don't need to know this, but it's useful to have some sense of scale. Now, of course, delta G of ATP depends upon the concentrations of the reactants, of the products, and so forth. But this thing might be of order 100 piconewton nanometers. Now, depending on conditions, maybe it's 70, but of that order.

And indeed, what this tells us is that just based on what we've said right now, even without building any fancy microscopes to measure how much force these things can apply, you can see that the maximum force it could possibly apply would be something on the order of delta G divided by the length that it's pulling. Otherwise, you could make a perpetual motion machine. So-- well, maybe if we want to make our math simpler, we could say this is around 80. It's of that order. But the point is that this is of order 10 piconewtons, so it's kind of piconewton scale.
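The energy-conservation argument above can be checked with simple arithmetic, using the lecture's round numbers (~100 pN·nm of free energy per ATP, one 8 nm step per ATP hydrolyzed):

```python
# Upper bound on the force a single kinesin motor can exert, assuming
# one 8 nm step per ATP and ~100 pN·nm of free energy per ATP (the round
# numbers quoted in the lecture; real values vary with conditions).
delta_G_ATP = 100.0   # free energy per ATP hydrolysis, in pN·nm
step_size = 8.0       # kinesin step length, in nm

# Energy conservation: force x distance per step cannot exceed delta G,
# otherwise the motor would be a perpetual-motion machine.
f_max = delta_G_ATP / step_size
print(f"maximum force ~ {f_max:.1f} pN")  # 12.5 pN, i.e. piconewton scale
```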

All right. So given this, it's interesting to ask, how hard would you have to pull? So how big is a kinesin molecule going to be, this protein?

AUDIENCE: About eight nanometers.

JEFF GORE: Yeah, right, about eight nanometers. Of course, it's a kind of long, spindly thing, like my skinny legs, right, but it's indeed going to be around that scale, whereas the cell is one micron wide, multiple microns long, so much, much larger. We'll draw a little-- here's a kinesin in there. It's even smaller than that. Kinesin's small. Now, the question is, how much force would you have to apply in order to pull an E. coli at 30 microns a second? Force to pull E. coli at the speed that it's actually observed to go-- now, you're unlikely to be able to do the calculation right here. It's actually a simple calculation. We could do it in a moment, but it's useful to just kind of imagine what scale it might be.

All right, this is a way of-- we can go up as high as we want. You can continue it on, and this is all in units of piconewtons. Once again, it's useful to make guesses about your intuition on these things just so you have some notion of where we might be. And of course, in this case, nobody wants to guess anything, because they feel-- all right, I'll give you 10 seconds just to make your best guess. Somebody's forcing you to do it. In this case, it's me. No reason that you actually should necessarily get this. Pulling through water.

AUDIENCE: At 30 microns.

JEFF GORE: At 30 microns per second.

AUDIENCE: Could you give us the viscosity of water in piconewton units?

JEFF GORE: Yeah, it's 10 to the minus nine in units of piconewtons, nanometers, and seconds-- 10 to the minus 9 piconewton seconds per nanometer squared.

AUDIENCE: Great.

JEFF GORE: Right. Now, I've given you enough time. You should have just been able-- now, I'm going to be disappointed if you don't get it right. No. Let's just see where we are. Ready? Three, two, one. OK, so I'd say that we have a bunch of-- I'd say the mean is kind of a C-D-ish. We got some A-B's. All right, so I'd say it's somewhere in the sense of, yeah, maybe it's 100,000.

So maybe if you had 100 of these kinesins pulling you along, then you'd be able to go 30 microns a second, although you have to be careful, because kinesin can't actually go that fast. But these are details, right? No, it's a reasonable question. How hard would you have to pull? Of course, if we want, we could actually just do the equations here, right? The force-- the prefactor 6 pi is around 20. In units of piconewtons, nanometers, and seconds, I told you that the eta is around 10 to the minus 9. All right. And the radius, what radius do you want to use?

AUDIENCE: A micron.

JEFF GORE: A micron. So do we write a 1 here? Do we write what? 10 to the 3, because I already told you that this was in units where everything is piconewtons, nanometers, and seconds. And velocity, this is again per second. So this is 3 times 10 to the 4, because that's how many nanometers per second. And this gets us to-- a times v is 3 times 10 to the 7, and with the 10 to the minus 9 from eta, we're going to have to divide by 100. So this is 60 divided by 100-- about 0.6, so less than a piconewton. So this is really pretty surprising. And again, this is a reflection of the wonders of low Reynolds number behavior. So it's somewhere here. Yes?
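The board calculation can be written out directly by plugging the lecture's numbers into the Stokes formula, in piconewton-nanometer-second units:

```python
import math

# Stokes drag on a sphere, F = 6*pi*eta*a*v, in pN/nm/s units as on the
# board. E. coli is not a sphere, so this is an order-of-magnitude estimate.
eta = 1e-9            # viscosity of water, pN*s/nm^2
a = 1000.0            # effective radius ~1 micron, in nm
v = 30.0 * 1000.0     # 30 um/s expressed in nm/s

force = 6 * math.pi * eta * a * v
print(f"F ~ {force:.2f} pN")  # ~0.57 pN -- less than a piconewton
```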

AUDIENCE: So since there are many flagella, is it telling us-- [INAUDIBLE] is it telling us that they're very inefficient compared to the [INAUDIBLE]

JEFF GORE: Yeah, it's a good question, right? So I think there are several ways. And this is one aspect maybe of what we read about in "Life at Low Reynolds Number." The comparison was to a Datsun in Saudi Arabia. Of course, I think that this was written in the '70s, when that meant something. But I think that a Datsun, I guess, is a fuel-efficient car? Well, that was my inference from that. And Saudi Arabia has a lot of oil, still true. And the saying was that the swimming might only be 1% efficient or so, right? But if you don't have to apply very much force, then maybe that's not a disaster.

The swimming speed is not actually a super strong function of the number of flagella that are there, as far as my understanding goes. So I'm actually a little bit-- there are cases where you do get somewhat higher speed with more flagella, although I have to confess, the geometry of the multiple flagella I find totally mystifying, because I would've thought they would get tangled up, because each of them spins and they form a corkscrew. I don't know. It just doesn't seem that it should work. But it does, so I'm not going to argue that it doesn't.

AUDIENCE: Probably it has to do with the low Reynolds number. So you imagine swimming, sort of flying around.

JEFF GORE: Yeah, no, my concern is not even a matter of the low Reynolds number or not. It's just a matter of they're each spinning and then-- this is something really I've never understood, so I typically avoid talking about it. I don't know. I just feel like there's something wrong. But in any case, you don't need to pull very hard in order to get even a micron-size object to go rather fast. Yes?

AUDIENCE: [INAUDIBLE]

JEFF GORE: Oh, I'm not aware of it. This thing can actually go-- it's 10 microns long, so this thing is huge, actually. But I--

AUDIENCE: It's probably just a repeat. Or even if it is a repeat, you can [INAUDIBLE] I'd just be surprised if it wasn't, since it seems like that.

JEFF GORE: Yeah, so I don't know, but I'm not even sure if that would necessarily address my concern, which is it seems like they have to go through each other. But it's not true, so I don't want to make the argument too strongly. Did you have a question?

AUDIENCE: So I was just thinking about this [INAUDIBLE] Might it be possible that if they all sync up their rotations, then it doesn't matter how tangled they are?

JEFF GORE: Yeah, no, even-- even if they're all spinning at the exact same rate, I still feel that they should get tangled, but it's a feeling that is apparently not true. So I don't want to--

AUDIENCE: But if they're always tumbling, maybe the time to--

JEFF GORE: No, no, no, wait-- so it's not-- no, the tumble is not due to my supposed mechanism. They're spinning at something like 100 hertz. The motors are spinning at something like 100 hertz, right? So in principle, if they were going to get tangled, they would've gotten tangled.

AUDIENCE: Seems [INAUDIBLE]

JEFF GORE: That's right, that's right. I do want to just come back and say one thing about this question of a rotary motor, because it's a fascinating question-- oh yes, go ahead.

AUDIENCE: Sorry, I just had a quick question. What was the units of eta again?

JEFF GORE: Oh, this is the thing. The units of eta, I always just go back here and I say, OK, well, it's the units of a force divided by unit of an area-- or sorry, of a radius, unit of a velocity, and then I have to plug everything in. And then, I have to find an equation with a force other than this equation, because if you put this equation back in, then you don't get anywhere. These are useful. So I always have to actually figure it out fresh each time.

So it turns out that these are indeed rotary motors. And I think that this was actually the first-- as far as I'm aware, it's the first example of a rotary motor in biology that had been demonstrated, which is a fascinating idea because in human engineering, rotary motors are everywhere. Yet somehow, if you look at living things, they don't seem to have rotary motors. Now, anybody want to suggest why rotary motors might be rare in biology, in life?

AUDIENCE: Rotary is just that it rotates, right, not in terms of a mechanism [INAUDIBLE] because there's also [INAUDIBLE]

JEFF GORE: Well, I guess I'm thinking of something that can continuously rotate, right? It's true that if you look at-- we have ball socket joints and I can rotate them a little bit, but certainly, I can't go-- there are limits, right? And this is a pretty striking difference between human engineering and biological engineering. Yeah?

AUDIENCE: If you tried to connect anything across this, it would be more [INAUDIBLE]

JEFF GORE: That's right.

AUDIENCE: [INAUDIBLE]

JEFF GORE: That's right. The problem is you just can't have anything connected across it. Otherwise, you really do get tangled, right? Now, the question then is, well, why is it that it's possible to do that here then?

AUDIENCE: Because we're at such a small scale that we're not thinking of connecting anything else. We're individual molecules, anyway. There's nothing that can go through that.

JEFF GORE: Yeah, it's interesting. And this is how I think about it that well, at the molecular scale, you just have some rotary something and it doesn't have to be attached via, say, covalent bonds or whatnot. And you can then force something to rotate. Of course, it's a little bit funny because this issue about connecting across this rotation, that should be a problem for human engineering, as well, but somehow, we do get around it.

I guess what I would say is I don't have any comments, except that this was quite a surprise when it was discovered that this was a rotary motor, just because we had not seen any others in macroscopic living things. So then, it was quite exciting to see it in the case of the flagellar motor. And in your book, you'll see that there's a very nice kind of EM reconstruction of this motor, and you can see how it's hinged and how it's powered by proton gradients that kind of cause this thing to rotate. It's really a beautiful, amazing thing.

Does anybody know any other examples of rotary motors? That's right. The other really famous one is the F0F1 ATP synthase, which is, again, something that's across a membrane and has, once again, a really beautiful structure where it has this circular thing in the membrane that rotates, responds to, again, a proton gradient, and then that drives rotation of this other part of the protein, F1. And then, that makes ATP.

And that motor is amazing also because it's reversible, in that the cell can also burn ATP and drive a proton gradient. And indeed, in some single-molecule experiments, they've even done something where they attached a little magnetic particle onto this F1 and then rotated it themselves. And they showed that they could actually make ATP.

The efficiency was maybe not great, because they're rotating a macroscopic magnet and then making individual ATPs. But it's a pretty remarkable aspect of these molecular motors. Now, in this class, we're not going to say anything more about molecular motors, but I just wanted to mention a few things about them because they're really fun, beautiful things, and there are other classes at MIT that may give you an opportunity to think about them some more. Any questions about where we are now?

AUDIENCE: [INAUDIBLE]

JEFF GORE: The common motor?

AUDIENCE: Yeah, [INAUDIBLE]

JEFF GORE: Oh. Well, I'd say that the other class of motors that are seen a lot in the context of these molecular motors are motors that travel along linear tracks. So there's kinesin that walks along microtubules. There are various myosins that walk on actin. And then, of course, DNA and RNA polymerase, we don't normally think of them as motors but indeed, they take an energy fuel and then they have to walk along the template as they make either the DNA or the RNA. So I'd say there are many, many examples of molecular motors that convert chemical energy into mechanical force and motion, particularly along one-dimensional tracks. So those, I think, are the most well-studied examples.

Now, one way to study this run and tumble motion is, of course, to actually apply a gradient and then watch the cells as they swim in it. And that has been done. A lot's been learned from that kind of assay, but it turns out that there are two other assays that allow for more controlled analysis of this chemotaxis response. So let's just say-- so, studying chemotaxis.

The most obvious thing is to apply a gradient and then watch. And indeed, the classic assays, where you add a little pipette with an attractant or a repellent and watch the bacteria swim toward or away, demonstrate that there is indeed chemotaxis. But it's a little bit difficult to quantify that process in many cases. What are the assays that maybe you've read about recently? In Uri's book, how is it that they actually analyzed this perfect adaptation? Yes?

AUDIENCE: So you can bind the flagella on the slide and then [INAUDIBLE]

JEFF GORE: So one thing that you can do is-- I may put it here. All right. So you can kind of attach the cell to a slide. And typically attach it to the slide by what? And does it matter where you attach the cell to the slide?

AUDIENCE: By the flagella.

JEFF GORE: Yeah, so you typically have to attach it via this hook that's at the end, so by some part of the flagella-- to the slide by the flagella, we'll say. Flagellum? Flagella-- whatever. And the nice thing there is that as the cell is doing its thing and spinning either clockwise or counterclockwise, you can directly visualize that, because the whole cell is moving.

And the cell is both the thing that is doing the work and processing the signals and everything, but it's also your marker for what the state of this little hook is, right? This is a wonderfully quantitative assay where you can get high time resolution. It's easy to do the image analysis. And then, how do you typically get this cell to change its, for example, tumbling frequency?

AUDIENCE: Put an attractant.

JEFF GORE: Right, so you can add an attractant. And indeed, I just wanted to separate this a little bit, because you can-- even without doing this-- this is kind of the next order step-- you can just add an attractant and mix, because you don't want the spatial patterns.

But a nice thing here, this is a gradient in space and then you can watch the bacteria swim. But you can also have the gradient in time. And the cells can't tell the difference. The nice thing here is that you can then just add the attractant, mix, and then you just watch all the cells as they're going. You don't need to try to follow them or whatnot, but you can just look to see how the tumbling frequency changes over time. And of course, you would typically use this trick together with this in order to study perfect adaptation and so forth.

If you collect this sort of data and you plot the tumbling frequency as a function of time, what you might see is that it starts out at one per second. Now, if at this time, I add an attractant, does the tumbling frequency go up or down? And we're going to do a verbal answer. Ready? Three, two, one.

AUDIENCE: Down.

JEFF GORE: Down. And that makes sense because the cells think that they're moving up an attractant gradient. So over a very short time scale, tumbling frequency goes down. But then, over a time scale of minutes-- it might be 5, 10 minutes-- that tumbling frequency goes back to where it started. I just want to-- this varies, but this could be order of 10 minutes.

AUDIENCE: Why is it so long?

JEFF GORE: Yeah.

AUDIENCE: 10 minutes is a very long time.

JEFF GORE: Yeah, it's a long time scale. And here's a question. Is it because this is the time that it takes to make new protein? New protein synthesis, we'll say.

AUDIENCE: You're asking if that's the time or if that's the reason why is it long?

JEFF GORE: I'm saying is this the explanation for why this is 10 minutes, because the cell has to go make new protein in order to do this? So the cell is presumably going to be making some new proteins, but is this what you would really describe as being the causative agent of this thing taking 10, 20 minutes? Ready? Three, two, one. All right. So I'd say most people are agreeing that actually, yes, it is. The answer is no. This is not. It may be the case that the cell is making new protein, but this is not what's setting the time scale there. Is--

AUDIENCE: More [INAUDIBLE]

JEFF GORE: Right. You agreed that that was the answer, but you didn't-- well, so I would say there are several ways you can think about this, but we're going to go through the model that is supported by, I think, a fair amount of experimental evidence. But the key feature there is that in this model, it works even if all the protein concentrations are constant over time. So everything that's happening in this network is happening as a result of changes of the states of the protein. So proteins are either getting methylated or phosphorylated and these-- right. But then, of course, it's the question of, why is it 10 minutes instead of 10 seconds or a minute?

AUDIENCE: You need a [INAUDIBLE] in there that's on the order of minutes, like 100 minutes, which is very slow. Is there--

JEFF GORE: Yeah, well, the one question is-- this is what is called the adaptation time. Now, is this a robust feature in this model, or in the cells, for that matter? We'll just do a verbal, yes or no. Ready? Three, two, one.

AUDIENCE: No.

JEFF GORE: No. And what that means is that indeed, different versions of this network will have different times. And there is data looking at variation in this between different cells. And I guess I don't have a clear feeling for what would be optimal, in the sense of allowing optimal climbing up of an attractant gradient. This thing has to be much longer than the typical times for a tumble. Otherwise, it's not even-- well, we wouldn't have been able to measure it, I guess. But I agree that it could have been one minute and I wouldn't have batted an eye, in the sense that I don't have any feeling for why it had to have been this or something else. But somebody who actually studies this might be able to give a better answer. All right.

AUDIENCE: But I guess-- sorry, just one last thing. My question is not why it's 10 minutes. It's not an evolutionary question, like why is it useful, right? It's just somehow, [INAUDIBLE]

JEFF GORE: Yeah I think that there are different ways of looking at this and then depending on how you look at it, you either feel surprised or not. So it's a little-- now on the other hand, if you had added a repellent, then the tumbling frequency would actually go up and then come back down. But the key, key thing in this system that we want to focus our attention on is the fact that it comes back to where it started. So it's the fact that I can draw this dashed line that is this perfect adaptation.

And so what we want to do is understand where this phenomenon of perfect adaptation comes from and maybe why it is robust to changes in, for example, the concentrations of some of these proteins. Now, let's just make sure that we're all on the same page in terms of what was known about this chemotaxis network. And I think it's worth mentioning, perhaps, that the whole series of studies of bacterial chemotaxis going back to Howard Berg and company, and then later the studies in robustness that Uri Alon, Stan Leibler, and Naama Barkai did, represent really just a wonderfully beautiful exploration at the interface between physics and biology.

I think you could teach an entire course just on bacterial chemotaxis and you could hit pretty much all the major themes in biophysics over the last 40 years. It's really amazing to me. I myself have not done any work in the field, but from afar, I've really just admired the beauty of all these studies.

Because you can go back to Howard Berg and Purcell, and they're thinking about how simple physics can inform the challenges that bacteria are facing-- how cells are actually able to do a biased random walk and get anywhere, limits on sensing both concentrations and gradients. And then later, the studies on this topic of robustness-- it's really a wonderful example, where Naama Barkai, when she was a postdoc with Stan Leibler, published a Nature paper in maybe '97 basically saying, this idea of robustness is really important in biology, and in order to have a robust response of perfect adaptation, a model has to have these features.

So there were no experiments there, but their model was guided by previous observations that people had made. And then, two years later, Uri, when he was a postdoc in Stan's lab as well, did kind of the experimental confirmation of the model, where he went in and controlled the concentration of CheR and showed that the key predictions of the model held-- i.e., that the perfect adaptation would be robust to the concentration, but that the tumbling frequency and the adaptation time would not be robust to CheR concentrations. They would move in a way predicted by the model and all that.

It's really amazing that it all kind of holds together, because in many cases, we do the modeling kind of post facto, right? And then, it's kind of explaining our results. But the case where a model is really useful is when it makes new predictions that get you to go make new measurements. And this is a situation where there are an infinite number of experiments that you could do, but only some of them will actually provide you a deep insight into the mechanisms that are going on in the system. And I think this was a real case where the models made some really clear predictions and that allowed, in this case, Uri to go and make the strains that allowed him to test the predictions of the model. And I think it's really amazing that it all kind of works.

All right. Now, all of the letters that you see up there-- well, first of all, the letters are real. These are the real names of the protein components in the chemotaxis network in E. coli and largely in other organisms. But in each case, there's a "Che" that comes in front, right? So R corresponds to CheR, for example. And then, there's CheB, CheW, CheA. And these were all identified by genetics, by researchers looking for mutants that were defective in chemotaxis. I don't know what happened to C, D, E, F, G, because it does seem like we got the first part of the alphabet and the last part of the alphabet. I don't know.

All right. Now, the basic idea in the system is that you have CheW/CheA, which here we'll often refer to as X just for simplicity. Now, these are proteins in the membrane, so they have a kind of binding pocket outside that allows binding of attractants or repellents. There might be, say, five different kinds of these receptor complexes that can sense-- that bind at different rates-- different kinds of attractants and repellents, and then the signal is somehow integrated.

Now, this could be either an attractant or a repellent. And the response involves different levels of methylation on this receptor complex. Now, in this lecture, we're only going to be talking about the methylated state versus the unmethylated state. But in reality, depending on the receptor complex, there might be four or five different methylation sites. And this ability to switch between different methylation states is really at the heart of the phenomenon of robust perfect adaptation.

The basic feature, though, is that CheA can phosphorylate CheY, and CheY will then go and yield the output, which is, in this case, increased tumbling frequency. But there are lots of other bells and whistles that you can see on here, right? So of course, it's not that the phosphate just comes off of phosphorylated CheY on its own; rather, that's actually done by CheZ. So there's a constant cycling here, and again, there's a constant cycling here where the methyl groups are taken off and then put back on.

So if you look at this, you really do feel that it's rather wasteful, because there's a huge number of these futile cycles going on. CheY is always being phosphorylated and then dephosphorylated, and this is all going to be costly to the cell. So you can imagine that the only reason it's there is because it's doing something useful. Now, there was a comment in the chapter about what the rate-limiting step is in all this. Can somebody remember what it was? Yeah?

AUDIENCE: Is it methylation?

JEFF GORE: So methylation-- there is a sense in which that is actually the longest time scale, because the methylation is what brings the system back in the perfect adaptation over these 10 minutes. So yes, that is the longest time scale. But what I was thinking about as rate limiting is the sense that, when the cell finds itself in a new environment, it changes its state over a much shorter time scale.

So this thing I drew is almost vertical, right? So this question is, if the cell finds itself in a new environment suddenly and then it just really wants to tumble-- so you find yourself in a crappy bar, how long does it take for you to get out? Now, what's going to be rate limiting there? Yeah?

AUDIENCE: Phospho-- phosphorylation.

JEFF GORE: Phosphorylation, yeah, although it turns out that's not the rate limiting step. And this actually comes back a little bit to something that we talked about in the first part of the class that in these transcription networks, the characteristic timescale is what?

AUDIENCE: G.

JEFF GORE: Right. So the characteristic timescale in the case of transcription networks is the time it takes for you to change concentrations of proteins, which is kind of the cell generation time-- or if you have active degradation, you might be able to make it faster-- whereas all of this is happening rather quickly, say, maybe a tenth of a second. And a lot of these kinds of processes-- binding, unbinding, and actually even the interactions between the proteins-- can take place even faster than that. So the actual rate-limiting step, for when the cell finds itself in a bad environment and needs to start tumbling, is actually due to diffusion. And that's diffusion of what?

AUDIENCE: Y protein.

JEFF GORE: Yeah, diffusion of the phosphorylated CheY, right? And that's because we have the cells here. They find themselves in the bad environment. They rapidly bind the repellent and quickly phosphorylate CheY. But then, the phosphorylated CheY is going to be formed at one of the poles, because there's actually clustering of these receptors at the poles of the cell. Incidentally, we're not going to talk about that here, but I think there's strong experimental and theoretical evidence that this clustering actually increases the sensitivity.

And indeed, people have used simple Ising-type models to try to understand how the coupling between the binding of repellents on what's essentially almost like a crystalline array of receptors can allow the array to respond more sensitively than any individual receptor could. We're not going to get into that here. But in any case, there are receptors at the poles-- I don't know if it's both poles or just one, but at least one of the poles-- and that's where CheY is phosphorylated.

But then, you can see that these flagella are distributed all around the cell. So it has to diffuse from, say, the pole to the site of the flagellar motor in order to cause it to go clockwise and then cause a tumble. Yeah?

AUDIENCE: So do you need only one motor?

JEFF GORE: Yeah, so you actually only need one motor to get the tumbling. So then, you may not have to diffuse all the way to the other end. And of course, diffusion is random. We all know that 0.1 seconds is around the time that it takes for a protein-sized object to diffuse across the volume of a bacterial cell-- we did that calculation a couple weeks ago. So the rate-limiting step is indeed this step right here, which is diffusion of CheY, the phosphorylated version of it.
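That 0.1-second figure can be checked with a quick back-of-the-envelope calculation. The numbers below are typical assumed values for a small cytoplasmic protein and an E. coli cell, not numbers from the lecture:

```python
# Rough estimate of the time for phosphorylated CheY to diffuse across
# an E. coli cell. Assumed numbers: D ~ 10 um^2/s for a small cytoplasmic
# protein, and L ~ 2 um from a pole to a distant flagellar motor.
D = 10.0  # diffusion coefficient, um^2/s (assumed)
L = 2.0   # diffusion distance, um (assumed)

# Characteristic time to diffuse a distance L: t ~ L^2 / (2*D)
t = L**2 / (2 * D)
print(f"diffusion time ~ {t:.2f} s")  # ~0.2 s, the same order as 0.1 s
```

The exact prefactor depends on geometry, but any reasonable choice lands on the same tenth-of-a-second order of magnitude.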

All right. Now, we may not get too much into the models here, because you did read about them. But I will just sketch out what you might call the fine-tuned model and then the key assumption that goes into the robust model. Now, what both models assume-- and indeed what was known from previous work-- is that CheR is present in small numbers. So for CheR, there might be around 100 proteins in the cell. And what does this mean about the activity of CheR? What is the other word for it? It doesn't actually have to quite mean it, but what is it that-- how--

AUDIENCE: [INAUDIBLE]

JEFF GORE: What's that?

AUDIENCE: [? High ?] saturation.

JEFF GORE: So yeah, something is high. And what is typically assumed is that CheR acts at saturation. But what do we mean by "saturation" in these models?

AUDIENCE: Maximum.

JEFF GORE: Right. Does it mean that if we add more CheR, the rate of methylation doesn't increase? And I should have put a little methyl group here. There are multiple things you might mean by "acts at saturation." And in the models that you read about last night, is that what it means-- that if we increase the number of CheR, it doesn't change the rate of methylation? Yes or no? Ready? Three, two, one.

AUDIENCE: No.

JEFF GORE: No. And what they mean is something rather different, which is that if we plot or if we calculate the change in the concentration of the methylated X-- now, we're calling this whole thing Xm, methylated X, and this is just X. The assumption is that we have X is being methylated at a rate that is-- there's no Michaelis-Menten term. If we were to write this as it not being saturated, what is it acting on?

AUDIENCE: [INAUDIBLE]

JEFF GORE: Yeah, it's not the methylated X. So indeed, if we were to write this, we're assuming that this is at saturation, as they say. But any time that you see something like this, you have to ask, well, what would be the alternative, right? And the alternative would be to include a term that looks like X over some K plus X, because R is acting on the unmethylated X.

We're not including that. What that means is that we're assuming that this thing is saturated, that the concentration of X to be acted on is significantly larger than the Michaelis constant there. And of course, this is related to the amount of R, because as we get more and more R, then eventually, we'll remove some of this X and then we'll get into the non-saturated regime. So these two statements are related, but not the same thing.

Now, there's also some rate at which the phosphorylated version of B removes the methyl groups, and that's indeed just going to be this Michaelis-Menten term. And this is going to be the fine-tuned model. The robust model looks very similar, but this is the simplest kind of manifestation of this model.
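To see why this version needs fine-tuning, here is a minimal toy of the scheme just described, with made-up rate constants (this is illustrative only, not the exact equations from the lecture): methylation proceeds at a fixed rate V_R*R, demethylation is Michaelis-Menten in the methylated receptor Xm, the demethylation rate tracks the activity A = a*Xm, and binding attractant lowers a. The activity recovers after a stimulus, but not exactly to its pre-stimulus value:

```python
# Toy "fine-tuned" model (illustrative only; all parameters assumed):
#   dXm/dt = V_R*R - V_B*A*Xm/(K + Xm),   with activity A = a(t)*Xm
# a(t) is the activity per methylated receptor; attractant lowers it.
V_R, R, V_B, K = 1.0, 1.0, 1.0, 5.0  # assumed rate constants

def final_activity(a_of_t, Xm0=2.0, dt=0.001, T=60.0):
    """Forward-Euler integration; returns the activity at time T."""
    Xm, t = Xm0, 0.0
    while t < T:
        A = a_of_t(t) * Xm
        Xm += dt * (V_R * R - V_B * A * Xm / (K + Xm))
        t += dt
    return a_of_t(T) * Xm

A_before = final_activity(lambda t: 1.0)                    # no attractant
A_after = final_activity(lambda t: 1.0 if t < 20 else 0.5)  # step at t=20
print(A_before, A_after)  # recovery is only partial: ~2.79 vs ~2.16
```

Setting K = 0 here, so that demethylation is fully saturated in Xm, is one tuned limit in which the adaptation becomes exact; for generic K, the recovery is imperfect.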

Now, once we're writing this down, it's useful to make sure that we can keep track of what's actually happening over the course of perfect adaptation. So now, let's imagine first that an attractant arrives. That's going to change the activity of X. And when we say "activity," what we mean is the rate at which it's going to phosphorylate both B and Y.

So let's just make sure that we know this direction. So we add an attractant. We'll say "add." What does this do? Does it make activity go up or down? I'll give you 15 seconds to make sure that you kind of understand the workings of this network. This is activity of X, this complex X. All right. Do you need more time? All right. Let's see where we are. Ready? Three, two, one.

All right. So the majority of the group is saying that it should go down. Well, let's just follow the logic. So we imagine an attractant binding. If the activity goes down, that means that we get less of the phosphorylated CheY. That means we get less propensity to tumble, which means that we keep on going further. All right, that sounds reasonable. Any questions about that logic? Yes?

AUDIENCE: [INAUDIBLE]

JEFF GORE: Right. So we haven't said anything yet about the methylation. That's what's going to happen next. And that's actually the slow time scale. So what we had in here is that we were at this steady-state tumbling frequency of one per second. We add an attractant. The tumbling frequency goes down because of what we just said. But now, there's going to be this longer time scale process whereby we get recovery, where we come back to this steady-state tumbling frequency. And that's going to involve action on CheB.

All right. So what happens is that the attractant causes less of the phosphorylated CheY. But it's also going to cause less of the phosphorylated CheB. And remember, the phosphorylated CheB is what's removing the methyl groups. So we have less flux going to the left, but at that moment, we don't have any change in the flux going to the right-- CheR is still acting on the same unmethylated X's that it was operating on before. So it's the same flux to the right, less flux to the left.

So there's a net accumulation of the methylated receptor, which we are calling Xm. And it's this methylated receptor that has more activity-- in this model, the unmethylated version actually doesn't have any activity. So then, over time, we get a buildup of the methylated X, and that causes the activity to come back up.

Now, of course, there's a question of-- I just said that it comes back up, but I didn't say that it comes back up exactly to its original tumbling frequency. I didn't say that it necessarily displays perfect adaptation. And that's because in this model, the perfect adaptation arises as a result of what we call fine-tuning, because it only happens if all of the parameters are just so.

In Uri's book, he describes a typical condition where that would be the case. And the problem is that you can always fine-tune for some concentrations of everything-- CheR, Che-this, Che-that. But then if the concentrations change, then you're no longer fine-tuned correctly. You were fine-tuned for a different world, and now you're not. That's the definition of being fine-tuned: if things change, you're no longer fine-tuned. You're just finely off-tune, right?

So this is the problem with the fine-tuned model: for given concentrations of everything, you can always find the right numbers-- this is VB, and you can talk about the activities of Xm given the previous attractant concentration, and so on and so forth-- but it's not going to be fine-tuned if you change anything else, like the concentration of R, for example. And I'm happy to go through the math and so forth maybe after class if anybody's curious, but it's really precisely what Uri did, so maybe I won't get into it now.

But the question then is, well, how is it that you might change this model in order to make it robust? And the change is surprisingly simple: instead of having CheB act on any old methylated X, you want it to act only on the methylated X that is in what we would call the active state, where it's actually able to catalyze either of those reactions.

So the notion is that if you have this methylated X, then over a very fast time scale, it's switching between what we call an "active" state and an inactive one. And it's really only the active version that CheB is able to act on and remove the methyl group from. On the one hand, this is a clever thing that allows you to implement this integral feedback. On the other hand, it's a little bit like pulling a bunny out of a hat, because you feel like, well, these things may be happening over microsecond time scales. It's hard to know exactly-- how would you actually experimentally confirm this is precisely what's going on?

And I think that here, it's a little bit subtle because you could maybe show that indeed the rate of this demethylation is proportional to the activity here, but you don't necessarily have access to all of the molecular dynamics that are taking place over microsecond time scales. So I think that you can do measurements that give you confidence that this is maybe what's going on, but you can't quite 100% nail it because of the nature of these molecular fluctuations.

So the idea there is that if we say that there's this rapid shuttling between the so-called "active" and "inactive" methylated guys-- so this is indicating that it's what we call "active," able to catalyze this and this-- then this ends up being equivalent to integral feedback, where you'll always get perfect adaptation. Yeah?
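That integral-feedback logic can be sketched in a few lines, again with assumed toy parameters: if CheB removes methyl groups only from active receptors, the methylation equation reads dXm/dt = V_R*R - V_B*B*A with A = a*Xm, and setting dXm/dt = 0 forces the activity back to A* = V_R*R/(V_B*B), no matter what the attractant level a is:

```python
# Toy robust, Barkai-Leibler-style model (parameters are assumed):
#   dXm/dt = V_R*R - V_B*B*A,  A = a(t)*Xm  (CheB acts on active Xm only)
V_R, R, V_B, B = 1.0, 2.0, 1.0, 1.0  # assumed rate constants

def final_activity(a_of_t, Xm0=2.0, dt=0.001, T=60.0):
    """Forward-Euler integration; returns the activity at time T."""
    Xm, t = Xm0, 0.0
    while t < T:
        A = a_of_t(t) * Xm
        Xm += dt * (V_R * R - V_B * B * A)
        t += dt
    return a_of_t(T) * Xm

# Strong attractant step at t=20: activity per receptor drops fourfold.
A_final = final_activity(lambda t: 1.0 if t < 20 else 0.25)
print(A_final)  # returns to V_R*R/(V_B*B) = 2.0: perfect adaptation
```

Because dXm/dt depends on Xm only through the activity A, the methylation level effectively integrates the error A - A* over time, which is why the adaptation is exact for any parameter values, not just tuned ones.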

AUDIENCE: When you put [INAUDIBLE]

JEFF GORE: So the idea is that we imagine that we're this receptor X-- it's CheW, CheA. Let's say it's bound to an attractant. The way that the attractant influences the receptor's activity-- and it has to influence its activity if it's going to do anything-- what we assume is that it changes the fraction of time that the receptor is in some active conformation, where it can actually do work, versus the inactive conformation, where it's taking a break. So binding an attractant or a repellent changes how much of your time you spend in this active conformation where you're, in this case, phosphorylating proteins, and that's the mechanism through which an attractant or a repellent actually transmits its signal. And indeed, it has to do something.

Nothing that we're discussing would work at all if we don't allow the signal to be transmitted somehow through this receptor. So there is this sense that this activity has to be a function of the things out there. And the assumption that goes into the perfect adaptation is really that the rate of demethylation is proportional to that kind of active fraction. And then, you can argue about how discrete these entities have to be in order for the mechanism to work and so forth, but certainly, there has to be some way that binding to an attractant leads to what we decided was less activity. Yeah?

AUDIENCE: Just a quick question. So what's the relationship between activity and methylation? Was it [INAUDIBLE]

JEFF GORE: So the idea is that we're typically assuming that the unmethylated guy has no activity, so it doesn't do any of this phosphorylation, whereas the methylated guy has some activity. And you can characterize it by some rate of activity, or some fraction of the time that it is in this active state doing something. Now-- and indeed this ends up-- well, you can see here that in this model, because you're directly acting on the active Xm, you can get the steady-state activity from just setting this equal to zero, and it's some number.

But then, the question is, how long does it take to come back to that steady state? And that's where we get differences as a function of the concentration of CheR, because what's always happening is that we have some kind of cycle here, where CheB is removing the methyl groups and CheR is adding them back. So you can imagine that if you have more CheR, then at steady state you get more flux to the right-- and the flux to the left has to match it, because at steady state they're equal. So the more CheR you have, the faster this thing is going around. And that means that the more CheR you have, the more rapidly you'll get this perfect adaptation.

So the experiment that Uri did, which I think is very nice, is that he directly modulated the amount of CheR and he looked at this adaptation time. And he found that this came down, whereas the steady-state tumbling frequency came up. But the degree of perfect adaptation-- say, the ratio or the error in this thing-- was always correct, in the sense that the system always came back to its original value.
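The scaling in that experiment can be captured with a back-of-the-envelope sketch, under the model's assumption that after a saturating attractant step a roughly fixed number of methyl groups dM must be added back, and that CheR, working at saturation, adds them at rate V_R*R. All numbers below are made up for illustration:

```python
# Toy scaling for the CheR titration experiment (assumed numbers):
# adaptation time ~ dM/(V_R*R) falls as CheR rises, while the adapted
# activity (hence the steady-state tumbling frequency) A* = V_R*R/(V_B*B)
# rises -- yet A* never depends on the attractant level, so the
# adaptation stays perfect at every CheR level.
V_R, V_B, B, dM = 1.0, 1.0, 1.0, 10.0  # assumed units

results = {}
for R in (0.5, 1.0, 2.0):
    t_adapt = dM / (V_R * R)      # time to add back dM methyl groups
    A_star = V_R * R / (V_B * B)  # adapted, ligand-independent activity
    results[R] = (t_adapt, A_star)
    print(f"R={R}: adaptation time ~ {t_adapt:.0f}, steady activity ~ {A_star:.1f}")
```

This reproduces the qualitative pattern of the experiment: adaptation time down, tumbling frequency up, degree of adaptation unchanged.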

AUDIENCE: Sorry. You say the [INAUDIBLE] it only phosphorylates chi B when an attractant [INAUDIBLE] Y [INAUDIBLE] attractant.

JEFF GORE: No. So the attractant or repellent can be bound to either the methylated or the non-methylated receptor, right?

AUDIENCE: But [INAUDIBLE] only [INAUDIBLE] phosphorylation of chi B when [INAUDIBLE]

JEFF GORE: No. So you're talking about-- oh, OK. So it's really that the methylated state can phosphorylate either CheB or CheY, and this is regardless of whether an attractant is bound or not. The attractant will influence the rate, or the activity, at which this happens. And given this model, you can see that if you have more CheR, then at steady state, you're going to have more activity. More activity corresponds to more phosphorylated CheY and more tumbling, so an increase in the tumbling frequency. Yes?

AUDIENCE: I guess it's not clear to me where the ligand concentration actually come in.

JEFF GORE: Where which concentration?

AUDIENCE: The ligand concentration.

JEFF GORE: Oh, OK, yeah.

AUDIENCE: Because that's what [INAUDIBLE]

JEFF GORE: Yeah, right. Yeah, so the idea here is that if we start out without any attractant-- so first of all, so let's imagine we're at steady state. There's no attractant or little attractant now, where of course, the fluxes to the left and the right are the same. So there's some methylated, some not.

AUDIENCE: No, but I agree the mechanics of just in the actual to write an equation for the ligands to come in like [INAUDIBLE].

JEFF GORE: Right. So the idea is that when you bind an attractant, that's going to change the activity of the methylated. So it's going to change, for example, the fraction that are active.

AUDIENCE: And decreases the activity.

JEFF GORE: And it decreases the activity, right.

AUDIENCE: And some sort of signal--

JEFF GORE: Yeah, there's some-- right. And of course, we haven't specified what that function is, but the idea is that it leads to a rapid decrease in activity, which corresponds to a rapid decrease in this fraction that are active, Xm star. Does that make sense?

So the last thing I wanted to do is say something about what this means for individuality. In particular, let's imagine that we have a clonal population of bacteria. And the question is, in what ways will they be similar or different? So now, we can just imagine an experiment where I take a population of cells with exactly the same genetic code and I go and I measure, for example, the tumbling frequency across this population.

So we can talk about-- we measure f1, f2, f3, fn-- these are the tumbling frequencies across n cells, n genetically identical cells. The question is, will we get the same tumbling frequency? Should these things be the same? And if not, why not? So let's just do our little vote, all right? Should they be the same or should they be different? Do you understand the question? Let's vote. Ready? Three, two, one. All right. Well, OK, so at least the majority of people are saying they should be different. And why might that be, somebody?

AUDIENCE: We might have [INAUDIBLE]

JEFF GORE: Right. For example, we might have variation in the concentration of CheR. Now, this variation is a natural result of just fluctuating this or that, right? But it's a little bit similar to the experiment Uri did, where in his case he actually put CheR under the control of an inducible promoter, so that he could just add IPTG and drive expression, and then measure the mean of these things across the population. But even if you try to have everything be the same, you'll have some variation, which we found can be large for small numbers of proteins. So that variation will transmit itself into variations in both the adaptation time and the tumbling frequency.

And this was indeed observed as early as 1976. There's a paper in Nature in 1976 by Jim Spudich and Dan Koshland. Spudich went on to study a number of these molecular motors-- in particular, he studied many of the myosins. But he wrote this classic paper in '76 called "Non-genetic individuality: chance in the single cell." What they did is they looked at many different individual cells using a few of those techniques that I told you about.

And what they found is that some of the cells seemed to be what they called kind of "twitchy" and some of them seemed to be more relaxed. So the twitchy cells are the guys that had a larger tumbling frequency. So they wouldn't swim as far as the others. So they'd swim just a little bit and then they'd change their mind, swim a little bit, change their mind, whereas other cells had much longer kind of runs, even though they're all nominally identical.

Now, can somebody say the time scale over which we would expect that personality to persist?

AUDIENCE: Cell generation time.

JEFF GORE: Yeah, cell generation time. And why is that?

AUDIENCE: [INAUDIBLE]

JEFF GORE: Yeah, right. So if this is CheR, over time, it's going to fluctuate. And the typical autocorrelation time should be on the order of the cell generation time. So indeed, this is the same as humans. We have a well-defined personality, and then we pass on some of our personality to our kids. The typical time scale is a generation. So I'm teaching you guys super useful things in this class. I don't want anybody saying anything else. But I think that this is a neat example of how different cells can have what you might describe as somehow different personalities, but it arises for a very particular reason, because this information is being transmitted through this network.

Now, despite the fact that they all have different CheR concentrations-- so they might have different tumbling frequencies and so forth-- they should all be able to display this phenomenon of perfect adaptation. So what we then see is that, from the experiments and from some simple models, you can get insight into how this phenomenon of perfect adaptation might be robust to changes in the concentrations of different things, particularly CheR in this case. And CheR is the dominant source of noise because it's present in such small numbers.

Now, in the last two minutes or so, I just want to remind everybody about the other context in which we studied robustness, because it's a much simpler example and it helps to clarify what we mean by it. Does anybody remember the other context that we have talked about robustness?

AUDIENCE: [INAUDIBLE]

JEFF GORE: Negative autoregulation someone said, maybe?

AUDIENCE: Yes.

JEFF GORE: OK, good, all right. So if we have some protein that is negatively autoregulating itself, then this adds robustness. And this is because we can say, all right, this is the degradation term, alpha times x, and this is the production term. The limit of perfect, sharp negative autoregulation looks like this. So this is the production/degradation plot. And here, this might be beta; this is a k. Then, the steady-state concentration of x here is going to be k. And this thing is robust to variations in some things, but not other things. For example, it's robust to changes in alpha-- that just changes the slope of the degradation line. It's robust to changes in beta, because that just brings the production curve up and down. But it's not robust to changes in k.
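This picture is easy to check numerically. The sketch below uses an assumed logic-approximation form of sharp negative autoregulation: production is beta while x is below the repression threshold k and zero above it, so the steady state pins at x* near k whenever beta/alpha exceeds k:

```python
# Sharp negative autoregulation (assumed logic approximation):
#   dx/dt = beta * [x < k] - alpha * x
def steady_state(alpha, beta, k, x0=0.0, dt=0.001, T=50.0):
    """Forward-Euler integration; returns x at time T."""
    x, t = x0, 0.0
    while t < T:
        production = beta if x < k else 0.0
        x += dt * (production - alpha * x)
        t += dt
    return x

base      = steady_state(alpha=1.0, beta=5.0, k=1.0)
more_deg  = steady_state(alpha=2.0, beta=5.0, k=1.0)   # robust to alpha
more_prod = steady_state(alpha=1.0, beta=10.0, k=1.0)  # robust to beta
new_k     = steady_state(alpha=1.0, beta=5.0, k=2.0)   # NOT robust to k
print(base, more_deg, more_prod, new_k)  # ~1, ~1, ~1, ~2
```

Doubling alpha or beta leaves the steady state essentially at k, while changing k moves it one-for-one, which is exactly the selective robustness described above.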

So I think that any time you are confused about robustness-- in particular, if thinking about robustness in the context of chemotaxis gets you confused because perfect adaptation is confusing-- it's always good to come back and think about this, because this is the clearest case. Here you can talk about the robustness of the steady-state concentration of some protein against variations in alpha and beta, but not against k. So it reminds you that it's not that this level of x is robust against everything; it's robust against some things. And you can make sense of which things it should be robust against and which things not.

So with that, I think we're going to quit. I just want to remind everybody that none of the work that we're talking about here in the context of chemotaxis and the genetic network will appear on the exam, but we may have a problem on low Reynolds number flow, maybe something with Stokes drag. Diffusion might make an appearance. Good luck on the exam. I'll see you guys on Tuesday.