Lecture 8.3: Tony Prescott - Control Architecture in Mammals and Robots

Description: Layered control architectures of the brain inform the development of layered control architectures in robotics. Covers computational models, behavior, control, behavioral decomposition, fixed action patterns, spatial attention, vibrissal control.

Instructor: Tony Prescott

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

TONY PRESCOTT: It's a great pleasure to be in Woods Hole, my first visit here, and I had a wonderful swim in the sea yesterday. Sheffield Robotics spans both universities in Sheffield. It was formally founded in 2011, but we've really been doing robotics since the 1980s, and I joined in 1989.

And we do pretty much every different kind of robotics, but I'm going to talk about biomimetics. I also do what you might call cognitive robotics. And I collaborate with Giorgio Metta on that, but since he's speaking too, I'm going to focus on the more animal-like robots that we've been developing.

So this is one of our latest projects. This is a small, autonomous, mobile robot, which is a commercial platform, which will be available in the UK, I think, next January. And there will hopefully be a developer program for people that are interested in helping to develop the intelligence for this robot. So this is at a conference we had in Barcelona last month. And you can see that it's a robot pet.

And we've been focusing on giving it some affective communication abilities, responding particularly to touch. You can see that it's orienting to stimuli. It has stereo vision and stereo sound, and it can orient to visual stimuli and also to auditory stimuli. Here we're showing it a picture of itself on this magazine cover.

And the goal is to demonstrate, in a commercial robot that will cost less than $1,000, considerably less in fact, some of the principles of how the brain generates behavior. So this robot is called MiRo. It is based on some high-level principles abstracted from what we know about how mammalian brains control behavior. It's a relatively complex robot: 13 degrees of freedom, and three ARM processors corresponding to different levels, if you like, of the neuraxis, the central nervous system.

So I want to start out with some general ideas, questions, and issues about how we might learn from the biology of the brain in order to develop robots, and how we might use robots to help us understand the brain. And a central question that I think robotics can help us answer, and that I think is a core question in neuroscience, is what you might call the problem of behavioral integration. And the neuroscientist Ernest Barrington summarized this quite nicely. He said, "the phenomenon so characteristic of living organisms, so very difficult to analyze, the fact that they behave as wholes rather than as the sum of their constituent parts. Their behavior shows integration, a process unifying the actions of an organism into patterns that involve the whole individual."

And this picture of a squirrel over here, I think, nicely demonstrates this. So of course, this squirrel is leaping from one branch to the next. And you can see that every part of his body is coordinated and organized for this action: his eyes are looking straight ahead, and his whiskers-- and I'll talk a lot more about whiskers-- are pointing forward.

His arms and his feet are positioned, ready to catch his fall. Even his tail is angled to help him fly through the air. So it's the coordination of the different parts of the body, the multiple degrees of freedom of the body, and the sensory systems, in space and in time, which, I think, is a critical problem for biological control, and also a problem we're still struggling to address with our robots.

And I want to give you two very general principles for thinking about how brains solve this problem. So many of you will have come across Rodney Brooks, yes, from MIT. And he's famous in robotics for the notion of layered control, which he called subsumption. And I think the ideas that he brought into robotics really changed how people thought about robots in the 1980s.

But if we go back to the 1880s, John Hughlings Jackson, who was a British neurologist, proposed a similar idea, but with respect to the nervous system. So in the 1880s, people thought about the higher areas of the brain, particularly the cortex, as being about higher thought, and reasoning, and language, and not so much about perception and action. And Hughlings Jackson, I think, was revolutionary in his day in saying that the highest motor centers represent over again, in more complex combinations, what the middle motor centers represent.

In other words, he was saying that the whole of the brain, all the way up, is about coordinating perception with action. And he described it in many senses as a layered system. He talked about how you could take off the top layers of the system and the competences of the lower layers would remain intact, which, of course, is very much the idea of Rodney Brooks' subsumption architecture.

And some old transection studies in animals like cats and rats demonstrate this nicely. So if you take a cat or a rat, particularly a rat, and you remove all of the cerebral cortex, so if you make a slice here that takes away cortex, you get an animal that, to all appearances, looks fairly normal.

It does motivated behavioral sequences, so it will get hungry. And it will look for food. And it will eat. If there's an appropriate mate nearby, it will look to have a sexual relationship. And it will fail in some challenges such as learning, and perhaps also in dexterous control, but in many ways, it will look normal.

If you slice lower, below the rest of the forebrain, the thalamus and the hypothalamus, then you remove this capacity for motivated behavior, but you leave intact midbrain systems that can still generate individual actions. And if you remove parts of the midbrain as well, you still leave intact component movements, so for example, animals that can run on a treadmill.

So we are, with our MiRo robot, loosely recapitulating this architecture, so we have three processors. And the idea with this robot is that it's actually a partwork, so you build it up: you get a magazine every week with a new part for the robot.

And you build, essentially, a spinal robot first. Then you add a midbrain processor. Eventually you add a cortical processor, which brings with it some learning capacities, some pattern recognition, some navigation. So that's one principle, layered architecture, which seems to work in biology, and perhaps in robotics too.
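
To make the idea concrete, here is a minimal sketch of layered control in Python. The layer names and behaviors are hypothetical, not MiRo's actual software stack; the point is just that each layer is competent on its own, and higher layers, when present, override lower ones:

```python
# A minimal illustration of layered (subsumption-style) control.
# The layer names and behaviors are hypothetical, not MiRo's actual stack.

class SpinalLayer:
    """Lowest layer: simple reflexes that are competent on their own."""
    def act(self, sensors):
        if sensors.get("bump"):
            return "retreat"            # protective reflex
        return "idle"

class MidbrainLayer:
    """Middle layer: orienting to salient stimuli."""
    def act(self, sensors):
        if sensors.get("salient_stimulus"):
            return "orient_to_stimulus"
        return None                      # nothing to say: defer downward

class CorticalLayer:
    """Top layer: learned, goal-directed behavior."""
    def act(self, sensors):
        if sensors.get("goal_visible"):
            return "approach_goal"
        return None                      # defer to lower layers

def select_action(layers, sensors):
    # Higher layers may override (subsume) lower ones; when a higher layer
    # abstains, control falls through to the next layer down.
    for layer in layers:                 # ordered highest to lowest
        action = layer.act(sensors)
        if action is not None:
            return action
    return "idle"

# Removing the cortical layer leaves a system that still orients and
# retreats, mirroring the decortication results described above.
layers = [CorticalLayer(), MidbrainLayer(), SpinalLayer()]
print(select_action(layers, {"salient_stimulus": True}))  # orient_to_stimulus
```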

So the second principle goes back to another famous neuroscientist, this time Wilder Penfield, who is known to many people for his discovery of somatotopic maps in the brain: if you stimulate the sensory area of the brain, then you find that people have an experience of tingling on parts of the body, and adjacent parts of cortex correspond to adjacent parts of the body. And he found a similar homunculus in the motor area: you stimulate, and you get movement in corresponding parts of the body.

And he also proposed another idea, what he called the centrencephalic system, a kind of midline dimension to nervous system organization. That's to say that, down the midline of the central nervous system, there is a group of structures that don't seem to be involved in specific aspects of perception and action, but seem to be about integration. And amongst them, he noted the basal ganglia in particular as being important, and parts of the reticular formation.

So Michael Frank was here talking to you about the basal ganglia, so I'm not going to say much more about this, but just to point out, in a slice of the rat brain, these are the elements of the basal ganglia. The striatum is the input system; in the rat, the substantia nigra and part of the globus pallidus are the output systems. And then, in the rat brain, and also in our brains, you have massive convergence onto the input area, also called the caudate-putamen, from the cortex and from the brain stem.

So you have signals coming in from all over the brain to the striatum, which can be interpreted as requests for action. And then you have inhibition coming out from the output structures of the basal ganglia, here shown for the substantia nigra, going back to all of those areas of the brain. And this inhibition is tonic, so in order to release an action, you have to remove the inhibition.

So this is a system that can give you some of that behavioral integration that you need: the ability to ensure that you do one thing at a time, that you do it quickly and consistently, and that you dedicate all of your resources to the action you want to perform.
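
A toy version of this selection-by-disinhibition idea looks like the following. It is loosely in the spirit of basal ganglia models of action selection, but the channel structure and numbers here are illustrative, not the published model:

```python
import numpy as np

# Toy selection-by-disinhibition, loosely in the spirit of basal ganglia
# models of action selection (illustrative only, not the published model).

def basal_ganglia_select(salience, tonic=1.0):
    """salience: one 'request for action' value per behavioral channel.
    Returns the output inhibition per channel: the most salient channel
    has its tonic inhibition removed (disinhibition), releasing its action;
    all other channels stay inhibited, so only one thing happens at a time."""
    salience = np.asarray(salience, dtype=float)
    inhibition = np.full(salience.shape, tonic)  # tonic inhibition by default
    inhibition[np.argmax(salience)] = 0.0        # open the gate for the winner
    return inhibition

# Three competing behaviors: wall-follow, fetch food, groom.
print(basal_ganglia_select([0.2, 0.9, 0.4]))     # -> [1. 0. 1.]
```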

Here's a little video of a rat. And I'm showing you some integrated behavior over time in an intact rat. So this is a rat exploring in a large container. And rats generally don't like open spaces, so when you first put the animal into this space, it will tend to stay near the walls.

And it prefers this corner, which is dark. And of course, it's hungry too, so there's a dish of food here. And eventually, it gets up the courage to go out, collect a piece of food, and it will take it back into this dark corner to consume it.

And one of the first models that we built was a model of the basal ganglia operating as this kind of action-selection device, using a simple Khepera robot; this is a robot that really just uses infrared sensors and a gripper arm. And we used a model of the basal ganglia to control decision making about which actions to perform at which time, and to generate sequencing of those actions.

So as the need to stay close to walls diminishes, the robot, like the rat, goes and collects these cylinders and carries them back into the corners and deposits them. So a model of these central brain structures, and I'm happy to discuss in more detail how that model operates and its similarities to the model that Michael will have described to you, is controlling the behavior switching, if you like, in this robot.

So I spent some time working on this question of how central systems in the brain, particularly the basal ganglia, are involved in the integration of behavior, but I became frustrated with not understanding what the signals coming into the central brain structures were, or what effects those structures were having on the motor system of the animal. So I thought that what we needed to do was look at complete sensorimotor loops. We needed to look at sensing and action, and how they interact.

And in our Psychology Department, we have a neuroscience group that works mainly with rats, so it was natural for us to look at the rat. And in the rat, we know that one of the key perceptual systems is the vibrissal system. So here you see, this is actually a pet rat, wandering around on my windowsill at my house in Sheffield.

And the thing to notice is the whiskers here. And the whiskers are moving back and forth pretty much all the time that the rat is exploring. And we understand from nearly 100 years now of research that this system is very important for the rat to understand the environment. In fact, if it's completely dark, the rat would move around in much the same way. And it would be able to understand the world through touch pretty well, even in the absence of vision.

So this is the same video, but now slowed down 10 times, just to show you these movements of the whiskers and how precise they are, because the rat isn't just banging its whiskers against the floor in a stereotyped way. It is lightly touching the whiskers to places where it will get useful information. And you can see, when he puts his head over the windowsill here, the whiskers push forward, as if he knows that he's going to have to reach further forward if he's going to find anything.

Here you see him exploring this wooden cup. And you can see light touches by the whiskers. And you can also see that the movement of the whiskers is being modulated by the shape of the surface that he's investigating, so there's some fairly subtle control happening here. And I think it's not too much to say that the way in which the rat controls its whiskers has almost the same richness as the way that we control our fingertips.

So I'm interested in how this plays out in terms of a layered architecture story. And of course, many people study this system. The beauty of it, if you're a neuroscientist, is that you can look in the cortex.

This is rat cortex here. And a huge area of rat cortex is dedicated to somatosensation, to touch, of which a large area is dedicated to the whiskers. In fact, if you zoom in, you can find this area called barrel cortex.

And with the right kind of staining, you can find groups of cells which preferentially receive signals from individual whiskers, so for example, you can move one whisker here, and you know exactly where to record in barrel cortex to get a very strong response from that whisker. And this means that barrel cortex and the whisker system have become one of the preferred preparations in which to study the cortical microcircuit, so people study this system to really understand how cortex operates.

Now, if we think about this system as a pathway from the whiskers up to barrel cortex, we're really only capturing one element of what's going on in the vibrissal system. And that's this pathway here, from the vibrissae, via the trigeminal complex and the thalamus, up to sensory cortex. And this is probably where 9 out of 10 papers on this system are published, but actually, this pathway is only part of a looped architecture, or we might say, a layered architecture.

And at each level of this layered architecture, there's a complete loop, so that sensing can affect action: sensing on the vibrissae can affect the movement and control of the vibrissae. So there's a loop via the brainstem here, so that, directly from the trigeminal complex, signals come back to the facial nucleus, which is where the motor neurons that move the whiskers are. There's a loop via the midbrain, so that sensory signals ascend very quickly to the superior colliculus and come back to affect how the whiskers move. And then, of course, there's the loop via the cortex too, so there are essentially those three loops, at least, that we need to think about.

So since 2003, we've been building different whiskered robots, the aim being to instantiate our theories about how whisker control works in this layered architecture and to demonstrate them on a robot platform. And often, actually, building a robot platform causes us to ask new questions that might not be obvious just from doing biological experiments or even from doing simulations.

Before I show you some robots, let me just quickly show a little bit more about the rat and its whiskers. So we began thinking we could just build robots, but we quickly realized that we didn't know enough about how rats use their whiskers to do that. And that's partly because the experiments that had been done weren't done with the purpose of building a whiskered robot in mind. So when you try to build a whiskered robot, you have to ask questions like, how do the whiskers move?

And when you look at a video like this, filmed from above with a high-speed camera, you think, well, the whiskers are sweeping backward and forward, like this. But if you put a mirror, tilted down here, and you see what happens, it turns out to be a little bit different: the whiskers are going up and down as much as they are going backwards and forwards. So the whiskers are actually sweeping like this, making a series of touches on the surface. And if you watch, you can see that the whiskers are playing down onto the surface in a sequence, quite quickly, so that information might be giving you details about the shape of the surfaces in your world.

So we mainly look at the long whiskers. This is a rat that's running up an alley. And we put an unexpected object in the alley, which could be this aluminum rectangle, or it could be this plastic step. And what you see is, if the animal encounters something unexpected with its long whiskers, then it turns very quickly and investigates it.

So the long whiskers are like the periphery of a sensory system that has a fovea. And the fovea at the center of that system is a set of short whiskers around the mouth, also the lips and the nose, so that you can sniff and smell the surface that you're investigating. So we can zoom in and see that sensory fovea, here you see these short whiskers that are being used to investigate this plastic puck, and the longer whiskers investigating around the outside.

So we have recapitulated elements of this layered architecture in our robots. These are the loops. About five years ago, we built a system with a brainstem loop and, really, a midbrain loop.

And this is our robot Scratchbot, which is the first of the whiskered robots that we felt was really capturing whisking in the way that the rat does it. It's whisking at about 4 hertz, whereas the real rat whisks at 8 to 12 hertz, but it's scaled up to be about four times rat size. And what it's doing here is using the whiskers to orient to stimuli; this is Martin Pearson from Bristol Robotics Lab.

He's putting an object in the whisker field. And the robot is turning and orienting to the touch with the object. It's putting its short microvibrissae, in fact, against the object and exploring it.

Now, to detect a stimulus on the whiskers and then turn is not a fantastically hard task. The main challenge is to work out where the whisker was in its sweep when it made contact with the object, because the whisker is sweeping back and forth, so if you want to know the location of the point of contact, you need to combine the position of the whisker in its sweep, what you might call a theta signal, with the presence of the contact on the whisker. And the coincidence of those two is detected in the brain.

And we know that there are cells in the barrel cortex that respond to that coincidence. So we have in our robot a model of the superior colliculus, which is the location in the brain that we think is involved in orienting. And in our model of the colliculus, we have a head-centered map, which looks for this coincidence between a cell encoding the position of the whisker in its sweep and a cell encoding a contact, and makes a turn to orient to and explore that position.
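
In code, the core of this coincidence computation is very simple. Here is a hedged sketch; the geometry and parameter names are assumptions for illustration, not Scratchbot's actual implementation:

```python
# A sketch of the coincidence computation: at the moment a contact is
# detected, the whisker's position in its sweep tells you where, in
# head-centered coordinates, the contacted object must lie. The geometry
# and parameter names here are assumptions for illustration.

def contact_direction(theta_deg, whisker_base_angle_deg, contact):
    """theta_deg: current protraction angle of the whisker in its sweep.
    whisker_base_angle_deg: the whisker's resting orientation on the head.
    contact: True when the strain-gauge signal crosses the contact threshold.
    Returns the head-centered azimuth of the contact, or None if no contact."""
    if not contact:
        return None
    # Coincidence of 'where the whisker is' (theta) and 'a contact occurred'
    # localizes the touch along the whisker's current direction.
    return whisker_base_angle_deg + theta_deg

# A whisker 30 degrees forward of rest at the moment of touch:
azimuth = contact_direction(theta_deg=30.0, whisker_base_angle_deg=45.0,
                            contact=True)
print(f"orient head by {azimuth:.0f} degrees")   # -> 75 degrees
```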

And then we want to create behavior that is integrated over time, because if the robot were just to orient every time it touched something, that wouldn't be very animal-like. In particular, you don't want to orient every time you touch the ground; you just want to orient when you touch important stimuli. So we put the basal ganglia into our model, so that we can decide whether the contact we've just made is something we want to investigate or something that doesn't interest us so much. So we now have a system with a midbrain that does orienting and a basal ganglia that makes decisions about sequencing. And those two things together give us reasonably lifelike behavior in our robot, Scratchbot.

And this system is quite a lot of what we have running now on the new robot, MiRo. It's for orienting and exploring. We're using it here for tactile orienting, but of course, the same system can underlie orienting to sounds, if you can localize those in space, and orienting to visual stimuli too.

So it turns out that this isn't a complete solution to the problem of orienting for our whiskered robot, because sometimes our robot would stop as it was moving around and turn to investigate a point in space where nothing was happening. We call that a ghost orient.

And the problem is that, because the whiskers are moving back and forth, they sometimes generate signals in the strain gauges that detect bending of the whisker. And sometimes those signals, just as a consequence of the movement and the mass of the whisker, are strong enough to cross the threshold that triggers an orient, so you get, if you like, these ghost-orienting movements towards stimuli that don't exist. And we know that rats don't make these kinds of ghost orients, so something else must be going on in the brain.

And one part of the brain that might be helping here is a region called the cerebellum, which, I'm not sure if you've covered it in the summer school, is this large structure at the back of the brain. One of its key functions seems to be to make predictions about sensory signals, and particularly, to predict sensory signals that have been caused by your own movement.

And there's a lovely experiment that was done by Blakemore et al., where they put people into a scanner and investigated how they responded to tickling stimuli. So of course, if somebody tickles you, that can be quite amusing, but if you try to tickle yourself, it's really uninteresting; it doesn't work as a stimulus. And it's worth thinking about why self-tickling is so unrewarding.

And one of the reasons is that it's just not surprising. You know what's going to happen when you tickle yourself, whereas if somebody else is doing it, it's unexpected and surprising. So why is self-tickling so unsurprising?

It must be because the brain expects and anticipates the signal that it's going to get. And what Blakemore et al did was to show that the cerebellum really lights up when you try to tickle yourself, because it's estimating and predicting the sensory signal, and using that to cancel out, if you like, the signal that's coming from your skin.

The same thing is happening in electric fish, which generate this broad electric field which they use for catching prey. And they need to be able to tell the difference between a distortion to the electric field caused by a prey animal and a distortion caused by their own movement, by swimming. And they do that by having a very large cerebellum.

So we put a model of the cerebellum into our whiskered robot. The cerebellum model predicts the noise you might get due to the movement of the whiskers. It learns online to accurately predict those noise signals and to cancel them out, so you get a much better signal-to-noise ratio in the robot.
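
Here is a toy version of that idea, using a one-tap adaptive filter as a stand-in for the cerebellar model (an assumption on my part; the robot's actual model is more elaborate). The filter learns online how much whisker deflection the whisking motion itself causes, and subtracts that prediction, so genuine contacts stand out:

```python
import numpy as np

# Toy forward-model noise cancellation: a one-tap LMS filter standing in
# for the cerebellar predictor (an illustrative assumption, not the
# robot's published model). It learns how much whisker deflection the
# whisking motion itself causes and subtracts that from the sensor.

rng = np.random.default_rng(0)
T = 2000
theta = np.sin(2 * np.pi * 4 * np.arange(T) / 500)  # 4 Hz whisking drive
self_noise = 0.8 * theta                            # deflection from own movement
contacts = np.zeros(T)
contacts[1200:1210] = 1.0                           # one genuine external contact
sensor = self_noise + contacts + 0.02 * rng.standard_normal(T)

w, lr = 0.0, 0.01                                   # filter weight, learning rate
cleaned = np.zeros(T)
for t in range(T):
    prediction = w * theta[t]                       # predicted self-induced signal
    cleaned[t] = sensor[t] - prediction             # cancel what was predicted
    w += lr * cleaned[t] * theta[t]                 # learn online from the residual

# After learning, self-motion noise is suppressed but the real contact
# remains, so a fixed threshold no longer produces ghost orients.
print(f"learned weight ~ {w:.2f} (true self-noise gain was 0.80)")
```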

So we've dealt with whisking. And we've dealt with orienting. But as you saw with that rat on the windowsill, the whisker movements are really precise. And they're really controlled. And the rat seems to really care about how it's moving its whiskers and how it's touching. We call this active sensing.

And if you look at these high-speed videos, you can see, for instance, this rat exploring this Perspex block. The whiskers aren't moving in a stereotyped, symmetric way. You can see here that the whiskers on the right-hand side are really reaching around to try to reach the other side of the block.

If you watch this rat here, you see that too: you've got asymmetry. And you'll see that, even as the rat comes up to the cylinder here, the whiskers at the front are pushing forward while the ones at the back are hardly moving at all, so there's some ability to differentially control even the whiskers on one side of the head.

And when you move your fingers, of course, there's some coupling between your finger movements; you can't move them entirely independently. But each of these whiskers has its own muscle, so there's a degree of independence in how the whiskers can move. And we find that when we record over long intervals. So this was a study--

[ELECTRONIC NOISE]

--in which we recorded the whisking muscles using EMG. And that's the sound that you can hear as the rat explores. And we tracked the rat as he was moving around. And we showed that, whenever he came close to the edge of the box here, the whiskers would become asymmetric.

And the whiskers that were furthest away from the wall would push round to try and touch the sides of the box. The whiskers that were close to the wall would barely move at all. So we want to put that kind of control into our robot.

So I briefly want to come back to this question of how we decompose control. In our original robot, the one controlled by the basal ganglia model that was collecting cylinders, we decomposed behavior into different elements: looking for a cylinder, picking it up, carrying it to the wall, these sorts of things. And if we look in the ethology literature, we find that people have talked about these kinds of decompositions.

There's a very famous paper by Baerends about the herring gull. And with the herring gull, there's this famous experiment where the egg rolls out of the nest. And the bird will retrieve the egg with its bill and push it back into the nest.

And it will do this same action really reliably and repeatedly. And it can do it with eggs of various size. It might even do it for a Coke can. And if you take the egg away during the movement, it will still complete the movement.

And ethologists have called this a fixed action pattern, so it may be that behavior is decomposed into action patterns. And that's one of the ways, for instance, in which Rodney Brooks wants to decompose robot behavior. We decompose it into different things we might want the robot to do.

And we can do that with our whiskered robots. Here's another one with its behavior decomposed into different kinds of, if you like, orienting behaviors and fixed action patterns. Another way to decompose behavior is to think about where your attention is, so where you put your attention might decide what you're going to do next.

And for an animal that doesn't have arms, and of course most animals, except humans and some other primates, don't use their forelimbs for much other than locomotion, behavior is primarily about positioning the head and the face, and the main effector is the mouth. So where you position your attention could determine what you're going to do next.

So another way of decomposing control is to solve the attention problem first. And then once you solve that, the problem of what you're going to do is simplified. So in this robot, we're controlling it by deciding where its attention should go. And then the rest of the body kind of follows.

When humans do spatial attention, of course, we explore that in the visual modality, and we look at the saccadic eye movements that people make. So in a famous experiment, Alfred Yarbus had people looking at this picture and tracked where their eyes would look. And of course, we look at the socially significant elements of the picture, people's faces and so on, not just arbitrary points of light, or corners, and so on.

And we can actually calculate a saliency map for space and say what are the important parts of space for exploring and attending to. And we've taken that idea and transferred it into our model for understanding the rat. And we thought about tactile saliency maps, so can we, with a sense of touch, think about areas of the world which are important to explore and understand through touch? And can we use that to control the movement of our robot, or in this case, our simulation?

So here, we have a form of emergent wall following, which is a consequence of the simulated rat's spatial attention being driven by contact with vertical objects; we program it so that vertical surfaces are salient and interesting. And it has this salient zone, and it tries to put its whiskers into the salient zone.
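
A minimal sketch of that attention loop might look like the following. The grid representation, decay constant, and the "is_vertical" flag are all illustrative assumptions, not the simulation's actual code; the point is that steering toward the peak of a decaying tactile saliency map can produce wall following without any explicit wall-following rule:

```python
import numpy as np

# A minimal tactile-saliency attention loop (illustrative assumptions
# throughout: grid cells, decay rate, and the is_vertical flag are mine).
# Steering toward the peak of a decaying saliency map built from contacts
# with vertical surfaces yields wall following as an emergent behavior.

def update_saliency(saliency, contact_xy, is_vertical, decay=0.95, gain=1.0):
    """saliency: dict mapping grid cells (x, y) to values; decays each step."""
    for cell in list(saliency):
        saliency[cell] *= decay
        if saliency[cell] < 1e-3:
            del saliency[cell]                    # forget stale contacts
    if contact_xy is not None and is_vertical:
        cell = (round(contact_xy[0]), round(contact_xy[1]))
        saliency[cell] = saliency.get(cell, 0.0) + gain
    return saliency

def attend(saliency, position):
    """Unit vector toward the most salient cell, or None if the map is empty."""
    if not saliency:
        return None
    target = max(saliency, key=saliency.get)
    direction = np.array(target, float) - np.array(position, float)
    norm = np.linalg.norm(direction)
    return direction / norm if norm > 0 else None

# A contact with a wall at roughly (3, 4) pulls attention toward the wall:
s = update_saliency({}, (3.2, 4.1), is_vertical=True)
print(attend(s, position=(0, 0)))                 # -> [0.6 0.8]
```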

And then here is a robot instantiating this. This is Ben Mitchinson, who's programmed many of these robots. And what we're doing now is using this biologically inspired orienting system to explore shapes.

And in this case, he put his own face in front of the robot. And you can see the robot making light touches against his face and investigating it, making a series of, if you like, exploratory touches, somewhat like saccades, somewhat like what you might imagine a blind person would do if they were investigating your face to try to recognize you. And Mitra Hartmann from Northwestern has shown that you can take signals off these kinds of whiskers and reconstruct a face, so it should be possible, from this sequence of touches, to build up a lot of rich information about the object being investigated.

How much time do I have? I need to finish. OK, let me just skip through. So we've also been working on the cortex, and we have a number of models of that which I'd like to show you, but I want to finish by making contact with John's talk: we've been doing, in our robots, tactile simultaneous localization and mapping (SLAM).

So this is our whiskered robot. And we have various models for this, some of which are more hippocampal-like; this one, I think, was more of an engineered model. But you can see the robot, just using touch through these artificial whiskers, building up a map of its environment. These two lines show its dead-reckoned position and its calculated position. And just using touch, we can build up a reasonably accurate map of the world that it's exploring.
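
To give a flavor of the mapping half of this in code: below is a minimal occupancy-grid update from whisker contacts, with the robot's pose assumed known. A real tactile SLAM system, including ours, also has to correct the dead-reckoned pose against the map, which this sketch omits; the grid size and evidence weights are arbitrary illustrative choices:

```python
import numpy as np

# Minimal occupancy-grid mapping from whisker contacts, with the robot's
# pose assumed known. A real tactile SLAM system also corrects the
# dead-reckoned pose against the map; this sketch omits that step, and
# the grid size and evidence weights are arbitrary illustrative choices.

GRID = 50
log_odds = np.zeros((GRID, GRID))   # 0 = unknown, >0 occupied, <0 free

def integrate_contact(pose_xy, heading, whisker_angle, whisker_length, hit):
    """Mark the whisker-tip cell occupied on contact, free otherwise."""
    angle = heading + whisker_angle
    tip = np.array(pose_xy) + whisker_length * np.array([np.cos(angle),
                                                         np.sin(angle)])
    i, j = np.clip(tip.astype(int), 0, GRID - 1)
    log_odds[i, j] += 0.9 if hit else -0.4        # simple evidence update

# Sweep a whisker back and forth past a wall at x = 25.8 from a fixed pose:
for a in np.linspace(-0.5, 0.5, 20):
    tip_x = 20 + 6 * np.cos(a)
    integrate_contact((20, 20), 0.0, a, 6.0, hit=(tip_x >= 25.8))

print(int((log_odds > 0).sum()), "cells marked occupied")
```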

So Giorgio will talk about the iCub. And I just wanted to mention that, in the work we're doing with Giorgio, we are very much trying to understand human cognition. I wrote a short article for New Scientist on the possibility that robots might one day have selves.
