Artificial intelligence–equipped rovers could offer psychologists a new, and highly malleable, model of the brain.

William Hahn/Florida Atlantic University

Could robots be psychology’s new lab rats?

WASHINGTON, D.C.—Sending a mouse through a maze can tell you a lot about how its little brain learns. But what if you could change the size and structure of its brain at will to study what makes different behaviors possible? That’s what Elan Barenholtz and William Hahn are proposing. The cognitive psychologist and computer scientist, both at Florida Atlantic University in Boca Raton, are running versions of classic psychology experiments on robots equipped with artificial intelligence. Their laptop-size robotic rovers can move and sense the environment through a camera. And they’re guided by computers running neural networks, models that bear some resemblance to the human brain.

Barenholtz presented this “robopsychology” approach here last week at the American Psychological Association’s Technology, Mind & Society Conference. He and Hahn told Science how they’re using their unusual new test subjects. The interview has been edited for clarity and length.

Q: Why put neural networks in robots instead of just studying them on a computer?

Elan Barenholtz: There are a number of groups trying to build models to simulate certain functions of the brain. But they’re not making a robot walk around and recognize stuff and carry out complex cognitive functions.

William Hahn: What we want is the organism itself to guide its own behavior and get rewards. One way to think about it would be to try to build the simplest possible models. What is the minimum complexity you need to put in one of these agents so that it acts like a squirrel or it acts like a cat?

Q: What kind of experiments can you run with these machines?

E.B.: We actually have a preliminary result with our little rover car, in what we jokingly call a Skinner box. [In B. F. Skinner’s classic experiments on animal learning], a pigeon wanders around the cage, and then it maybe walks over to a certain location, and maybe that’s electrified. It gets a shock, so it learns very quickly not to go there. Or maybe the pigeon pecks at a little button and it gets a food reward.

We put [the rover] in a box with colors on the various sides of the wall, and we just reward it for facing the correct direction. We were asking whether we could get this kind of robot to engage in a behavior just based on reinforcement. We’re never telling it, “This is the right thing to do.” Instead, we’re just allowing it to explore, given, “Here’s my camera input, here’s my behavior, is there an outcome—do I get rewarded?”
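The reward-only setup Barenholtz describes is, in spirit, tabular reinforcement learning. As a rough illustration only (not the authors’ actual system, which works from camera frames), here is a minimal Q-learning sketch of a four-walled box in which the agent observes just the color of the wall it faces and is rewarded only for ending up facing a target wall; the colors, actions, and parameters are all invented for the example.

```python
import random

# Toy Skinner-box world: four walls, one color each. The agent sees only
# the color it currently faces, can rotate, and gets reward only when it
# ends up facing the target wall. Everything here is illustrative.

WALLS = ["red", "green", "blue", "yellow"]
TARGET = "green"
ACTIONS = ["left", "right", "stay"]

def rotate(heading, action):
    """Deterministic dynamics: rotating changes which wall is faced."""
    if action == "left":
        return (heading - 1) % 4
    if action == "right":
        return (heading + 1) % 4
    return heading

def train(episodes=5000, steps=10, alpha=0.1, gamma=0.9, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    # Q-table: estimated return for each (observed color, action) pair.
    q = {(c, a): 0.0 for c in WALLS for a in ACTIONS}
    for _ in range(episodes):
        heading = rng.randrange(4)              # random start orientation
        for _ in range(steps):
            view = WALLS[heading]
            if rng.random() < epsilon:          # explore
                action = rng.choice(ACTIONS)
            else:                               # exploit current estimate
                action = max(ACTIONS, key=lambda a: q[(view, a)])
            heading = rotate(heading, action)
            next_view = WALLS[heading]
            reward = 1.0 if next_view == TARGET else 0.0
            # Reward is the only feedback; no action is ever labeled "correct".
            best_next = max(q[(next_view, a)] for a in ACTIONS)
            q[(view, action)] += alpha * (reward + gamma * best_next
                                          - q[(view, action)])
    return q

def policy(q, view):
    """Greedy action for a given observed wall color."""
    return max(ACTIONS, key=lambda a: q[(view, a)])
```

After training, the greedy policy turns the agent toward the target wall from any starting view, even though it was never told which action was right, only whether a reward followed.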

Q: And its “reward” is just being told it’s correct?

E.B.: [Yes,] right now, this is what it’s trying to optimize. And that brings up a very interesting question—a psychological question: What’s the nature of reward that really best simulates the way it works in organisms? There isn’t a score in our heads. There’s endorphins and there’s serotonin, and there’s all this stuff that happens that we call reward.

Q: So did the robot learn to face the right wall?

E.B.: Yes, it was able to solve it.

Q: And what did that tell you?

E.B.: One is, OK, these systems are capable, in a real-world situation, of solving this kind of problem. On the flip side, we also realized, through the course of trial and error, how incredibly difficult even that simple task was.

Q: Are there more complex questions you’d eventually like to ask in this type of experiment?

E.B.: [We want to] extend the rover’s capability beyond rotating on its axis in a little box to be able to have, say, multistep kinds of processes to get the reward. It first has to go to location A, and then location B. Even something as simple as that, in a small space, is extraordinarily difficult.

Q: In your talk, you mentioned that you’re testing whether some computational units in these networks evolve properties of place cells, the neurons that fire when an animal is at a particular location, no matter which way its head is facing. Can you tell me more about that?

E.B.: We give [the robot] the current frame, and we say, “What do you think the world would look like a second from now if you were to take a right?” To be able to do that, it has to know where it is. It has to build, in its own mind, a map of, “I am here, and then there’s another world over there, and if I turn, I’ll now be at that world.”
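The next-frame prediction Barenholtz describes can be caricatured in the same kind of toy world: a tabular “forward model” that learns, purely from random exploration, what view follows each (current view, action) pair. The four-walled environment and all names below are invented for illustration; the real system predicts camera frames with a neural network.

```python
import random
from collections import Counter, defaultdict

# Toy forward model: from random exploration in a four-walled box, learn
# to answer "what would I see if I took this action?" All names and the
# tabular setup are illustrative assumptions.

WALLS = ["red", "green", "blue", "yellow"]   # one color per wall
ACTIONS = ["left", "right", "stay"]

def rotate(heading, action):
    """Deterministic dynamics: rotating changes which wall is faced."""
    if action == "left":
        return (heading - 1) % 4
    if action == "right":
        return (heading + 1) % 4
    return heading

def learn_forward_model(steps=5000, seed=0):
    rng = random.Random(seed)
    counts = defaultdict(Counter)            # (view, action) -> outcome tally
    heading = 0
    for _ in range(steps):
        view = WALLS[heading]
        action = rng.choice(ACTIONS)         # explore at random
        heading = rotate(heading, action)
        counts[(view, action)][WALLS[heading]] += 1
    # Predict the most frequently observed next view for each pair.
    return {key: tally.most_common(1)[0][0] for key, tally in counts.items()}
```

With the wall order above, `learn_forward_model()[("red", "right")]` answers the “what if I took a right?” question with `"green"`; a model that can do this for every view has, implicitly, a map of where it is.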

W.H.: [We want to know,] do we have to put place cells in there explicitly? Or if we just [give] reinforcement, do place cells just show up because that makes it easier to find reward?

Q: Do you encounter people who are skeptical that these robots are good models to study the brain?

W.H.: We get pushback from both directions. People are like, “This doesn’t look like regular engineering robotics,” and then other people are like, “This doesn’t look like psychology research. Why would you think this has anything to do with the brain?”

E.B.: [To those] who say, “This can’t be the brain, the brain is much too complex,” … my response is: Let’s see how far this can go. Let’s see what it can’t account for.

Q: Do you think these robot experiments could ever replace certain kinds of animal research?

W.H.: That’s been one of our motivations. If you imagine 100 years from now, are we still going to be running mice in mazes? Probably not.