Neurons in our brain’s visual cortex respond to remarkably specific stimuli, including the faces of celebrities such as Jennifer Aniston. But researchers have long struggled to determine precisely what images excite individual neurons in this region, because the space of possible images is effectively infinite. Now, a study has tackled that problem in monkeys, using a computer algorithm that can rapidly figure out what type of image most strongly stimulates a neuron. The results reveal hundreds of odd images, including bizarrely distorted, gargoylelike monkey faces.
The work is “an incredibly clever and creative application of artificial intelligence to an old problem,” says Bevil Conway, a neuroscientist at the National Eye Institute in Bethesda, Maryland, who was not involved in the research.
In the experiment, a monkey with electrodes inserted into its inferior temporal cortex—a brain region involved in object recognition—views a series of pictures. (When this region is damaged in people, they can lose their ability to identify faces and objects, a rare disorder called agnosia.) The images start out devoid of content, a gray blur of visual noise. But based on which ones trigger a selected neuron to fire, a machine learning algorithm creates a new batch of images that the monkey neuron is predicted to “like” even more.
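The closed loop described above can be sketched as a simple evolutionary search. The following is a toy illustration only, not the study's actual algorithm: the "neuron" here is a stand-in scoring function over tiny pixel vectors (in the real experiment, the score would come from the recorded firing rate), and the selection and breeding rules are generic assumptions.

```python
import random

SIZE = 16  # each "image" is a flat vector of SIZE grayscale pixels (toy scale)
random.seed(0)

# Hypothetical stand-in for the recorded neuron: a hidden preferred pattern,
# with response measured as similarity to it. In the experiment, this value
# would instead be the neuron's measured firing rate.
preferred = [random.random() for _ in range(SIZE)]

def neuron_response(image):
    # Higher (less negative) when the image resembles the hidden preference.
    return -sum((p - q) ** 2 for p, q in zip(preferred, image))

def evolve(generations=50, pop_size=20, keep=5, noise=0.05):
    # Start from nearly featureless gray noise, as the images in the study do.
    pop = [[0.5 + random.uniform(-0.01, 0.01) for _ in range(SIZE)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Rank images by how strongly they drive the (simulated) neuron...
        pop.sort(key=neuron_response, reverse=True)
        parents = pop[:keep]
        # ...then breed a new batch predicted to drive it even harder:
        # average two strong images and add a little mutation noise.
        children = []
        while len(children) < pop_size - keep:
            a, b = random.sample(parents, 2)
            children.append([(x + y) / 2 + random.gauss(0, noise)
                             for x, y in zip(a, b)])
        pop = parents + children  # elitism: the best images survive unchanged
    return max(pop, key=neuron_response)

best = evolve()
```

Because the top images survive each round unchanged, the best image can only improve over generations, gradually converging on whatever the scoring function prefers.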
Over many iterations, the algorithm produces vaguely recognizable objects, including “gnomelike, monkeyish things,” says Margaret Livingstone, a neurobiologist at Harvard Medical School in Boston and the study’s principal investigator. Some of those images remind Conway of portraits by Pablo Picasso or Francis Bacon.
Further testing suggested that, although the same neurons respond to pictures of real monkey faces, they seem to prefer the distorted abstractions: things an animal would never see in real life, Livingstone and her colleagues report today in Cell. For other monkey neurons, the algorithm produced images that look a bit like the monkeys’ food dispenser, or like one of the caretakers, named Diane, who wears a protective mask.
Why the neurons prefer the abstracted images to real ones is still a mystery. One possibility is that the neurons work by computing the difference between faces, putting more weight on extreme features, as a caricature artist does, Livingstone suggests. That would make them more responsive to such exaggerated features, and a few such difference-computing neurons would be more efficient than many individual neurons each wired to recognize a specific face, she notes. “That way, you get everything in between for free.”
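The hypothesized difference code can be made concrete with a toy calculation. This is an illustrative assumption, not the study's model: treat each face as a handful of feature values, let a neuron read out the face's offset from the average face along some preferred axis, and note that a caricature, which exaggerates those offsets, drives the read-out harder than any real face while intermediate faces fall out automatically.

```python
# Toy sketch of a "difference" (norm-based) face code. The feature values
# and the neuron's axis below are made up for illustration.
mean_face = [0.5, 0.5, 0.5]   # average value of three face features
axis = [1.0, -0.5, 0.25]      # this neuron's preferred direction in face space

def response(face):
    # Linear read-out of the face's offset from the average face.
    return sum(a * (f - m) for a, f, m in zip(axis, face, mean_face))

real_face = [0.7, 0.3, 0.6]
# A caricature exaggerates the same offsets beyond any real face,
# so the response scales up with the exaggeration factor.
caricature = [m + 2.0 * (f - m) for f, m in zip(real_face, mean_face)]
```

Because the read-out is linear, every face between the mean and the caricature gets a proportionate response from the same neuron, which is one way to read Livingstone's remark that "you get everything in between for free."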
One thing is clear: The majority of the preferred stimuli are learned through experience. “There’s no way a monkey evolved a cell to code for a person wearing protective gear,” Livingstone says. The next step is testing the approach in people undergoing brain surgery for epilepsy who agree to be studied while their brains are more accessible, she says. “Then, we’ll be able to learn what neurons in the human brain want.”