When we spot a table of friends across a crowded restaurant, we instantly know who’s who with a quick glimpse at their faces. But explaining how we perform such a complex task isn’t easy, and for scientists studying the brain, it’s even harder. Now one group of researchers has taken a major step: decoding signals from monkey brain cells to recreate the faces the animals see in real life. The findings could help unravel other complex brain functions, identify criminals from security camera footage, and maybe even lead to “mindreading” technology for the fully paralyzed.
“We've really cracked the code for how facial identity is represented in the brain,” says Doris Tsao, a neuroscientist at the California Institute of Technology (Caltech) in Pasadena who led the study. “This is the first time [we’ve understood] the code for a high-level object in any sensory system.”
Scientists have a good handle on how groups of brain cells come together to code simple, “low-level” visual features like color, angles, and edges. But complex, high-level objects like faces have long stumped them. Researchers have wondered whether recognizing faces in particular might be a special neural task: They have known for decades that faces trigger strong reactions in specific regions of the brain called “face patches.”
For years, some scientists wondered if neural cells in these brain regions might code for the faces of specific individuals, firing rapidly to them, only somewhat to similar faces, and hardly at all to very different faces. But Tsao’s “rock-solid” study squarely refutes this idea, says Rodrigo Quian Quiroga, a neuroscientist at the University of Leicester in the United Kingdom. The new study suggests instead that even at this high level of visual processing, cells code specific facial subfeatures, which combine to give rise to the image of a face. And this relatively simple system suggests that maybe facial recognition neurons aren’t so special after all, Tsao says.
Tsao and her colleagues started studying face patches years ago, by testing how neurons respond differently to facial features like eye size and mouth length. But nothing they tried got them their face code. The mystery, says Tsao, was how to represent facial identity. “It seems so difficult because … we can't describe it in words.”
So Tsao and her postdoc Le Chang gave up on easily describable features and instead used a computer program to process a set of 200 computer-adjusted faces, taken from a database of real faces. The program came up with 50 dimensions that mathematically described how those faces most differed from one another. None of the dimensions corresponded to a specific facial feature, but half took into account characteristics related to shape, such as the distance between a person’s eyes or the width of their hairline. The other half took into account features like skin tone and texture. Together, these 50 dimensions could be used to represent a simplified version of all of the 200 faces.
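This kind of decomposition resembles principal component analysis: find the handful of directions along which a set of faces varies most, then describe each face by its coordinates along those directions. A minimal sketch of the mechanics, with random stand-in vectors in place of the real face database (the array sizes echo the study's 200 faces and 50 dimensions, but everything else here is illustrative):

```python
import numpy as np

# Stand-in data: 200 "faces," each flattened to a 1000-number feature vector.
# (The real study used landmark positions for shape and pixel maps for
# appearance; random vectors suffice to show the mechanics.)
rng = np.random.default_rng(0)
faces = rng.normal(size=(200, 1000))

# PCA via SVD on mean-centered data: the top 50 right singular vectors are
# the directions along which the 200 faces differ most from one another.
mean_face = faces.mean(axis=0)
centered = faces - mean_face
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
components = Vt[:50]                 # 50 "face dimensions"
coords = centered @ components.T     # each face as a 50-number code

# Any face can then be approximately rebuilt from its 50 coordinates.
approx = mean_face + coords @ components
print(coords.shape)  # (200, 50)
```

The payoff of such a scheme is compression: a whole face is summarized by 50 numbers, which is the kind of compact code a population of neurons could plausibly carry.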
Then, the scientists inserted electrodes into the brains of two macaque monkeys and monitored how 205 face patch neurons responded to thousands of computer-generated human faces that varied across the 50 dimensions. From these responses, they built a decoder that translated each pattern of neural firing into a position along the 50 dimensions. Finally, they used that decoder to rebuild the faces.
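At its core, a decoder like this is a linear map from the population's firing rates back to the 50 face dimensions. A hedged sketch with a simulated population — the linear-response model, noise level, and variable names are assumptions for illustration, not the study's actual fitting procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_dims, n_faces = 205, 50, 2000

# Assumed toy model: each cell fires in proportion to a weighted sum of
# the face's 50 dimension values, plus a little noise. (This linear form
# is motivated by the study's findings but is not their exact model.)
true_weights = rng.normal(size=(n_cells, n_dims))
faces = rng.normal(size=(n_faces, n_dims))
rates = faces @ true_weights.T + 0.02 * rng.normal(size=(n_faces, n_cells))

# The "decoder" is a least-squares map from the 205 firing rates back to
# the 50 face dimensions, fit on the training faces...
decoder, *_ = np.linalg.lstsq(rates, faces, rcond=None)

# ...and applied to the responses evoked by a new, held-out face.
new_face = rng.normal(size=(1, n_dims))
new_rates = new_face @ true_weights.T
recovered = new_rates @ decoder
print(recovered.shape)  # (1, 50)
```

Once the 50 dimensions are recovered, the face itself follows by running the dimensionality reduction in reverse, as in the sketch above.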
The reconstructions “blew [other such studies] out of the water,” says Brice Kuhl, a neuroscientist at the University of Oregon in Eugene who was not involved in the new work. When the researchers asked people to look at the reconstructed faces and match them to the original (hidden in a group of 40 other faces), they got it right almost 80% of the time, Tsao and her team report today in Cell.
The researchers further validated their model by showing that though neurons diligently ramp their firing up and down in response to their “preferred” set of dimensions, they seem completely indifferent to changes in others. That means a cell might respond exactly the same way to two very different faces if they happen to share the few key features that the cell codes for. Tsao explains the phenomenon using the concept of color coding. A cell that is tuned to the color red, for example, should respond exactly the same way to orange and purple—so long as the red is the same, the cell doesn’t care about the blue or the yellow mixed in.
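In this “axis” picture, a cell's response is just the projection of a face's 50-dimensional code onto the cell's preferred direction, so any change orthogonal to that axis leaves the response untouched. A toy illustration (the axis and faces here are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
axis = rng.normal(size=50)
axis /= np.linalg.norm(axis)      # the cell's preferred direction

def response(face_vec):
    # Toy axis model: firing depends only on the projection onto `axis`.
    return axis @ face_vec

face_a = rng.normal(size=50)

# Build a second, very different face that shares face_a's projection:
# add a large component orthogonal to the cell's preferred axis.
delta = rng.normal(size=50)
delta -= (axis @ delta) * axis    # strip out the along-axis part
face_b = face_a + 5 * delta

# The two faces differ greatly, yet the cell "sees" them as identical.
print(np.isclose(response(face_a), response(face_b)))  # True
```

This is the face-space analog of the red-tuned color cell: orange and purple differ hugely overall, but project identically onto the red axis.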
But face coding is still far more complex, and Kuhl suggests it might be more appropriate to say the study has cracked the “scheme” of facial representations, rather than the “code.” Though we know that all visible colors can be represented by three basic types of cells, we don’t know exactly how many kinds of cells code for the full combination of facial features, or exactly how they are organized, he says. And though scientists think monkeys process faces similarly to humans, such an extensive study can’t be repeated in human brains, he says.
Still, says Ed Connor, a neuroscientist at Johns Hopkins University in Baltimore, Maryland, the study represents a major breakthrough that is “destined to be famous” for as long as people read about neuroscience. “It’s something very close to the core of human experience, but here we're seeing the neural basis. For me, it just doesn't get more exciting.”