On an African savanna 10 million years ago, our ancestors awoke to the sun rising over dry, rolling grasslands, vast skies, and patterned wildlife. This complex scenery influenced the evolution of our eyes, according to a new study, guiding the arrangement of light-sensitive cone cells. The findings might allow researchers to develop machines with more humanlike vision: efficient, accurate, and attuned to the natural world.
The human retina contains three types of light-sensitive cone cells—responding to red, green, or blue light—that are arranged in a mosaic pattern. This pattern isn't random. Previous studies suggest that the retina adapts to an animal's surroundings, evolving to extract the most information. For instance, the retinas of fish living at different depths of a lake have distinct patterns because they are attuned to detecting wavelengths of light filtered and distorted to varying degrees by the water. Physicist and lead author Gasper Tkačik of the University of Pennsylvania (Penn) calls this the "efficient coding hypothesis."
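The fish example above can be made concrete with a toy sketch. Under a crude reading of the efficient coding hypothesis, a retina should invest its detectors in the wavelength bands that carry the most light in its habitat. The spectral distributions below are invented for illustration (water absorbs long wavelengths first, shifting deep-water light toward blue and green); they are not data from the study.

```python
import numpy as np

# Hypothetical spectral photon distributions (arbitrary units) for two
# habitats: near the surface vs. at depth. Water absorbs red light
# first, so the deep spectrum shifts toward green and blue.
bands = ["red", "green", "blue"]
surface = np.array([0.40, 0.35, 0.25])
deep = np.array([0.05, 0.45, 0.50])

# A crude efficient-coding rule: weight detectors toward the band that
# delivers the most photons in the animal's environment.
for name, env in [("surface", surface), ("deep", deep)]:
    best = bands[int(np.argmax(env))]
    print(f"{name} fish: invest most in {best}-sensitive cones")
```

The point of the sketch is only that the optimal detector mix depends on the light environment, which is why fish at different depths evolve distinct retinal mosaics.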
Are human eyes also efficiently coded? They don't seem to be. The sky and sea make up much of our natural scenes, yet only 6% of our cone cells detect blue, and they are mostly located around the edge of our retina. Of the remaining cones, the ratio of red to green cones varies wildly between individuals.
To find out why, Tkačik, neurobiologist Vijay Balasubramanian of Penn, and colleagues created a database of more than 5000 high-resolution photographs taken at various locations in Botswana, a region near where humans likely evolved and where other primates still live. The same scenes were shot at different times of day, with different exposure lengths, apertures, and distances from the camera. Using an algorithm they developed from previous studies of how human cones detect light, the researchers calculated how many photons of different wavelengths the camera had captured and what cone arrangement would pick up the largest number of them.
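The last step of that analysis can be sketched in miniature. The code below is not the authors' algorithm: it stands in for the image database with simulated per-pixel photon counts in three bands, and for the optimization with a simple rule that allocates a fixed budget of cones in proportion to the expected photon catch in each band. The Poisson rates mimicking a long-wavelength-dominated savanna scene are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the image database: per-pixel photon counts
# in three wavelength bands (red, green, blue). Savanna scenes are
# dominated by long-wavelength light, which the rates below mimic.
photons = rng.poisson(lam=[120, 110, 25], size=(10_000, 3))

# Expected photons a single cone of each type would capture per pixel.
mean_catch = photons.mean(axis=0)

# Allocate a fixed budget of cones in proportion to expected catch --
# a crude proxy for "choose the mosaic that captures the most photons".
n_cones = 1000
allocation = np.round(n_cones * mean_catch / mean_catch.sum()).astype(int)

for name, n in zip(["red", "green", "blue"], allocation):
    print(f"{name}: {n} cones")
```

Even this toy version reproduces the qualitative pattern the study reports: the blue band contributes so few photons that it earns only a small share of the cone budget, while red and green end up roughly comparable.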
The actual pattern of cones in the human retina matches the algorithm's predictions, the researchers reveal in a paper uploaded to the arXiv database this month and another published in PLoS Computational Biology. Red and green cones would pick up more photons from the images than blue cones could. That explains why the eye makes so few blue cones and places them around the periphery of the retina rather than at the center, where light focuses, Balasubramanian says. Red and green cones, however, pick up about the same amount of information, so there is no evolutionary benefit to keeping their ratio tightly regulated.
In addition to illuminating human eye evolution, the efficient coding hypothesis could help researchers develop robots that "see" as well as we do, the authors say. Currently, machine vision draws on a storehouse of stored images rather than interpreting color and pattern the way human vision does, which creates problems when a system has to recognize an object in an unfamiliar context. "We're very far from really versatile machine vision," says Tkačik.
The Botswana database will be useful as a standard for many researchers studying visual perception who are interested in contrast and shape recognition, not just color, says neuroscientist Matthias Bethge of the Werner Reichardt Centre for Integrative Neuroscience in Tübingen, Germany. It isn't certain whether images from Africa would give very different results than images from another part of the world; human vision, Bethge points out, works just as well even in outer space. Balasubramanian says future research may address whether human eyes in different environments have adapted differently.