Daytime satellite images, like this one of the border between rich and poor neighborhoods in Nairobi, do a better job of predicting poverty than images taken at night.

lightyear105/iStockphoto

Satellite images can map poverty

You can fix the world's problems only if you know where they are. That’s why tracking poverty in Africa, for example, is critical for the United Nations, which launched a global poverty campaign last year. But gathering the data on the ground can be dangerous, slow, and expensive. Now, a study using satellite images and machine learning reveals an alternative: mapping poverty from space.

High-powered cameras on satellites are constantly snapping photos of Earth, and scientists have wondered whether poverty can be detected just by analyzing the images. The first attempts to do that relied on images of the planet at night. The glow of electric lights paints a glittering map of a region's infrastructure, showing roughly where the rich and poor live. But at night, moderate economic underdevelopment doesn’t look much different from absolute poverty, defined by the World Bank as life on less than $1.90 per day.

So a team of social and computer scientists led by Marshall Burke, an economist at Stanford University in Palo Alto, California, has been sifting through daytime images. They, too, show only subtle differences between regions of absolute and moderate poverty. Both might have muddy, unpaved roads winding through clusters of tiny dwellings. But daytime images include other key indicators: How far away is the nearest source of water or the closest urban marketplace? Where are the agricultural fields?

Drawing conclusions from these subtle hints is beyond even a trained human expert—but perhaps not a computer. Making sense of big data sets with multiple variables is a classic challenge for the field of machine learning. The strategy is this: First, get a data set for which the target variable—in this case, per capita income—is already known. Then, train the computer on a subset of those data to create a statistical model that accurately predicts the target variable in the rest of the data.
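
For readers who want a concrete picture of that recipe, here is a minimal sketch in Python. Everything in it is an illustrative assumption: the features are random stand-ins for image-derived measurements, the "income" values are synthetic, and the ridge-regression model is just one simple choice, not the study's actual method.

```python
# A minimal sketch of the train-then-predict pattern described above,
# using synthetic data (the real study pairs satellite-image features
# with survey-measured consumption; the numbers here are made up).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_villages, n_features = 500, 20

# Hypothetical image-derived features for each surveyed village.
X = rng.normal(size=(n_villages, n_features))
# Synthetic "per capita income" that depends on a few of those features.
y = X[:, :3] @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.5, size=n_villages)

# Train on a subset where the target variable is known...
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = Ridge(alpha=1.0).fit(X_train, y_train)

# ...then check how well the model predicts the held-out rest.
print(f"R^2 on held-out villages: {r2_score(y_test, model.predict(X_test)):.2f}")
```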

Burke's team used a machine learning technique called a convolutional neural network, which has revolutionized the field of machine vision. They focused on five African countries: Nigeria, Tanzania, Uganda, Malawi, and Rwanda. These countries have both large proportions of their populations living in absolute poverty and good survey data to ground-truth any predictions made by the computer.
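
The team's own network was far larger and trained on real imagery, but a toy example, sketched below in PyTorch, shows the basic idea: stacked convolutional layers turn raw pixels into visual features, and a final layer maps those features to a single estimate for each image tile. The architecture, layer sizes, and tile dimensions here are illustrative assumptions, not the published model.

```python
# An illustrative convolutional network for satellite image tiles
# (not the team's actual model, which was much larger and pretrained).
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Stacked convolutions learn visual features (edges and textures,
        # then larger structures such as roads or rooftops) from raw pixels.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # A linear head maps the pooled features to a single number
        # (e.g., a wealth or income proxy for the tile).
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        h = self.features(x)
        h = h.mean(dim=(2, 3))  # global average pooling over the tile
        return self.head(h)

model = TinyCNN()
tile = torch.randn(1, 3, 64, 64)  # one fake 64x64 RGB satellite tile
print(model(tile).shape)  # torch.Size([1, 1])
```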

As the team reports online today in Science, daytime satellite images are dramatically better than nighttime images for mapping African poverty. Compared with the nighttime images, the daytime images were 81% more accurate at predicting poverty in areas below the absolute poverty line and 99% more accurate in areas where incomes are less than half that.

Ground-based surveys will still be needed to build and validate this tool, says Marc Levy, a political scientist at The Earth Institute at Columbia University in Palisades, New York, who was not involved in the research. But the study shows that satellites plus surveys are “vastly more powerful than either one alone,” he says, especially in regions where ground-based surveys are difficult or impossible. Extending this technique to the rest of the world will take more work, he notes. “These five countries are much more similar to each other than they are as a group to other world regions.” For example, he says, Africa is the last “holdout” in the historic trend toward urbanization, with most of its people still living in rural areas. “Using the techniques of this paper in countries that are majority urban is likely to be harder—though still likely to work.”