Artificial intelligence reveals how U.S. stereotypes about women and minorities have changed in the past 100 years

How do you measure the stereotypes of the past after the past is gone? You could read what people wrote and tally up the slurs, but bias is often subtler than a single word. Researchers are now developing artificial intelligence (AI) to help out. A new study has analyzed which stereotypes are still holding fast—and which are going the way of the floppy disk.

To quantify bias, one team turned to a type of AI known as machine learning, which allows computers to analyze large quantities of data and find patterns automatically. They designed their program to use word embeddings, strings of numbers that represent a word’s meaning based on its appearance next to other words in large bodies of text. If people tend to describe women as emotional, for example, “emotional” will appear alongside “woman” more frequently than “man,” and word embeddings will pick that up: The embedding for “emotional” will be closer numerically to that for “woman” than “man.” It will have a female bias.
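The idea can be sketched with a toy example. The vectors and values below are illustrative inventions, not embeddings from the study's corpus; the bias score here is a simple difference of cosine similarities, one common way to compare how close a word sits to two reference words:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy 4-dimensional embeddings (made-up values for illustration only).
woman     = np.array([0.9, 0.1, 0.3, 0.4])
man       = np.array([0.1, 0.9, 0.3, 0.4])
emotional = np.array([0.8, 0.2, 0.5, 0.3])

# A positive score means "emotional" sits numerically closer to "woman"
# than to "man" -- a female bias in the embedding space.
bias = cosine(emotional, woman) - cosine(emotional, man)
print(f"female-bias score for 'emotional': {bias:.3f}")
```

In real embeddings trained on large text corpora, the same comparison is run over many trait words at once, so a single skewed co-occurrence does not dominate the measurement.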

The researchers first wanted to see whether embeddings were a good measure of stereotypes. Looking at published English text from various decades, they found that their program’s embeddings clearly lined up with the results of surveys on gender and ethnic stereotypes from the same times. Then they analyzed sentiments that had not been surveyed, using 200 million words taken from U.S. newspapers, books, and magazines from the 1910s to the 1990s.

Going decade by decade, they found that words related to competence—such as “resourceful” and “clever”—were slowly becoming less masculine. But words related to physical appearance—such as “alluring” and “homely”—were stuck in time: Over the decades, their embeddings remained distinctly “female.” Other findings focused on race and religion. Asian names became less tightly linked to terms for outsiders (including “devious”), and in a separate data set—gathered from The New York Times from 1988 to 2005—words related to terrorism became more closely associated with words related to Islam after the 1993 and 2001 attacks on the World Trade Center in New York City. People from other times and places might not be able to tell you their biases, but they can’t hide them either. The work appears in the Proceedings of the National Academy of Sciences.