Is a crime scene gang-related? A new computer program may have the answer.

Artificial intelligence could identify gang crimes—and ignite an ethical firestorm

When someone roughs up a pedestrian, robs a store, or kills in cold blood, police want to know whether the perpetrator was a gang member: Do they need to send in a special enforcement team? Should they expect a crime in retaliation? Now, a new algorithm is trying to automate the process of identifying gang crimes. But some scientists warn that far from reducing gang violence, the program could do the opposite by eroding trust in communities, or it could brand innocent people as gang members.

That has created some tensions. At a presentation of the new program this month, one audience member grew so upset he stormed out of the talk, and some of the creators of the program have been tight-lipped about how it could be used.

“This is almost certainly a well-intended piece of work,” says Google software engineer Blake Lemoine, who is based in Mountain View, California, and has studied ways of reducing bias in artificial intelligence. “But have the researchers considered the possible unintended side effects?”

For years, scientists have been using computer algorithms to map criminal networks, or to guess where and when future crimes might take place, a practice known as predictive policing. But little work has been done on labeling past crimes as gang-related.

In the new work, researchers developed a system that can identify a crime as gang-related based on only four pieces of information: the primary weapon, the number of suspects, and the neighborhood and location (such as an alley or street corner) where the crime took place. Such analytics, which can help characterize crimes before they’re fully investigated, could change how police respond, says Doug Haubert, city prosecutor for Long Beach, California, who has authored strategies on gang prevention.

To classify crimes, the researchers invented something called a partially generative neural network. A neural network is made of layers of small computing elements that process data in a way reminiscent of the brain’s neurons. A form of machine learning, it improves based on feedback—whether its judgments were right. In this case, researchers trained their algorithm using data from the Los Angeles Police Department (LAPD) in California from 2014 to 2016 on more than 50,000 gang-related and non–gang-related homicides, aggravated assaults, and robberies.
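The paper does not publish its code, but the basic idea of a neural classifier trained on a few structured crime features can be illustrated with a short, hypothetical sketch. The layer sizes, feature encoding, and training loop below are illustrative assumptions, not the authors' published model:

```python
# Hypothetical sketch (not the authors' code): a minimal feed-forward
# network that maps four structured crime features to a
# gang-related / not-gang-related probability.
import torch
import torch.nn as nn

# Assume the categorical features (weapon, neighborhood, premise type)
# have already been numerically encoded; suspect count is numeric.
# Each record is then a simple 4-dimensional vector.
model = nn.Sequential(
    nn.Linear(4, 32),   # 4 input features -> hidden layer
    nn.ReLU(),
    nn.Linear(32, 1),   # hidden layer -> single logit
)

loss_fn = nn.BCEWithLogitsLoss()  # binary label: gang-related or not
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic stand-in data: 256 records with random features and labels.
X = torch.randn(256, 4)
y = torch.randint(0, 2, (256, 1)).float()

# The "feedback" loop: weights are nudged whenever predictions are wrong.
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
```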

The researchers then tested their algorithm on another set of LAPD data. The network was “partially generative,” because even when it did not receive an officer’s narrative summary of a crime, it could use the four factors noted above to fill in that missing information and then use all the pieces to infer whether a crime was gang-related. Compared with a stripped-down version of the network that didn’t use this novel approach, the partially generative algorithm reduced errors by close to 30%, the team reported at the Artificial Intelligence, Ethics, and Society (AIES) conference this month in New Orleans, Louisiana. The researchers have not yet tested their algorithm’s accuracy against trained officers.
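The "partially generative" step can likewise be sketched, again only as a hypothetical illustration: when the officer's narrative summary is unavailable, a small generator network produces a stand-in for those missing text features from the four structured inputs, and the classifier then uses both. The names, dimensions, and architecture here are assumptions, not the published design:

```python
# Hypothetical sketch of the "partially generative" idea: impute the
# missing narrative-summary representation from the structured features,
# then classify using the combined vector.
from typing import Optional

import torch
import torch.nn as nn

STRUCT_DIM, TEXT_DIM = 4, 16  # illustrative sizes

generator = nn.Sequential(        # structured features -> inferred
    nn.Linear(STRUCT_DIM, 32),    # stand-in for the text features
    nn.ReLU(),
    nn.Linear(32, TEXT_DIM),
)

classifier = nn.Sequential(       # structured + text features -> logit
    nn.Linear(STRUCT_DIM + TEXT_DIM, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
)

def predict(structured: torch.Tensor, text: Optional[torch.Tensor]) -> torch.Tensor:
    """Return a gang-related logit, imputing text features if absent."""
    if text is None:                  # narrative summary unavailable
        text = generator(structured)  # fill in the missing modality
    return classifier(torch.cat([structured, text], dim=-1))

# Example: a single record with no narrative summary.
record = torch.randn(1, STRUCT_DIM)
probability = torch.sigmoid(predict(record, text=None))
print(probability)  # estimated probability the crime is gang-related
```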

It’s an “interesting paper,” says Pete Burnap, a computer scientist at Cardiff University who has studied crime data. But although the predictions could be useful, it’s possible they would be no better than officers’ intuitions, he says. Haubert agrees, but he says that having the assistance of data modeling could sometimes produce “better and faster results.” Such analytics, he says, “would be especially useful in large urban areas where a lot of data is available.”

But researchers attending the AIES talk raised concerns during the Q&A afterward. How could the team be sure the training data were not biased to begin with? What happens when someone is mislabeled as a gang member? Lemoine asked rhetorically whether the researchers were also developing algorithms that would help heavily patrolled communities predict police raids.

Hau Chan, a computer scientist now at Harvard University who was presenting the work, responded that he couldn't be sure how the new tool would be used. "I'm just an engineer," he said. Lemoine, adopting a heavy German accent, quoted a lyric from a song about the wartime rocket scientist Wernher von Braun: "Once the rockets are up, who cares where they come down?" Then he angrily walked out.

Approached later for comment, Lemoine said he had talked to Chan to smooth things over. “I don’t necessarily think that we shouldn’t build tools for the police, or that we should,” Lemoine said (commenting, he specified, as an individual, not as a Google representative). “I think that when you are building powerful things, you have some responsibility to at least consider how could this be used.”

Two of the paper’s senior authors spent nearly 20 minutes deflecting such questions during a later interview. “It’s kind of hard to say at the moment,” said Jeffrey Brantingham, an anthropologist at the University of California, Los Angeles. “It’s basic research.” Milind Tambe, a computer scientist at the University of Southern California in Los Angeles, agreed. Might a tool designed to classify gang crime be used to, say, classify gang crime? They wouldn’t say.

The researchers are happy to talk about other applications for partially generative neural networks: classifying wildlife crime, improving grasslands management, and predicting which people would be best at spreading public health information to their peers. The approach could apply, they say, any time you have only partial information about a situation, which is pretty much all the time.