
Algorithms that detect the threat level of airline passengers might operate without accountability. 


Q&A: Should artificial intelligence be legally required to explain itself?

As artificial intelligence (AI) becomes more sophisticated, it also becomes more opaque. Machine-learning algorithms can grind through massive amounts of data, generating predictions and making decisions without being able to explain to humans what they are doing. In matters of consequence—from hiring decisions to criminal sentencing—should we require justifications? A commentary published today in Science Robotics discusses regulatory efforts to make AI more transparent, explainable, and accountable. Science spoke with the article’s primary author, Sandra Wachter, a researcher in data ethics at the University of Oxford in the United Kingdom and the Alan Turing Institute. This interview has been edited for brevity and clarity.

Q: In what areas is transparency needed? 

A: An algorithm can do very boring work for you: it’s efficient, it doesn’t get tired, and it can often make better decisions than a human can. But transparency is needed where technologies affect us in significant ways. Algorithms decide if individuals are legitimate candidates for mortgages, loans, or insurance; they also determine interest rates and premiums. Algorithms make hiring decisions and decide if applicants can attend universities. St. George’s Hospital Medical School in London developed software for initial screening of applicants back in the 1970s. It was later revealed to show racial and gender discrimination. Judges and the police use algorithms for sentencing, granting parole, and predictive policing. Last year, ProPublica reported that a popular program called COMPAS overestimated the risks of black defendants reoffending. Robotics and autonomous systems can be used for surgery, care, transport, and criminal justice. We should be entitled to assess the accuracy and thinking behind these decisions.

Q: How have regulators responded to this need?

A: Regulators around the world are discussing and addressing these issues, but sometimes they must satisfy competing interests. On the one hand, the public sector must ensure that algorithms, AI, and robotics are deployed in safe ways, and guarantee that these systems do not discriminate or otherwise harm individuals. On the other hand, regulation requiring transparency could hamper innovation and research, and have an adverse effect on business interests, such as trade secrets.

Regulation can cause problems if requirements are not well defined from the outset. Regulation can also be problematic if it calls for something that’s technically impossible to implement. Some people in the AI community feel that you can't always give explanations because not even the developers of the systems actually understand how they work. With AlphaGo, the programmers didn’t know how the algorithm came up with its moves.

Q: Are there differences between how U.S. and European regulators have acted? 


A: The U.S. believes in a more soft-touch, self-regulatory approach. Its current policies focus more on educating researchers and on voluntary codes of practice for the private sector. This might reflect a belief that too much regulation can have a negative effect on research, innovation, and economic growth.

The EU is more inclined to create hard laws that are enforceable. The EU General Data Protection Regulation, or GDPR, which will come into force in May 2018, is an excellent example. This framework creates certain transparency rights and safeguards against automated decision-making. Article 22, for example, grants individuals the right to contest a completely automated decision if it has legal or other significant effects on them. Other articles require data collectors such as advertisers to provide people with access to the collectors’ data on them, and to inform people about the general functionality of the automated system when decisions are made using that data.

Q: Has enough been made of the fact that human decision-makers are also “black boxes”? 

A: Yes, humans often have prejudices that lead to discriminatory decisions, and we often have no way of knowing when and why people are biased. With machine learning we have the potential to make less biased decisions. But algorithms trained with biased data pick up and replicate these biases, and develop new ones.

Q: Can you give an example? 

A: If you’re hiring someone for a management position and you feed your algorithm data from the last 30 years, the data will be skewed, and the projected ideal candidate will be someone male, white, and in his 40s or 50s. I am a woman in my early 30s, so I would be filtered out immediately, even if I’m suitable for that position. And it gets even worse, because sometimes algorithms are used to display job ads, so I wouldn’t even see that the position is available.
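The mechanism Wachter describes can be sketched with a toy model. This is an illustrative sketch only, not from the article or any real screening system: the data, function names, and scoring rule are all hypothetical, but they show how a model that merely learns the frequencies of past hires reproduces the historical skew.

```python
# Toy screening model: learn feature frequencies from 30 years of past hires.
# All data here is hypothetical, invented to illustrate skewed training data.

from collections import Counter

# Historical hires, heavily skewed toward one demographic.
past_hires = (
    [{"gender": "male", "age_band": "40-59"}] * 28
    + [{"gender": "male", "age_band": "25-39"},
       {"gender": "female", "age_band": "40-59"}]
)

def fit(records):
    """Count how often each (feature, value) pair appears among past hires."""
    counts = Counter()
    for record in records:
        for feature, value in record.items():
            counts[(feature, value)] += 1
    return counts

def score(counts, candidate, n):
    """Score a candidate as the product of per-feature hire frequencies."""
    s = 1.0
    for feature, value in candidate.items():
        s *= counts[(feature, value)] / n
    return s

model = fit(past_hires)
n = len(past_hires)

# A candidate matching the historical profile scores high...
print(score(model, {"gender": "male", "age_band": "40-59"}, n))
# ...while an equally qualified woman in her early 30s scores near zero,
# because qualifications never enter the model — only historical frequency.
print(score(model, {"gender": "female", "age_band": "25-39"}, n))
```

Nothing in this sketch ever looks at competence; the model simply rewards resemblance to past hires, which is exactly how a skewed history becomes a skewed filter.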

Other times we have more latent biases. There’s a textbook hypothetical example: people with red cars might receive higher insurance premiums, which is not discrimination against a protected group but could have unintended consequences. Sports cars are often red, and people who buy sports cars are often macho people who drive more dangerously and have more accidents, so if they pay higher premiums, that’s fair. But if red cars are more likely to be damaged in accidents and sold secondhand, then people with less disposable income might also be more likely to drive them, and they will receive higher premiums too. So we can’t tell, just from looking at the data we’re using, whether it could have discriminatory effects.

But we can develop better tools to flag biases and act against them.