
An autonomous sentry freezes an “intruder” during a 2006 test of the weapons system by the South Korean military.

South Korean university’s AI work for defense contractor draws boycott

Fifty-seven scientists from 29 countries have called for a boycott of a top South Korean university because of a new center aimed at using artificial intelligence (AI) to bolster national security. The AI scientists claim the university is developing autonomous weapons, or “killer robots,” whereas university officials say the goal of the research is to improve existing defense systems.

On 20 February, the Korea Advanced Institute of Science and Technology (KAIST) in Daejeon, South Korea, celebrated its new Research Center for the Convergence of National Defense and Artificial Intelligence. A web page that has since been removed by the university said the center, to be operated jointly with South Korean defense company Hanwha Systems, would work on “AI-based command and decision systems, composite navigation algorithms for mega-scale unmanned undersea vehicles, AI-based smart aircraft training systems, and AI-based smart object tracking and recognition technology.”

Toby Walsh, a computer scientist at the University of New South Wales in Sydney, Australia, who organized the boycott, fears that the research will be applied to autonomous weapons, which can include unmanned flying drones or submarines, cruise missiles, autonomously operated sentry guns, or battlefield robots. “It is regrettable that a prestigious institution like KAIST looks to accelerate the arms race to develop such weapons,” an open letter from the boycotters states. “We therefore publicly declare that we will boycott all collaborations with any part of KAIST until such time as the President of KAIST provides assurances, which we have sought but not received, that the Center will not develop autonomous weapons lacking meaningful human control.”

In response to a query from ScienceInsider, KAIST President Sung-Chul Shin prepared a statement ruling out such activities. “I am saddened to hear about the announcement on the boycott of KAIST for allegedly developing killer robots. I would like to reaffirm that KAIST does not have any intention to engage in development of lethal autonomous weapons systems and killer robots. KAIST is significantly aware of ethical concerns in the application of all technologies including artificial intelligence.”

The boycott letter does not define “meaningful human control.” But the phrase is often interpreted as putting the trigger in the hand of a person. Next week in Geneva, Switzerland, the United Nations Group of Governmental Experts on Lethal Autonomous Weapons Systems will explore the elements of “meaningful human control” as part of an ongoing discussion of emerging technologies.

Walsh says he was heartened by Shin’s statement. “It is good to see a public commitment by KAIST for the first time to ‘meaningful human control.’ This will add weight to the UN discussions.”

Boycotting an entire university because of the research that might take place in one lab seems “a bit extreme to me,” says Ronald Arkin, a computer scientist at the Georgia Institute of Technology in Atlanta. “Please tell me specifically what types of systems these people are planning on researching that warrant a boycott,” says Arkin, who notes he’s not against a ban but believes this one is premature. “That’s a fair question, isn’t it?”