

Which movies get artificial intelligence right?

Take Science’s quiz and test your knowledge of AI in the movies!

In the opening scene of the 1982 film Blade Runner, an interrogator asks an android named Leon questions “designed to provoke an emotional response.” According to the movie, empathy is one of the few features that distinguish humans from artificial intelligence (AI). When the test shifts to questions about his mother, Leon stands up, draws a gun, and shoots his interviewer to death. 

It’s not a happy ending for the human, but when Hollywood portrays AI, it rarely is. Writers and directors have been pitting man against machine on the silver screen for decades, but just how scientifically plausible are these plots? We consulted a group of AI experts and asked them to weigh in on 10 different films in the genre. We’ve ranked them least to most plausible. (Danger, Will Robinson: Spoilers ahead.) 


10. Chappie (2015)

Summary: A robot police warrior gains self-awareness after a programmer cracks the code for true AI.

What it gets right: Chappie is “born” with a very basic understanding of the world and his surroundings, but he learns through experience. Although the film might not be the most realistic portrayal of machine learning, it is accurate in the sense that many of today’s most advanced AI algorithms require a trial-and-error learning phase. “Certainly the fact that he learns very quickly is potentially quite realistic,” says Stuart Russell, a computer scientist at the University of California, Berkeley, and the author of Artificial Intelligence: A Modern Approach.

What it gets wrong: There are a lot of problems with this movie, according to the experts. For one, a single rogue programmer writes a program for true AI by himself in his apartment. The experts agree this sort of breakthrough is highly unrealistic, and that the first true AI will be developed slowly, over time, by a large team of scientists. Another issue is brain/consciousness uploading—the idea that human consciousness can somehow be extracted from a human brain and replicated on a chip—which is a major theme in the movie. “It’s pure speculation that has no basis in fact whatsoever,” Russell says. “It’s nonsense.” That strikes a blow to the idea, popularized by futurist Ray Kurzweil, that we’ll one day be able to upload our consciousness into computers, granting us immortality, adds Randy Goebel, a computer scientist at the University of Alberta in Canada who studies the theory and application of intelligent systems. “Kurzweil is just plain wrong.”

Realism score: 1/10


9. A.I. (2001)

Summary: After their son must be put into a hibernationlike stasis to save his life, a family adopts a robotic boy, David, who is programmed to love.

What it gets right: Throughout the film, David has one unchanging objective that is the direct result of his programming—to love and be loved. He never goes rogue or changes his goals. “This robot boy wants to be loved. If you design this robot child in such a way, it will have these desires and it will act in such a way,” says Marcus Hutter, a computer scientist at the Australian National University who studies mathematical approaches to AI.  “Since the aim of the company was to produce artificial children, it makes a lot of sense that this AI behaved as it did.” 

What it gets wrong: Like in Chappie, we see a single team of scientists create AI over a very short period of time. “I cringe when I watch that, starting from scratch in 18 months, they achieve a conscious robot,” Hutter says. “And then there’s a button you press to turn on the consciousness module.” Then there’s the matter of David and his kin’s integration into the larger world. “I thought the robots in A.I. were too well accepted into society,” Russell says. 

Realism score: 3/10


8. Blade Runner (1982)

Summary: In the future, humanity’s genetic engineering technology allows for the creation of organic life in a form completely indistinguishable from humans, but these “replicants” only live for 4 years and aren’t allowed on Earth.

What it gets right: By opting for organic AI over mechanical, Blade Runner asks if consciousness can be grown in a lab. “How do I know that you have feelings?” Hutter asks. “I have no way of really knowing that. I just assume that because you are built up similarly to me and I know that I have emotions.” By being “built up similarly,” the replicants muddy the waters of defining consciousness even further, he says.

What it gets wrong: The experts were somewhat divided on Ridley Scott’s critically acclaimed foray into film noir. Hutter rated it his “least favorite” with respect to realism, whereas some of his colleagues placed it more in the middle of the pack. The replicants’ sentience seems to be derived, at least in part, from their implanted memories, and no genetic engineering technique today can implant complex memories into humans.

Realism score: 4/10


7. Ex Machina (2015)

Summary: A young programmer (Caleb) wins a trip to visit a computer genius’s (Nathan) compound where he will get to administer a Turing test (designed to test whether a machine is capable of humanlike intelligence) to a potentially sentient robot (Ava). 

What it gets right: Nathan doesn’t suddenly “crack the AI problem.” He’s not sure if Ava is sentient or not; she needs to be tested. Although Hutter thinks the movie bungled the accuracy on the Turing test, he praises it for its sophisticated treatment of consciousness and for avoiding over-the-top action scenes. Consciousness is one of the biggest themes in all of these films, and Hutter thinks that if consciousness is ever achieved, it’s likely to be an emergent property of advanced AI rather than something that was explicitly programmed or activated: “In general I would say that if I have a system which is sufficiently complicated … if they display behavior we would interpret as emotions as humans, then there’s a reasonable chance that it has emotions.” Ex Machina at least treats the subject as complicated—something that needs to be tested.   

Russell is less optimistic that such a test will ever be needed. “I don’t think anyone’s going to crack consciousness—at least not absent a major conceptual breakthrough,” he says. “It’s not going to come from programming; it’s going to come from a complete philosophical conception of what we’re talking about.” The problem, Russell says, is that we don’t understand the origins of our own consciousness well enough to program one. “Nobody in AI is working on building conscious machines because we just have nothing to go on,” he says. “We just don’t have a clue about what to do.”

What it gets wrong: Yet again we see the “isolated genius” trope. Though Nathan runs the massive internet company BlueBook, it appears most of his work on AI has been done solo—alone in a high-tech house in the woods. The film also has a particularly poor explanation of the technology behind Ava’s (maybe) sentient brain. (This is somewhat forgivable considering we don’t know how to create AI in the first place.) In the scene in question, Nathan shows Caleb the various pieces of hardware inside of Ava. When they get to the brain, Nathan mentions it’s not hardware, but “wetware” implying a biological component. But the software behind Ava’s intelligence is apparently derived from compiling the massive amount of data in Internet searches. How search queries equate to sentience is never explained in any detail. The lack of specifics is especially surprising given how excited Nathan is to expound upon the ins and outs of the tech behind Ava’s genitals.

Realism score: 5/10


6. Transcendence (2014)

Summary: Computer scientist Will Caster becomes the world’s first AI specimen when his consciousness is uploaded to a quantum computer after he dies.

What it gets right: Until the very end, Will Caster’s AI self is confined entirely to the digital world; he exists solely as a computer program. The experts agree that shrugging off a physical body could allow an AI considerable freedoms and safety. “Once you have the ability to change the mind or the brain you can also get rid of a lot of evolutionary artifacts,” Hutter says. “I don’t think they’ll care so much about becoming like humans.”

What it gets wrong: The whole brain uploading/downloading thing. (See #10.) 

Realism score: 6/10


5. I, Robot (2004)

Summary: After an executive (Alfred Lanning) at the USR robotics corporation is murdered, detective Del Spooner suspects one of the company’s own robots is the perpetrator.

What it gets right: Of any film on the list, I, Robot addresses Isaac Asimov’s three laws of robotics most directly: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law; 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. 

On their own, the laws might be a fairly good starting point for creating safe AIs. Sonny, the film’s main robotic character, appears to have somehow defied his programming and gone rogue. According to the experts, this is something that could never happen, but I, Robot provides a very reasonable explanation for the machines’ sudden change of behavior when it is revealed that an AI named VIKI has introduced the “Zeroth law” which states: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” Like in many of the other films on the list, this directive is taken to the extreme when the robots decide that humanity is a danger to itself and must be pacified. Unpredicted and unwanted consequences could become a reality if we aren’t careful about how we program advanced AIs. 

Plus, Spooner’s Audi is totally rad.

What it gets wrong: Adding the Zeroth law or any other overriding directive to the robots’ programming could certainly allow them to change their behavior and violate the traditional Three Laws of Robotics, but this doesn’t explain why or how VIKI came to the decision to implement the Zeroth law in the first place. All the experts are quick to point out that robots do not change their own programming, and the notion that they could spontaneously develop new agendas is pure fiction. Hutter says the underlying goals programmed into a machine are “static”: “There are mathematical theories that prove a perfectly rational goal-achieving agent has no motivation to change its own goals.”

Realism score: 6.5/10


4. Colossus: The Forbin Project (1970)

Summary: An American supercomputer designed to prevent nuclear war teams up with its Russian counterpart, and together, with control over most of the world’s nukes, the two hold humanity ransom until it relinquishes control of society to its new computer overlords.

What it gets right: In Hollywood there seems to be a misconception that a machine must acquire sentience or free will to oppose humans. But according to Russell, “It’s completely unnecessary. It’s completely nonscientific.” Whether the supercomputers in this film are sentient is debatable, but they wouldn’t need to be to oppose humanity. All a machine needs is programming that conflicts with our own wants. “If we give these machines goals and we’re not very careful about it, they’ll do what we asked them to, but we may not like the results,” Russell says. Hutter goes so far as to say that he might not mind being ruled by a sentient machine. Humans are greedy and prone to pursue self-interest even at the expense of others, he notes. A completely rational computer, with intelligence far beyond our own, might actually be able to create a fairer society for everyone. He agrees with Colossus when, at the film’s conclusion, the AI states, “You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride.”

What it gets wrong: Aside from the idea that a computer operating on punch cards would have enough computational power to outwit and subjugate humanity, there isn’t too much wrong with Colossus. In a 2001 publication, computer scientist Rodney Brooks pointed out that the progression of any technology happens in incremental steps. If we ever create a robot that we can’t control, we will most likely have already created many robots that we almost couldn’t control or a robot that we occasionally lost control of. He thinks there will be plenty of warning if we ever get close. Considering IBM’s Watson didn’t even know it was playing Jeopardy when it bested Ken Jennings and Brad Rutter in 2011, it seems likely that we have some time. 

Realism score: 7/10

3. Bicentennial Man (1999)

Summary: A robot butler becomes human over several generations, even replacing his mechanical pieces with lab-grown organs. 

What it gets right: For the first time in our list, we have nonviolent AI. Although it doesn’t make for a very compelling story, most of the experts are optimistic that humanity will be able to peacefully coexist alongside AI. “As far as accuracy, there aren’t any things that are outrageous,” Goebel says. “The fear or anxiety that Hollywood portrays  … is maybe the most serious thing that’s mistreated.”

What it gets wrong: Hutter says the notion that a robot as advanced as Andrew would have any desire to become human is probably “somewhat egocentric.” As in Transcendence, sufficiently advanced machines may recognize the benefits of their inorganic circuitry. Perhaps it’s missing the entire point of the film, but it seems unlikely that an AI as smart as Andrew would relish the opportunity to become so fragile. And, as always, there’s the issue that Andrew has inexplicably acquired goals and wants outside of his original programming.

Realism score: 7.5/10


2. Her (2013)

Summary: A recently divorced writer (Theodore Twombly) installs a new sentient operating system (Samantha) on his computer and the two begin dating. 

What it gets right: Samantha doesn’t have a body, but she does have a voice. Her shows the risks of becoming emotionally attached to machines, and does so without needing to package AI into a humanoid frame. Russell, in particular, warns against designing humanoid AIs. “People are going to become emotionally attached,” he says. “You’re less likely to ascribe consciousness to a grey box. That’s one reason I think it’s a bad reason to have humanoid robots. Imagine how difficult that [would be] for a child growing up.”

Furthermore, an AI may have different interests than its human creators. In the film, Theodore Twombly may grow as a result of his relationship with Samantha, but the two were clearly never an ideal pair. Samantha is free to roam the Internet and the world, carrying out hundreds of conversations at once, whereas Twombly is confined to the limitations of his body and brain. “Machines don’t have to experience the world at the same rate as humans,” Russell says. That makes them great for performing millions of computations per second, but pretty lousy companions.

What it gets wrong: There’s no explanation for how Samantha works or what it means to evolve beyond the need for matter. Also, considering how advanced AI has become, the rest of civilization appears strangely unchanged. 

Realism score: 8/10


1. 2001: A Space Odyssey (1968)

Summary: While investigating a strange signal emanating from a large black monolith on the moon, the crew of Discovery One discovers that the ship’s onboard AI (HAL 9000) is malfunctioning.

What it gets right: The experts seem to agree that 2001’s treatment of AI is the most accurate of any movie on the list. HAL certainly seems sentient, but when asked whether the computer has feelings or emotions, one of the astronauts (Dave) responds that there’s really no way to know. When asked whether he would believe a computer that claimed to have feelings, Russell says something similar: “It could be that we end up just shrugging our shoulders.” HAL seems to express fear as Dave slowly deactivates him, but the desperate pleading could just be one final attempt to carry out his mission.

HAL also sticks to his programming. Like Colossus, HAL never strays from his original goals. All of his seemingly nefarious actions are carried out simply because he believes it is the best way to complete the mission. It’s not a survival instinct or emotion that makes HAL into a villain, just simple programming. 2001 makes it clear that consciousness is not a requirement for AI opposition. 

What it gets wrong: Not too much. You don’t get to the top of this list by messing up too royally. We took a point off because there’s no explanation of how HAL works, but again, since we don’t know how to build an advanced AI, no explanation might be better than some vague science jargon. 

Realism score: 9/10 


*See Science's special section on artificial intelligence.