At 11:15 on a Monday morning in May, an ordinary-looking delivery van rolls into the intersection of 16th and K streets NW in downtown Washington, D.C., just a few blocks north of the White House. Inside, suicide bombers trip a switch.
Instantly, most of a city block vanishes in a nuclear fireball two-thirds the size of the one that engulfed Hiroshima, Japan. Powered by 5 kilograms of highly enriched uranium that terrorists had hijacked weeks earlier, the blast smashes buildings for at least a kilometer in every direction and leaves hundreds of thousands of people dead or dying in the ruins. An electromagnetic pulse fries cellphones within 5 kilometers, and the power grid across much of the city goes dark. Winds shear the bomb's mushroom cloud into a plume of radioactive fallout that drifts eastward into the Maryland suburbs. Roads quickly become jammed with people on the move—some trying to flee the area, but many more looking for missing family members or seeking medical help.
It's all make-believe, of course—but with deadly serious purpose. Known as National Planning Scenario 1 (NPS1), that nuclear attack story line originated in the 1950s as a kind of war game, a safe way for national security officials and emergency managers to test their response plans before having to face the real thing.
Sixty years later, officials are still reckoning with the consequences of a nuclear catastrophe in regular NPS1 exercises. Only now, instead of following fixed story lines and predictions assembled ahead of time, they are using computers to play what-if with an entire artificial society: an advanced type of computer simulation called an agent-based model.
Today's version of the NPS1 model includes a digital simulation of every building in the area affected by the bomb, as well as every road, power line, hospital, and even cell tower. The model includes weather data to simulate the fallout plume. And the scenario is peopled with some 730,000 agents—a synthetic population statistically identical to the real population of the affected area in factors such as age, sex, and occupation. Each agent is an autonomous subroutine that responds in reasonably human ways to other agents and the evolving disaster by switching among multiple modes of behavior—for example, panic, flight, and efforts to find family members.
The point of such models is to avoid describing human affairs from the top down with fixed equations, as is traditionally done in such fields as economics and epidemiology. Instead, outcomes such as a financial crash or the spread of a disease emerge from the bottom up, through the interactions of many individuals, leading to a real-world richness and spontaneity that is otherwise hard to simulate.
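That bottom-up logic can be sketched in a few lines of code. In this toy example (the rules and names are illustrative, not drawn from NPS1 or any real model), each agent is a small piece of software with its own state and a local rule; a global pattern—here, a "rumor" sweeping down a line of agents—emerges only from neighbor-to-neighbor interactions, with no top-down equation describing the whole:

```python
class Agent:
    """A minimal agent: private state plus a rule for reacting to neighbors."""
    def __init__(self):
        self.informed = False

def step_all(agents):
    # Synchronous update: every agent reacts to the *previous* tick's state,
    # so information can travel at most one neighbor per tick.
    snapshot = [a.informed for a in agents]
    for i, a in enumerate(agents):
        left = snapshot[i - 1] if i > 0 else False
        right = snapshot[i + 1] if i < len(agents) - 1 else False
        if left or right:
            a.informed = True

def run(n_agents=100, steps=50):
    agents = [Agent() for _ in range(n_agents)]
    agents[0].informed = True  # seed a single informed agent
    for _ in range(steps):
        step_all(agents)
    return sum(a.informed for a in agents)
```

After 50 ticks the rumor has reached exactly the 51 agents within 50 steps of the seed—an aggregate outcome nobody wrote an equation for; it falls out of the local rule.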
That kind of detail is exactly what emergency managers need, says Christopher Barrett, a computer scientist who directs the Biocomplexity Institute at Virginia Polytechnic Institute and State University (Virginia Tech) in Blacksburg, which developed the NPS1 model for the government. The NPS1 model can warn managers, for example, that a power failure at point X might well lead to a surprise traffic jam at point Y. If they decide to deploy mobile cell towers in the early hours of the crisis to restore communications, NPS1 can tell them whether more civilians will take to the roads, or fewer. "Agent-based models are how you get all these pieces sorted out and look at the interactions," Barrett says.
The downside is that models like NPS1 tend to be big—each of the model's initial runs kept a 500-microprocessor computing cluster busy for a day and a half—forcing the agents to be relatively simple-minded. "There's a fundamental trade-off between the complexity of individual agents and the size of the simulation," says Jonathan Pfautz, who funds agent-based modeling of social behavior as a program manager at the Defense Advanced Research Projects Agency in Arlington, Virginia.
But computers keep getting bigger and more powerful, as do the data sets used to populate and calibrate the models. In fields as diverse as economics, transportation, public health, and urban planning, more and more decision-makers are taking agent-based models seriously. "They're the most flexible and detailed models out there," says Ira Longini, who models epidemics at the University of Florida in Gainesville, "which makes them by far the most effective in understanding and directing policy."
The roots of agent-based modeling go back at least to the 1940s, when computer pioneers such as Alan Turing experimented with locally interacting bits of software to model complex behavior in physics and biology. But the current wave of development didn't get underway until the mid-1990s.
One early success was Sugarscape, developed by economists Robert Axtell of George Mason University in Fairfax, Virginia, and Joshua Epstein of New York University (NYU) in New York City. Because their goal was to simulate social phenomena on ordinary desktop computers, they pared agent-based modeling down to its essence: a set of simple agents that moved around a grid in search of "sugar"—a foodlike resource that was abundant in some places and scarce in others. Though simple, the model gave rise to surprisingly complex group behaviors such as migration, combat, and neighborhood segregation.
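The essence of the Sugarscape movement rule fits in a short function. The sketch below is a loose paraphrase under simplifying assumptions (square grid, one agent per step, the classic look-along-the-compass-directions rule); the data layout and parameter names are invented for illustration:

```python
def sugarscape_step(grid, agents, vision=1):
    """One tick of a bare-bones Sugarscape: each agent scans the cells it
    can see, moves to the richest one, and harvests the sugar there."""
    size = len(grid)
    for agent in agents:
        x, y = agent["pos"]
        # Visible cells: the current cell plus cells within `vision`
        # along the four compass directions (the classic movement rule).
        candidates = [(x, y)]
        for d in range(1, vision + 1):
            for nx, ny in ((x + d, y), (x - d, y), (x, y + d), (x, y - d)):
                if 0 <= nx < size and 0 <= ny < size:
                    candidates.append((nx, ny))
        # Move to the visible cell with the most sugar and harvest it.
        best = max(candidates, key=lambda c: grid[c[0]][c[1]])
        agent["pos"] = best
        agent["sugar"] += grid[best[0]][best[1]]
        grid[best[0]][best[1]] = 0
```

From rules this simple—plus regrowth, metabolism, and reproduction in the full model—came the migration and segregation patterns that made Sugarscape famous.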
Another milestone of the 1990s was the Transportation Analysis and Simulation System (Transims), an agent-based traffic model developed by Barrett and others at the Los Alamos National Laboratory in New Mexico. Unlike traditional traffic models, which used equations to describe moving vehicles en masse as a kind of fluid, Transims modeled each vehicle and driver as an agent moving through a city's road network. The simulation included a realistic mix of cars, trucks, and buses, driven by people with a realistic mix of ages, abilities, and destinations. When applied to the road networks in actual cities, Transims did better than traditional models at predicting traffic jams and local pollution levels—one reason why Transims-inspired agent-based models are now a standard tool in transportation planning.
A similar shift was playing out for epidemiologists. For much of the past century, they have evaluated disease outbreaks with a comparatively simple set of equations that divide people into a few categories—such as susceptible, contagious, and immune—and that assume perfect mixing, meaning that everybody in the affected region is in contact with everyone else. Those equation-based models were run first on paper and then on computers, and they are still used widely. But epidemiologists are increasingly turning to agent-based models to include factors that the equations ignore, such as geography, transportation networks, family structure, and behavior change—all of which can strongly affect how disease spreads. During the 2014 Ebola outbreak in West Africa, for example, the Virginia Tech group used an agent-based model to help the U.S. military identify sites for field hospitals. Planners needed to know where the highest infection rates would be when the mobile units finally arrived, how far and how fast patients could travel over the region's notoriously bad roads, and a host of other issues not captured in the equations of traditional models.
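The "comparatively simple set of equations" is typically a compartmental model such as SIR (susceptible–infectious–recovered). A minimal sketch, using a simple Euler time step and the perfect-mixing assumption the article describes (parameter values here are arbitrary examples):

```python
def sir_step(s, i, r, beta, gamma, dt=1.0):
    """One Euler step of the classic SIR equations.
    Perfect mixing: every susceptible person is equally exposed to every
    infectious person, so new infections scale with the product s * i."""
    new_infections = beta * s * i * dt   # beta: transmission rate
    new_recoveries = gamma * i * dt      # gamma: recovery rate
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

def run_sir(s0=0.99, i0=0.01, beta=0.3, gamma=0.1, steps=200):
    s, i, r = s0, i0, 0.0
    for _ in range(steps):
        s, i, r = sir_step(s, i, r, beta, gamma)
    return s, i, r
```

Everything an agent-based model adds—geography, road quality, household structure, behavior change—is precisely what these three equations throw away by lumping the whole population into a few perfectly mixed compartments.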
In another example, Epstein's laboratory at NYU is working with the city's public health department to model potential outbreaks of Zika, a mosquito-borne virus that can lead to catastrophic birth defects. The group has devised a model that includes agents representing all 8.5 million New Yorkers, plus a smaller set of agents representing the entire population of individual mosquitoes, as estimated from traps. The model also incorporates data on how people typically move between home, work, school, and shopping; on sexual behavior (Zika can be spread through unprotected sex); and on factors that affect mosquito populations, such as seasonal temperature swings, rainfall, and breeding sites such as caches of old tires. The result is a model that not only predicts how bad such an outbreak could get—something epidemiologists could determine from equations—but also suggests where the worst hot spots might be.
In economics, agent-based models can be a powerful tool for understanding global poverty, says Stéphane Hallegatte, an economist at the World Bank in Washington, D.C. If all you look at are standard metrics such as gross domestic product (GDP) and total income, he says, then in most countries you're seeing only rich people: The poor have so little money that they barely register.
To do better, Hallegatte and his colleagues are looking at individual families. His team built a model with agents representing 1.4 million households around the globe—roughly 10,000 per country—and looked at how climate change and disasters might affect health, food security, and labor productivity. The model estimates how storms or drought might affect farmers' crop yields and market prices, or how an earthquake might cripple factory workers' incomes by destroying their cars, the roads, or even the factories.
The model suggests something obvious: Poor people are considerably more vulnerable to disaster and climate change than rich people. But Hallegatte's team saw a remarkable amount of variation. If the poor people in a particular country are mostly farmers, for example, they might actually benefit from climate change when global food prices rise. But if the country's poor people are mostly packed into cities, that price rise could hurt badly.
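The mechanism behind that variation is simple to state: a household's exposure to food prices depends on whether it is a net seller or a net buyer of food. A toy sketch with invented numbers (not the World Bank model) makes the sign flip visible:

```python
def income_after_price_rise(household, price_factor):
    """Toy net-seller/net-buyer mechanism: reprice both sides of a
    household's food ledger. A household selling more food than it buys
    gains when prices rise; a net buyer loses. Illustrative only."""
    return (household["other_income"]
            + household["food_sold"] * price_factor
            - household["food_bought"] * price_factor)

# Hypothetical households (values in arbitrary currency units at old prices):
farmer = {"food_sold": 100, "food_bought": 30, "other_income": 20}
urban = {"food_sold": 0, "food_bought": 60, "other_income": 100}
```

With a 50% food-price rise (`price_factor=1.5`), the farming household's income rises from 90 to 125 while the urban household's falls from 40 to 10—the same aggregate shock, opposite effects at the household level.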
That kind of granularity has made it easier for the World Bank to tailor its recommendations to each country's needs, Hallegatte says—and much easier to explain the model's results in human terms rather than economic jargon. "Instead of telling a country that climate change will decrease their GDP by X%," he says, "you can say that 10 million people will fall into poverty. That's a number that's much easier to understand."
Given how much is at stake in those simulations, Barrett says, users always want to know why they should trust the results. How can they be sure that the model's output has anything to do with the real world—especially in cases such as nuclear disasters, which have no empirical data to go on?
Barrett says that question has several answers. First, users shouldn't expect the models to make specific predictions about, say, a stock market crash next Tuesday. Instead, most modelers accommodate the inevitable uncertainties by averaging over many runs of each scenario and displaying a likely range of outcomes, much like landfall forecasts for hurricanes. That still allows planners to use the model as a test bed to game out the consequences of taking action A, B, or C.
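The hurricane-cone style of reporting can be sketched as a small ensemble harness. Here the individual "run" is a stand-in stub with invented numbers (a real model would simulate agents); the point is the wrapper, which averages many stochastic runs and reports a likely range rather than a single prediction:

```python
import random
import statistics

def run_scenario(seed):
    """Stand-in for one stochastic model run: a real agent-based model
    would go here. We just draw a noisy outcome around a nominal value."""
    rng = random.Random(seed)
    return 1000 + rng.gauss(0, 50)

def outcome_range(n_runs=500):
    """Average many runs and report a central estimate plus a 5th-95th
    percentile band, much like a hurricane landfall cone."""
    outcomes = sorted(run_scenario(seed) for seed in range(n_runs))
    lo = outcomes[int(0.05 * n_runs)]
    hi = outcomes[int(0.95 * n_runs)]
    return statistics.mean(outcomes), (lo, hi)
```

Planners then compare the bands produced under actions A, B, and C, rather than trusting any single trajectory.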
Second, Barrett says, the modelers should not just slap the model together and see whether the final results make sense. Instead, they should validate the model as they build it, looking at each piece as they slot it in—how people get to and from work, for example—and matching it to real-world data from transit agencies, the census, and other sources. "At every step, there is data that you're calibrating to," he says.
Modelers should also try to calibrate agents' behaviors by using studies of human psychology. Doing so can be tricky—humans are complicated—but in crisis situations, modeling behavior becomes easier because it tends to be primal. The NPS1 model, for example, gets by with built-in rules that cause the agents to shift back and forth among just a few behaviors, such as "health care–seeking," "shelter-seeking," and "evacuating."
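Rules of that kind amount to a small priority list. The sketch below is purely hypothetical: the mode names come from the article's description of NPS1, but the flags, thresholds, and rule order are invented for illustration:

```python
def choose_mode(agent):
    """Hypothetical priority rules for switching behavior modes.
    Injury trumps everything; searching for family comes before
    self-protection, reflecting the field findings discussed below."""
    if agent["injured"]:
        return "health care-seeking"
    if agent["family_missing"]:
        return "household reconstitution"
    if agent["fallout_nearby"]:
        return "shelter-seeking"
    return "evacuating"
```

Each tick, every agent reevaluates its situation and may switch modes, so a shift in conditions—fallout drifting in, a family member located—propagates into changed crowd behavior.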
Even so, field studies point to crucial nuances, says Julie Dugdale, an artificial intelligence researcher at the University of Grenoble in France who studies human behavior under stress. "In earthquakes," she says, "we find that people will be more afraid of being without family or friends than of the crisis itself." People will go looking for their loved ones first thing and willingly put themselves in danger in the process. Likewise in fires, Dugdale says. Engineers tend to assume that when the alarm sounds, people will immediately file toward the exits in an orderly way. But just watch the next time your building has a fire drill, she says: "People don't evacuate without first talking to others"—and if need be, collecting friends and family.
The evidence also suggests that blind, unthinking panic is rare. In an agent-based model published in 2011, sociologist Ben Aguirre and his colleagues at the University of Delaware in Newark tried to reproduce what happened in a 2003 Rhode Island nightclub fire. The crowds jammed together so tightly that no one could move, and 100 people died. Between the police, the local paper, and survivors' accounts, Aguirre's team had good data on the victims, their behavior, and their relationships to others. And when the researchers incorporated those relationships into the model, he says, the runs most consistent with the actual fire involved almost no panic at all. "We found that people were trying to get out with friends, co-workers, and loved ones," Aguirre says. "They were not trying to hurt each other. That was a happenstance."
The NPS1 model tries to incorporate such insights, sending its agents into "household reconstitution" mode (searching for friends and family) much more often than "panic" mode (running around with no coherent goal). And the results can sometimes be counterintuitive. For example, the model suggests that right after the strike, emergency managers should expect to see some people rushing toward ground zero, jamming the roads in a frantic effort to pick up children from school or find missing spouses. The model also points to a good way to reduce chaos: to quickly restore partial cell service, so that people can verify that their loved ones are safe.
If agent-based modelers have a top priority, it's to make the simulations easier to build, run, and use—not least because that would make them more accessible to real-world decision-makers.
Epstein, for example, envisions national centers where decision-makers could access what he calls a petabyte playbook: a library containing digital versions of every large city, with precomputed models of just about every potential hazard. "Then, if something actually happens, like a toxic plume," he says, "we could pick out the model that's the closest match and do near–real-time calculation for things like the optimal mix of shelter-in-place and evacuation."
At Virginia Tech, computer scientist Madhav Marathe is thinking along the same lines. When a Category-5 hurricane is bearing down, he says, someone like the mayor of San Juan can't be waiting around for a weeklong analysis of the storm's possible impact on Puerto Rico's power grid. She needs information that's actionable, he says—"and that means models with a simple interface, running in the cloud, delivering very sophisticated analytics in a very short period of time."
Marathe calls it "agent-based modeling as a service." His lab has already spent the past 4 years developing and testing a web-based tool that lets public health officials build pandemic simulations and do what-if analyses on their own, without having to hire programmers. With just a few clicks, users can specify key variables such as the region of interest, from as small as a single city to the entire United States, and the type of disease, such as influenza, measles, Ebola, or something new. Then, using the tool's built-in maps and graphs, users can watch the simulation unfold and see the effect of their proposed treatment protocols.
Despite being specialized for epidemics, Marathe says, the tool's underlying geographic models and synthetic populations are general, and they can be applied to other kinds of disasters, such as chemical spills, hurricanes, and cascading failures in power networks. Ultimately, he says, "the hope is to build such models into services that are individualized—for you, your family, or your city." Or, as Barrett puts it, "If I send Jimmy to school today, what's the probability of him getting Zika?"
So it won't just be bureaucrats using those systems, Barrett adds. It will be you. "It will be as routine as Google Maps."