Thousands in Mozambique have been displaced by Cyclone Idai, one of the biggest ever to hit the flood-prone country.


Could computers provide short-term warnings of the world’s worst floods?

Floods have wrought destruction in the United States and Mozambique this month, highlighting the struggle scientists face in predicting where high water will spread. In the United States, above-average rainfall helped swell the Missouri River to record levels, inundating thousands of homes and destroying farms. And forecasters warn that 200 million Americans across 25 states could face further “unprecedented” flooding later this spring.

Many U.S. residents could be surprised to find water at their doors because of shortcomings in the floodplain maps that U.S. agencies use to identify at-risk areas, says Oliver Wing, a graduate student in flood risk science at the University of Bristol in the United Kingdom. The maps suggest 13 million Americans could be hit by a once-in-a-century flood, but the real number is likely more than 40 million, Wing and colleagues reported last year in Environmental Research Letters, after rebuilding the maps with high-resolution topographic data.

The threat of high water extends globally. By 2030, it’s estimated that 40% of global urban land will be in high-frequency flood zones. In Mozambique, hundreds died this month after a cyclone’s torrential rain flooded more than 2000 square kilometers. The toll might have been lower, researchers say, if government officials had access to better flood models that could help improve short-term warnings and long-term planning.

Wing and a colleague, Andrew Smith, spoke to ScienceInsider about the technical flaws in floodplain maps and their own efforts to create better global models of flood risk. Smith is chief operations officer at Fathom, a consulting firm started by researchers at the University of Bristol that provides flood data and forecasts to clients that include insurance companies, NASA, the Nature Conservancy, and the World Bank.

Q: Why can’t U.S. floodplain maps, made by the Federal Emergency Management Agency (FEMA), predict more accurately where big floods will strike?

O.W.: They use techniques that are quite dated. Their hands are tied, because they are congressionally mandated to build maps in this way. They build flood maps from the ground up for each river basin based on a very expensive, engineering-grade flood model. That would be very good if it were made recently and used up-to-date methods. But in many cases, they just don’t have the resources to do that. FEMA maps show the floodplain of a once-a-century flood. That’s sort of an arbitrary probability. But the models that we’ve built allow for a much more detailed picture. You can look at the whole spectrum from small to large floods.

Q: What are the prospects for more accurate U.S. and global flood forecasts?

A.S.: There’s been something of a revolution in our ability to build flood models in the past 5 years. We’ve gone from building models at really small scales to models of entire continents. It’s useful in the U.S. because it allows us to fill in the gaps where FEMA probably doesn’t have data. From a global perspective, it’s really exciting because it allows us to build models in areas where there’s simply no data available, in places like Mozambique.

The revolution was driven by better terrain data, advances in our ability to process and apply existing data, and computational speed-up owing to better hardware and algorithms. The principal and most critical data set in building a flood model is having an accurate map of the Earth’s surface [elevation]. Water flows downhill, so if you don’t have an accurate map, then you’re going to build a pretty cruddy flood model.

O.W.: The U.S. Geological Survey collects that information, and that’s pretty good across most of the U.S. Where these FEMA data exist, and where they were built in the last few years, using detailed methods, our U.S. model compares quite favorably to it and produces a similar answer.

A.S.: Unfortunately, there’s been a real lack of investment in terrain data sets globally. Currently, the data set that we use for global flood modeling is based on the Shuttle Radar Topography Mission [SRTM]. This is a 20-year-old, radar-based data set. It’s full of errors that have quite literally taken 20 years to iron out. There’s been a whole load of research in order to render SRTM usable in flood models. A new scientific field emerged in processing that specific terrain data set. It is still far behind what we have in most of the U.S. and indeed most parts of Western Europe, where we have laser altimeter data available.

There are a few different global models being produced by different research centers. And some research done a few years ago, comparing the very first generation of these models, identified the fact that they produce very different realizations of global flood risk. Given that these are first generation models, this is perhaps unsurprising, as each uses different methods. However, we’re hopeful that there will be a convergence over the coming years.

We’re also hopeful that, at some point, somebody’s going to build accurate global terrain data sets and make them freely available. Estimates suggest that we can build an advanced global terrain data set for substantially less than most satellite missions.

Q: How will these maps improve short-term emergency forecasts?

O.W.: An actual projection of where water’s going to go in real time is something that is not done very often. And that’s because it’s quite a computationally intensive thing to do. The new models are fast and accurate enough to really provide information in real time. We can actually give a view of what the flood extent might be in 3 days’ time, or 1 day’s time, and that information is going to be invaluable. It’s in its infancy at the moment, but the model framework now exists to allow that to happen.

Q: How will better flood models improve long-term flood planning?

O.W.: The tools that we build expand the information that FEMA can provide because we’re not constrained by the 100-year flood, which is just a single view of the present-day risk. We can produce flood maps for any recurrence interval, spanning the range from incredibly frequent flooding that has a 20% chance of occurring in a given year, all the way to low-probability, huge-magnitude flooding that would be a one-in-1000-year event.
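The recurrence intervals Wing describes map directly onto annual probabilities: a "100-year flood" has a 1% chance of occurring in any given year, and a flood with a 20% annual chance is a 5-year event. A short worked calculation shows why the single 100-year snapshot understates cumulative risk, for example over the life of a 30-year mortgage:

```python
# Annual exceedance probability vs. recurrence interval.
# The chance of seeing at least one T-year flood over a span of
# independent years is 1 - (1 - 1/T)^years.

def prob_at_least_one(annual_prob: float, years: int) -> float:
    """Chance of at least one exceedance in `years` independent years."""
    return 1 - (1 - annual_prob) ** years

for interval in (5, 100, 1000):
    annual = 1 / interval
    print(f"{interval}-year flood: {annual:.1%} per year, "
          f"{prob_at_least_one(annual, 30):.1%} chance over 30 years")
```

Over 30 years, even a "100-year" flood has roughly a one-in-four chance of striking at least once, which is why a spectrum of recurrence intervals gives planners a fuller picture than the single 1% map.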

The issue of how on Earth this gets taken up by emergency managers and communities, to manage that risk effectively, is probably an even bigger task than the science.

A.S.: One of the nice things about our model is that every year that we have new observations, we can run the model again and have an updated estimate and visualization of what the 100-year flood event looks like. Because the 100-year event today will look very different from what it looked like 10 years ago, simply because we have more data points.
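Smith's point, that the estimated 100-year event shifts as observations accumulate, can be illustrated with a standard textbook approach (not Fathom's actual method): fitting a Gumbel extreme-value distribution to a record of annual peak flows and reading off the 100-year quantile. The flow data below are synthetic, purely for illustration:

```python
import math
import random

EULER_GAMMA = 0.5772156649  # Euler-Mascheroni constant

def gumbel_quantile(annual_maxima, return_period=100):
    """Estimate the T-year flow from annual maxima via a Gumbel (EV1)
    fit by method of moments: beta = sqrt(6*var)/pi, mu = mean - gamma*beta,
    quantile x_T = mu - beta * ln(-ln(1 - 1/T))."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6 * var) / math.pi      # scale parameter
    mu = mean - EULER_GAMMA * beta           # location parameter
    p = 1 - 1 / return_period                # non-exceedance probability
    return mu - beta * math.log(-math.log(p))

random.seed(1)
# Hypothetical annual peak flows (m^3/s), 30 years of record.
flows = [random.gauss(500, 120) for _ in range(30)]

q100_then = gumbel_quantile(flows[:20])  # estimate from first 20 years
q100_now = gumbel_quantile(flows)        # estimate from all 30 years
print(f"100-year flow, 20 yrs of data: {q100_then:.0f} m^3/s")
print(f"100-year flow, 30 yrs of data: {q100_now:.0f} m^3/s")
```

The two estimates differ, just as Smith describes: each additional year of record pulls the fitted distribution, and therefore the mapped 100-year floodplain, toward the updated evidence.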