As a first step toward predicting earthquakes, geophysicists are using computers to simulate the behavior of the world's most studied 25 kilometers of fault, the Parkfield segment of the San Andreas fault in central California. This storied bit of fault ruptures every 20 years on average in quakes of magnitude 6.0, causing minor damage in California cattle country and fascinating seismologists. Now, researchers report that a relatively sophisticated model of the Parkfield segment can produce quakes that bear a striking resemblance to real ones. The simulations even suggest why the only official U.S. quake forecast ever made failed to get the timing of the latest Parkfield temblor right.
The trick to getting a computer to correctly forecast the time, place, and magnitude of a coming earthquake is giving the model's fault enough of the real fault's physical properties. To make their simulations reasonably realistic, geophysicists Sylvain Barbot, Nadia Lapusta, and Jean-Philippe Avouac of the California Institute of Technology (Caltech) in Pasadena constructed a fault model based on both a century's worth of seismological theory and decades of Parkfield observations.
On an earthquake fault, the two sides are pushed past each other against the friction between them. Earthquakes can happen only when the nature of that friction varies from place to place. On the Parkfield segment, numerous microearthquakes paint a picture of a patch of fault 5 to 10 kilometers down where friction allows the fault to lock tight, producing few microquakes. Surrounding this locked patch, including at both its ends, friction on the fault behaves differently and lets the two sides creep by each other, producing many microearthquakes. That creep transfers stress onto the more resistant locked patch, stress that will eventually rupture the patch in a sizable quake. The Caltech group built their model fault to resemble this picture of the Parkfield segment.
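The load-and-rupture cycle described above can be caricatured as a "slider block" toy model: steady creep on the surrounding fault loads stress onto the locked patch until its frictional strength is exceeded and the patch fails in a quake. The sketch below is purely illustrative, with made-up parameter values; it is not the Caltech group's actual rate-and-state friction simulation, which tracks how friction evolves with slip speed across the whole fault surface.

```python
# Toy "slider block" caricature of a locked fault patch.
# All parameter values are arbitrary assumptions for illustration.

PLATE_RATE = 1.0     # stress loaded onto the patch per year (arbitrary units)
STRENGTH = 20.0      # stress at which the locked patch ruptures
STRESS_DROP = 20.0   # stress released by each rupture

def simulate(years):
    """Return the years in which the locked patch ruptures."""
    stress, events = 0.0, []
    for year in range(1, years + 1):
        stress += PLATE_RATE       # creep on the surrounding fault loads the patch
        if stress >= STRENGTH:     # patch fails in a model "quake"
            stress -= STRESS_DROP
            events.append(year)
    return events

print(simulate(100))  # -> [20, 40, 60, 80, 100]
```

Because its friction is uniform and its loading steady, this caricature is perfectly periodic; the richer, spatially varying friction in the Caltech model is what lets simulated repeat times vary from cycle to cycle, as the real Parkfield intervals do.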
To calibrate their model fault, the Caltech group drew on observations from the dense instrument networks that have been monitoring the Parkfield segment for as long as 3 decades. Those observations included GPS and satellite measurements of how the two sides of the fault slowly moved or simply deformed as strain built on the fault. Exactly how the fault slipped in the two most recent Parkfield quakes—in 1966 and 2004—was also included, as were the times and starting points of the six Parkfield quakes since 1881.
When driven by the typical motion on the San Andreas, the Caltech model fault produced recognizable Parkfield earthquakes, the team reports online today in Science. They were magnitude 6.0, as real Parkfield quakes have been. They repeated every 20 years, on average, as the real ones have. And most ruptured the fault starting at the north end of the segment, as most Parkfield ruptures did. The model fault could also switch directions and rupture south to north, as the Parkfield segment did in 2004.
The researchers saw something else intriguing. In one model run, the next magnitude-6 quake appears to have been delayed after a flurry of magnitude 3s struck near the usual rupture starting point. The failure of these small quakes to trigger a magnitude 6 suggests the locked patch was not yet ready to fail, the group writes; the eventual model quake arrived only after the longest of the intervals separating the nine model quakes. That delay is reminiscent of a flurry that struck the real fault in the early 1990s, near the expected starting point of the Parkfield quake officially forecast to strike by the mid-1980s. That quake, too, arrived late, coming 20 years after it was forecast to occur.
"I think the paper is an excellent demonstration of the progress that has been made in understanding the processes controlling the occurrence of earthquakes," says seismologist James Dieterich of the University of California, Riverside.
Still, the model has plenty of shortcomings. It reproduces the 20-year average repeat time of Parkfield quakes, but the real quake cycle varies in length from 12 years to 38 years, whereas the model's varied only from 15 to 25 years. That does not bode well for getting the timing at Parkfield right the next time. The Caltech group suggests many ways to improve the model, such as including the effects of distant quakes on the Parkfield segment. Perhaps most sobering about the findings, though, is Parkfield's status as the most studied and best behaved 25 kilometers of fault in the world. The truly dangerous faults will likely not be so cooperative.