Earthquake prediction

Earthquake prediction is a branch of the science of seismology concerned with the specification of the time, location, and magnitude of future earthquakes within stated confidence limits but with sufficient precision that a warning can be issued.[1][2] Of particular importance is the prediction of hazardous earthquakes likely to cause loss of life or damage to infrastructure. Earthquake prediction is sometimes distinguished from earthquake forecasting, which can be defined as the probabilistic assessment of general earthquake hazard, including the frequency and magnitude of damaging earthquakes in a given area over years or decades.[3] It can be further distinguished from earthquake warning systems, which upon detection of an earthquake, provide a real-time warning to regions that might be affected.

In the 1970s, scientists were optimistic that a practical method for predicting earthquakes would soon be found, but by the 1990s continuing failure led many to question whether it was even possible.[4] Demonstrably successful predictions of large earthquakes have not occurred and the few claims of success are controversial.[5] Extensive searches have reported many possible earthquake precursors, but, so far, such precursors have not been reliably identified across significant spatial and temporal scales.[6] While some scientists still hold that, given enough resources, prediction might be possible, many others now maintain that earthquake prediction is inherently impossible.[7]

Evaluating earthquake predictions

All predictions of the future can be to some extent successful by chance.

Mulargia & Gasperini 1992

Predictions are deemed significant if they can be shown to be successful beyond random chance.[8] Therefore, methods of statistical hypothesis testing are used to determine the probability that an earthquake such as is predicted would happen anyway (the null hypothesis). The predictions are then evaluated by testing whether they correlate with actual earthquakes better than the null hypothesis.[9]

In many instances, however, the statistical nature of earthquake occurrence is not simply homogeneous. Clustering occurs in both space and time.[10] In southern California about 6% of M≥3.0 earthquakes are "followed by an earthquake of larger magnitude within 5 days and 10 km."[11] In central Italy 9.5% of M≥3.0 earthquakes are followed by a larger event within 30 km and 48 hours.[12] While such statistics are not satisfactory for purposes of prediction (giving ten to twenty false alarms for each successful prediction) they will skew the results of any analysis that assumes that earthquakes occur randomly in time, for example, as realized from a Poisson process. It has been shown that a "naive" method based solely on clustering can successfully predict about 5% of earthquakes;[13] slightly better than chance.
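The null-hypothesis comparison described above can be sketched numerically: under a homogeneous Poisson model, the chance that a prediction window contains an earthquake anyway depends only on the background rate and the window size. A minimal sketch, with an illustrative rate that is not taken from the catalogs cited above:

```python
import math

# Hypothetical background rate of M>=3.0 earthquakes for a whole region
# (an illustrative number, not taken from the catalogs cited above).
rate_per_day = 0.2

def p_at_least_one(rate: float, window_days: float) -> float:
    """Probability of >= 1 event in the window under a Poisson null."""
    return 1.0 - math.exp(-rate * window_days)

# A "prediction" covering a 5-day window succeeds by chance this often:
p = p_at_least_one(rate_per_day, 5.0)
print(f"Chance success probability: {p:.2f}")  # ~0.63
```

A prediction method is only deemed significant if it beats this chance-success rate; clustering inflates the rate further, which is why a simple Poisson null can make a method look better than it is.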

As the purpose of short-term prediction is to enable emergency measures to reduce death and destruction, failure to give warning of a major earthquake that does occur, or at least an adequate evaluation of the hazard, can result in legal liability,[14] or even political purging.[15] But warning of an earthquake that does not occur also incurs a cost:[16] not only the cost of the emergency measures themselves, but of civil and economic disruption.[17] False alarms, including alarms that are cancelled, also undermine the credibility, and thereby the effectiveness, of future warnings.[18] The acceptable trade-off between missed quakes and false alarms depends on the societal valuation of these outcomes. The rate of occurrence of both must be considered when evaluating any prediction method.[19]

Difficulty or impossibility

Earthquake prediction may be intrinsically impossible. It has been argued that the Earth is in a state of self-organized criticality "where any small earthquake has some probability of cascading into a large event".[20] It has also been argued on decision-theoretic grounds that prediction of major earthquakes is impossible.[21] However, these theories and their implication that earthquake prediction is intrinsically impossible are still disputed.[22]

Prediction methods

Earthquake prediction is an immature science—it has not yet led to a successful prediction of an earthquake from first physical principles. Therefore, some research focuses on empirical analysis, either identifying distinctive precursors to earthquakes, or identifying some kind of geophysical trend or pattern in seismicity that might precede a large earthquake.[23]

Precursors

An earthquake precursor is an anomalous phenomenon that might give effective warning of an impending earthquake.[24] Reports of these – though generally recognized as such only after the event – number in the thousands,[25] some dating back to antiquity.[26] There have been around 400 reports of possible precursors in scientific literature, of roughly twenty different types,[27] running the gamut from aeronomy to zoology.[28] None have been found to be reliable for the purposes of earthquake prediction.[29]

In the early 1990s, the IASPEI solicited nominations for a Preliminary List of Significant Precursors. Forty nominations were made, of which five were selected as possible significant precursors, with two of those based on a single observation each.[30]

After a critical review of the scientific literature the International Commission on Earthquake Forecasting for Civil Protection (ICEF) concluded in 2011 there was "considerable room for methodological improvements in this type of research."[31] In particular, many cases of reported precursors are contradictory, lack a measure of amplitude, or are generally unsuitable for a rigorous statistical evaluation. Published results are biased towards positive results, and so the rate of false negatives (earthquake but no precursory signal) is unclear.[32]

Animal behavior

For centuries there have been anecdotal accounts of anomalous animal behavior preceding and associated with earthquakes. In cases where animals display unusual behavior some tens of seconds prior to a quake, it has been suggested they are responding to the P-wave.[33] These travel through the ground about twice as fast as the S-waves that cause most severe shaking.[34] They predict not the earthquake itself — that has already happened — but only the imminent arrival of the more destructive S-waves.
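The P-wave lead described above is simple arithmetic: with assumed crustal wave speeds, the interval between P-wave and S-wave arrival grows with distance from the rupture. The speeds below are illustrative assumptions, not values from the cited sources:

```python
# Illustrative crustal wave speeds (assumed values, not from the sources above):
vp_km_s = 6.0   # P-wave speed
vs_km_s = 3.5   # S-wave speed

def p_wave_lead(distance_km: float) -> float:
    """Seconds between P-wave and S-wave arrivals at a given distance."""
    return distance_km / vs_km_s - distance_km / vp_km_s

print(f"{p_wave_lead(100.0):.1f} s")  # ~11.9 s of S-wave lead time at 100 km
```

This is the same physics exploited by earthquake warning systems: seconds of lead time, not prediction of the rupture itself.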

It has also been suggested that unusual behavior hours or even days beforehand could be triggered by foreshock activity at magnitudes that most people do not notice.[35] Another confounding factor of accounts of unusual phenomena is skewing due to "flashbulb memories": otherwise unremarkable details become more memorable and more significant when associated with an emotionally powerful event such as an earthquake.[36] A study that attempted to control for these kinds of factors found an increase in unusual animal behavior (possibly triggered by foreshocks) in one case, but not in four other cases of seemingly similar earthquakes.[37]

Changes in Vp/Vs

Vp is the symbol for the velocity of a seismic "P" (primary or pressure) wave passing through rock, while Vs is the symbol for the velocity of the "S" (secondary or shear) wave. Small-scale laboratory experiments have shown that the ratio of these two velocities – represented as Vp/Vs – changes when rock is near the point of fracturing. In the 1970s it was considered a likely breakthrough when Russian seismologists reported observing such changes in the region of a subsequent earthquake.[38] This effect, as well as other possible precursors, has been attributed to dilatancy, where rock stressed to near its breaking point expands (dilates) slightly.[39]

Study of this phenomenon near Blue Mountain Lake in New York State led to a successful prediction in 1973.[40] However, additional successes have not followed, and it has been suggested that the prediction was a fluke.[41] A Vp/Vs anomaly was the basis of a 1976 prediction of a M 5.5 to 6.5 earthquake near Los Angeles, which failed to occur.[42] Other studies relying on quarry blasts (more precise, and repeatable) found no such variations;[43] and an alternative explanation has been reported for such variations as have been observed.[44] Geller (1997) noted that reports of significant velocity changes have ceased since about 1980.

Radon emissions

Most rock contains small amounts of gases that can be isotopically distinguished from the normal atmospheric gases. There are reports of spikes in the concentrations of such gases prior to a major earthquake; this has been attributed to release due to pre-seismic stress or fracturing of the rock. One of these gases is radon, produced by radioactive decay of the trace amounts of uranium present in most rock.[45]

Radon is useful as a potential earthquake predictor because it is radioactive and thus easily detected,[46] and its short half-life (3.8 days) makes radon levels sensitive to short-term fluctuations. A 2009 review[47] found 125 reports of changes in radon emissions prior to 86 earthquakes since 1966. But as the ICEF found in its review, the earthquakes with which these changes are supposedly linked were up to a thousand kilometers away, months later, and at all magnitudes. In some cases the anomalies were observed at a distant site, but not at closer sites. The ICEF found "no significant correlation".[48] Another review concluded that in some cases changes in radon levels preceded an earthquake, but a correlation is not yet firmly established.[49]
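Radon's short half-life, mentioned above, is what makes its concentration sensitive to short-term changes: older radon decays away quickly, so measured levels track recent emanation from the rock. A minimal decay computation:

```python
HALF_LIFE_DAYS = 3.8  # half-life of radon-222

def remaining_fraction(days: float) -> float:
    """Fraction of an initial radon sample left after `days` of decay."""
    return 0.5 ** (days / HALF_LIFE_DAYS)

# After two weeks, nearly all of the original radon has decayed, so a
# measured spike must reflect recent release from the rock:
print(f"After 14 days: {remaining_fraction(14):.3f}")  # ~0.078
```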

Electromagnetic variations

Various attempts have been made to identify possible pre-seismic indications in electrical, electric-resistive, or magnetic phenomena.[50] The most touted, and most criticized, is the VAN method of professors P. Varotsos, K. Alexopoulos and K. Nomicos – "VAN" – of the National and Capodistrian University of Athens. In a 1981 paper[51] they claimed that by measuring geoelectric voltages – what they called "seismic electric signals" (SES) – they could predict earthquakes of magnitude larger than 2.8 within all of Greece up to 7 hours beforehand. Later the claim changed to being able to predict earthquakes larger than magnitude 5, within 100 km of the epicentral location, within 0.7 units of magnitude, and in a 2-hour to 11-day time window.[52] Subsequent papers claimed a series of successful predictions.[53] However, the VAN group generated intense public criticism in the 1980s by issuing telegram warnings, a large number of which were false alarms.

Objections have been raised that the physical mechanism proposed for the VAN method is not possible. None of the earthquakes which VAN claimed were preceded by SES generated SES themselves, as would have been expected. Analysis of the wave propagation properties of SES in the Earth's crust showed that it would have been impossible for signals with the amplitude reported by VAN to have been transmitted over the several hundred kilometers distances from the epicenter to the monitoring station.[54] In addition, VAN's publications do not account for (i.e. identify and eliminate) possible sources of electromagnetic interference (EMI). Taken as a whole, the VAN method has been criticized as lacking consistency in the statistical testing of the validity of their hypotheses.[55] In particular, there has been some contention over which catalog of seismic events to use in vetting predictions. This catalog switching can be used to conclude that, for example, of 22 claims of successful prediction by VAN[56] 74% were false, 9% correlated at random and for 14% the correlation was uncertain.[57]

In 1996 the journal Geophysical Research Letters presented a debate on the statistical significance of the VAN method;[58] the majority of reviewers found the methods of VAN to be flawed, and the claims of successful predictions statistically insignificant.[59] In 2001, the VAN method was modified to include time series analysis, and Springer published an overview in 2011.[60]

Further information: VAN method

After the 1989 Loma Prieta earthquake occurred, a group led by Antony C. Fraser-Smith of Stanford University reported that the event was preceded by disturbances in background magnetic field noise as measured by a sensor placed in Corralitos, California, about 4.5 miles (7 km) from the epicenter.[61] From 5 October, they reported a substantial increase in noise in the frequency range 0.01–10 Hz. The measurement instrument was a single-axis search-coil magnetometer that was being used for low frequency research. Precursory increases of noise apparently started a few days before the earthquake, with noise in the range 0.01–0.5 Hz rising to exceptionally high levels about three hours before the earthquake. Though this pattern gave scientists new ideas for research into potential precursors to earthquakes, and the Fraser-Smith et al. report remains one of the most frequently cited examples of a specific earthquake precursor, more recent studies have cast doubt on the connection, attributing the Corralitos signals to either unrelated magnetic disturbance[62] or, even more simply, to sensor-system malfunction.[63]

Trends

Instead of watching for anomalous phenomena that might be precursory signs of an impending earthquake, other approaches to predicting earthquakes look for trends or patterns that lead to an earthquake. As these trends may be complex and involve many variables, advanced statistical techniques are often needed to understand them; such approaches are therefore sometimes called statistical methods. These approaches also tend to be more probabilistic and to cover longer time periods, and so merge into earthquake forecasting.

Elastic rebound

Even the stiffest of rock is not perfectly rigid. Given a large force (such as between two immense tectonic plates moving past each other) the earth's crust will bend or deform. According to the elastic rebound theory of Reid (1910), eventually the deformation (strain) becomes great enough that something breaks, usually at an existing fault. Slippage along the break (an earthquake) allows the rock on each side to rebound to a less deformed state. In the process energy is released in various forms, including seismic waves.[64] The cycle of tectonic force being accumulated in elastic deformation and released in a sudden rebound is then repeated. As the displacement from a single earthquake ranges from less than a meter to around 10 meters (for an M 8 quake),[65] the demonstrated existence of large strike-slip displacements of hundreds of miles shows the existence of a long running earthquake cycle.[66]
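The elastic rebound cycle described above lends itself to back-of-envelope arithmetic: dividing the slip released per event by a steady long-term loading rate gives an average recurrence interval. Both values below are illustrative assumptions, not figures from the cited sources:

```python
# Back-of-envelope elastic rebound arithmetic (both values are assumptions):
slip_rate_mm_per_yr = 25.0   # assumed steady long-term loading rate
slip_per_event_m = 5.0       # assumed slip released by one large earthquake

recurrence_yr = (slip_per_event_m * 1000.0) / slip_rate_mm_per_yr
print(f"Average recurrence interval: {recurrence_yr:.0f} years")  # 200 years
```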

Characteristic earthquakes

The most studied earthquake faults (such as the Nankai megathrust, the Wasatch fault, and the San Andreas fault) appear to have distinct segments. The characteristic earthquake model postulates that earthquakes are generally constrained within these segments.[67] As the lengths and other properties[68] of the segments are fixed, earthquakes that rupture the entire fault should have similar characteristics. These include the maximum magnitude (which is limited by the length of the rupture), and the amount of accumulated strain needed to rupture the fault segment. Since continuous plate motions cause the strain to accumulate steadily, seismic activity on a given segment should be dominated by earthquakes of similar characteristics that recur at somewhat regular intervals.[69] For a given fault segment, identifying these characteristic earthquakes and estimating their recurrence interval (or, conversely, their return period) should therefore inform us about the next rupture; this is the approach generally used in forecasting seismic hazard.[70] Return periods are also used for forecasting other rare events, such as cyclones and floods, and assume that future frequency will be similar to observed frequency to date.

The idea of characteristic earthquakes was the basis of the Parkfield prediction: fairly similar earthquakes in 1857, 1881, 1901, 1922, 1934, and 1966 suggested a pattern of breaks every 21.9 years, with a standard deviation of ±3.1 years.[71] Extrapolation from the 1966 event led to a prediction of an earthquake around 1988, or before 1993 at the latest (at the 95% confidence level).[72] The appeal of such a method is that the prediction is derived entirely from the trend, which supposedly accounts for the unknown and possibly unknowable earthquake physics and fault parameters. However, in the Parkfield case the predicted earthquake did not occur until 2004, a decade late. This seriously undercuts the claim that earthquakes at Parkfield are quasi-periodic, and suggests the individual events differ sufficiently in other respects to question whether they have distinct characteristics in common.[73]
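The Parkfield extrapolation can be roughly reproduced from the calendar years listed above. Year-only dates give a mean interval close to the quoted 21.9 years and a naive next-event estimate near 1988; the published analysis used more precise dates and a formal confidence window.

```python
from statistics import mean

# Calendar years of the Parkfield sequence cited above (year-only dates;
# the published analysis used more precise dates and a formal model).
events = [1857, 1881, 1901, 1922, 1934, 1966]

intervals = [b - a for a, b in zip(events, events[1:])]
avg = mean(intervals)                  # 21.8 yr from year-only data
next_predicted = events[-1] + avg
print(f"Mean interval: {avg:.1f} yr; naive next event ~{round(next_predicted)}")
```

Note that the actual event came in 2004, well outside any such window, which is the core of the critique that follows.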

Further research into the Parkfield seismic data revealed that several M 4.0 earthquakes had reduced the stresses on the northwest portion of the Parkfield segment, delaying the predicted M 6.0 earthquake.[74]

The failure of the Parkfield prediction has raised doubt as to the validity of the characteristic earthquake model itself.[75] Some studies have questioned the various assumptions, including the key one that earthquakes are constrained within segments, and suggested that the "characteristic earthquakes" may be an artifact of selection bias and the shortness of seismological records (relative to earthquake cycles).[76] Other studies have considered whether other factors need to be considered, such as the age of the fault.[77] Whether earthquake ruptures are more generally constrained within a segment (as is often seen), or break past segment boundaries (also seen), has a direct bearing on the degree of earthquake hazard: earthquakes are larger where multiple segments break, but in relieving more strain they will happen less often.[78]

Seismic gaps

At the contact where two tectonic plates slip past each other every section must eventually slip, as (in the long-term) none get left behind. But they do not all slip at the same time; different sections will be at different stages in the cycle of strain (deformation) accumulation and sudden rebound. In the seismic gap model the "next big quake" should be expected not in the segments where recent seismicity has relieved the strain, but in the intervening gaps where the unrelieved strain is the greatest.[79] This model has an intuitive appeal; it is used in long-term forecasting, and was the basis of a series of circum-Pacific (Pacific Rim) forecasts in 1979 and 1989–1991.[80]

However, some underlying assumptions about seismic gaps are now known to be incorrect. A close examination suggests that "there may be no information in seismic gaps about the time of occurrence or the magnitude of the next large event in the region";[81] statistical tests of the circum-Pacific forecasts show that the seismic gap model "did not forecast large earthquakes well".[82] Another study concluded that a long quiet period did not increase earthquake potential.[83]

Seismicity patterns

Various heuristically derived algorithms have been developed for predicting earthquakes. Probably the most widely known is the M8 family of algorithms (including the RTP method) developed under the leadership of Vladimir Keilis-Borok. M8 issues a "Time of Increased Probability" (TIP) alarm for a large earthquake of a specified magnitude upon observing certain patterns of smaller earthquakes. TIPs generally cover large areas (up to a thousand kilometers across) for up to five years.[84] Such large parameters have made M8 controversial, as it is hard to determine whether any hits that happened were skillfully predicted, or only the result of chance.

M8 gained considerable attention when the 2003 San Simeon and Hokkaido earthquakes occurred within a TIP.[85] But a widely publicized TIP for an M 6.4 quake in Southern California in 2004 was not fulfilled, nor were two other lesser-known TIPs.[86] An in-depth study of the RTP method in 2008 found that out of some twenty alarms only two could be considered hits (and one of those had a 60% chance of happening anyway).[87] It concluded that "RTP is not significantly different from a naïve method of guessing based on the historical rates [of] seismicity."[88]

Accelerating moment release (AMR, "moment" being a measurement of seismic energy), also known as time-to-failure analysis, or accelerating seismic moment release (ASMR), is based on observations that foreshock activity prior to a major earthquake not only increased, but increased at an exponential rate.[89] In other words, a plot of the cumulative number of foreshocks gets steeper just before the main shock.

Following formulation by Bowman et al. (1998) into a testable hypothesis,[90] and a number of positive reports, AMR seemed promising[91] despite several problems. Known issues included not being detected for all locations and events, and the difficulty of projecting an accurate occurrence time when the tail end of the curve gets steep.[92] But rigorous testing has shown that apparent AMR trends likely result from how data fitting is done,[93] and failing to account for spatiotemporal clustering of earthquakes.[94] The AMR trends are therefore statistically insignificant. Interest in AMR (as judged by the number of peer-reviewed papers) has fallen off since 2004.[95]
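The "steepening curve" idea behind AMR can be written down directly: cumulative release is commonly modeled as a power law in time-to-failure, which accelerates as the failure time approaches. All constants below are arbitrary illustrations, not a fit to any real catalog:

```python
# Time-to-failure power law of the kind used in AMR studies:
#   C(t) = A + B * (t_f - t)**m,  with B < 0 and m < 1,
# so cumulative release C(t) steepens as t approaches the failure time t_f.
# All constants are arbitrary illustrations, not a fit to real seismicity.
A, B, m, t_f = 100.0, -50.0, 0.3, 10.0

def cumulative_release(t: float) -> float:
    return A + B * (t_f - t) ** m

# Increments over equal time steps grow as t -> t_f:
steps = [cumulative_release(t + 1.0) - cumulative_release(t) for t in (6.0, 7.0, 8.0)]
print(steps)  # each increment larger than the last
```

The statistical objection noted above is that such accelerating fits can emerge from data-selection choices and clustering alone, so a good-looking curve is not evidence of a physical precursor.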

The occurrence of foreshocks has long been thought to be one of the most promising avenues for predicting earthquakes. A foreshock is a smaller earthquake that can strike minutes or days before a larger one. Because the rupture process for earthquakes is still not completely understood, foreshock occurrence may give clues to an earthquake-triggering process. In the Non-Critical Precursory Accelerating Seismicity Theory (N-C PAST), foreshocks happen because of the constant buildup of pressure along fault lines.[96] This theory is given some weight by seismic measurements, and has led some scientists to conclude that foreshocks are a precursor to a larger event and should be further studied and considered in earthquake prediction.

Notable predictions

These are predictions, or claims of predictions, that are notable either scientifically or because of public notoriety, and claim a scientific or quasi-scientific basis. As many predictions are held confidentially, or published in obscure locations, and become notable only when a success is claimed, there may be some selection bias in that hits get more attention than misses. The predictions listed here are discussed in Hough's book[97] and Geller's paper.[98]

1975: Haicheng, China

The M 7.3 Haicheng (China) earthquake of 4 February 1975 is the most widely cited "success" of earthquake prediction.[99] Study of seismic activity in the region led the Chinese authorities to issue a medium-term prediction in June 1974. The political authorities therefore ordered various measures taken, including enforced evacuation of homes, construction of "simple outdoor structures", and showing of movies out-of-doors. The quake, striking at 19:36, was powerful enough to destroy or badly damage about half of the homes. However, the "effective preventative measures taken" were said to have kept the death toll under 300 in an area with population of about 1.6 million, where otherwise tens of thousands of fatalities might have been expected.[100]

However, although a major earthquake occurred, there has been some skepticism about the narrative of measures taken on the basis of a timely prediction. This event occurred during the Cultural Revolution, when "belief in earthquake prediction was made an element of ideological orthodoxy that distinguished the true party liners from right wing deviationists".[101] Recordkeeping was disordered, making it difficult to verify details, including whether there was any ordered evacuation. The method used for either the medium-term or short-term predictions (other than "Chairman Mao's revolutionary line"[102]) has not been specified.[103] The evacuation may have been spontaneous, following the strong (M 4.7) foreshock that occurred the day before.[104]

A 2006 study that had access to an extensive range of records found that the predictions were flawed. "In particular, there was no official short-term prediction, although such a prediction was made by individual scientists."[105] Also: "it was the foreshocks alone that triggered the final decisions of warning and evacuation". They estimated that 2,041 lives were lost. That more did not die was attributed to a number of fortuitous circumstances, including earthquake education in the previous months (prompted by elevated seismic activity), local initiative, timing (occurring when people were neither working nor asleep), and local style of construction. The authors conclude that, while unsatisfactory as a prediction, "it was an attempt to predict a major earthquake that for the first time did not end up with practical failure."

Further information: 1975 Haicheng earthquake

1985–1993: Parkfield, U.S. (Bakun-Lindh)

The "Parkfield earthquake prediction experiment" was the most heralded scientific earthquake prediction ever.[106] It was based on an observation that the Parkfield segment of the San Andreas Fault[107] breaks regularly with a moderate earthquake of about M 6 every several decades: 1857, 1881, 1901, 1922, 1934, and 1966.[108] More particularly, Bakun & Lindh (1985) pointed out that, if the 1934 quake is excluded, these occur every 22 years, ±4.3 years. Counting from 1966, they predicted a 95% chance that the next earthquake would hit around 1988, or 1993 at the latest. The National Earthquake Prediction Evaluation Council (NEPEC) evaluated this, and concurred.[109] The U.S. Geological Survey and the State of California therefore established one of the "most sophisticated and densest nets of monitoring instruments in the world",[110] in part to identify any precursors when the quake came. Confidence was high enough that detailed plans were made for alerting emergency authorities if there were signs an earthquake was imminent.[111] In the words of the Economist: "never has an ambush been more carefully laid for such an event."[112]

1993 came, and passed, without fulfillment. Eventually there was an M 6.0 earthquake on the Parkfield segment of the fault, on 28 September 2004, but without forewarning or obvious precursors.[113] While the experiment in catching an earthquake is considered by many scientists to have been successful,[114] the prediction was unsuccessful in that the eventual event was a decade late.[115]

Further information: Parkfield earthquake

1987–1995: Greece (VAN)

Professors P. Varotsos, K. Alexopoulos and K. Nomicos – "VAN" – claimed in a 1981 paper an ability to predict M ≥ 2.6 earthquakes within 80 km of their observatory (in Greece) approximately seven hours beforehand, by measurements of 'seismic electric signals'. In 1996 Varotsos and other colleagues claimed to have predicted impending earthquakes within windows of several weeks, 100–120 km, and ±0.7 of the magnitude.[116]

The VAN predictions have been criticized on various grounds, including being geophysically implausible,[117] "vague and ambiguous",[118] failing to satisfy prediction criteria,[119] and retroactive adjustment of parameters.[120] A critical review of 14 cases where VAN claimed 10 successes showed only one case where an earthquake occurred within the prediction parameters.[121] The VAN predictions not only fail to do better than chance, but show "a much better association with the events which occurred before them", according to Mulargia and Gasperini.[122]

1989: Loma Prieta, U.S.

On 17 October 1989, the Mw 6.9 (Ms 7.1[123]) Loma Prieta ("World Series") earthquake (epicenter in the Santa Cruz Mountains northwest of San Juan Bautista, California) caused significant damage in the San Francisco Bay area of California.[124] The U.S. Geological Survey (USGS) reportedly claimed, twelve hours after the event, that it had "forecast" this earthquake in a report the previous year.[125] USGS staff subsequently claimed this quake had been "anticipated";[126] various other claims of prediction have also been made.[127]

Harris (1998) reviewed 18 papers (with 26 forecasts) dating from 1910 "that variously offer or relate to scientific forecasts of the 1989 Loma Prieta earthquake." (In this case no distinction is made between a forecast, which is limited to a probabilistic estimate of an earthquake happening over some time period, and a more specific prediction.[128]) None of these forecasts can be rigorously tested due to lack of specificity,[129] and where a forecast does bracket the correct time and location, the window was so broad (e.g., covering the greater part of California for five years) as to lose any value as a prediction. Predictions that came close (but given a probability of only 30%) had ten- or twenty-year windows.[130]

One debated prediction came from the M8 algorithm used by Keilis-Borok and associates in four forecasts.[131] The first of these forecasts missed both magnitude (M 7.5) and time (a five-year window from 1 January 1984, to 31 December 1988). They did get the location, by including most of California and half of Nevada.[132] A subsequent revision, presented to the NEPEC, extended the time window to 1 July 1992, and reduced the location to only central California; the magnitude remained the same. A figure they presented had two more revisions, for M ≥ 7.0 quakes in central California. The five-year time window for one ended in July 1989, and so missed the Loma Prieta event; the second revision extended to 1990, and so included Loma Prieta.[133]

When discussing success or failure of prediction for the Loma Prieta earthquake, some scientists argue that it did not occur on the San Andreas fault (the focus of most of the forecasts), and involved dip-slip (vertical) movement rather than strike-slip (horizontal) movement, and so was not predicted.[134] Other scientists argue that it did occur in the San Andreas fault zone, and released much of the strain accumulated since the 1906 San Francisco earthquake; therefore several of the forecasts were correct.[135] Hough states that "most seismologists" do not believe this quake was predicted "per se".[136] In a strict sense there were no predictions, only forecasts, which were only partially successful.

Iben Browning claimed to have predicted the Loma Prieta event, but (as will be seen in the next section) this claim has been rejected.

Further information: 1989 Loma Prieta earthquake

1990: New Madrid, U.S. (Browning)

Further information: Tidal triggering of earthquakes

Dr. Iben Browning (a scientist with a Ph.D. degree in zoology and training as a biophysicist, but no experience in geology, geophysics, or seismology) was an "independent business consultant" who forecast long-term climate trends for businesses.[137] He supported the idea (scientifically unproven) that volcanoes and earthquakes are more likely to be triggered when the tidal force of the sun and the moon coincide to exert maximum stress on the earth's crust (syzygy).[138] Having calculated when these tidal forces maximize, Browning then "projected"[139] what areas were most at risk for a large earthquake. An area he mentioned frequently was the New Madrid Seismic Zone at the southeast corner of the state of Missouri, the site of three very large earthquakes in 1811–12, which he coupled with the date of 3 December 1990.

Browning's reputation and perceived credibility were boosted when he claimed in various promotional flyers and advertisements to have predicted (among various other events[140]) the Loma Prieta earthquake of 17 October 1989.[141] The National Earthquake Prediction Evaluation Council (NEPEC) formed an Ad Hoc Working Group (AHWG) to evaluate Browning's prediction. Its report (issued 18 October 1990) specifically rejected the claim of a successful prediction of the Loma Prieta earthquake.[142] A transcript of his talk in San Francisco on 10 October showed he had said: "there will probably be several earthquakes around the world, Richter 6+, and there may be a volcano or two" – which, on a global scale, is about average for a week – with no mention of any earthquake in California.[143]

Though the AHWG report disproved both Browning's claims of prior success and the basis of his "projection", it came after a year of continued claims of a successful prediction and made little impact. Browning's prediction received the support of geophysicist David Stewart,[144] and the tacit endorsement of many public authorities in their preparations for a major disaster, all of which was amplified by massive exposure in the news media.[145] Nothing happened on 3 December,[146] and Browning died of a heart attack seven months later.[147]

2004 & 2005: Southern California, U.S. (Keilis-Borok)

The M8 algorithm (developed under the leadership of Dr. Vladimir Keilis-Borok at UCLA) gained respect by the apparently successful predictions of the 2003 San Simeon and Hokkaido earthquakes.[148] Great interest was therefore generated by the prediction in early 2004 of a M ≥ 6.4 earthquake to occur somewhere within an area of southern California of approximately 12,000 sq. miles, on or before 5 September 2004.[85] In evaluating this prediction the California Earthquake Prediction Evaluation Council (CEPEC) noted that this method had not yet made enough predictions for statistical validation, and was sensitive to input assumptions. It therefore concluded that no "special public policy actions" were warranted, though it reminded all Californians "of the significant seismic hazards throughout the state."[149] The predicted earthquake did not occur.

A very similar prediction was made for an earthquake on or before 14 August 2005, in approximately the same area of southern California. The CEPEC's evaluation and recommendation were essentially the same, this time noting that the previous prediction and two others had not been fulfilled.[150] This prediction also failed.
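CEPEC's caution about statistical validation reflects the null-hypothesis issue described earlier: over a broad region and a long window, an earthquake of the predicted size has a non-trivial chance of occurring anyway, so a single hit proves little. A minimal sketch under a Poisson null model; the background rate used here is a purely illustrative assumption, not the actual rate for the predicted region:

```python
import math

def poisson_hit_probability(rate_per_year: float, window_years: float) -> float:
    """Probability of at least one qualifying earthquake in the window,
    assuming events arrive as a Poisson process at the given mean rate."""
    return 1.0 - math.exp(-rate_per_year * window_years)

# Hypothetical background rate: suppose M >= 6.4 events strike the
# predicted ~12,000 sq. mile area about once every 20 years on average.
rate = 1.0 / 20.0      # events per year (assumed, for illustration only)
window = 9.0 / 12.0    # a roughly nine-month prediction window

p_chance = poisson_hit_probability(rate, window)
print(f"Probability of success by chance alone: {p_chance:.1%}")
```

A prediction scheme only demonstrates skill if, over many trials, its hit rate significantly exceeds such chance probabilities, which is why CEPEC held that the M8 method had not yet made enough predictions to be validated.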

2009: L'Aquila, Italy (Giuliani)

At 03:32 on 6 April 2009, the Abruzzo region of central Italy was rocked by a magnitude M 6.3 earthquake.[151] In the city of L'Aquila and surrounding area around 60,000 buildings collapsed or were seriously damaged, resulting in 308 deaths and 67,500 people left homeless.[152] Around the same time, it was reported that Giampaolo Giuliani had predicted the earthquake, had tried to warn the public, but had been muzzled by the Italian government.[153]

Giampaolo Giuliani was a laboratory technician at the Laboratori Nazionali del Gran Sasso. As a hobby he had for some years been monitoring radon using instruments he had designed and built. Prior to the L'Aquila earthquake he was unknown to the scientific community, and had not published any scientific work.[154] He had been interviewed on 24 March by an Italian-language blog, Donne Democratiche, about a swarm of low-level earthquakes in the Abruzzo region that had started the previous December. He said that this swarm was normal and would diminish by the end of March. On 30 March, L'Aquila was struck by a magnitude 4.0 temblor, the largest to date.[155]

On 27 March Giuliani warned the mayor of L'Aquila there could be an earthquake within 24 hours, and an earthquake M~2.3 occurred.[156] On 29 March he made a second prediction.[157] He telephoned the mayor of the town of Sulmona, about 55 kilometers southeast of L'Aquila, to expect a "damaging" – or even "catastrophic" – earthquake within 6 to 24 hours. Loudspeaker vans were used to warn the inhabitants of Sulmona to evacuate, causing panic. No quake ensued, and Giuliani was cited for inciting public alarm and enjoined from making public predictions.[158]

After the L'Aquila event Giuliani claimed that he had found alarming rises in radon levels just hours before.[159] He said he had warned relatives, friends and colleagues on the evening before the earthquake hit.[160] He was subsequently interviewed by the International Commission on Earthquake Forecasting for Civil Protection, which found that there had been no valid prediction of the mainshock before its occurrence.[161]

Notes

  1. Geller et al. 1997, p. 1616, following Allen (1976, p. 2070), who in turn followed Wood & Gutenberg (1935). Kagan (1997b, §2.1) says: "This definition has several defects which contribute to confusion and difficulty in prediction research." In addition to specification of time, location, and magnitude, Allen suggested three other requirements: 4) indication of the author's confidence in the prediction, 5) the chance of an earthquake occurring anyway as a random event, and 6) publication in a form that gives failures the same visibility as successes. Kagan & Knopoff (1987, p. 1563) define prediction (in part) "to be a formal rule where by the available space-time-seismic moment manifold of earthquake occurrence is significantly contracted ...."
  2. Kagan 1997b, p. 507.
  3. Kanamori 2003, p. 1205. See also ICEF 2011, p. 327.
  4. Geller et al. 1997, p. 1617; Geller 1997, §2.3, p. 427; Console 2001, p. 261.
  5. E.g., the most famous claim of a successful prediction is that alleged for the 1975 Haicheng earthquake (ICEF 2011, p. 328), and is now listed as such in textbooks (Jackson 2004, p. 344). A later study concluded there was no valid short-term prediction (Wang et al. 2006), as described in more detail below.
  6. Geller 1997, Summary.
  7. Kagan 1997b; Geller 1997. See also Nature Debates.
  8. Mulargia & Gasperini 1992, p. 32; Luen & Stark 2008, p. 302.
  9. Luen & Stark 2008; Console 2001.
  10. Jackson 1996a, p. 3775.
  11. Jones 1985, p. 1669.
  12. Console 2001, p. 1261.
  13. Luen & Stark 2008. This was based on data from Southern California.
  14. The manslaughter convictions of the seven scientists and technicians in Italy were not for failing to predict the L'Aquila earthquake (where some 300 people died) but for giving undue assurance to the populace – one victim called it "anaesthetizing" – that there would not be a serious earthquake, and therefore no need to take precautions. Hall 2011; Cartlidge 2011. Additional details in Cartlidge 2012.
  15. It has been reported that members of the Chinese Academy of Sciences were purged for "having ignored scientific predictions of the disastrous Tangshan earthquake of summer 1976." Wade 1977.
  16. In January 1999 there was a report (Saegusa 1999) that China was introducing "tough regulations intended to stamp out ‘false’ earthquake warnings, in order to prevent panic and mass evacuation of cities triggered by forecasts of major tremors." This was prompted by "more than 30 unofficial earthquake warnings ... in the past three years, none of which has been accurate."
  17. Geller 1997, §5.2, p. 437.
  18. Atwood & Major 1998.
  19. Mason 2003, p. 48 and throughout.
  20. Geller et al. 1997, p. 1616; Kagan 1997b, p. 517. See also Kagan 1997b, p. 520, Vidale 1996 and especially Geller 1997, §9.1, "Chaos, SOC, and predictability".
  21. Matthews 1997.
  22. E.g., Sykes, Shaw & Scholz 1999 and Evison 1999.
  23. PEP 1976, p. 9.
  24. The IASPEI Sub-Commission for Earthquake Prediction defined a precursor as "a quantitatively measurable change in an environmental parameter that occurs before mainshocks, and that is thought to be linked to the preparation process for this mainshock." Geller 1997, §3.1
  25. Geller 1997, p. 429, §3.
  26. E.g., Claudius Aelianus, in De natura animalium, book 11, commenting on the destruction of Helike in 373 BC, but writing five centuries later.
  27. Rikitake 1979, p. 294. Cicerone, Ebel & Britton 2009 has a more recent compilation.
  28. Jackson 2004, p. 335.
  29. Geller (1997, p. 425). See also: Jackson (2004, p. 348): "The search for precursors has a checkered history, with no convincing successes." Zechar & Jordan (2008, p. 723): "The consistent failure to find reliable earthquake precursors...". ICEF (2009): "... no convincing evidence of diagnostic precursors."
  30. Wyss & Booth 1997, p. 424.
  31. ICEF 2011, p. 338.
  32. ICEF 2011, p. 361.
  33. ICEF 2011, p. 336; Lott, Hart & Howell 1981, p. 1204.
  34. Bolt 1993, pp. 30–32.
  35. Lott, Hart & Howell 1981.
  36. Brown & Kulik 1977.
  37. Lott, Hart & Howell 1981. In an earlier study similar behavior was seen before storms. Lott et al. 1979, p. 687.
  38. Hammond 1973. Additional references in Geller 1997, §2.4.
  39. Scholz, Sykes & Aggarwal 1973.
  40. Aggarwal et al. 1975.
  41. Hough 2010b, p. 110.
  42. Allen 1983, p. 79; Whitcomb 1977.
  43. McEvilly & Johnson 1974.
  44. Lindh, Lockner & Lee 1978.
  45. ICEF 2011, p. 333. For a fuller account of radon as an earthquake precursor see Immè & Morelli 2012.
  46. Giampaolo Giuiliani's claimed prediction of the L'Aquila earthquake was based on monitoring of radon levels.
  47. Cicerone, Ebel & Britton 2009, p. 382.
  48. ICEF 2011, p. 334. See also Hough 2010b, pp. 93–95.
  49. Immè & Morelli 2012, p. 158.
  50. Park 1996.
  51. Varotsos, Alexopoulos & Nomicos 1981, described by Mulargia & Gasperini 1992, p. 32, and Kagan 1997b, §3.3.1, p. 512.
  52. Varotsos et al. 1986.
  53. Varotsos et al. 1986; Varotsos & Lazaridou 1991.
  54. Bernard 1992; Bernard & LeMouel 1996.
  55. Mulargia & Gasperini 1992; Mulargia & Gasperini 1996; Wyss 1996; Kagan 1997b.
  56. Varotsos & Lazaridou 1991.
  57. Wyss & Allmann 1996.
  58. Geller 1996.
  59. See the table of contents.
  60. Varotsos, Sarlis & Skordas 2011.
  61. Fraser-Smith et al. 1990
  62. Campbell 2009
  63. Thomas et al. 2009
  64. Reid 1910, p. 22; ICEF 2011, p. 329.
  65. Wells & Coppersmith 1994, Fig. 11, p. 993.
  66. Zoback 2006 provides a clear explanation. Evans 1997, §2.2 also provides a description of the "self-organized criticality" (SOC) paradigm that is displacing the elastic rebound model.
  67. Castellaro 2003
  68. These include the type of rock and fault geometry.
  69. Schwartz & Coppersmith 1984; Tiampo & Shcherbakov 2012, p. 93, §2.2.
  70. UCERF 2008.
  71. Bakun & Lindh 1985, p. 619. Of course these were not the only earthquakes in this period. The attentive reader will recall that, in seismically active areas, earthquakes of some magnitude happen fairly constantly. The "Parkfield earthquakes" are either the ones noted in the historical record, or were selected from the instrumental record on the basis of location and magnitude. Jackson & Kagan (2006, p. S399) and Kagan (1997, pp. 211–212, 213) argue that the selection parameters can bias the statistics, and that sequences of four or six quakes, with different recurrence intervals, are also plausible.
  72. Bakun & Lindh 1985, p. 621.
  73. Jackson & Kagan 2006, p. S408 say the claim of quasi-periodicity is "baseless".
  74. Brown E. (May 11, 2012). "Quake study offers new clues on a California fault's mystery". Los Angeles Times. Retrieved 30 March 2015.
  75. Jackson & Kagan 2006.
  76. Kagan & Jackson 1991, p. 21,420; Stein, Friedrich & Newman 2005; Jackson & Kagan 2006; Tiampo & Shcherbakov 2012, §2.2, and references there; Kagan, Jackson & Geller 2012. See also the Nature debates.
  77. Young faults are expected to have complex, irregular surfaces, which impede slippage. In time these rough spots are ground off, changing the mechanical characteristics of the fault. Cowan, Nicol & Tonkin 1996; Stein & Newman 2004, p. 185.
  78. Stein & Newman 2004
  79. Scholz 2002, p. 284, §5.3.3; Kagan & Jackson 1991, p. 21,419; Jackson & Kagan 2006, p. S404.
  80. Kagan & Jackson 1991, p. 21,419; McCann et al. 1979; Rong, Jackson & Kagan 2003.
  81. Lomnitz & Nava 1983.
  82. Rong, Jackson & Kagan 2003, p. 23.
  83. Kagan & Jackson 1991, Summary.
  84. See details in Tiampo & Shcherbakov 2012, §2.4.
  85. CEPEC 2004a.
  86. Hough 2010b, pp. 142–149.
  87. Zechar 2008; Hough 2010b, pp. 145.
  88. Zechar 2008, p. 7. See also p. 26.
  89. Tiampo & Shcherbakov 2012, §2.1. Hough 2010b, chapter 12, provides a good description.
  90. Hardebeck, Felzer & Michael 2008, par. 6
  91. Hough 2010b, pp. 154–155.
  92. Tiampo & Shcherbakov 2012, §2.1, p. 93.
  93. Hardebeck, Felzer & Michael (2008, §4) show how suitable selection of parameters shows "DMR": Decelerating Moment Release.
  94. Hardebeck, Felzer & Michael 2008, par. 1, 73.
  95. Mignan 2011, Abstract.
  96. Mignan 2013
  97. Hough 2010b
  98. Geller 1997, §4.
  99. E.g.: Davies 1975; Whitham et al. 1976, p. 265; Hammond 1976; Ward 1978; Kerr 1979, p. 543; Allen 1982, p. S332; Rikitake 1982; Zoback 1983; Ludwin 2001; Jackson 2004, pp. 335, 344; ICEF 2011, p. 328.
  100. Whitham et al. 1976, p. 266 provide a brief report. The report of the Haicheng Earthquake Study Delegation (Anonymous 1977) has a fuller account. Wang et al. (2006, p. 779), after careful examination of the records, set the death toll at 2,041.
  101. Raleigh et al. (1977), quoted in Geller 1997, p. 434. Geller has a whole section (§4.1) of discussion and many sources. See also Kanamori 2003, pp. 1210–11.
  102. Quoted in Geller 1997, p. 434. Lomnitz (1994, Ch. 2) describes some of the circumstances attending the practice of seismology at that time; Turner 1993, pp. 456–458 has additional observations.
  103. Measurement of an uplift has been claimed, but that was 185 km away, and likely surveyed by inexperienced amateurs. Jackson 2004, p. 345.
  104. Kanamori 2003, p. 1211. According to Wang et al. 2006 foreshocks were widely understood to precede a large earthquake, "which may explain why various [local authorities] made their own evacuation decisions" (p. 762).
  105. Wang et al. 2006, p. 785.
  106. Geller (1997, §6) describes some of the coverage. The most anticipated prediction ever is likely Iben Browning's 1990 New Madrid prediction (discussed below), but it lacked any scientific basis.
  107. Near the small town of Parkfield, California, roughly half-way between San Francisco and Los Angeles.
  108. Bakun & McEvilly 1979; Bakun & Lindh 1985; Kerr 1984.
  109. Bakun et al. 1987.
  110. Kerr 1984, "How to Catch an Earthquake". See also Roeloffs & Langbein 1994.
  111. Roeloffs & Langbein 1994, p. 316.
  112. Quoted by Geller 1997, p. 440.
  113. Kerr 2004; Bakun et al. 2005, Harris & Arrowsmith 2006, p. S5.
  114. Hough 2010b, p. 52.
  115. It has also been argued that the actual quake differed from the kind expected (Jackson & Kagan 2006), and that the prediction was no more significant than a simpler null hypothesis (Kagan 1997).
  116. Varotsos, Alexopoulos & Nomicos 1981, described by Kagan 1997b, §3.3.1, p. 512, and Mulargia & Gasperini 1992, p. 32.
  117. Jackson 1996b, p. 1365; Mulargia & Gasperini 1996, p. 1324.
  118. Geller 1997, §4.5, p. 436: "VAN’s ‘predictions’ never specify the windows, and never state an unambiguous expiration date. Thus VAN are not making earthquake predictions in the first place."
  119. Jackson 1996b, p. 1363. Also: Rhoades & Evison (1996), p. 1373: No one "can confidently state, except in the most general terms, what the VAN hypothesis is, because the authors of it have nowhere presented a thorough formulation of it."
  120. Kagan & Jackson 1996, p. 1434.
  121. Geller 1997, Table 1, p. 436.
  122. Mulargia & Gasperini 1992, p. 37.
  123. Ms is the surface-wave magnitude, a measure of the intensity of surface shaking.
  124. Harris 1998, p. B18.
  125. Garwin 1989.
  126. USGS staff 1990, p. 247.
  127. Kerr 1989; Harris 1998.
  128. E.g., ICEF 2011, p. 327.
  129. Harris 1998, p. B22.
  130. Harris 1990, Table 1, p. B5.
  131. Harris 1998, pp. B10–B11.
  132. Harris 1990, p. B10, and figure 4, p. B12.
  133. Harris 1990, p. B11, figure 5.
  134. Geller (1997, §4.4) cites several authors to say "it seems unreasonable to cite the 1989 Loma Prieta earthquake as having fulfilled forecasts of a right-lateral strike-slip earthquake on the San Andreas Fault."
  135. Harris 1990, pp. B21–B22.
  136. Hough 2010b, p. 143.
  137. Spence et al. 1993 (USGS Circular 1083) is the most comprehensive, and most thorough, study of the Browning prediction, and appears to be the main source of most other reports. In the following notes, where an item is found in this document the pdf pagination is shown in brackets.
  138. A report on Browning's prediction cited over a dozen studies of possible tidal triggering of earthquakes, but concluded that "conclusive evidence of such a correlation has not been found". AHWG 1990, p. 10 [62]. It also found that Browning's identification of a particular high tide as triggering a particular earthquake "difficult to justify".
  139. According to a note in Spence, et al. (p. 4): "Browning preferred the term projection, which he defined as determining the time of a future event based on calculation. He considered 'prediction' to be akin to tea-leaf reading or other forms of psychic foretelling." See also Browning's own comment on p. 36 [44].
  140. Including "a 50/50 probability that the federal government of the U.S. will fall in 1992." Spence et al. 1993, p. 39 [47].
  141. Spence et al. 1993, pp. 9–11 [17–19 (pdf)], and see various documents in Appendix A, including The Browning Newsletter for 21 November 1989 (p. 26 [34]).
  142. AHWG 1990, p. iii [55]. Included in Spence et al. 1993 as part of Appendix B, pp. 45–66 [53–75].
  143. AHWG 1990, p. 30 [72].
  144. Previously involved in a psychic prediction of an earthquake for North Carolina in 1975 (Spence et al. 1993, p. 13 [21]), Stewart sent a 13 page memo to a number of colleagues extolling Browning's supposed accomplishments, including predicting Loma Prieta. Spence et al. 1993, p. 29 [37]
  145. See Spence et al. 1993 throughout.
  146. Tierney 1993, p. 11.
  147. Spence et al. 1993, p. 40 [48] (p. 4 [12]).
  148. CEPEC 2004a; Hough 2010b, pp. 145–146.
  149. CEPEC 2004a.
  150. CEPEC 2004b.
  151. ICEF 2011, p. 320.
  152. Alexander 2010, p. 326.
  153. The Telegraph, 6 April 2009. See also McIntyre 2009.
  154. Hall 2011, p. 267.
  155. Kerr 2009.
  156. The Guardian, 5 April 2010.
  157. The ICEF (2011, p. 323) alludes to predictions made on 17 February and 10 March.
  158. Kerr 2009; Hall 2011, p. 267; Alexander 2010, p. 330.
  159. Kerr 2009; The Telegraph, 6 April 2009.
  160. The Guardian, 5 April 2010; Kerr 2009.
  161. ICEF 2011, p. 323, and see also p. 335.

References

  • The Ad Hoc Working Group on the December 2–3, 1990, Earthquake Prediction [AHWG] (18 October 1990), Evaluation of the December 2–3, 1990, New Madrid Seismic Zone Prediction. Included in App. B of Spence et al. 1993.
  • Aggarwal, Yash P.; Sykes, Lynn R.; Simpson, David W.; Richards, Paul G. (10 February 1975), "Spatial and Temporal Variations in ts/tp and in P Wave Residuals at Blue Mountain Lake, New York: Application to Earthquake Prediction", Journal of Geophysical Research 80 (5): 718–732, Bibcode:1975JGR....80..718A, doi:10.1029/JB080i005p00718 .
  • Allen, Clarence R. (December 1976), "Responsibilities in earthquake prediction", Bulletin of the Seismological Society of America 66 (6): 2069–2074 .
  • Allen, Clarence R. (December 1982), "Earthquake Prediction – 1982 Overview", Bulletin of the Seismological Society of America 72 (6B): S331–S335.
  • Bernard, P.; LeMouel, J. L. (1996), "On electrotelluric signals", in Lighthill, Sir James (ed.), A Critical Review of VAN, London: World Scientific, pp. 118–154.
  • Bolt, Bruce A. (1993), Earthquakes and geological discovery, Scientific American Library, ISBN 0-7167-5040-6 .
  • Fraser-Smith, A. C.; Bernardi, A.; McGill, P. R.; Ladd, M. E.; Helliwell, R. A.; Villard, Jr., O. G. (1990), "Low-Frequency Magnetic Field Measurements Near the Epicenter of the Ms 7.1 Loma Prieta Earthquake", Geophysical Research Letters 17 (9): 1465–1468, Bibcode:1990GeoRL..17.1465F, doi:10.1029/GL017i009p01465 .
  • Hough, Susan (2010b), Predicting the Unpredictable: The Tumultuous Science of Earthquake Prediction, Princeton University Press, ISBN 978-0-691-13816-9 .
  • Jolliffe, Ian T.; Stephenson, David B., eds. (2003), Forecast Verification: A Practitioner’s Guide in Atmospheric Science (1st ed.), John Wiley & Sons, Ltd., ISBN 0-471-49759-2 .
  • Jones, Lucille M. (December 1985), "Foreshocks and time-dependent earthquake hazard assessment in southern California", Bulletin of the Seismological Society of America 75 (6): 1669–1679 .
  • Kanamori, Hiroo (2003), "Earthquake Prediction: An Overview", International Handbook of Earthquake and Engineering Seismology, International Geophysics 616: 1205–1216, doi:10.1016/s0074-6142(03)80186-9, ISBN 0-12-440658-0 .
  • Lomnitz, Cinna (1994), Fundamentals of earthquake prediction, New York: John Wiley & Sons, ISBN 0-471-57419-8, OCLC 647404423 .
  • Lomnitz, Cinna; Nava, F. Alejandro (December 1983), "The predictive value of seismic gaps.", Bulletin of the Seismological Society of America 73 (6A): 1815–1824 .
  • Lott, Dale F.; Hart, Benjamin L.; Verosub, Kenneth L.; Howell, Mary W. (September 1979), "Is Unusual Animal Behavior Observed Before Earthquakes? Yes and No", Geophysical Research Letters 6 (9): 685–687, Bibcode:1979GeoRL...6..685L, doi:10.1029/GL006i009p00685 .
  • McCann, W. R.; Nishenko, S. P.; Sykes, L. R.; Krause, J. (1979), "Seismic gaps and plate tectonics: Seismic potential for major boundaries", Pure and Applied Geophysics 117 (6): 1082–1147, Bibcode:1979PApGe.117.1082M, doi:10.1007/BF00876211 .
  • McEvilly, T.V.; Johnson, L.R. (April 1974), "Stability of P and S velocities from Central California quarry blasts", Bulletin of the Seismological Society of America 64 (2): 343–353.
  • Otis, Leon; Kautz, William (1979), "Proceedings of Conference XI: Abnormal Animal Behavior Prior to Earthquakes, II", U.S. Geological Survey, Open-File Report 80-453: 225–226.
  • Reid, Harry Fielding (1910), "The Mechanics of the Earthquake.", The California Earthquake of April 18, 1906: Report of the State Earthquake Investigation Commission, Vol. 2 .
  • Rikitake, Tsuneji (1982), Earthquake Forecasting and Warning, Tokyo: Center for Academic Publications .
  • Scholz, Christopher H. (2002), The Mechanics of earthquakes and faulting, (2nd ed.), Cambridge Univ. Press, ISBN 0-521-65223-5 .
  • Schwartz, David P.; Coppersmith, Kevin J. (10 July 1984), "Fault Behavior and Characteristic Earthquakes: Examples From the Wasatch and San Andreas Fault Zones", Journal of Geophysical Research 89 (B7): 5681–5698, Bibcode:1984JGR....89.5681S, doi:10.1029/JB089iB07p05681 .
  • Varotsos, P.; Alexopoulos, K.; Nomicos, K. (1981), "Seven-hour precursors to earthquakes determined from telluric currents", Praktika of the Academy of Athens 56: 417–433 .
  • Varotsos, P.; Sarlis, N.; Skordas, E. (2011), Natural Time Analysis: The New View of Time; Precursory Seismic Electric Signals, Earthquakes and Other Complex Time Series, Springer Praxis, ISBN 3-642-16448-X.
  • Ward, Peter L. (1978), "Ch. 3: Earthquake prediction", Geophysical predictions (PDF), National Academy of Sciences .
  • Wyss, M. (1996), "Brief summary of some reasons why the VAN hypothesis for predicting earthquakes has to be rejected", in Lighthill, Sir James (ed.), A Critical Review of VAN, London: World Scientific, pp. 250–266.

This article is issued from Wikipedia (version of Thursday, May 05, 2016). The text is available under the Creative Commons Attribution/Share Alike license; additional terms may apply for the media files.