Richter scale: the original measure of earthquake magnitude
In 1935, Charles Richter and Beno Gutenberg developed the local magnitude (ML) scale (popularly known as the Richter scale) with the goal of quantifying medium-sized earthquakes (between magnitude 3.0 and 7.0) in Southern California. The scale was based on the ground motion measured by a particular type of seismometer (a Wood-Anderson seismograph) at a distance of 100 kilometres (62 mi) from the earthquake's epicenter. Because of this, there is an upper limit on the measurable magnitude, and all large earthquakes tend to have a local magnitude of around 7. Further, the magnitude becomes unreliable for measurements taken more than about 600 kilometres (370 mi) from the epicenter. Since the ML scale was simple to use and corresponded well with observed damage, it was extremely useful for engineering earthquake-resistant structures and gained common acceptance.
Modified Richter scale
The Richter scale was not effective for characterizing some classes of quakes. As a result, Beno Gutenberg expanded Richter's work to consider earthquakes detected at distant locations. At such large distances the higher-frequency vibrations are attenuated, and the seismic surface waves (Rayleigh and Love waves) are dominated by waves with a period of 20 seconds, corresponding to a wavelength of about 60 km. Gutenberg assigned the magnitude of these waves to a surface-wave magnitude scale (Ms). He also combined the compressional P-waves and the transverse S-waves (which he termed "body waves") to create a body-wave magnitude scale (mb), measured for periods between 1 and 10 seconds. Ultimately Gutenberg and Richter collaborated to produce a combined scale which was able to estimate the energy released by an earthquake in terms of Gutenberg's surface-wave magnitude scale (Ms).
Correcting weaknesses of the modified Richter scale
The Richter scale, as modified, was successfully applied to characterize localities. This enabled local building codes to establish standards for earthquake-resistant buildings. However, a series of quakes was poorly handled by the modified Richter scale. This series of "great earthquakes" involved faults that broke along lines of up to 1000 km. Examples include the 1957 Andreanof Islands earthquake and the 1960 Chilean quake, both of which broke faults approaching 1000 km in length. The Ms scale was unable to characterize these "great earthquakes" accurately.
The difficulty in characterizing these quakes with Ms resulted from their size. Great quakes produced 20 s waves of amplitude comparable to that of ordinary large quakes, so their Ms values were similar, but they also produced very long period waves (more than 200 s) which carried large amounts of energy. As a result, the modified Richter scale methodology underestimated earthquake energy at high energies.
The concept of seismic moment was introduced in 1966 by Keiiti Aki, a professor of geophysics at the Massachusetts Institute of Technology. Using detailed field studies of the 1964 Niigata earthquake and data from a new generation of seismographs in the World-Wide Standardized Seismograph Network (WWSSN), he first confirmed that an earthquake is "a release of accumulated strain energy by a rupture", and that this can be modeled by a "double couple". With further analysis he showed how the energy radiated by seismic waves can be used to estimate the energy released by the earthquake. This was done using seismic moment, defined as
- M0 = μūS
with μ being the rigidity (or resistance) of moving a fault with a surface area S over an average dislocation (distance) ū (modern formulations use D).
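The definition above is a simple product, which a short sketch can make concrete. The rigidity and rupture dimensions below are illustrative assumptions (a common approximate crustal rigidity and a hypothetical fault plane), not measurements from any particular earthquake:

```python
# Seismic moment M0 = mu * u_bar * S, with illustrative (assumed) values.

MU = 3.0e10  # rigidity of crustal rock in pascals (a commonly used rough value)

def seismic_moment(rigidity_pa: float, avg_slip_m: float, area_m2: float) -> float:
    """Return the seismic moment M0 = mu * u_bar * S in newton meters."""
    return rigidity_pa * avg_slip_m * area_m2

# Hypothetical rupture: a 100 km x 20 km fault plane with 2 m of average slip.
m0 = seismic_moment(MU, 2.0, 100e3 * 20e3)
print(f"M0 = {m0:.2e} N·m")  # about 1.2e20 N·m
```

Note that M0 has units of energy (N·m), which is what makes it a useful bridge between ground motion and the energy released by the rupture.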
In the mid-1970s Dziewonski started the Harvard Global Centroid Moment Tensor Catalog. After this advance, it was possible to introduce Mw and estimate it for large numbers of earthquakes. Hence the moment magnitude scale represented a major step forward in characterizing earthquakes.
Introduction of an energy-motivated magnitude Mw
Most earthquake magnitude scales suffered from the fact that they only provided a comparison of the amplitude of waves produced at a standard distance and frequency band; it was difficult to relate these magnitudes to a physical property of the earthquake. Gutenberg and Richter suggested that radiated energy Es could be estimated as

- log10 Es ≈ 4.8 + 1.5 Ms

(in Joules). Unfortunately, the duration of many very large earthquakes was longer than 20 seconds, the period of the surface waves used in the measurement of Ms. This meant that giant earthquakes such as the 1960 Chilean earthquake (M 9.5) were only assigned an Ms 8.2. Caltech seismologist Hiroo Kanamori recognized this deficiency and took the simple but important step of defining a magnitude based on estimates of radiated energy, Mw, where the "w" stood for work (energy):

- Mw = (log10 Es − 4.8) / 1.5
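The Gutenberg-Richter energy relation makes the saturation problem easy to quantify. A minimal sketch, using the relation log10 Es ≈ 4.8 + 1.5 Ms from the text (the function names are my own):

```python
import math

def radiated_energy_joules(ms: float) -> float:
    """Gutenberg-Richter estimate: log10 Es ~= 4.8 + 1.5 * Ms, Es in joules."""
    return 10 ** (4.8 + 1.5 * ms)

def mw_from_energy(es_joules: float) -> float:
    """Kanamori's energy-based magnitude: Mw = (log10 Es - 4.8) / 1.5."""
    return (math.log10(es_joules) - 4.8) / 1.5

# The saturated Ms 8.2 assigned to the 1960 Chilean quake implies far less
# radiated energy than its actual magnitude of 9.5 would:
print(radiated_energy_joules(8.2))  # ~1.3e17 J
print(radiated_energy_joules(9.5))  # ~1.1e19 J, roughly 90 times more
```

This illustrates why an amplitude measurement capped near Ms 8.2 misses most of a giant earthquake's energy: each whole magnitude unit corresponds to a factor of about 32 (10^1.5) in energy.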
Kanamori recognized that measurement of radiated energy is technically difficult since it involves integration of wave energy over the entire frequency band. To simplify this calculation, he noted that the lowest frequency parts of the spectrum can often be used to estimate the rest of the spectrum. The lowest frequency asymptote of a seismic spectrum is characterized by the seismic moment, M0. Using an approximate relation between radiated energy and seismic moment (which assumes stress drop is complete and ignores fracture energy),

- Es ≈ M0 / (2 × 10⁴)

(where Es is in Joules and M0 is in N·m), Kanamori approximated Mw by

- Mw = (log10 M0 − 9.1) / 1.5
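The constant in Kanamori's approximation follows directly from combining the energy-based definition with the Es-M0 relation, which a short check can confirm (the helper names are my own):

```python
import math

def mw_kanamori(m0_nm: float) -> float:
    """Kanamori's approximation: Mw = (log10 M0 - 9.1) / 1.5, with M0 in N·m."""
    return (math.log10(m0_nm) - 9.1) / 1.5

# The constant 9.1 comes from substituting Es ~= M0 / 2e4 into
# Mw = (log10 Es - 4.8) / 1.5:  4.8 + log10(2e4) = 9.101... ~= 9.1.
def mw_via_energy(m0_nm: float) -> float:
    es = m0_nm / 2e4
    return (math.log10(es) - 4.8) / 1.5

m0 = 1.0e20
print(mw_kanamori(m0), mw_via_energy(m0))  # agree to about 0.001 magnitude units
```

The rounding of 9.101 to 9.1 costs well under 0.01 magnitude units, which is negligible against the uncertainties in estimating M0 itself.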
Moment magnitude scale
The formula above made it much easier to estimate the energy-based magnitude Mw , but it changed the fundamental nature of the scale into a moment magnitude scale. Caltech seismologist Thomas C. Hanks noted that Kanamori's Mw scale was very similar to a relationship between ML and M0 that was reported by Thatcher & Hanks (1973)
Hanks & Kanamori (1979) combined their work to define a new magnitude scale based on estimates of seismic moment,

- M = (log10 M0 − 9.1) / 1.5

where M0 is defined in newton meters (N·m).
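Applying the scale to a real event shows the formula in action. The seismic moment used below for the 1960 Chilean earthquake is an approximate literature value, assumed here for illustration:

```python
import math

def moment_magnitude(m0_nm: float) -> float:
    """Hanks & Kanamori moment magnitude: M = (log10 M0 - 9.1) / 1.5, M0 in N·m."""
    return (math.log10(m0_nm) - 9.1) / 1.5

# The 1960 Chilean earthquake's seismic moment is commonly estimated at
# roughly 2e23 N·m (an assumed approximate value), which maps to M ~ 9.5:
print(round(moment_magnitude(2.0e23), 1))  # 9.5
```

Unlike Ms, this measure does not saturate, because M0 keeps growing with rupture area and slip no matter how long the earthquake lasts.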
Although the formal definition of moment magnitude is given by this paper and is designated by M, it has been common for many authors to refer to Mw as moment magnitude. In most of these cases, they are actually referring to moment magnitude M as defined above.
Moment magnitude is now the most common measure of earthquake size for medium to large earthquakes, but in practice seismic moment (M0), the seismological parameter on which it is based, is not measured routinely for smaller quakes. For example, the United States Geological Survey does not use this scale for earthquakes with a magnitude of less than 3.5, which includes the great majority of quakes.
Current practice in official earthquake reports is to adopt moment magnitude as the preferred magnitude, i.e., Mw is the official magnitude reported whenever it can be computed. Because seismic moment (M0, the quantity needed to compute Mw) is not measured if the earthquake is too small, the reported magnitude for earthquakes smaller than M 4 is often Richter's ML.
Popular press reports most often deal with significant earthquakes larger than M ~ 4. For these events, the official magnitude is the moment magnitude Mw , not Richter's local magnitude ML .