Global statistics

Careful data management

Hundreds of natural disasters cause colossal damage all over the world every year. The ten costliest catastrophes since 1950 all occurred between 1992 and 2010. To permit detailed, accurate trend analyses, data must be professionally collected and managed. The major global databases are working on standardising their methodologies.


[ By Angelika Wirtz ]

Databases that register the harm caused by natural catastrophes are consulted by researchers, governments and NGOs as well as by the insurance and finance industry. At present, there are three global damage databases: NatCatSERVICE (Munich Re), Sigma (Swiss Re) and EM-Dat (CRED, the Centre for Research on the Epidemiology of Disasters).

To some extent, the three databases use different standards and definitions. For instance, the thresholds an event must cross to be entered in the databases differ, as the code sketch after this list illustrates:
– Sigma defines catastrophes on the basis of economic losses adjusted for inflation and a minimum of 20 persons dead or missing.
– EM-Dat assigns catastrophe status to all events that meet at least one of the following criteria: 10 fatalities, 100 persons affected, declaration of a state of emergency or request for international help.
– NatCatSERVICE registers events as soon as any injury or damage is caused and – depending on financial or humanitarian impact – assigns them to one of six catastrophe classes ranging from “small-scale loss” to “great natural catastrophe”.
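
To show how differently the three thresholds work, the entry rules could be expressed as simple checks on an event record. The sketch below is a rough illustration only: the field names (deaths, affected and so on) are assumptions, Sigma’s monetary threshold is left as a parameter rather than a fixed figure, and the real databases apply more nuanced rules.

```python
# Hypothetical sketch of the three databases' entry criteria. Field names
# (deaths, affected, ...) are assumptions; the real rules are more nuanced.
from dataclasses import dataclass

@dataclass
class Event:
    deaths: int = 0
    missing: int = 0
    affected: int = 0
    emergency_declared: bool = False
    intl_help_requested: bool = False
    economic_loss_usd: float = 0.0
    any_damage: bool = False

def qualifies_sigma(event: Event, loss_threshold_usd: float) -> bool:
    # Sigma: inflation-adjusted loss threshold or 20+ persons dead/missing.
    return (event.economic_loss_usd >= loss_threshold_usd
            or event.deaths + event.missing >= 20)

def qualifies_emdat(event: Event) -> bool:
    # EM-Dat: meeting any one of the criteria suffices.
    return (event.deaths >= 10 or event.affected >= 100
            or event.emergency_declared or event.intl_help_requested)

def qualifies_natcatservice(event: Event) -> bool:
    # NatCatSERVICE: any injury or damage at all qualifies; the event is
    # then assigned to one of six catastrophe classes (not modelled here).
    return event.any_damage or event.deaths > 0 or event.affected > 0
```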

Back to Pompeii

The NatCatSERVICE database currently includes around 28,000 data sets. The oldest relates to the eruption of Mount Vesuvius on 24 August 79 AD and the consequent destruction of Herculaneum and Pompeii. The event was reported by Pliny the Younger. But such historical records are rare.

So far, the database contains details of only 1,500 catastrophes for the years 79 to 1899. These records, however, cannot serve as a basis for trend analysis because they are incomplete and available only for certain regions and types of event. Earthquakes in China, for example, are amply described in records going back some 2,000 years. In America, by contrast, systematic record-keeping only began around the middle of the 19th century.

Complete and reliable global data sets exist for events since 1980. They permit analyses at global, continental or country level for the last three decades. For Germany and the United States, records have been complete since 1970. The best basis for long-term analyses is provided by the data on “great natural catastrophes” since 1950, which have been the subject of increasingly detailed and consistent reporting.

In line with the definition adopted by the United Nations, a “great natural catastrophe” exceeds the affected region’s ability to cope and requires interregional or international assistance. In practice, that means an event that leaves thousands dead and hundreds of thousands homeless, or one that causes extremely high total damage. The NatCatSERVICE database records 285 such disasters between 1950 and 2009. The only years in which no such events occurred were 1952, 1958 and 2009.

Crucial triggering events

Hurricanes, typhoons, tsunamis or heat waves – every event needs to be stored in the database appropriately and correctly. But events are not always easy to categorise. Consider, for example, the torrential rain that can inundate entire regions in the wake of a typhoon. Should it be recorded as a flood or a storm? And what about a tsunami that claims hundreds of thousands of lives, like the one that struck the Indian Ocean in 2004? Should it be viewed as an inundation or – in line with its triggering event – as an earthquake? These are questions that database operators address on a daily basis.

Uniform standards and terminology are also essential to enable natural hazard events to be assessed and compared across databases. Researchers are working on this. In 2007, Munich Re, CRED and Swiss Re defined common terminology in consultation with the UNDP, the Asian Disaster Reduction Centre and the International Strategy for Disaster Reduction (ISDR). Since then, natural hazards have been divided into four “hazard families”:
– geophysical,
– meteorological,
– hydrological and
– climatological events.

These families are subdivided into main events (for instance, storms), which are divided in turn into perils (tropical cyclone, extratropical cyclone, convective storm) and sub-perils (e.g. tornado) (see box). Other categories cover biological and extraterrestrial events, such as meteorite strikes.
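
One simple way to picture this four-level hierarchy is as a nested mapping from hazard family to main event to perils and sub-perils. The excerpt below is illustrative, not the databases’ actual schema; only the storm branch named in the text is filled in, and the remaining families are left empty apart from well-known examples in comments.

```python
# Illustrative excerpt of the four-level hazard hierarchy described above.
# Only the storm branch named in the text is filled in; the full taxonomy
# is larger, and the dict structure itself is an assumption.
HAZARD_HIERARCHY = {
    "meteorological": {
        "storm": {                            # main event
            "tropical cyclone": [],           # peril
            "extratropical cyclone": [],
            "convective storm": ["tornado"],  # sub-peril named in the text
        },
    },
    "geophysical": {},     # e.g. earthquake, volcanic eruption
    "hydrological": {},    # e.g. flood
    "climatological": {},  # e.g. heat wave
}
```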

Databases need to register disasters strictly according to what triggered them. A typhoon that impacts mainly through flooding, for instance, appears in the database as a “meteorological event/storm/tropical cyclone/flood”. Analysis is thus possible at every level: the number of meteorological events, say, or the number of floods triggered by tropical storms can be established. Such a system is vital for comparing entries from different databases.
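
A hedged sketch of how such path-like records permit analysis at every level: the path format mirrors the “meteorological event/storm/tropical cyclone/flood” example above, while the sample entries and the counting function are hypothetical.

```python
from collections import Counter

# Each event is stored under the path of its triggering hazard; the sample
# entries and level names are hypothetical except for the typhoon example
# taken from the text.
events = [
    "meteorological/storm/tropical cyclone/flood",   # typhoon causing floods
    "meteorological/storm/convective storm/tornado",
    "hydrological/flood/flash flood",
    "geophysical/earthquake/tsunami",   # tsunami filed under its trigger
]

def count_at_level(events, level):
    """Count events by their classification at a given hierarchy level."""
    return Counter(e.split("/")[level] for e in events
                   if len(e.split("/")) > level)

print(count_at_level(events, 0))  # by hazard family
print(count_at_level(events, 1))  # by main event, e.g. storms
```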

Looking at all events covered by NatCatSERVICE and EM-Dat, one can see that the only relevant differences concern meteorological and hydrological events. Because NatCatSERVICE gives weight to economic damage, it includes storms that do not meet the EM-Dat entry criteria (casualties and/or declaration of a state of emergency).

Credible data on losses

Financial loss is a crucial parameter for NatCatSERVICE. It is recorded in two categories: insured losses and economic losses. The magnitude of such losses is registered for each event.

In most cases, credible information about the number of persons injured or killed is available at a fairly early stage in the aftermath of a catastrophe. Figures for insured losses are also quite reliable because they reflect claims actually paid by insurance companies. Economic loss, however, is quite difficult to assess.

The media are often quick to publish estimates – indeed so fast that the figures they use can hardly be accurate. Moreover, damage is sometimes overestimated in the hope of mobilising more aid. As a general rule, government agencies, the UN, the EU, universities, the World Bank and other development banks conduct a detailed loss analysis only after particularly large or spectacular natural catastrophes. In the case of many smaller events, therefore, the figures published first end up being accepted as correct in the long run or even forever.

The term “economic loss”, moreover, does not have a uniform definition. It is important to differentiate between “direct losses”, “indirect losses” and “consequential losses”. Direct losses are immediately visible and countable (consider the loss of homes, household property, schools, vehicles, machinery, livestock et cetera). They are always calculated on the basis of replacement cost including repairs. However, it is quite challenging to estimate the value of historical buildings.

Examples of indirect losses include higher transport costs due to damaged infrastructure, loss of jobs and loss of rental income. Consequential losses (secondary costs) concern economic impact, for instance in terms of diminished tax revenues, lower economic output, reduced GDP or a currency’s lower exchange rate.
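
As a toy illustration of these three categories, the snippet below totals a loss record in the manner the next paragraph describes for NatCatSERVICE, counting direct and indirect but not consequential losses. The record structure and all figures are invented.

```python
# Toy loss record; the three categories follow the text, while the record
# structure and all figures are invented for illustration.
loss_record = {
    "direct": {                    # replacement cost including repairs
        "homes": 12_000_000,
        "vehicles": 800_000,
    },
    "indirect": {                  # e.g. costlier transport, lost rent
        "transport_detours": 150_000,
        "lost_rental_income": 90_000,
    },
    "consequential": {             # secondary macroeconomic effects
        "reduced_tax_revenue": 500_000,
    },
}

# Direct and indirect losses counted, consequential losses left out.
recorded = sum(sum(items.values())
               for category, items in loss_record.items()
               if category in ("direct", "indirect"))
print(f"Recorded economic loss: {recorded:,} USD")
```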

NatCatSERVICE has registered around 18,000 loss events since 1980. In only about a quarter of these cases was the exact volume of economic losses officially verified. For the other events, the scale of loss is estimated on the basis of insurance claims and by comparison with similar events elsewhere in the world. In other words: direct and indirect losses are generally recorded, but consequential losses are not taken into account.

Sources and data quality

In recent years, it has become much easier to investigate data on natural events – largely because of the internet. At the same time, it has become even more important to ensure that sources are sound. NatCatSERVICE draws on around 200 sources that have been identified as first rate for a particular region and/or type of event. The five main sources are:
– insurance industry information,
– meteorological and seismological services,
– reports and evaluations by aid organisations or NGOs, governments, the EU, the UN, the World Bank and other development banks,
– scientific analyses and studies as well as
– news agencies.

Despite first-class sources, the analysis process is fraught with problems. Typical challenges include false reports, incorrect conversion factors and the double counting of casualties. Flawed data are often copied and disseminated further. Database operators must therefore rigorously test the quality of the figures they obtain. The validation process checks the quality and number of the sources used as well as the plausibility of the loss figures and the description of the event. Data sets that fail to meet the quality standard are not used.
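
The validation step described here could be sketched, very loosely, as a series of checks on each candidate data set. Everything in the snippet, thresholds, field names and all, is an assumption made for illustration; the text states only that the quality and number of sources, the plausibility of the loss figures and the event description are checked.

```python
# Loose sketch of the validation step described above. All thresholds
# and field names are assumptions, not NatCatSERVICE's actual rules.
def validate(record, min_sources=2, max_loss_factor=5.0):
    checks = {
        # quality and number of the sources referred to
        "enough_sources": len(record["sources"]) >= min_sources,
        "sources_first_rate": all(s["first_rate"] for s in record["sources"]),
        # plausibility of the loss figure against a comparable event
        "plausible_loss": (record["loss_estimate"]
                           <= max_loss_factor * record["comparable_event_loss"]),
        # a usable description of the event must be present
        "has_description": bool(record.get("description")),
    }
    return all(checks.values()), checks

candidate = {
    "sources": [{"first_rate": True}, {"first_rate": True}],
    "loss_estimate": 2.0e9,              # USD, invented
    "comparable_event_loss": 1.5e9,      # USD, invented
    "description": "Severe flooding after tropical cyclone landfall.",
}
accepted, detail = validate(candidate)
print(accepted, detail)
```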

Quality standards need to be high because many users rely on the data. Records and loss assessment methods should be consistent over time and transparent for users. Therefore, all databases should be reviewed continuously.

Database operators need to develop more benchmarks for the sake of better comparability and greater transparency and credibility of data sets. An international hierarchy and terminology for natural hazards already exists. Current initiatives are working on geocoding standards and loss-estimation methods for natural catastrophes.
