The software that estimates hurricane losses is flawed, but there’s hope on the horizon

Catastrophe modelling has been getting some bad press over the estimates for last year’s hurricane losses.

Risk Management Solutions (RMS), one of the leading vendors of cat modelling software, almost doubled its initial loss estimate for Hurricane Ike, which turned out to be the third most destructive hurricane ever to hit the US. RMS raised its estimate to $13bn-$21bn (£8.7bn-£14bn), from earlier estimates of $6bn-$16bn and $7bn-$12bn.

The error in the initial estimate may have been caused in part by the fact that Ike was a category 2 storm on the Saffir-Simpson scale. The scale is a useful way of classifying hurricanes by wind speed, but it says little about a hurricane’s ability to cause storm surge (the rise in sea level driven by a cyclone), which also depends heavily on the shape of the seabed.
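To make that distinction concrete, here is a minimal Python sketch of how the scale works: a category is assigned purely from sustained wind speed, so two storms in the same category can produce very different surges. The wind thresholds are the commonly quoted bands in mph, rounded and shown for illustration only; the National Hurricane Center’s definitions are authoritative.

```python
# Illustrative sketch only: Saffir-Simpson category from sustained wind speed.
# Thresholds are the commonly quoted sustained-wind bands in mph, rounded;
# consult the National Hurricane Center for the official definitions.

def saffir_simpson_category(sustained_wind_mph: float) -> int:
    """Return the Saffir-Simpson category (0 = below hurricane strength)."""
    bands = [
        (157, 5),  # approximate lower bound of category 5
        (130, 4),
        (111, 3),
        (96, 2),
        (74, 1),
    ]
    for lower_bound, category in bands:
        if sustained_wind_mph >= lower_bound:
            return category
    return 0  # tropical storm or weaker


# Ike came ashore with sustained winds of roughly 110 mph, at the top of
# category 2, yet the category says nothing about wind-field size or surge.
print(saffir_simpson_category(110))  # -> 2
```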

Ike made its final landfall near Galveston, Texas, as “only” a category 2 hurricane. But its wind field was far wider than those of previous category 2 storms, and it caused damage more on the scale of a category 4 hurricane.

Users of cat modelling software may simply be asking too much of the technology. All the systems currently in use have limitations in estimating hurricane risk:

• There is a time delay between an event and receipt of all the relevant data, and it is longer for hurricanes than for earthquakes. Although a hurricane’s location is tracked in real time, other important data is not available for months, so the models rely on a similar predefined event. Sometimes no such predefined event exists in the model, as was the case for European windstorm Kyrill (see the sketch after this list)

• There is a mismatch between insurers’ exposure data and the data used in the model. This is a point often made by the cat modelling vendors

• The models rely on historical data that is partial and may be inaccurate

• The models do not always incorporate all natural variability or take account of future climate change, even though global warming is likely to increase hurricane activity.
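On the first of these points, the sketch below shows, in deliberately simplified Python, the idea of mapping a live storm to the most similar event in a predefined catalogue. The field names, the similarity measure and the catalogue itself are assumptions made for illustration; they do not describe any vendor’s actual software.

```python
# Hypothetical sketch of the "predefined event" problem: a cat model holds a
# catalogue of pre-simulated storms, and a live event is mapped to the most
# similar one. Everything here is illustrative, not a vendor implementation.

from dataclasses import dataclass
from math import hypot


@dataclass
class CatalogueEvent:
    name: str
    landfall_lat: float
    landfall_lon: float
    category: int
    modelled_loss_bn: float  # pre-computed loss for this synthetic storm


def nearest_analogue(lat: float, lon: float, category: int,
                     catalogue: list[CatalogueEvent]) -> CatalogueEvent | None:
    """Pick the closest comparable catalogue event, or None if there is none."""
    # Only consider events within one Saffir-Simpson category of the live storm.
    candidates = [e for e in catalogue if abs(e.category - category) <= 1]
    if not candidates:
        return None  # the Kyrill situation: no similar predefined event exists
    # Crude similarity: straight-line distance between landfall points.
    return min(candidates,
               key=lambda e: hypot(e.landfall_lat - lat, e.landfall_lon - lon))
```

When the function returns None, the modeller is in roughly the position the industry found itself in with Kyrill: the loss has to be estimated without a ready-made analogue.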

These problems are being addressed, however, and there are signs that the climate change issue may be closer to a solution than previously thought.

A group led by Dr Greg Holland of the US National Center for Atmospheric Research (NCAR) recently completed a run of a new computer model on a supercomputer called Bluefire, generating 350 terabytes of data. The run was unusual because it modelled hurricanes in realistic detail not just from equations describing the weather, but directly from equations describing the underlying climate of the globe. Rowan Douglas, chairman of the Willis Research Network, which is contributing to the NCAR research, believes the future unification of climate science with weather forecasting will bring about a new era in cat modelling and rational risk allocation.

But don’t hold your breath. Dr Holland’s model is likely to be used at first to predict weather in 10-year chunks, albeit at geographic scales small enough to be useful to city officials planning buildings. Shorter-timescale predictions, such as next season’s hurricanes, will take more research.

Until then, here is my prediction for this year’s North Atlantic hurricane activity: 16 named storms and eight hurricanes. How do I know? Because that’s the average of the past five years. No computer necessary. And unfortunately, current statistical models cannot predict much more accurately than that.
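For anyone who wants to reproduce that back-of-the-envelope forecast, the whole “model” fits in a few lines of Python. No season totals are hard-coded below; they should be taken from the official NOAA/NHC records.

```python
# A naive climatology baseline: the forecast for next season is simply the
# mean of the five most recent seasons. Counts should come from the official
# NOAA/NHC season summaries; none are hard-coded here.

def five_year_average(season_counts: list[int]) -> float:
    """Mean of the five most recent seasons' storm counts."""
    recent = season_counts[-5:]
    return sum(recent) / len(recent)


# Example usage, with real counts substituted for the placeholder names:
#   named_forecast = round(five_year_average(named_storms_per_season))
#   hurricane_forecast = round(five_year_average(hurricanes_per_season))
```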

david.sandham@globalreinsurance.com

Key points

• Current catastrophe modelling software has limitations in estimating hurricane risk

• There are many reasons for this and improvements are needed across a broad front

• The future unification of climate science with weather forecasting could bring about a new era in cat modelling