
BMS News

Insurance Industry Takeaways from 2014 Tropical Season


Source: NOAA 2014 Atlantic Storm Tracks

Now that Arctic air has made its way across a good deal of the lower 48 states and snow has already covered 50% of the nation, the Atlantic hurricane season, and the threat of landfall along the U.S. coastline, is effectively over.
The season was forecast to be below the long-term average, and that is exactly what happened, for the most part. We will likely end up with a total of eight named storms, six of which became hurricanes. Two out of the six hurricanes made it to Category 3 or higher, with one, Gonzalo, reaching Category 4 – the strongest storm in the basin since the 2011 season. While we fell short of the average number of named storms (usually 10 per year), the number of hurricanes and major hurricanes is on par with what is expected in a typical season.

A better method of gauging seasonal activity, in my opinion, is the Accumulated Cyclone Energy index (“ACE”). Because it accumulates a measure of each named storm’s wind energy over its lifetime, it provides a more accurate picture of how active a season really was and puts greater weight on severity, as stronger storms accumulate a higher ACE than weaker ones. A typical season has an ACE value of around 104. This year the ACE score was 65.1, 62.5% of which was produced by the two major hurricanes of the season (Edouard and Gonzalo). This below-normal ACE activity follows the predictions made by most of the reliable agencies that produce seasonal hurricane forecasts. It should be noted, however, that the El Niño, which many of these forecasts cited as the likely cause of a below-normal season, failed to materialize. Instead, as discussed in previous postings, other factors that can suppress storm activity, such as African dust and cooler sea surface temperatures, actually did occur.
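For readers who like to see the arithmetic, below is a minimal Python sketch of how an ACE total accumulates. The wind histories are hypothetical, and the calculation simply follows the standard definition (sum of squared six-hourly maximum sustained winds, in knots, for periods at tropical-storm strength or greater, divided by 10^4); it is meant only to illustrate why a single major hurricane can dominate a season’s ACE.

```python
# Minimal, illustrative ACE calculation (not an official NOAA implementation).
# ACE = sum of (6-hourly max sustained wind in knots)^2 / 10^4,
# counted only while a system is at tropical-storm strength or greater (>= 34 kt).

def storm_ace(six_hourly_winds_kt):
    """Return one storm's ACE contribution from its 6-hourly maximum winds (knots)."""
    return sum(v ** 2 for v in six_hourly_winds_kt if v >= 34) / 1e4

# Hypothetical wind histories for a short-lived tropical storm and a major hurricane
weak_storm = [35, 40, 45, 40, 35]
major_hurricane = [35, 50, 65, 85, 100, 115, 120, 100, 80, 60]

print(round(storm_ace(weak_storm), 1))       # ~0.8  -> weak storms add little
print(round(storm_ace(major_hurricane), 1))  # ~7.3  -> one major hurricane dominates
print(round(storm_ace(weak_storm) + storm_ace(major_hurricane), 1))  # season total ~8.0
```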

As I had previously blogged, Hurricane Arthur ended the stretch, dating back to 2008, in which no hurricane stronger than a Category 1 had hit the U.S. coastline, and it came close to ending the long-standing major hurricane drought, which has lasted 3,317 days and counting. However, many have already forgotten about Arthur due to its low impact. It is also worth noting that Florida itself has not had a hurricane of any intensity since October of 2005. This is simply an incredible statistic that, unfortunately, could work against the Sunshine State if and when a hurricane hits in the future, because many may have become complacent regarding hurricane risk.

No other Atlantic hurricanes threatened the United States this season. However, there were indirect effects, such as high surf and rip currents from Bertha and Cristobal east of Cape Hatteras. Amazingly, at odds of 500-1, the small, 20-square-mile island of Bermuda was the unlikely recipient of two direct hits from named storms this season. Fay and Gonzalo both passed right over the island, causing power outages and some damage to buildings. Bermuda fared very well considering the impact from two systems less than a week apart, partially due to the strict building codes that have been followed and enforced, which is an important lesson for the insurance industry.

Another lesson from these events is that hurricane clustering is real, as was also the case during the 2004 hurricane season. At any time during a given season, the prevailing wind currents that steer storms can get locked in place. This may cause storms to track repeatedly over or near the same area, triggering multiple storm losses in a given season. This storm clustering was also experienced in the East Pacific hurricane basin, which was exceptionally busy and produced a few hurricanes that impacted the Baja Peninsula and Mexico with either direct hits or leftover moisture. Three of these events, Norbert, Odile, and Polo, provide another great example of storm clustering. Storm clustering also occurred near the Hawaiian Islands and along the south coast of Japan, which saw four typhoons this season.

Important insurance lessons also come from Hurricane Odile, a major hurricane that directly hit the resort city of Cabo San Lucas, Mexico. Although the overall wind swath and configuration of the Baja Peninsula seemed to minimize Odile’s impact, resulting in an industry event below $1B USD, there is a likelihood that the storm’s strength, with a preliminary pressure of 930 mb, will trigger the MultiCat 2012 cat bond, making it only the second cat bond to be triggered due to a tropical cyclone. This shows that alternative risk transfer products can work given the right insurance structure. The other insurance lesson learned from Odile is related to losses that go beyond basic building damage. There were numerous examples of looting in big box stores like Costco and Wal-Mart, as well as other small stores.


Looting of Cabo San Lucas Wal-Mart after Hurricane Odile

Business interruption was also significant. I personally canceled a holiday at the beginning of October because of the lack of facilities, and when thousands of others do the same, it adds up to hundreds of millions of dollars in lost income for the tourism industry.


Riu and Hyatt Hotel Damage from Hurricane Odile

While it’s too early to suggest what might occur during the 2015 Atlantic hurricane season, and although the current hurricane drought is exceptional, it is important to remember the long-term hurricane risk remains the same as it has been over the last century. BMS Analytics is here to help clients prepare for an active season by providing the newest tools and knowledge gained from past seasons, regardless of the overall low impact the past several seasons have had on the industry.

Unusual Weather We’re Having, Ain’t It?

I have been saving this title for a while, and with the recent 75th anniversary of the release of The Wizard of Oz, in which the Cowardly Lion delivers this line as he notices the fallen snow on the poppy field, I find it a fitting start to a discussion about extreme weather. Interestingly, this might also be the first case of a blockbuster movie promoting the idea that average weather can turn into “extreme weather,” such as a garden-variety tornado in Kansas turning ugly and transporting people to alternate universes.

Images courtesy of Warner Bros. Entertainment

As a meteorologist, I run into self-proclaimed armchair meteorologists all the time. It has never been easier to get weather information via a blog, Twitter, or on television, which now has at least four cable channels devoted solely to weather. Because weather impacts almost everyone on a daily basis and changes often, it is closely watched. However, with this accessibility of information, one can easily be brainwashed into thinking that normal weather is somehow extreme.

The Media Research Center has just released what I think is fascinating research. The Center analyzed broadcast television network transcripts for morning and evening shows looking for stories using the phrase “extreme weather” between July 1, 2004 and July 1, 2005, and also between July 1, 2013 and July 1, 2014. Ten years ago, ABC, CBS, and NBC barely used the phrase. Now, its use is prolific, despite scientific disagreement regarding extreme weather trends, as discussed in the most recent Intergovernmental Panel on Climate Change Fifth Assessment Report (Chapter 2).

According to the Media Research Center, between July 2004 and July 2005, the three networks used the phrase “extreme weather” in only 18 stories on morning and evening news shows in that entire year, even though there were several opportunities to use the phrase when reporting on the 13 named storms that impacted the U.S. during that period.

Now, the networks’ familiar phrase, “if it bleeds, it leads,” has taken a backseat to “extreme weather.” In the past year (July 2013 through July 2014), the same network news shows discussed extreme weather 988 percent more often, in a whopping 196 stories. That is more than enough stories to see, on average, one every other day. Here is a short video montage to illustrate:

This is despite lower occurrences of severe weather (e.g., hail, wind, tornado) and hurricanes than were observed during the same period 10 years ago.

The Media Research Center study states that “extreme weather” was frequently used by the networks to describe fairly normal weather events, such as heat waves, droughts, tornadoes, hurricanes, and winter storms, and that they often included the phrase in onscreen graphics or chyrons during weather stories. ABC even has an extreme weather team dedicated to covering such events. We also get footage from storm chasers who make a living driving into the worst weather.

Since some people still read the old-fashioned newspaper, let’s analyze the 162-year history of the New York Times, which can be done using Chronicle, a tool for graphing how frequently certain words and phrases have appeared in the paper.

It is interesting to note that neither the 1938 hurricane on Long Island nor the major drought of 1988 was described as an extreme weather event. The disproportionately high use of the phrase “extreme weather” started after 2005.

The publishing of news is inherently an ephemeral act. A big story will consume public attention for a day, a month or a year, only to fade from memory as quickly as it erupted. There is no doubt that weather events get more attention in this day and age of instant communication and technology, and the speed with which this information is shared certainly has an influence on how people think. It is important to remember that extreme weather is completely natural and there will always be extreme weather somewhere, as the atmosphere is in a constant battle to reach equilibrium. In fact, it is less likely to have a day that is perfectly average than to have one that is one or two standard deviations above or below the average. However, the use of the phrase “extreme weather” in the media occurs with alarming regularity and is undoubtedly influencing the insurance industry.

Magnitude 6.0 Is Not The Big One

A magnitude 6.0 earthquake is big, but not “The Big One.” This blog looks at some interesting aspects of the recent California earthquake, as well as general issues the insurance industry should consider as we await “The Big One.”

The strongest earthquake to strike San Francisco’s Bay Area in 25 years was recorded on Sunday morning. The U.S. Geological Survey (“USGS”) registered a magnitude 6.0 tremor at 3:20 a.m. local time, with an epicenter located five miles south-southwest of Napa, California, at a depth of 6.6 miles.

The insurance industry is just starting to grasp the complex nature of the Napa earthquake losses, but it is important to note that this might be the first California earthquake for which some of the newest geospatial technologies were used, allowing companies to immediately understand their exposed risks and produce damage estimates based on the intensity of shaking.

USGS ShakeMap within BMS iVision

Despite the shaking, damage, injuries, and fear, thankfully this earthquake wasn’t “The Big One.” But the South Napa earthquake provides a good example of why magnitude matters when analyzing an earthquake’s impact on the insurance industry. Earthquake magnitudes are on a logarithmic scale: each whole-number increase in magnitude corresponds to 10 times more ground motion and roughly 32 times more energy released. While it sounds like a magnitude 6.0 and a magnitude 6.9 are close enough to be lumped into the same category of earthquakes, their impacts are dramatically different. For example, the Loma Prieta earthquake (M6.9) of 1989 released more than 22 times the energy of Sunday’s magnitude 6.0 event, as illustrated below in a comparison of two ShakeMaps from the USGS.

ShakeMaps from Loma Prieta Earthquake vs. South Napa Earthquake.
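To make the magnitude comparison concrete, here is a small sketch of the standard scaling relationships referenced above: ground motion scales as 10 raised to the magnitude difference, and radiated energy as roughly 10 raised to 1.5 times the difference. The numbers simply reproduce the Loma Prieta comparison.

```python
# Rough magnitude scaling: ground motion ~ 10^(dM), radiated energy ~ 10^(1.5 * dM).

def ground_motion_ratio(m_large, m_small):
    """How many times larger the ground motion is for the bigger quake."""
    return 10 ** (m_large - m_small)

def energy_ratio(m_large, m_small):
    """How many times more energy the bigger quake releases."""
    return 10 ** (1.5 * (m_large - m_small))

# Loma Prieta (M6.9, 1989) vs. South Napa (M6.0, 2014)
print(round(ground_motion_ratio(6.9, 6.0), 1))  # ~7.9x the ground motion
print(round(energy_ratio(6.9, 6.0), 1))         # ~22.4x the energy released
```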

All the Potential Faults

It was only 20 years ago, but we often forget that the 1994 Northridge earthquake was on an unknown fault system. Early reports suggest the South Napa earthquake could have come from an unknown fault as well, which emphasizes that the focus should not always be placed solely on the well-known San Andreas Fault.

According to the California State Geologic Survey map of the Bay Area, the main San Andreas Fault cuts through San Francisco and sections off Point Reyes. However, many other faults within the zone are prominent and active enough to earn names. The Hayward, Rodgers Creek, San Joaquin, and Green Valley faults are the structural underpinnings of the long valleys characteristic of the region. What may be surprising, however, is that many of the small faults don’t have names at all, especially if they haven’t produced major damaging activity in the recent past. These faults should be considered not only by the modeling companies but also by the insurance companies that write the risks. Further, the industry needs to keep in mind that the location of the epicenter is critical to determining expected damage and, so far, most of the major quakes in our lifetime have not been located under major population centers.

Sunday’s earthquake appears to have ruptured on or just west of mapped traces of the West Napa Fault, which has ruptured sometime in the past 11,000 years. The most seismically active areas have been between the longer Rodgers Creek Fault to the west and the Concord-Green Valley Fault to the east. It’s entirely possible that this earthquake occurred where the fault was covered by sediments, with recent movement that we didn’t know about until now. It’s important to reiterate that many faults have been active in the past 2.6 million years; numerous others are inactive, and countless more are still unknown.

California State Geologic Survey Map of Faults.

Massive Flooding

Early reports suggest damage is localized in the region surrounding Napa due to the rupture’s directivity to the northwest. River valley sediments in Napa Valley likely contributed to the amplification of shaking around Napa. Major river systems in the area are another factor that should be considered when analyzing the potential consequences of California earthquakes. While it seems unfair that California is getting hit with two disasters – the ongoing extreme drought and now a substantial earthquake – this overlap may actually be a good thing. As mentioned in other studies, if this earthquake had happened when water was more abundant, the aging levee system protecting islands within the Sacramento Delta would have been saturated and vulnerable to liquefaction during the earthquake. If those levees had succumbed, the inundation would have drawn saltwater from the bay up into the delta system, which could have caused saltwater to reach the California State Water Project intakes. Considering the Delta is the water supply for two-thirds of Californians and supports Central Valley agriculture, contaminating the water intake would have been disastrous. That’s not the only relationship between the drought and earthquakes: recently published research suggests that groundwater depletion in the San Joaquin Valley is linked to crustal flexing in the adjacent mountain ranges, potentially increasing seismicity in the region.

“The Big One”

Now, as a forecaster of the weather, I sometimes get asked, “What is the latest state of earthquake forecasting?” This is a common question, particularly after an earthquake such as this one. There are various unknowns when trying to determine whether this earthquake adds to or relieves the stress that could produce “The Big One.” The bottom line is that it is impossible to predict the exact timing of an earthquake.

About every six years, the USGS updates its hazard maps to incorporate the latest geoscience research. The new USGS hazard map reveals that 16 states are at high risk of damaging earthquakes over the next 50 years, and these states have all historically experienced earthquakes of magnitude 6.0 or higher. Some of the biggest changes have come in the Pacific Northwest and in California, where research has identified several areas capable of producing larger and more powerful earthquakes than previously believed. A 2008 USGS study determined that the probability of a magnitude 6.7 or larger earthquake occurring within the greater Bay Area in the next 30 years was 63%. When the impact of the South Napa earthquake is included in the next batch of geophysical models for the region, those probabilities are likely to stay about the same. The earthquake released energy, but not enough to appreciably relieve tectonic stress within the region. It would take many more magnitude 6.0 earthquakes to release the same amount of energy as just one magnitude 6.7 earthquake.
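As a back-of-the-envelope check on that last point, the same energy scaling used above shows roughly how many South Napa-sized events it would take to match one magnitude 6.7 earthquake; this is an illustrative calculation, not part of the USGS analysis.

```python
# Energy equivalence using the approximation energy ~ 10^(1.5 * magnitude).

def equivalent_m60_events(target_magnitude):
    """Number of magnitude 6.0 earthquakes releasing as much energy as one larger event."""
    return 10 ** (1.5 * (target_magnitude - 6.0))

print(round(equivalent_m60_events(6.7), 1))  # ~11.2 -> roughly a dozen M6.0 quakes per M6.7
```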

Arthur’s Amazing Facts Are a Positive for the Insurance Industry

Two weeks ago, Hurricane Arthur made landfall along the North Carolina Outer Banks. Arthur was the strongest hurricane to make U.S. landfall since Hurricane Ike in 2008 and was just 13 mph shy of ending the U.S. major hurricane drought. However, the overall impact of Hurricane Arthur was diminished due to the strongest winds being on the right side of the storm as it crossed eastern North Carolina, as discussed in my previous blog post, resulting in less overall damage. While damage was reported, and up to six feet of storm surge was observed in parts of the Outer Banks, most damage seemed to be flood-related and will be picked up by the NFIP, resulting in a loss level that falls below PCS CAT designation guidelines. This is notable for several reasons.

When reviewing the extensive PCS records of both U.S. hurricane landfalls and hurricane losses, Hurricane Arthur is only the second Category 2 hurricane to make landfall and not receive a PCS designation. The only other storm for which this occurred was Hurricane Gerda, which made landfall in the extreme northeast portion of Maine in 1969, making the lack of designation understandable given the limited exposure across that region. However, according to CoreLogic, there are an estimated 23,215 residential properties in Kill Devil Hills and Morehead City, NC, where Arthur made landfall, with a total replacement cost of $4.7 billion. Based on Verisk Climate Respond weather data found in the BMS iVision Historical Events Library and using the unique PCS shapefile for Arthur, it is remarkable that a Category 2 hurricane producing three-second wind gusts over 70 mph in this area would not cause a PCS loss of at least $25 million, particularly since previous storms that have taken similar tracks have caused PCS-designated losses.

BMS iVision with Arthur’s track and estimated three-second wind gust swath.

Although each named storm has special attributes that may cause insured loss, the general characteristics that drive loss are similar. As the image below illustrates, five hurricanes between 1955 and 2012 tracked within 30 miles of Arthur’s path across North Carolina’s Outer Banks. These five storms all produced PCS losses, even though they were similar in strength to, or weaker than, Arthur at landfall.

Five historical storms that have tracked within 30 miles of Arthur’s track and caused PCS loss.

More significantly, when looking at the past named storms from 1955 to 2012, 35 have caused PCS losses in North Carolina, with many of the named storms making impact at or below a Category 2, and several storms tracking hundreds of miles away from North Carolina, such as Hurricane Sandy (2012), which tracked 273 miles east of the Outer Banks. Click here for a linked table to these storms, which can be reviewed using NOAA’s Historical Hurricane Tracks tool. The image below provides a view of four of the named storms that caused PCS losses in North Carolina.

Tracks of four of the 35 historical storms that have caused PCS losses in North Carolina, according to PCS data.

The examples above illustrate that North Carolina’s Outer Banks are no stranger to named storm activity, with the expected landfall return period for this area being five years, and major hurricane return period being 16 years, according to the National Hurricane Center. This has allowed the Outer Banks to better prepare for future named storm losses. The good news is that after years of storms, a Category 2 hurricane making U.S. landfall and having minimal impact demonstrates that insurance companies are becoming more risk-averse and policyholders are either constructing or reconstructing buildings at standards that reduce loss. One can only hope that future hurricanes making landfall along the U.S. coast will produce similar results.

PIAA Reserve and Profitability Update

In our fourth annual review of the Physician Insurers Association of America (“PIAA”) companies, BMS continues to look at:

1.   how the profitability of member companies has changed over the past decade;

2.   the profitability of their current business;

3.   the level of redundancies in loss and loss adjustment expense reserves that has been released in recent years; and

4.   how much redundancy remained as of December 31, 2013.

Click here to read the full article

The Right Side of a Storm

The insurance industry often focuses on media graphics that depict a storm’s path and the “cone of uncertainty,” but many of these graphics fail to explain the physical structure of a hurricane. The extent of hurricane damage doesn’t solely depend on the strength of the storm. It is also greatly influenced by the way the storm makes contact with land, and whether the left or right side of a hurricane strikes a given area.

The “right side of the storm” refers to the storm’s motion. For example, if the hurricane is moving to the west, the right side would be to the north of the storm; if the hurricane is moving to the north, the right side would be to the east of the storm. In the Northern Hemisphere, the strongest winds in a hurricane are generally found on the right side of the storm because the motion of the hurricane contributes to its swirling winds. Therefore, the right side of a hurricane packs more punch, since the wind speed and the hurricane’s speed of motion align. Conversely, on the left side, the hurricane’s speed of motion subtracts from the wind speed. The National Hurricane Center (“NHC”) forecasts take this asymmetry into account and often predict that the highest winds are generated on the right side of the storm.

The image above illustrates why the strongest winds in a hurricane are typically on the right side of the storm.
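A highly simplified sketch of this asymmetry is shown below: the observed wind at a point is roughly the storm’s rotational wind plus its forward speed on the right side of the track and minus it on the left. Real hurricane wind fields are far more complex, and the numbers here are hypothetical, but the sketch captures why the right side packs more punch.

```python
# Simplified left/right asymmetry of a moving hurricane in the Northern Hemisphere:
# forward motion adds to the rotational wind on the right of the track and subtracts on the left.

def observed_wind(rotational_wind_mph, forward_speed_mph, side):
    """Approximate surface wind on the 'right' or 'left' side of a moving hurricane."""
    if side == "right":
        return rotational_wind_mph + forward_speed_mph
    return rotational_wind_mph - forward_speed_mph

# Hypothetical storm: 90 mph rotational winds, moving forward at 20 mph
print(observed_wind(90, 20, "right"))  # 110 mph to the right of the track
print(observed_wind(90, 20, "left"))   #  70 mph to the left of the track
```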

Hurricane Arthur is now less than 12 hours from impacting the North Carolina coastline, with a forecasted intensity of a strong Category 2 storm. Knowing the exact track of Arthur is critical to predicting the expected damage. If Arthur follows a more easterly track and skirts North Carolina’s Outer Banks, as suggested by the Geophysical Fluid Dynamics Laboratory (“GFDL”) model and current NHC forecast, it would mean the strongest winds (i.e., the right side) would remain away from the Outer Banks and offshore. However, forecast adjustments have been increasingly trending to the west, and with most U.S. models favoring a landfall near Morehead City, NC, the worst possible conditions would hit the Outer Banks as the storm tracks up Pamlico Sound.

Above is a view from BMS iVision, which, using model guidance from Verisk Respond, currently puts the right side of the storm and the strongest winds directly over the Outer Banks. This real-time wind forecasting information within iVision will enable clients to view the effect of Hurricane Arthur’s wind swath on their policy base, thereby providing a better estimate of exposed locations and possible losses. This westward track also increases concern for storm surge. The islands of the Outer Banks flood very easily, and the latest forecast from the NHC suggests up to three feet of water over US-64, one of the two roads crossing the Outer Banks. However, Arthur’s forecasted approach along the North and South Carolina coastlines should limit the impact of a large storm surge.

While the Outer Banks are no stranger to hurricane-force winds, or even to storms named Arthur (which occurred in both 1996 and 2002), this storm is forecast to be one of the strongest to impact the area since Hurricane Emily in 1993. With a hurricane passing within 50 miles of the Outer Banks on average once every five years, property has generally been upgraded to withstand such storms. However, whether the strongest winds stay to the right of the current NHC track will determine the final outcome of damage and loss.

Tropical Update: Arthur

With a month of the Atlantic hurricane season in the books, one might think that the quiet start is unusual. Historically, however, the season-to-date typically accumulates an Accumulated Cyclone Energy (ACE) index value of only 1 by this point, based on the 1981 – 2010 climatology. Also, on average the first named storm does not form until the first week of July, with the first hurricane not showing up until mid-August. According to Roger Pielke Jr.’s normalized economic hurricane loss dataset, historically only 2% of hurricane damage occurs in July, with 95% occurring in August and September. In fact, with the development of the first named tropical storm of the 2014 Atlantic hurricane season (Arthur) off the southeast coast of the U.S., the 2014 season is matching climatology nicely, and by July 4 it should be ahead of climatology.

Earlier this spring, in our first look at the 2014 hurricane season, we mentioned that not all El Niño seasons are the same. Even if an El Niño develops, it does not mean that the Atlantic hurricane season will have limited impact. In that post we highlighted past El Niño seasons, such as 2004, that still had a high impact, and we further detailed the influence the warmer-than-normal Sea Surface Temperatures (SST) off the East Coast could have on the upcoming season. Arthur is currently centered over these warmer-than-normal SSTs and is expected to strengthen into the first hurricane of the 2014 season.

Above is the National Hurricane Center (NHC) official track and intensity forecast, as of 11 AM EDT, showing Arthur tracking along the southeast coast of the U.S. over waters of at least 26 degrees Celsius. This water temperature is warm enough to support hurricane development. According to the NHC, Arthur is expected to just bypass the Outer Banks of North Carolina as a Category 1 hurricane on Friday, July 4th.

Another factor that will aid in hurricane development is the natural curve of the southeast coastline. Historically, the curve of the coastline has helped similar storms develop in this area by providing a natural pressure/wind gradient that allows for counter-clockwise rotation. In 2004, Hurricane Alex battered the Outer Banks, strengthening over a 42-hour period from a minimal 35 kt tropical storm to an 85 kt hurricane as it tapped into the warm waters of the Gulf Stream. Hurricane Alex produced light damage in the Outer Banks, primarily from flooding and high winds: over 100 houses were damaged, and economic losses totaled approximately $7.5 million (2004 USD).

As Arthur develops, an approaching trough of low pressure moving into the central U.S. will provide an atmospheric pattern conducive to low pressure development on the southeast side of the trough; this will allow for further intensification later this week. However, this approaching trough will not only keep the upper Midwest and parts of the East Coast cool for the July 4th holiday weekend, it will also likely provide the steering flow to push Arthur offshore, resulting in minimal impact to insured property along the East Coast. This would be similar to the impact of Alex in 2004.

The greatest threat will be to the North Carolina Outer Banks on the 4th of July, as the storm tracks 50 – 100 miles to the east as a possible strong Category 1 hurricane. It has been 1 year, 10 months, and 1 day since the last hurricane hit the U.S. (Hurricane Isaac). And although Superstorm Sandy was officially downgraded miles off the NJ coastline, keep in mind that Sandy rapidly strengthened over the warm Gulf Stream, and Arthur has access to similar warm waters to spur it on. It is these warmer-than-normal SSTs that need to be watched all season.

2014 Atlantic Hurricane Forecasted Activity

The 2014 Atlantic hurricane season officially begins on June 1. A lot of preseason forecasts are hyping the importance a developing El Niño will have on the overall tropical activity in the Atlantic Basin, which should lead to less storm formation. However, a word of caution: there are plenty of examples of years with El Niños that had significant landfall activity across the U.S. Below is a list of the climate forcers that can influence named storm activity and how they will impact the 2014 season.

  • A weak to moderate El Niño is expected to develop, reducing named storm activity across the main development region in the Atlantic Basin.
  • A westerly to neutral Quasi-Biennial Oscillation will likely result in increased named storm development closer to the U.S. coastline, versus the development of Cape Verde-type storms.
  • Saharan dust can limit the overall development of named storms, but conditions across North Africa are not favorable for large Saharan dust outbreaks, so dust should not reduce named storm activity this year; this climate forcer can, however, change rapidly over the season.
  • Atlantic sea surface temperatures are warmer than the long-term average, but they are slightly below average relative to the current period of heightened sea surface temperatures that began in the mid-1990s. This will likely reduce activity in the main development region.
  • The sea surface temperatures are significantly above normal along the East Coast, which could increase development of named storms closer to the East Coast, increasing the threat of landfall.

The climate forcers above can provide an idea of overall hurricane season activity, but, truthfully, there is little skill in predicting the total number of named storms and where they might make landfall. The best way for the insurance industry to prepare is to carefully consider the risks and their potential impact. BMS’ new weather risk management module in iVision can help carriers better understand their risk and manage portfolio accumulation in areas prone to hurricanes. iVision also has tools to track forecasted hurricanes, including detailed hurricane wind fields, which can help carriers understand the range of potential loss outcomes from landfalling hurricanes. Learn more about the Hurricane Risk Management Module.

2014 Atlantic Hurricane Season and an El Niño

When the 2014 hurricane season officially starts on June 1, it will have been 3,142 days since the last Category 3 hurricane made landfall along the U.S. coastline (Hurricane Wilma, 2005). This shatters the old record for the longest stretch between intense U.S. hurricane landfalls since 1900. In fact, landfalls in general have been down since 2005, with a rate of 0.75 landfalls per year since 2006, versus the rate of 1.78 per year experienced since the Atlantic Multidecadal Oscillation entered its warm phase in 1995.
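The drought figure is easy to reproduce: Hurricane Wilma’s final U.S. landfall in Florida was October 24, 2005, so a quick date calculation (sketched below) gives the day count quoted above.

```python
# Reproducing the major-hurricane drought length quoted above.
from datetime import date

wilma_us_landfall = date(2005, 10, 24)   # Hurricane Wilma's Florida landfall
season_start_2014 = date(2014, 6, 1)     # official start of the 2014 season

print((season_start_2014 - wilma_us_landfall).days)  # 3142 days without a Cat 3+ U.S. landfall
```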

Although Superstorm Sandy is still fresh in the minds of many insurers in the Northeast, insurers in hurricane-prone states could become complacent due to the lack of storms since 2005. The “doom and gloom” forecasts for the 2013 hurricane season failed to materialize, and early predictions for 2014 have already hinted at below-normal named storm activity, contributing to such complacency. These Atlantic hurricane forecasts call for hostile conditions across the deep tropics due to the development of an El Niño, which brings increased wind shear across the Main Development Region (MDR) of the Atlantic and could lead to less overall named storm formation.

There is a lot of chatter about the possible development of a “super El Niño” similar to the one that occurred in 1997–1998. This type of event would drastically limit overall hurricane development. However, the Pacific Ocean is in the cold phase of the Pacific Decadal Oscillation (PDO), a state that often makes it difficult to sustain strong, long-lived El Niño events. Instead, the PDO suggests a short-lived El Niño, though the specific manifestations of any given El Niño event depend greatly on its strength. Every El Niño event is different, but overall the phenomenon has become associated with the following:

* An uptick in the average global temperature

* Increased rainfall in Peru

* Drought in Australia

* Warmer than average temperatures in Alaska

* Elevated rainfall in California during moderate and strong events

* Dry weather in the Pacific Northwest states

* Increased snowfall in the Mid-Atlantic, especially for moderate El Niño events

* Cooler and wetter than average conditions in the Southeast U.S.

* Increased hurricane activity in the eastern tropical Pacific basin

* Depressed hurricane activity in the tropical Atlantic

While El Niño years generally have lower instances of named storms that make landfall, there are plenty of examples of El Niño-influenced hurricane seasons that have impacted the U.S. coast. Below is a look at such years, as well as the number of storms that made landfall and the adjusted insured loss in 2014 dollars.

Year    # of Landfalling Storms    Adjusted 2014 Insurance Loss
1957    2                          $1,489,000,000
1965    2                          $11,177,500,000
1969    1 (Camille)                $8,250,000,000
1976    5                          $300,000,000
1991    1 (Bob)                    $1,730,000,000
1992    1 (Andrew)                 $28,005,000,000
2002    6                          $902,050,000
2004    6                          $28,387,500,000

As we learned last year, seasonal forecasting has its challenges. Currently, there is a 75% chance of an El Niño developing this summer during the peak of the Atlantic hurricane season. However, in 2012, when an El Niño watch was issued, an El Niño never formed. In fact, since 1997 there have been five threats of a super El Niño that never developed. Therefore, taking into account the uncertainty in any seasonal climate forecast and the history shown in the chart above, there can be an increased threat from tropical storms even in El Niño years. Seasonal forecasts for 2014 might also focus on other regional climate forcers. One of these is the warmer-than-normal Sea Surface Temperatures (SST) off the Eastern Seaboard of the U.S., which not only added fuel to storms like Superstorm Sandy but could also deepen the pressure of any tropical disturbance that taps into this fuel source later this summer. This warmer water also likely means that storms could develop closer to the U.S. coastline.

The new seasonal hurricane forecasts, which will roll out around June 1, tend to have increased accuracy as compared to the spring projections. These forecasts will continue to reflect the evolution of the El Niño, which can be followed on the Climate Prediction Center’s website (El Niño/La Niña Current Conditions and Expert Discussions). BMS will also provide updates throughout the season, but expect new seasonal forecasts to call for named storm formation to be below normal for the 2014 Atlantic hurricane season.

Peak of Thunderstorm Season Approaching

Although we are approaching the start of May, which is the peak month for thunderstorm development, the 2014 thunderstorm season is off to a historically slow start. One advantage of this inactivity is that the insurance industry is benefiting from thunderstorm losses at levels not seen since 2004. In fact, the insurance industry has reported only $780 million of wind and thunderstorm event losses over three events (with two events yet to be estimated), according to Property Claim Services (PCS). This is far below the $4.6 billion in wind and thunderstorm event losses that have occurred on average over the last 10 years.

Not including the tornadoes that have occurred over the last few days, designated as PCS Event #40, the Storm Prediction Center has recorded 109 tornadoes as of April 24 for the 2014 calendar year, which, according to BMS’ in-house tornado database built from Storm Prediction Center data, makes this the slowest start to a tornado season in the 62 years of recorded data. Although the recent outbreak of tornadoes will add to the tornado count, the official count will still be in record-low territory. Harold Brooks at the National Severe Storms Laboratory, who has examined nearly 100 years of past tornado records, states that he is “challenged” to find a year that started with less tornado activity than 2014. Of the tornadoes reported this year, only 20 have been rated EF1 or higher, with the first EF3 or higher rated tornadoes only recently being recorded in this latest outbreak. This breaks a streak of 159 days, which currently stands as the fourth-longest streak on record between major tornadoes.

Despite the massive tornado that carved a swath of damage across Moore, OK, during the 2013 tornado season, overall tornado statistics show that the U.S. has been in a tornado drought since the second half of 2012, with a record-low number of tornadoes in 2013. Part of the explanation for the drought in intense tornadoes since October 2013 is the persistent dip in the jet stream over the eastern half of the nation. This has opened the floodgates for arctic air, essentially shutting down the instability needed to develop explosive thunderstorms, which are often fueled by heat and moisture from the Gulf of Mexico.

The long-term forecast suggests much of the same cold will continue across the North Central Plains into the East Coast through the start of May, which should help keep a lid on thunderstorm development. But an extremely quiet start to the tornado season guarantees nothing about its future course, since May and June, which average 116 and 60 tornadoes, respectively (based on records from 2003 – 2013 of EF1-rated tornadoes or greater), are usually the two busiest tornado months of the year in the U.S. Despite the historically slow start, when looking at the tornado data recorded since 1953, 37 of the 62 years, or 59%, have started with below-average tornado counts of EF1 or greater. Of those 37 years that started below average, 6 years, or 16%, ended up having an above-average tornado season. The most recent years with slow starts but above-average tornado activity are 2010 and 2004, which resulted in $12.7 billion and $3.5 billion, respectively, in wind and thunderstorm event losses, according to PCS. As we saw with the recent PCS #40 declaration, there will still be tornado outbreaks that cause billions of dollars in damage, but a major year like 2008 or 2011 can almost be ruled out, and this recent trend should make one rethink the claims of a “new normal” made back in 2011.
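For reference, here is a small sketch of the conditional frequencies quoted above, using the counts cited in the post (the post rounds the percentages down slightly).

```python
# Frequencies of slow-starting tornado seasons, using the counts cited in the post.

years_on_record = 62       # years of tornado records since 1953
slow_start_years = 37      # years that began with below-average EF1+ tornado counts
slow_start_but_active = 6  # of those, years that still ended above average

print(f"{slow_start_years / years_on_record:.1%}")        # 59.7% of years start slow
print(f"{slow_start_but_active / slow_start_years:.1%}")  # 16.2% of slow starts end above average
```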