
BMS News

Magnitude 6.0 Is Not The Big One

A magnitude 6.0 earthquake is big, but not “The Big One.” This blog looks at some interesting aspects of the recent California earthquake, as well as general issues the insurance industry should consider as we await “The Big One.”

The strongest earthquake to strike San Francisco’s Bay Area in 25 years was recorded on Sunday morning. The U.S. Geological Survey (“USGS”) registered a magnitude 6.0 tremor at 3:20 a.m. local time, with an epicenter located 5 miles south-southwest of Napa, California, at a depth of 6.6 miles.

The insurance industry is just starting to grasp the complex nature of the Napa earthquake losses, but it is important to note that this may be the first California earthquake for which some of the newest geospatial technologies are available, allowing companies to immediately identify their exposed risks and produce damage estimates based on shaking intensity.

USGS ShakeMap within BMS iVision

Despite the shaking, damage, injuries, and fear, thankfully this earthquake wasn’t “The Big One.” But the South Napa earthquake provides a good example of how magnitude matters when analyzing an earthquake’s impact on the insurance industry. Earthquake magnitudes are on a logarithmic scale: each whole-number increase in magnitude reflects 10 times more ground motion and roughly 32 times more energy released. While a magnitude 6.0 and a magnitude 6.9 sound close enough to be lumped into the same category of earthquakes, their impacts are dramatically different. For example, the 1989 Loma Prieta earthquake (M6.9) released more than 22 times the energy of Sunday’s magnitude 6.0 event, illustrated below in a comparison of two ShakeMaps from the USGS.

ShakeMaps from Loma Prieta Earthquake vs. South Napa Earthquake.
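The magnitude arithmetic above can be checked directly: ground-motion amplitude scales as 10 per whole magnitude unit and radiated energy as roughly 32 (i.e., 10^1.5) per unit. A minimal sketch:

```python
def ground_motion_ratio(m1: float, m2: float) -> float:
    """Ratio of ground-motion amplitude between two magnitudes."""
    return 10 ** (m1 - m2)

def energy_ratio(m1: float, m2: float) -> float:
    """Ratio of radiated seismic energy between two magnitudes."""
    return 10 ** (1.5 * (m1 - m2))

# Loma Prieta (M6.9) vs. South Napa (M6.0):
print(round(ground_motion_ratio(6.9, 6.0), 1))  # ~7.9x the ground motion
print(round(energy_ratio(6.9, 6.0), 1))         # ~22.4x the energy
```

This is why a 0.9-magnitude gap translates to "more than 22 times" the energy, even though the numbers look close on paper.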

All the Potential Faults

It was only 20 years ago, yet we often forget that the 1994 Northridge earthquake occurred on a previously unknown fault system. Early reports suggest the South Napa earthquake could have come from an unknown fault as well, which emphasizes that the focus should not always be placed solely on the well-known San Andreas Fault.

According to the California State Geologic Survey map of the Bay Area, the main San Andreas Fault cuts through San Francisco and sections off Point Reyes. However, many other faults within the zone are prominent and active enough to earn names: the Hayward, Rodgers Creek, San Joaquin and Green Valley faults are the structural underpinnings of the long valleys characteristic of the region. What may be surprising, however, is that many of the small faults don’t have names at all, especially if they haven’t produced major damaging activity in the recent past. These faults should be considered by not only the modeling companies but also the insurance companies that write the risks. Further, the industry needs to keep in mind that the location of the epicenter is critical to determining expected damage and, so far, most of the major quakes in our lifetime have not been located under major population centers.

Sunday’s earthquake appears to have ruptured on or just west of mapped traces of the West Napa Fault, which has ruptured sometime in the past 11,000 years. The most seismically active areas have been between the longer Rodgers Creek Fault to the west and the Concord-Green Valley Fault to the east. It’s entirely possible that this earthquake occurred where the fault was covered by sediments, with recent movement we didn’t know about until now. It’s important to reiterate that many faults have been active in the past 2.6 million years; many more are inactive, and countless others are still unknown.

California State Geologic Survey Map of Faults.

Massive Flooding

Early reports suggest damage is localized in the region surrounding Napa due to the rupture directivity to the northwest. River valley sediments in Napa Valley likely contributed to the amplification of shaking around Napa. Major river systems in the area are another factor to consider when analyzing the potential consequences of California earthquakes. While it seems unfair that California is being hit with two disasters – the ongoing extreme drought and now a substantial earthquake – the overlap may actually be a good thing. As noted in other studies, if this earthquake had happened when water was more abundant, the aging levee system protecting islands within the Sacramento Delta would have been saturated and vulnerable to liquefaction during the earthquake. If those levees had failed, the resulting inundation would have drawn saltwater from the bay up into the delta system, potentially reaching the California State Water Project intakes. Considering the Delta supplies water to two-thirds of Californians and supports Central Valley agriculture, contaminating the water intake would have been disastrous.

That’s not the only relationship between the drought and earthquakes. Recently published research suggests that groundwater depletion in the San Joaquin Valley is linked to crustal flexing in the adjacent mountain ranges, potentially increasing seismicity in the region.

“The Big One”

As a weather forecaster, I sometimes get asked, “What is the latest state of earthquake forecasting?” This is a common question, particularly after an earthquake such as this one. There are various unknowns in determining whether this earthquake adds to or reduces the stress that will drive “The Big One.” The bottom line is that it is impossible to predict the exact timing of an earthquake.

About every six years, the USGS updates its hazard maps to incorporate the latest geoscience research. The new USGS hazard map reveals that 16 states are at high risk of damaging earthquakes over the next 50 years, and these states have all historically experienced earthquakes of magnitude 6.0 or higher. Some of the biggest changes have come in the Pacific Northwest and in California, where research has identified several areas capable of producing larger and more powerful earthquakes than previously believed. A 2008 USGS study determined that the probability of a magnitude 6.7 or larger earthquake occurring within the greater Bay Area in the next 30 years was 63%. When the impact of the South Napa earthquake is included in the next batch of geophysical models for the region, those probabilities are likely to stay the same. The earthquake released energy, but not enough to appreciably relieve tectonic stress within the region. It would take many more magnitude 6.0 earthquakes to release the same amount of energy as a single magnitude 6.7 earthquake.
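A 30-year probability like the USGS’s 63% figure can be translated into an implied annual rate under a simple constant-rate Poisson assumption. This is an illustration only, not the USGS methodology, which uses time-dependent fault models:

```python
import math

def annual_rate_from_window_prob(p_window: float, years: float) -> float:
    """Implied constant annual rate from P(at least one event in `years`) = p_window,
    assuming a homogeneous Poisson process: p = 1 - exp(-rate * years)."""
    return -math.log(1.0 - p_window) / years

lam = annual_rate_from_window_prob(0.63, 30)
print(f"Implied annual rate: {lam:.4f}")                 # ~0.0331 events/year
print(f"Implied annual probability: {1 - math.exp(-lam):.1%}")  # ~3.3%
```

Under this simplification, a 63% chance over 30 years corresponds to roughly a 3% chance in any given year, which is why one moderate quake barely moves the long-run numbers.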

Arthur’s Amazing Facts Are a Positive for the Insurance Industry

Two weeks ago, Hurricane Arthur made landfall along the North Carolina Outer Banks. Arthur was the strongest hurricane to make U.S. landfall since Hurricane Ike in 2008 and was just 13 mph shy of ending the U.S. major hurricane drought. However, as discussed in my previous blog post, the overall impact of Hurricane Arthur was diminished because the strongest winds stayed on the right side of the storm as it crossed eastern North Carolina, resulting in less overall damage. While damage was reported, and up to six feet of storm surge was observed in parts of the Outer Banks, most damage appeared to be flood-related and will be picked up by the NFIP, resulting in a loss level that falls below PCS CAT designation guidelines. This is notable for several reasons.

When reviewing the extensive PCS records of both U.S. hurricane landfalls and hurricane losses, Hurricane Arthur is only the second Category 2 hurricane to make landfall and not receive a PCS designation. The only other such storm was Hurricane Gerda, which made landfall in the extreme northeast portion of Maine in 1969; there, the lack of designation is understandable given the limited exposure across the region. However, according to CoreLogic, there are an estimated 23,215 residential properties in Kill Devil Hills and Morehead City, NC, where Arthur made landfall, with a total replacement cost of $4.7 billion. Based on Verisk Climate Respond weather data found in the BMS iVision Historical Events Library and the unique PCS shapefile for Arthur, it is remarkable that a Category 2 hurricane producing three-second wind gusts over 70 mph in this area would not cause a PCS loss of at least $25 million, particularly since previous storms on similar tracks have caused PCS-designated losses.

BMS iVision with Arthur’s track and estimated three-second wind gust swath.

Although each named storm has special attributes that may drive insured loss, the general loss-driving characteristics are similar. As the image below illustrates, five hurricanes between 1955 and 2012 tracked within 30 miles of Arthur’s path across North Carolina’s Outer Banks. All five produced PCS losses, even though they were of similar or weaker strength than Arthur at landfall.

Five historical storms that have tracked within 30 miles of Arthur’s track and caused PCS loss.

More significantly, of the past named storms from 1955 to 2012, 35 have caused PCS losses in North Carolina, with many making impact at or below Category 2 strength and several tracking hundreds of miles from the state, such as Hurricane Sandy (2012), which passed 273 miles east of the Outer Banks. Click here for a linked table of these storms, which can be reviewed using NOAA’s Historical Hurricane Tracks tool. The image below shows four of the named storms that caused PCS losses in North Carolina.

Tracks of four of the 35 historical storms that have caused PCS losses in North Carolina, according to PCS data.
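Track offsets like Sandy’s 273-mile distance from the Outer Banks come straight out of a great-circle (haversine) calculation on track coordinates. The sketch below uses placeholder coordinates, not the actual PCS or NOAA track data:

```python
import math

def haversine_miles(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in statute miles between two lat/lon points (degrees)."""
    r_miles = 3958.8  # mean Earth radius in miles
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r_miles * math.asin(math.sqrt(a))

# Hypothetical example: Cape Hatteras, NC vs. a point 5 degrees of longitude east.
print(round(haversine_miles(35.25, -75.53, 35.25, -70.53)))
```

Applying this between each 6-hourly track position and a coastal reference point is enough to reproduce "tracked within 30 miles" style screens of historical storms.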

The examples above illustrate that North Carolina’s Outer Banks are no stranger to named storm activity: according to the National Hurricane Center, the expected landfall return period for this area is five years, and the major hurricane return period is 16 years. This history has pushed the Outer Banks to prepare for future named storm losses. The good news is that a Category 2 hurricane making U.S. landfall with minimal insured impact suggests that insurance companies are managing risk more carefully and that policyholders are constructing or reconstructing buildings to standards that reduce loss. One can only hope that future hurricanes making landfall along the U.S. coast produce similar results.

PIAA Reserve and Profitability Update

In our fourth annual review of the Physician Insurers Association of America (“PIAA”) companies, BMS continues to look at:

1.   how the profitability of member companies has changed over the past decade;

2.   the profitability of their current business;

3.   the level of redundancies in loss and loss adjustment expense reserves that has been released in recent years; and

4.   how much redundancy remained as of December 31, 2013.

Click here to read the full article

The Right Side of a Storm

The insurance industry often focuses on media graphics that depict a storm’s path and the “cone of uncertainty,” but many of these graphics fail to explain the physical structure of a hurricane. The extent of hurricane damage doesn’t solely depend on the strength of the storm. It is also greatly influenced by the way the storm makes contact with land, and whether the left or right side of a hurricane strikes a given area.

The “right side of the storm” refers to the storm’s motion. For example, if the hurricane is moving to the west, the right side would be to the north of the storm; if the hurricane is moving to the north, the right side would be to the east of the storm. In the Northern Hemisphere, the strongest winds in a hurricane are generally found on the right side of the storm because the motion of the hurricane contributes to its swirling winds. Therefore, the right side of a hurricane packs more punch, since the wind speed and the hurricane’s speed of motion align. Conversely, on the left side, the hurricane’s speed of motion subtracts from the wind speed. The National Hurricane Center (“NHC”) forecasts take this asymmetry into account and often predict that the highest winds are generated on the right side of the storm.

The image above illustrates why the strongest winds in a hurricane are typically on the right side of the storm.
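The asymmetry described above can be sketched as a first-order approximation: the observed wind is roughly the storm’s rotational wind plus its forward speed on the right of its motion, minus it on the left. Real hurricane wind fields are far more complex than this simple addition:

```python
def observed_wind(rotational_mph: float, forward_mph: float, side: str) -> float:
    """First-order estimate: translation adds to rotation on the right of the
    storm's motion and subtracts on the left (Northern Hemisphere)."""
    if side == "right":
        return rotational_mph + forward_mph
    if side == "left":
        return rotational_mph - forward_mph
    raise ValueError("side must be 'right' or 'left'")

# A 90 mph circulation moving north at 15 mph:
print(observed_wind(90, 15, "right"))  # 105.0 mph on the east side
print(observed_wind(90, 15, "left"))   # 75.0 mph on the west side
```

A 30 mph spread between the two sides of the same storm is why landfall geometry matters as much as headline intensity.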

Hurricane Arthur is now less than 12 hours from impacting the North Carolina coastline, with a forecasted intensity of a strong Category 2 storm. Knowing the exact track of Arthur is critical to predicting the expected damage. If Arthur follows a more easterly track and skirts North Carolina’s Outer Banks, as suggested by the Geophysical Fluid Dynamics Laboratory (“GFDL”) model and the current NHC forecast, the strongest winds (i.e., the right side) would remain away from the Outer Banks and offshore. However, forecasts have been trending increasingly to the west, and with most U.S. models favoring a landfall near Morehead City, NC, the worst possible conditions could hit the Outer Banks as the storm tracks up Pamlico Sound.

Above is a view from BMS iVision, which, using model guidance from Verisk Respond, currently puts the right side of the storm and the strongest winds directly over the Outer Banks. This real-time wind forecasting information within iVision will enable clients to view the effect of Hurricane Arthur’s wind swath on their policy base, providing a better estimate of exposed locations and possible losses. The westward track also increases concern for storm surge. The islands of the Outer Banks flood very easily, and the latest NHC forecast suggests up to three feet of water over US-64, one of two roads crossing the Outer Banks. However, Arthur’s forecasted approach along the North and South Carolina coastlines should limit the impact of a large storm surge.

While the Outer Banks is no stranger to hurricane-force winds, or even storms named Arthur (which occurred in both 1996 and 2002), this storm is forecasted to be one of the strongest to impact the area since Hurricane Emily in 1993. With a hurricane passing within 50 miles of the Outer Banks on average every five years, property has generally been upgraded to withstand such storms. Ultimately, whether the strongest winds stay to the right of the current NHC track will determine the final outcome of damage and loss.

Tropical Update: Arthur

With a month of the Atlantic hurricane season in the books, one might think the quiet start is unusual. Historically, however, by this point in the season the Atlantic has typically accumulated an Accumulated Cyclone Energy (ACE) index value of only 1, based on the 1981–2010 climatology. On average, the first named storm does not form until the first week of July, and the first hurricane does not show up until mid-August. According to Roger Pielke Jr.’s normalized economic hurricane loss dataset, historically only 2% of hurricane damage occurs in July, with 95% occurring in August and September. In fact, with the development of the first named tropical storm of the 2014 Atlantic hurricane season (Arthur) off the southeast coast of the U.S., the 2014 season is matching climatology nicely, and by July 4 it should be ahead of it.
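For reference, ACE is computed from the 6-hourly best-track maximum sustained winds: the sum of the squared wind speed (in knots) for every advisory at tropical-storm strength or above, scaled by 10^-4. A sketch with made-up advisory winds:

```python
def ace(six_hourly_winds_kt) -> float:
    """Accumulated Cyclone Energy: sum of v^2 (knots) over 6-hourly advisories
    while the system is at tropical-storm strength (>= 34 kt), scaled by 1e-4."""
    return sum(v ** 2 for v in six_hourly_winds_kt if v >= 34) * 1e-4

# Hypothetical short-lived tropical storm (four 6-hour advisories):
print(round(ace([35, 45, 50, 45]), 2))  # 0.78
```

A single weak, short-lived storm contributes well under 1 unit of ACE, which is why a season-to-date value of 1 is normal this early in the year.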

Earlier this spring, in our first look at the 2014 hurricane season, we noted that not all El Niño seasons are the same: even if an El Niño develops, it does not mean the Atlantic hurricane season will have limited impact. In that post we highlighted past El Niño seasons with high impact, such as 2004, and detailed the influence the warmer-than-normal Sea Surface Temperatures (SST) off the East Coast could have on the upcoming season. Arthur is currently centered over these warmer-than-normal SSTs and is expected to strengthen into the first hurricane of the 2014 season.

Above is the National Hurricane Center (NHC) official track and intensity forecast, as of 11 a.m. EDT, showing Arthur tracking along the southeast coast of the U.S. over waters of at least 26 degrees Celsius, warm enough to support hurricane development. According to the NHC, Arthur is expected to just bypass the Outer Banks of North Carolina as a Category 1 hurricane on Friday, July 4.

Another factor that will aid in hurricane development is the natural curve of the southeast coastline. Historically, this curve has helped similar storms develop in the area by providing a natural pressure/wind gradient that supports counter-clockwise rotation. In 2004, Hurricane Alex battered the Outer Banks after strengthening over a 42-hour period from a minimal 35 kt tropical storm to an 85 kt hurricane as it tapped into the warm waters of the Gulf Stream. Alex produced light damage in the Outer Banks, primarily from flooding and high winds: over 100 houses were damaged, and economic losses totaled approximately $7.5 million (2004 USD).

As Arthur develops, a trough of low pressure moving into the central U.S. will provide an atmospheric pattern conducive to low-pressure development on the trough’s southeast side, allowing for further intensification later this week. This approaching trough will not only keep the upper Midwest and parts of the East Coast cool for the July 4th holiday weekend; it will likely provide the steering flow to push Arthur offshore, limiting the impact to insured property along the East Coast. This would be similar to the impact of Alex in 2004.

The greatest threat will be to the North Carolina Outer Banks on the 4th of July, as the storm tracks 50 – 100 miles east of the coast as a possible strong Category 1 hurricane. It has been 1 year, 10 months and 1 day since the last hurricane hit the U.S. (Hurricane Isaac). Even noting that Superstorm Sandy was officially downgraded miles off the NJ coastline, keep in mind that Sandy rapidly strengthened over the warm Gulf Stream, and Arthur has access to similarly warm waters to spur it on. It is these warmer-than-normal SSTs that need to be watched all season.

2014 Atlantic Hurricane Forecasted Activity

The 2014 Atlantic hurricane season officially begins on June 1. A lot of preseason forecasts are hyping the influence a developing El Niño will have on overall tropical activity in the Atlantic Basin, which should lead to less storm formation. However, a word of caution: there are plenty of El Niño years with significant landfall activity across the U.S. Below is a list of the climate forcers that can influence named storm activity and how they may impact the 2014 season.

  • A weak to moderate El Niño is expected to develop, reducing named storm activity across the main development region in the Atlantic Basin.
  • A westerly to neutral Quasi-Biennial Oscillation will likely result in increased named storm development closer to the U.S. coastline, versus the development of Cape Verde-type storms.
  • Saharan dust can limit the overall development of named storms, but conditions across North Africa are not favorable for large dust outbreaks, so dust should not reduce named storm activity this year. Note, however, that this climate forcer can change rapidly over the season.
  • Atlantic sea surface temperatures are warmer than the long-term average, but slightly below the average for the current period of heightened sea surface temperatures that began in the mid-1990s. This will likely reduce activity in the main development region.
  • The sea surface temperatures are significantly above normal along the East Coast, which could increase development of named storms closer to the East Coast, increasing the threat of landfall.

The climate forcers above can provide an idea of overall hurricane season activity, but, truthfully, there is little skill in predicting the total number of named storms and where they might make landfall. The best way for the insurance industry to prepare is to carefully consider the risks and their potential impact. BMS’ new weather risk management module in iVision can help carriers better understand their risk and manage portfolio accumulation in areas prone to hurricanes. iVision also has tools to track forecasted hurricanes, including detailed hurricane wind fields, which can help carriers understand the range of potential loss outcomes from landfalling hurricanes. Learn more about the Hurricane Risk Management Module.

2014 Atlantic Hurricane Season and an El Niño

When the 2014 hurricane season officially starts on June 1, it will have been 3,142 days since the last Category 3 hurricane made landfall along the U.S. coastline (Hurricane Wilma, 2005). This shatters the old record for the longest stretch between intense U.S. hurricane landfalls since 1900. In fact, landfalls in general have been down since 2005: 0.75 landfalls per year since 2006, versus the 1.78 per year experienced since the Atlantic Multidecadal Oscillation entered its warm phase in 1995.
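The 3,142-day figure checks out against Wilma’s Florida landfall date of October 24, 2005:

```python
from datetime import date

wilma_landfall = date(2005, 10, 24)  # Hurricane Wilma, last U.S. Cat 3+ landfall
season_start = date(2014, 6, 1)      # official start of the 2014 season

print((season_start - wilma_landfall).days)  # 3142
```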

Although Superstorm Sandy is still fresh in the minds of many insurers in the Northeast, insurers in hurricane-prone states could become complacent due to the lack of storms since 2005. The “doom and gloom” forecasts for the 2013 hurricane season failed to materialize, and early predictions for 2014 have already hinted at below-normal named storm activity, contributing to such complacency. These Atlantic hurricane forecasts call for hostile conditions across the deep tropics due to the development of an El Niño, which brings increased wind shear across the Main Development Region (MDR) of the Atlantic and could lead to less overall named storm formation.

There is a lot of chatter about the possible development of a “super El Niño” similar to that which occurred in 1997–1998. This type of event would drastically limit overall hurricane development. However, the Pacific Ocean is in an overall cold phase (the Pacific Decadal Oscillation (PDO)), a state which often makes it difficult to have strong, long-lived El Niño events. Instead, the PDO suggests a short-lived El Niño, but the specific manifestations of any given El Niño event greatly depend on its strength. Every El Niño event is different, but overall the phenomenon has become associated with the following:

* An uptick in the average global temperature

* Increased rainfall in Peru

* Drought in Australia

* Warmer than average temperatures in Alaska

* Elevated rainfall in California during moderate and strong events

* Dry weather in the Pacific Northwest states

* Increased snowfall in the Mid-Atlantic, especially for moderate El Niño events

* Cooler and wetter than average conditions in the Southeast U.S.

* Increased hurricane activity in the eastern tropical Pacific basin

* Depressed hurricane activity in the tropical Atlantic

While El Niño years generally have lower instances of named storms that make landfall, there are plenty of examples of El Niño-influenced hurricane seasons that have impacted the U.S. coast. Below is a look at such years, as well as the number of storms that made landfall and the adjusted insured loss in 2014 dollars.

Year   # of Landfalling Storms   Adjusted 2014 Insurance Loss
1957   2                         $1,489,000,000
1965   2                         $11,177,500,000
1969   1 (Camille)               $8,250,000,000
1976   5                         $300,000,000
1991   1 (Bob)                   $1,730,000,000
1992   1 (Andrew)                $28,005,000,000
2002   6                         $902,050,000
2004   6                         $28,387,500,000

As we learned last year, seasonal forecasting has its challenges. Currently, there is a 75% chance of an El Niño developing this summer during the peak of the Atlantic hurricane season. However, in 2012 an El Niño watch was issued and an El Niño never formed; in fact, since 1997 there have been five threats of a super El Niño that never developed. Taking into account the uncertainty in any seasonal climate forecast and the history shown in the chart above, there can be an elevated threat from tropical storms even in El Niño years. Seasonal forecasts for 2014 should also weigh other regional climate forcers, such as the warmer-than-normal Sea Surface Temperatures (SST) off the Eastern Seaboard of the U.S., which not only added fuel to storms like Superstorm Sandy but could also lead to rapid deepening if any tropical disturbances tap into this fuel source later this summer. This warmer water also means storms could develop closer to the U.S. coastline.

The new seasonal hurricane forecasts, which will roll out around June 1, tend to have increased accuracy as compared to the spring projections. These forecasts will continue to reflect the evolution of the El Niño, which can be followed on the Climate Prediction Center’s website (El Niño/La Niña Current Conditions and Expert Discussions). BMS will also provide updates throughout the season, but expect new seasonal forecasts to call for named storm formation to be below normal for the 2014 Atlantic hurricane season.

Peak of Thunderstorm Season Approaching

Although we are approaching May, the peak month for thunderstorm development, the 2014 thunderstorm season is off to a historically slow start. One advantage of this inactivity is that the insurance industry is enjoying its lowest thunderstorm losses since 2004. In fact, the industry has reported only $780 million of wind and thunderstorm event losses across three events (with two events yet to be estimated), according to Property Claim Services (PCS). This is far below the $4.6 billion in wind and thunderstorm event losses that have occurred on average over the last 10 years.

Not including the tornadoes of the last few days, designated PCS Event #40, the Storm Prediction Center had recorded 109 tornadoes for the 2014 calendar year as of April 24. According to BMS’ in-house tornado database, built from Storm Prediction Center records, this is the slowest start to a tornado season in the 62 years of recorded data. Although the recent outbreak will add to the tornado count, the official count will still be in record-low territory. Harold Brooks at the National Severe Storms Laboratory, who has examined nearly 100 years of past tornado records, says he is “challenged” to find a year that started with less tornado activity than 2014. Of the tornadoes reported this year, only 20 had been rated EF1 or higher, and the first tornadoes rated EF3 or higher were only recently recorded in this latest outbreak. That broke a streak of 159 days between major tornadoes, currently the fourth-longest such streak on record.

Despite the massive tornado that carved a swath of damage across Moore, OK during the 2013 tornado season, overall tornado statistics show that the U.S. has been in a tornado drought since the second half of 2012, with a record low number of tornadoes in 2013. Part of the explanation for the drought in intense tornadoes since October 2013 is the persistent dip in the jet stream over the eastern half of the nation. This pattern has opened the floodgates for arctic air, essentially shutting down the instability needed to develop explosive thunderstorms, which are often fueled by heat and moisture from the Gulf of Mexico.

The long-term forecast suggests much of the same cold will continue from the North Central Plains to the East Coast through the start of May, which should help keep a lid on thunderstorm development. But an extremely quiet start to the tornado season guarantees nothing about its future course: May and June, which average 116 and 60 EF1-or-greater tornadoes, respectively (based on records from 2003 – 2013), are usually the two busiest tornado months of the year in the U.S. Looking at tornado data recorded since 1953, 37 of the 62 years, or roughly 60%, started with below-average counts of EF1-or-greater tornadoes. Of those 37 slow-start years, 6, or 16%, ended up with an above-average tornado season. The most recent years with slow starts but above-average activity are 2010 and 2004, which produced $12.7 billion and $3.5 billion, respectively, in wind and thunderstorm event losses, according to PCS. As the recent PCS #40 declaration shows, there will still be tornado outbreaks that cause billions of dollars in damage, but a major year like 2011 or 2008 can almost be ruled out, and this recent trend should prompt a rethink of the “new normal” claims made back in 2011.
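The conditional statistics quoted above reduce to simple counts; a sketch using the figures from the text (the underlying season-by-season records live in BMS’ in-house database and are not reproduced here):

```python
years_total = 62         # seasons of record, 1953 onward
slow_starts = 37         # seasons starting with below-average EF1+ counts
above_avg_finishes = 6   # of those, seasons that finished above average

# Share of seasons that start slow, and share of slow starts that still
# finish above average:
print(f"{slow_starts / years_total:.1%} of seasons start slow")            # 59.7%
print(f"{above_avg_finishes / slow_starts:.1%} finish above average")      # 16.2%
```

The takeaway is the conditional nature of the claim: a slow start is common, and only about one in six slow starts has historically turned into an above-average season.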


BMS launches Severe Weather Analytics

BMS Group announces a new weather risk management module as part of its iVision™ suite of analytical tools and services.
The unique new analytical tools allow carriers to better understand their risk and manage portfolio accumulations in areas prone to tornadoes, hail, straight-line winds and hurricanes.
The new module introduces expanded weather analytics features that make it even easier for insurers to manage severe storm risk. These features include:

  • Live weather feeds from NOAA
  • Daily severe storm shape files featuring AER Respond weather data, highlighting tornado paths, active hail areas, hail size and density
  • Active and forecasted hurricane tracks including detailed hurricane wind-field shapes
  • Historical PCS event library with one-of-a-kind PCS cat event shape files, available exclusively from BMS

“iVision’s new analytical tools augment traditional cat modeling results by enabling users to modify and alter damage ratio and track assumptions for tangible, definable events, which allows them to arrive at a view of loss they can have confidence in,” says Julie Serakos, head of BMS’ Cat Analytics group.
These new weather analytics features facilitate the understanding of the loss potential in a portfolio (thereby stress-testing its vulnerability to loss) by allowing for custom damage ratios to be applied against storm attributes. Additionally, testing portfolio sensitivity to the hurricane track increases confidence in the range of potential loss outcomes for landfalling events.
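The idea of applying custom damage ratios against storm attributes can be sketched minimally. The wind-gust bands and portfolio records below are invented for illustration and are not iVision’s actual model:

```python
# Hypothetical wind-gust damage-ratio bands (fraction of total insured value).
DAMAGE_RATIO_BANDS = [
    (50, 0.000),           # gusts below 50 mph: no modeled damage
    (70, 0.005),
    (90, 0.020),
    (110, 0.080),
    (float("inf"), 0.250),
]

def damage_ratio(gust_mph: float) -> float:
    """Look up the damage ratio for a given three-second gust speed."""
    for upper, ratio in DAMAGE_RATIO_BANDS:
        if gust_mph < upper:
            return ratio
    return DAMAGE_RATIO_BANDS[-1][1]

def portfolio_loss(locations) -> float:
    """Sum TIV * damage_ratio(gust) over (tiv, gust_mph) location records."""
    return sum(tiv * damage_ratio(gust) for tiv, gust in locations)

# Three hypothetical locations inside a forecast wind swath:
print(portfolio_loss([(1_000_000, 75), (2_500_000, 95), (500_000, 45)]))  # 220000.0
```

Stress-testing then amounts to rerunning this sum with shifted tracks (changing each location’s gust) or altered damage-ratio bands, and comparing the resulting loss range.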

About iVision
BMS’ iVision is an easy-to-use catastrophe risk management system carriers can access online. Built on the latest GIS technologies, it helps today’s insurance companies increase efficiency and effectiveness in managing their catastrophic risk. iVision’s other analytical features include BMS’ proprietary ScenarioView™ for DIY event analysis, and RiskReveal™ location cat modeling (featuring AIR and RMS cat models) for underwriting. These features let carriers manage large loss exposures and ensure adequate premium before a policy is bound.

BMS enters into exclusive partnership with RTI International, one of the world’s leading research institutes

BMS Group recently entered into an exclusive partnership with RTI International, one of the world’s leading research institutes, to offer clients award-winning Enterprise Risk Management (ERM) services that have a proven track record in managing complex business operations.

The concept of ERM is nothing new. For decades, organizations have been using a variety of methods and processes to intelligently weigh and manage risks against opportunities. ERM provides a framework for this risk management – which typically involves identifying particular events or circumstances relevant to an organization’s objectives, assessing them in terms of likelihood and magnitude of impact, determining a response strategy and monitoring progress. By identifying and proactively addressing risks and opportunities, businesses protect their interests and create value for stakeholders.

But, as highlighted by Solvency II in Europe and ORSA in the U.S., in recent years ERM has evolved in response to the needs of an increasingly sophisticated global marketplace, with more varied and complex businesses and stakeholders who want to understand the broader spectrum of risks so they can be managed effectively. No longer limited to owners, customers and employees, these stakeholders have come to include regulators and rating agencies, which have also increased their scrutiny of risk management processes.

What does all this mean? ERM, which used to be considered a global business trend, has become a recognized best practice. And that’s where the BMS/RTI relationship comes into play.

“Close collaboration between BMS brokers and our analytical and technical professionals lets us give clients a superior level of analysis, modeling and strategic guidance,” says David Spiegler, Executive Vice President and Chief Actuary at BMS. “That talent, combined with sophisticated analytical tools, methods, and award-winning risk management processes as provided by RTI, means that we are able to take a truly holistic look at a client’s risk, which is what businesses need today.”

“The strength of a business and its reputation is based in large part on management’s ability to properly identify, assess and manage risks. A properly implemented ERM program will help deliver a better-performing organization by allowing us to identify and address risks before they become problems,” says Ward Sax, Vice President, Treasurer and Chief Risk Officer at RTI. “Our applied research in ERM has resulted in award-winning best practices we can share with clients.”

“ERM absolutely includes the capital modeling and analytics, but that’s just one part of the equation,” says Kurt Johnson, EVP of Analytical Services at BMS. He explains that beyond making sure a building and computers are safe, true ERM is about smart business planning and breaking down silos within an organization. “There’s a huge element of common sense to ERM, and we can help demystify it for clients,” Johnson adds. “You can’t eliminate chaos, but you can plan for it.”


Click here to visit the BMS Analytical Services ERM page



About RTI:

RTI International is one of the world’s leading research institutes, dedicated to improving the human condition by turning knowledge into practice. Our staff of more than 3,700 provides research and technical services to governments and businesses in more than 75 countries in the areas of health and pharmaceuticals, education and training, surveys and statistics, advanced technology, international development, economic and social policy, energy and the environment, and laboratory testing and chemical analysis. For more information, visit rti.org.