
Dec 4, 2015 7:45 EST

Reserving And Capital Setting: Sizing The Problem, Part II: Quantifying Emerging Risks

iCrowdNewswire - Dec 4, 2015
December 3rd, 2015

Once the risks have been identified and ranked, the next step is to quantify their likely impact on the financial results of the firm. The first and most obvious question is what quantification techniques are available for each risk on the list. This will depend on the availability of relevant data and commercially produced models.

Where claims or external market data are available, the timespan of the datasets is likely to be limited. In this case standard actuarial reserving techniques can exacerbate the problem, as there may be no observed tail from which to construct a full chain ladder-type exercise. Looking carefully for any calendar year trends in the frequency or severity of claims will pay dividends, as will talking to claims professionals about the likely uncertainty surrounding individual case estimates and obtaining their views on duration to settlement.
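To make the chain ladder point concrete, here is a minimal sketch of the technique on an invented four-year cumulative claims triangle. The figures, the volume-weighted development factors, and in particular the judgmentally selected tail factor are all hypothetical; the point is that with a short dataset the tail beyond the last observed column must be assumed rather than estimated.

```python
# Illustrative chain-ladder sketch on a hypothetical 4-year cumulative
# claims triangle (amounts in $000s); all figures are invented.
# Rows = accident years, columns = development years; None = future.
triangle = [
    [1000, 1800, 2100, 2200],
    [1100, 2000, 2350, None],
    [ 950, 1700, None, None],
    [1200, None, None, None],
]

n = len(triangle)

# Volume-weighted development factors between adjacent columns.
factors = []
for d in range(n - 1):
    num = sum(row[d + 1] for row in triangle if row[d + 1] is not None)
    den = sum(row[d] for row in triangle if row[d + 1] is not None)
    factors.append(num / den)

# With a short dataset there is no observed development beyond the last
# column, so a tail factor must be judgmentally selected -- assumed here.
tail_factor = 1.05

# Project each accident year to ultimate.
ultimates = []
for row in triangle:
    last = max(d for d in range(n) if row[d] is not None)
    ult = row[last]
    for d in range(last, n - 1):
        ult *= factors[d]
    ult *= tail_factor
    ultimates.append(ult)

print([round(u) for u in ultimates])
```

The least-developed accident year is projected through every factor plus the tail, so an error in the assumed tail factor compounds most where the data is thinnest.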

For some risks, models may be commercially available. So what do these models typically offer? Well, in short, they might provide the sort of information expected from a catastrophe-type model, such as loss amounts with associated return periods and, in some cases, a view on the correlation between lines of business. This helps with best estimate reserving, by using the Average Annual Loss (AAL) as an initial loss pick, and with capital modeling, by using a statistical benchmark such as the 1-in-200 net of reinsurance Value-at-Risk (VaR) amount. What they do not provide in most cases is an estimate of the likely emergence pattern of that risk. This is important for casualty lines, and so it is currently a large missing piece of what is a complex jigsaw puzzle.
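As a sketch of how the two statistics mentioned above might be read off a vendor-style year-loss table, the following simulates a hypothetical annual loss distribution (the frequency and severity parameters are invented, not from any real model) and takes the mean for the AAL and the 99.5th percentile for the 1-in-200 VaR:

```python
import random

# Hypothetical year-loss table: simulated annual aggregate losses ($m)
# for one emerging-risk peril. A loss year occurs with probability 0.3;
# given a loss, severity is exponential with mean 5 -- invented numbers.
random.seed(42)
annual_losses = [random.expovariate(1 / 5.0) if random.random() < 0.3 else 0.0
                 for _ in range(100_000)]

# Average Annual Loss (AAL): the mean annual loss, usable as an
# initial best-estimate loss pick.
aal = sum(annual_losses) / len(annual_losses)

# 1-in-200 VaR: the 99.5th percentile of the annual loss distribution,
# a common capital benchmark (gross here; reinsurance would be applied
# before taking the percentile to obtain a net figure).
losses_sorted = sorted(annual_losses)
var_1_in_200 = losses_sorted[int(0.995 * len(losses_sorted))]

print(f"AAL: {aal:.2f}m, 1-in-200 VaR: {var_1_in_200:.2f}m")
```

Note that neither statistic says anything about *when* the losses emerge, which is exactly the missing dimension discussed above.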

As helpful as these models are, care needs to be taken in using them just “off the shelf.” They are all fairly new in their construction, and so one must ask:

  • What is the data source?
  • How has the data source been used in model construction?
  • What are the key model assumptions?
  • Is expert judgment used in the model methodology or parameterization?
  • How are dependencies introduced and parameterized?
  • Is my data good enough to produce reliable results?
  • How sensitive is model output to the data input and changes in the core assumptions?
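The last question is worth making concrete. As a toy illustration of why tail outputs deserve particular attention, the sketch below shocks a core assumption (the sigma of a hypothetical lognormal loss model; the parameters are invented) and measures how much a 1-in-200 estimate moves:

```python
import math

Z_995 = 2.5758  # standard normal 99.5th percentile

def var_1_in_200(mu, sigma):
    # 1-in-200 loss under a lognormal model: exp(mu + z * sigma).
    return math.exp(mu + Z_995 * sigma)

base = var_1_in_200(mu=2.0, sigma=1.0)

# Shock the sigma assumption by +/-10% and compare the tail output.
for shock in (-0.10, 0.10):
    shocked = var_1_in_200(2.0, 1.0 * (1 + shock))
    print(f"sigma {shock:+.0%} -> 1-in-200 output moves {shocked / base - 1:+.1%}")
```

Because the tail quantile is exponential in sigma, a 10 percent shock to the assumption moves the 1-in-200 output by roughly 25 to 30 percent, which is why sensitivity testing of core assumptions belongs in any validation exercise.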

Fortunately, these should all be familiar questions, as they need to be addressed whenever an external model is used. However, as these models are in their relative infancy, much more care is needed in this validation exercise. It is also likely that new data, and therefore new model versions, will quickly follow the first models as new information emerges. This means results can be volatile as models evolve, so a robust model change policy will be required.

Data quality and availability should also be examined in depth. Because the risks are new, the data may not be captured correctly to power the model, which will lead to further uncertainty and may even preclude the use of a model altogether.

There will be risks on the list for which there is no data and no available model. This absence of information does not mean the industry can simply ignore these risks, particularly if they are highly ranked in terms of materiality. Companies do not need to resort to “finger in the air” estimates when they can leverage expert judgment. The use of expert judgment has become much more robust in recent years with the advent of Structured Expert Judgment (SEJ). In fact, SEJ has been used to quantify many hard-to-analyze risks, such as estimating the probability of volcanic eruptions and of cyber aggregations. The process involves taking a panel of experts through a series of interviews to elicit loss scenarios and estimates of loss quantum and likelihood. These results are used as data points to create a pseudo probable maximum loss (PML) curve, which will have a mean estimate for each scenario and also provide a range of estimates that can inform the uncertainty around them.
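A minimal sketch of turning such elicited scenarios into a pseudo PML curve follows. The scenario names, loss amounts, and probabilities are entirely invented, and the exceedance probability at each loss level is approximated by summing the elicited probabilities of all scenarios at or above it, a deliberate simplification of a real SEJ aggregation:

```python
# Hypothetical output of a structured expert judgment exercise: each
# scenario carries a panel-mean loss estimate ($m) and an elicited
# annual probability. All figures are invented for illustration.
scenarios = [
    {"name": "single-line event",      "loss": 50.0,  "prob": 0.020},
    {"name": "multi-line aggregation", "loss": 200.0, "prob": 0.005},
    {"name": "systemic conflagration", "loss": 800.0, "prob": 0.001},
]

# Build a pseudo PML (loss-exceedance) curve: for each loss level, the
# annual probability that at least that loss occurs, approximated by
# summing probabilities of all scenarios at or above that level.
curve = []
for s in sorted(scenarios, key=lambda s: s["loss"]):
    exceed = sum(t["prob"] for t in scenarios if t["loss"] >= s["loss"])
    curve.append((s["loss"], exceed, 1 / exceed))  # (loss, prob, return period)

for loss, prob, rp in curve:
    print(f"loss >= {loss:6.0f}m  p = {prob:.3f}  return period ~ {rp:.0f} yrs")
```

Each elicited scenario becomes one point on the curve, and the spread of panel estimates around each point can be carried along to inform the uncertainty ranges mentioned above.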

When it comes to casualty catastrophes, or indeed any emerging risk that can be systemic in its effects, it is crucial to consider correlations and dependencies. This is another area where past data is of limited use. While events from emerging risks are scarce in past data for individual lines of business, instances of a conflagration across lines of business in terms of liability, with a simultaneous impact on assets, are virtually non-existent. Whatever dependency structure is assumed within an external model or the internal capital model (whether copula-based, using correlation matrices, or a risk driver approach), it should be capable of being stressed to reflect that the future could be more unusual than the past.
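As one way such a stress might look in practice, the sketch below aggregates two hypothetical lognormal lines under a Gaussian copula and compares the 1-in-200 combined loss at a baseline correlation against a stressed correlation. The distributions, parameters, and correlation levels are all invented; the Gaussian copula is used only as the simplest concrete example of a stressable dependency structure:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims = 200_000

def simulate_tail_var(rho, q=0.995):
    """Aggregate two hypothetical lognormal lines under a Gaussian
    copula with correlation rho; return the q-quantile of the sum."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n_sims)
    # Map the correlated normals to two invented loss distributions.
    line_a = np.exp(1.0 + 0.8 * z[:, 0])
    line_b = np.exp(0.5 + 1.2 * z[:, 1])
    return np.quantile(line_a + line_b, q)

base = simulate_tail_var(rho=0.25)      # baseline dependency assumption
stressed = simulate_tail_var(rho=0.75)  # stressed: tails move together
print(f"1-in-200 combined loss: base {base:.1f}, stressed {stressed:.1f}")
```

Rerunning the capital calculation under the stressed dependency shows how much of the diversification benefit evaporates when lines of business conflagrate, which is precisely the scenario absent from past data.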

The key issue in modeling is the timescale over which we realize that the risk is manifesting itself, and how this view changes until an ultimate understanding of the loss quantum is reached and all liabilities are discharged. This is the missing dimension from most models, but why does it matter? Well, a natural catastrophe usually happens quickly, can be estimated fast and is settled swiftly. An emerging or latent risk can lurk in the balance sheet undiscovered for a long time, and even when discovered it can take even longer to comprehend the full extent of the loss. The best historical example is the liability from exposure to asbestos: the reserves have crept up and up, have been impacted by various judicial decisions, and in many cases and jurisdictions are still not fully concluded. It might be presumed that as long as there is an estimate of the potential overall quantum of loss, and capital is held at inception to back that ultimate liability, it should not really matter whether the loss is realized tomorrow or 20 years in the future. But that would be wrong, as we will explain.

Contact Information:

GCCapitalIdeas.com
