Assessing the Quality of Risk Measures

In this chapter, we discuss the ways in which errors can be introduced into models. We explain how model risk arises in the implementation of VaR models and in the mapping of risk factors to portfolio positions.

The chapter also unravels the causes of the failure of the long equity tranche, short mezzanine credit trade of 2005, and describes how such errors can be avoided. Finally, we explore the major flaws in the modeling assumptions that led to the underestimation of systematic risk for residential mortgage-backed securities.

Model Risk

Model risk arises when the actual distribution does not conform to the assumptions of the model. A good example of model risk is a VaR implementation that relies on an assumption of normality and fails to recognize the model's deviation from reality.

In trading and investing, models are applied not only to risk measurement but also to other parts of the process. Model risk is best described as the likelihood of making incorrect trading or risk management decisions because of errors in a model or in its application.
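
As a minimal illustration of the normality example above, the following Python sketch (all numbers simulated and hypothetical) backtests a VaR computed under an assumption of normality against fat-tailed returns; the observed exceedance rate runs above the 1% the model promises:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
# Hypothetical "true" world: fat-tailed daily returns (Student's t, 3 df).
returns = rng.standard_t(df=3, size=10_000) * 0.01

# 99% VaR computed under a mis-specified assumption of normality.
var_99 = -norm.ppf(0.01) * returns.std(ddof=1)

# Backtest: losses should exceed a correct 99% VaR about 1% of the time.
exceedance_rate = (returns < -var_99).mean()
print(f"expected exceedance rate: 1.00%, observed: {exceedance_rate:.2%}")
```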

There are various ways in which errors can be introduced into models, the most straightforward being bugs in the programming of a model's algorithm. Such programming errors can have very heavy consequences, as was the case for Moody's.

Software can also be applied in a way that is inconsistent with the model it is meant to implement.

The mapping of positions to risk factors is one of the most common sources of such inconsistency.

Valuation Risk

The process of valuing or hedging securities is exposed to model errors. Valuation errors can result in losses being hidden from the company or its stakeholders. Hedging errors can leave a portfolio exposed to more risk factors than the portfolio manager realizes.

Valuation errors resulting from inaccurate data are an example of a risk that is both a market risk and an operational risk. In principle, model errors can be avoided by using market prices rather than model prices, thereby reducing valuation risk.

Marking-to-market, however, can be problematic in practice: some positions are not easily marked to market because they are traded infrequently, so they must instead be marked to model.

Variability of VaR Estimates

A wide range of practical challenges affects VaR. Computer systems are crucial in risk management because they automate the combining of data, carry out the calculations, and generate reports. Data preparation, however, is a recurring problem in the implementation of risk-measurement systems, and the following three types of data are involved:

  1. Market data: time series data on asset prices, used to forecast the distribution of future portfolio returns.
  2. Security master data: descriptive data on securities, such as maturity dates and currency.
  3. Position data: data on the firm's holdings, which may be collected from many trading systems across various geographical locations within the company and must be verified to match books and records.

Software must correctly match the data and present it to a calculation engine in order to compute a risk measure. The computation engine incorporates all the applied formulas and calculation procedures, and the results are conveyed to a reporting layer to be read by managers. The issues to focus on are the appropriate application of the data and the variability of the resulting measures.

A risk manager must exercise considerable discretion when calculating VaR. Different VaR calculation methods and user-defined parameters can be mixed and matched, so the risk manager has the flexibility to adapt the VaR computation to suit the company's needs, investors' needs, or the nature of the portfolio.

However, the following challenges may arise:

  1. Lack of uniformity in the confidence level and time horizon; and
  2. Different results can be obtained by applying different methods to compute VaR, even with standardized confidence levels and time horizons. VaR results are greatly affected by the following calculation and modeling decisions:
    • The length of the time series used for historical simulation or for estimating moments
    • The techniques used for estimating moments
    • The choice of risk factors and mapping techniques
    • The decay factor, when using EWMA
    • The number of simulations and the randomization technique, in Monte Carlo simulation

When these parameters are varied, the resulting changes in VaR can be dramatic, as the sketch below illustrates. VaR estimates published by some large banks are generally generated for regulatory purposes and accompanied by backtesting results.
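
A minimal Python sketch of this variability, using a simulated, hypothetical return series: it computes 99% VaR on the same data by historical simulation, parametrically with an equally weighted volatility estimate, and parametrically with an EWMA volatility estimate (the common 0.94 decay factor). The three numbers generally disagree.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
# Hypothetical daily return series, for illustration only.
returns = rng.standard_t(df=4, size=1_000) * 0.01
alpha = 0.99

# 1) Historical simulation: empirical quantile of observed returns.
var_hist = -np.quantile(returns, 1 - alpha)

# 2) Parametric (normal) VaR with an equally weighted volatility estimate.
var_normal = -norm.ppf(1 - alpha) * returns.std(ddof=1)

# 3) Parametric VaR with an EWMA volatility estimate (decay factor 0.94).
lam = 0.94
weights = (1 - lam) * lam ** np.arange(len(returns))[::-1]  # recent obs weighted most
ewma_variance = np.sum(weights * returns**2) / np.sum(weights)
var_ewma = -norm.ppf(1 - alpha) * np.sqrt(ewma_variance)

print(f"historical: {var_hist:.4f}  normal: {var_normal:.4f}  EWMA: {var_ewma:.4f}")
```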

Mapping Issues

VaR results can be affected by how risk factors are assigned, or mapped, to positions. Some mapping decisions are pragmatic choices among alternatives, each with its own pros and cons.

In some cases, it is difficult to obtain data that adequately addresses certain risk factors. Such challenges often merely reflect the genuine difficulty of hedging or expressing certain risk ideas. A convertible bond position, for example, can be mapped to a set of risk factors such as implied volatilities and credit spreads.

Such mappings are based on the theoretical price of the convertible bond. Sometimes, however, the theoretical and market prices of converts diverge dramatically. These divergences are liquidity risk events that are not easily captured by market data, which implies that risk can be drastically understated when the VaR is based solely on the replicating portfolio.

In some instances, a position and its hedge are mapped to the same risk factor or set of risk factors. The measured VaR will then be zero, even though significant basis risk remains. The risk modeling of securitization exposures is a good example of basis risk; the sketch below illustrates the effect.
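
The following hypothetical Python sketch illustrates the point: a bond and its hedge are both mapped to a single credit-spread factor, so the mapped VaR is exactly zero, while the true P&L still carries basis risk:

```python
import numpy as np

rng = np.random.default_rng(7)
n_days = 1_000

# The single mapped risk factor: daily credit spread changes (hypothetical).
spread_chg = rng.normal(0.0, 5.0, n_days)
# Basis: the hedge does not track the bond perfectly (hypothetical noise).
basis_chg = rng.normal(0.0, 2.0, n_days)

bond_pnl = -spread_chg              # long bond loses when spreads widen
hedge_pnl = spread_chg + basis_chg  # bought protection gains, plus basis noise

# Mapped VaR sees only the common factor, so the two legs cancel exactly.
mapped_pnl = -spread_chg + spread_chg  # identically zero
true_pnl = bond_pnl + hedge_pnl        # basis risk remains

alpha = 0.99
print("mapped 99% VaR:", -np.quantile(mapped_pnl, 1 - alpha))  # 0.0
print("true 99% VaR:  ", -np.quantile(true_pnl, 1 - alpha))    # > 0
```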

For some strategies, VaR can be misleading because of the distribution of returns and VaR's dependence on specific modeling choices; for other strategies, the outcomes are close to binary.

Case Study: The 2005 Credit Correlation Episode

The volatility episode in the credit markets in 2005 is a perfect case study of model risk stemming from the misinterpretation and misapplication of models. Traders hedged one dimension of risk as indicated by the model while neglecting other dimensions, which caused large losses for various traders.

Description of the Trade and its Motivation

A popular trade among various market participants was selling protection on the equity tranche and buying protection on the junior mezzanine tranche of the CDX.NA.IG. The trade was therefore long credit and credit-spread risk through the equity tranche, and short credit and credit-spread risk through the mezzanine.

The trade was designed to be default-risk-neutral at initiation by sizing the two legs so that their credit spread sensitivities were equal.

Although the trade was market-risk oriented, its motivation was not to profit from a view on credit spreads. It was instead designed to achieve a positively convex payoff profile, so that credit spread volatility would benefit the portfolio of the two positions.

Using the tools developed previously, we can now better understand the trade and its risks. A trade set up in tranches of the illustrative CLO would be similar, in structure and motivation, to the standard tranche trade described previously.

One can take a long credit risk position in the equity tranche and an offsetting short credit position in the mezzanine bond; protection on the mezzanine tranche could be bought through a CDS.

The standard tranches are synthetic CDOs whose collateral pools consist of CDS. Because they are more liquid than most other structured products, traders can take both long and short positions in them.

The default sensitivities, the default 01s, are used to determine the hedge ratio. A change in the default rate in either direction is valuable to the trade, with the actual CDX trade benefitting from large changes in credit spreads.

In contrast to a typical option position, expressing this option through CDX standard tranches at the prevailing market prices paid a premium to its owner rather than having a negative net carry.

The mechanics of the actual standard tranche trade were slightly different. Because the securities were synthetic CDO liabilities, traders used spread sensitivities rather than actuarial default 01s.

In the actual trade, the hedge ratio was the ratio of the P&L impact of a 1bp widening of the CDX.NA.IG on the equity tranche to its impact on the junior mezzanine tranche.
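
A minimal sketch of the sizing, with hypothetical spread01 values (in practice these sensitivities would come from a tranche pricing model):

```python
# Hypothetical P&L impacts (per $1mm notional) of a 1bp widening of the index.
equity_spread01 = -4.5  # equity tranche, protection sold (long credit)
mezz_spread01 = -1.5    # junior mezzanine tranche

# Hedge ratio: ratio of the two P&L impacts.
hedge_ratio = equity_spread01 / mezz_spread01  # 3.0

# Size the legs so a small parallel spread move nets to (approximately) zero.
equity_notional = 10.0                         # $mm, protection sold
mezz_notional = hedge_ratio * equity_notional  # $mm, protection bought

net_pnl_1bp = equity_notional * equity_spread01 - mezz_notional * mezz_spread01
print(f"hedge ratio: {hedge_ratio:.1f}, net P&L of a 1bp widening: {net_pnl_1bp:.2f}")
```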

If one or more names defaulted, the recovery amount was at risk, since it was not fixed but was a random variable.

Credit Environment in Early 2005

The credit markets came under heavy pressure in the spring of 2005, as the three largest U.S.-domiciled original equipment manufacturers (OEMs), Ford, General Motors, and Chrysler, were in trouble. Investors were unnerved by the likelihood that their bonds would be downgraded to junk.

From a financial perspective, the OEMs' immediate priority was to obtain relief from the United Auto Workers (UAW) union from commitments to pay health benefits to retired workers.

Two events marked the beginning of the 2005 credit correlation episode: (1) the inability of GM and the UAW to reach an accord on benefits, and (2) GM's announcement of large losses. S&P downgraded GM and Ford to junk, with Moody's following suit soon after.

This was followed by a sharp rise in some corporate spreads, such as those of GMAC and FMCC. The markets began to price in the likelihood of several defaults within the IG3 and IG4 index series.

The pricing of the standard tranches experienced very large changes as the equity-mezzanine tranche trade was unwound in a panic.

The behavior of credit spreads and of the price of the standard equity tranche during the episode was as follows:

  1. The mark-to-market value of the equity tranche dropped sharply;
  2. The implied correlation of the equity tranche dropped sharply;
  3. The junior mezzanine tranche experienced a small widening, and at times some tightening, as market participants sold protection on the tranche to cover positions; and
  4. The relative value trade as a whole suffered large losses.

The implied correlation fell for the following two reasons:

  1. The distressed names were all in the IG4, so nearly 10% of the portfolio was near a state of default; and
  2. The widening of the IG4 itself was constrained by the hedging, thus driving the correlation down.

The short credit position through the equity tranche could be hedged by selling protection on a modest multiple of the mezzanine tranche.

Modeling Issues in the Setup of the Trade

The relative value trade was set up using the standard copula model analytics described earlier. The timing of individual defaults was modeled using a copula, and traders generally used a normal copula.

The correlation parameter might have been based on equity return correlations, prevailing implied correlations, or the relative frequencies of joint defaults.
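
As a minimal sketch of how a normal copula links single-name default probabilities into a joint default probability, assuming hypothetical default probabilities and a hypothetical correlation parameter:

```python
from scipy.stats import norm, multivariate_normal

# Hypothetical one-year default probabilities for two names.
p1, p2 = 0.02, 0.03
rho = 0.3  # hypothetical copula correlation

# Default thresholds on the latent standard normal variables.
z1, z2 = norm.ppf(p1), norm.ppf(p2)

# Joint default probability under the normal (Gaussian) copula.
joint = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]]).cdf([z1, z2])
print(f"joint default probability: {joint:.6f}")
# With rho = 0 this would be p1 * p2 = 0.0006; positive correlation raises it.
```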

Changes in the correlation drastically altered the hedge ratio between the equity and mezzanine tranches. The flaw could have been easily detected and corrected had the hedge ratio's dependence on the correlation been recognized.

Case Study: Subprime Default Models

The failure of models of subprime residential mortgage-backed securities and their risk was among the costliest episodes of model risk. Despite wide variations across the models, the following defects were widespread:

  1. The models generally assumed positive future house price appreciation; and
  2. The models assumed low correlation among regional housing markets (see the sketch after this list).
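
As a minimal sketch of the second defect, the following simulation (hypothetical numbers throughout) uses a one-factor model for 20 regional housing markets: raising the cross-regional correlation fattens the tail of the pool's loss distribution even though each region's stand-alone distress probability is unchanged.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n_regions, n_sims = 20, 100_000
p_distress = 0.05  # hypothetical probability of severe losses in any one region

def pool_tail_loss(rho):
    # One-factor model: regional outcomes share a common national factor.
    common = rng.standard_normal((n_sims, 1))
    idio = rng.standard_normal((n_sims, n_regions))
    latent = np.sqrt(rho) * common + np.sqrt(1.0 - rho) * idio
    distressed = latent < norm.ppf(p_distress)
    pool_loss = distressed.mean(axis=1)  # fraction of regions in distress
    return np.quantile(pool_loss, 0.99)  # 99th percentile pool loss

print("99th pct pool loss, rho = 0.05:", pool_tail_loss(0.05))
print("99th pct pool loss, rho = 0.60:", pool_tail_loss(0.60))
```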

In securitization and structured credit products, there were a number of instances of mapping problems leading to seriously misleading risk measurement results. Until recently, there was little time series data covering securitized credit products. During the subprime crisis, the ABX index (created by Markit to reference 20 subprime residential mortgage-backed securities) lost most of its value. The corporate-bond and ABX mappings were misleading and had underestimated potential losses.

Practice Questions

1) In the implementation of risk-measurement systems, data preparation has always been challenging, and the following three types of data are usually involved. Which data type is wrongly described?

  A. Market data: the data applied in forecasting the distribution of future portfolio returns.
  B. Security master data: the data used in the description of VaR and LVaR.
  C. Position data: there must be verification of this data for it to match books and records, and it may be collected from most trading systems across various geographical locations.
  D. None of the above.

The correct answer is B.

Security master data consists of descriptive data on securities, such as maturity dates, currency, and number of units.

