The objective of this chapter is to identify and explain the modeling-assumption errors through which model risk can be introduced, and to study how model risk arises in a model’s implementation. The chapter also explains the various methods and procedures risk managers can apply to mitigate model risk. Lastly, it examines how model risk and poor risk governance contributed to the 2012 London Whale trading losses and the 1998 collapse of Long-Term Capital Management.
To explain the significance of model risk, we will use the market risk example and evaluate:
- The extent of the challenge;
- The model error;
- The challenges faced in implementation;
- Model risk mitigation; and
- A detailed case history.
The Importance of Model Risk: The Market Risk Example
Model risk is, at times, relatively insignificant, as when dealing with simple instruments such as stocks and straight bonds, for which market prices are the best indicator of an asset’s value. For institutions that trade over-the-counter exotic derivative products or execute complex arbitrage strategies, however, model risk is a compelling issue.
When liquid markets and price-discovery mechanisms are absent, financial positions must be valued using theoretical models. Accounting boards and regulatory bodies currently accept this mark-to-model approach.
Models are also used to assess risk exposure and to derive an appropriate hedging strategy. Dependence on models is dangerous, however, as the early history of the derivatives markets shows. The 2007-2009 financial crisis made this clear when trading positions incurred severe losses on a totally unexpected scale, and it became necessary for financial institutions to evaluate and examine the model risks linked to their trading activities.
Complexity is part of the challenge. The theories supporting financial innovation have grown more complex and sophisticated since the Black-Scholes-Merton option pricing model was published, and the threat of model risk has risen in parallel.
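As a minimal sketch of that starting point, the Black-Scholes-Merton price of a European call fits in a few lines; the parameter values below are purely illustrative, and note that volatility (`sigma`) enters as a single constant — the simplification discussed later in this chapter:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF expressed via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bsm_call(S, K, T, r, sigma):
    # Black-Scholes-Merton price of a European call:
    # S spot, K strike, T years to expiry, r risk-free rate,
    # sigma the (assumed constant) volatility
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

price = bsm_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2)  # ~10.45
```

The models behind exotic OTC products elaborate on this formula with many more factors and assumptions, which is precisely where the complexity, and the model risk, accumulates.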
New financial products have streamed in constantly since 2004, and product innovation has outpaced our ability to price the new instruments or hedge their associated risks.
Technology is also to blame: as computers become ever more powerful, the temptation grows to create more complicated models that management can hardly understand. The available technology has substantially increased the chance of incurring losses as well as of generating profits.
How Widespread is Model Risk?
Model risk is widespread in a modern financial system. All kinds of trading companies can experience substantial losses, in calm as well as in stormy market environments.
Because models are used for valuation, a faulty model can make a strategy look very profitable on paper even while the bank incurs economic losses. Correcting this may take time, during which a significant amount of money goes unaccounted for.
The main causes of model risk are:
- Model error;
- Wrongful implementation of a model.
Derivatives trading relies heavily on mathematical models built from sophisticated equations and advanced mathematics, so mistakes in the analytical solution render a model inaccurate.
If wrong assumptions about the underlying asset price process are used as the basis of the model, the model is incorrect. This is perhaps the most common and dangerous risk.
The most frequently cited error in model building is assuming that the distribution of the underlying asset is constant when, in fact, it changes over time. Volatility is a striking case in point.
Ideally, volatility should be treated as a variable, and option pricing models should be built on that basis. Including any form of stochastic volatility, however, makes option valuation models considerably more difficult to compute.
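To see why a single constant sigma is a simplification, the sketch below estimates a rolling historical volatility across a synthetic regime shift; the return series and volatility levels are invented for illustration:

```python
import random
import statistics

random.seed(7)
# Synthetic daily returns with a volatility regime shift:
# a calm first half (~1% daily vol) and a turbulent second half (~3%).
returns = ([random.gauss(0.0, 0.01) for _ in range(250)] +
           [random.gauss(0.0, 0.03) for _ in range(250)])

def rolling_vol(rets, window=60, trading_days=252):
    # Annualized rolling standard deviation of daily returns
    return [statistics.stdev(rets[i - window:i]) * trading_days ** 0.5
            for i in range(window, len(rets) + 1)]

vols = rolling_vol(returns)
# The estimate roughly triples between the two regimes, so any single
# constant sigma misprices options in at least one of them.
```

A model calibrated to the calm regime would badly understate option values in the turbulent one, which is the practical cost of the constant-distribution assumption.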
Most of the time, derivative practitioners struggle to find the best compromise between complexity and simplicity.
Although most traders are aware that simplifying assumptions are made about the behavior of prices, assessing the effect of such simplifications is difficult for them and for risk managers.
When the number of risk factors taken into account by the model is underestimated, the model is oversimplified. For simple investment products, a one-factor term structure model (where the factor is the short-term spot rate) may suffice to produce accurate prices and hedge ratios. Complex products may require a two- or three-factor model.
The assumption used in deriving a model is almost always that capital markets are perfect, and this is problematic. In reality, most markets, particularly those in less developed countries, are hardly efficient. Furthermore, OTC derivative products are not publicly traded even in developed markets, so positions cannot be perfectly hedged.
Most derivative products rest on the assumption that a delta-neutral hedging strategy can be applied to the instruments in question. In practice, a delta-neutral hedge of an option against its underlying asset is hardly risk-free, so a very active rebalancing strategy is necessary if the position is to be kept delta-neutral over time.
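The rebalancing need can be made concrete with the BSM call delta, N(d1); the parameter values and the 5% move below are purely illustrative:

```python
from math import log, sqrt, erf

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call_delta(S, K, T, r, sigma):
    # BSM delta of a European call is N(d1)
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    return norm_cdf(d1)

delta_before = call_delta(S=100, K=100, T=0.5, r=0.02, sigma=0.25)
# After a 5% rally in the underlying, the option's delta has changed,
# so the previously "neutral" hedge is no longer neutral:
delta_after = call_delta(S=105, K=100, T=0.5, r=0.02, sigma=0.25)
rebalance = delta_after - delta_before  # extra shares per option to restore neutrality
```

Every such price move leaves residual exposure until the hedge is adjusted, which is why a static "delta-neutral" position is never truly risk-free.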
Another major source of model risk is the absence of liquidity. Models assume that the underlying assets can be traded long or short at current market prices and that prices will not change dramatically.
A mathematically correct model can be generally useful and yet misapplied in a specific situation. A good example is the assumption that forward rates are lognormal, relied upon by some term structure models that are widely applied to fixed-income instruments. The model performs relatively well in most developed markets, such as the U.S. and Europe, but not in Japan.
Similarly, models considered safe for certain product types might perform poorly when used to price other, more complex products. Many OTC products have embedded options that standard option pricing models ignore.
Implementing a Model Wrongly
Even when a model is correct and applied to an appropriate problem, the danger of implementing it wrongly is always present. For sophisticated models requiring extensive programming, the likelihood of a programming bug affecting the model’s output is ever-present.
In models requiring Monte Carlo simulation, large inaccuracies in prices and hedge ratios can creep in if too few simulation runs or time steps are implemented.
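The effect of the number of runs can be sketched directly: a Monte Carlo price of a European call under geometric Brownian motion, with its standard error. The figures below are illustrative, not a production pricer:

```python
import random
import statistics
from math import exp, sqrt

def mc_call(n_paths, S=100.0, K=100.0, T=1.0, r=0.05, sigma=0.2, seed=1):
    # Monte Carlo price of a European call under geometric Brownian motion,
    # returned together with its standard error
    rng = random.Random(seed)
    disc = exp(-r * T)
    payoffs = []
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        s_t = S * exp((r - 0.5 * sigma ** 2) * T + sigma * sqrt(T) * z)
        payoffs.append(max(s_t - K, 0.0))
    price = disc * statistics.mean(payoffs)
    stderr = disc * statistics.stdev(payoffs) / sqrt(n_paths)
    return price, stderr

p_few, se_few = mc_call(1_000)
p_many, se_many = mc_call(100_000)
# The standard error shrinks roughly as 1/sqrt(n_paths): with 100x the
# paths, the price estimate is about 10x more precise.
```

A hedge ratio computed by finite differences on such a noisy price is even more sensitive to the number of runs, which is why under-simulated models are a classic implementation error.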
Models that evaluate complex derivatives collect data from many differing sources. The implicit assumption is that, for each time period, the data for all relevant assets and rates pertain to exactly the same instant of time, i.e., simultaneous prices.
During implementation of a pricing model, researchers apply statistical tools to estimate model parameters such as volatility and correlation.
Estimation errors are present in all statistical estimators, and these estimates serve as inputs to the pricing model. The treatment of outliers, or extreme observations, poses a major challenge to the estimation procedure: depending on how such observations are treated, the estimation results can differ vastly.
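A small sketch shows how much a single extreme observation can move a volatility estimate; the return series below is invented for illustration:

```python
import statistics

# Illustrative daily returns ending in one extreme observation (a crash day)
returns = [0.004, -0.006, 0.002, 0.011, -0.003, 0.005,
           -0.008, 0.001, 0.007, -0.090]

vol_with_outlier = statistics.stdev(returns)
vol_without_outlier = statistics.stdev(returns[:-1])  # crash day dropped
# Keeping or dropping a single extreme day changes the volatility input
# to the pricing model severalfold.
ratio = vol_with_outlier / vol_without_outlier
```

Neither treatment is obviously "correct," which is exactly why outlier policy is a judgment call that materially changes model outputs.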
Finally, a model’s quality relies heavily on the accuracy of the inputs and parameter values that feed it. Traders tend to make mistakes easily, especially in relatively new markets where best-practice procedures and controls are still evolving.
Volatilities and correlations are the most difficult input parameters to judge accurately. The subprime crisis cruelly highlighted the danger of assuming that correlations and return distributions are stationary. During that period, correlations moved toward the extremes, with all risk factors moving fully in the same or in fully opposite directions.
In value estimation, the challenges that occur most frequently in assessing potential valuation error are:
- Inaccurate data;
- Insufficient time allocated for sampling; and
- Liquidity and bid/ask spread challenges.
How Model Risk is Mitigated
A proper investment in research, whether internal or external, to advance models and improve statistical tools is a crucial way of mitigating model risk.
To materially reduce model risk, processes should be established to independently vet both the selection and the construction of models. Vetting assures the company’s management that any proposed valuation model for a given security is reasonable. Vetting consists of the following phases:
- Documentation: Full documentation of the model is necessary, from its underlying assumptions to its mathematical expression. It should not rely on any particular implementation and should include:
- The term sheet or the transaction’s complete description;
- The model’s mathematical statement;
- Implementation features; and
- The implementation’s version that is working.
- The model’s soundness: The mathematical model should reasonably represent the instrument being valued. An independent model vetter should verify this requirement.
- Independent access to financial rates: The middle office should have independent access to a financial-rates database maintained independently by market risk management.
- Benchmark modeling: A benchmark model should be developed based on the assumptions being made and on the deal’s specification.
- Health check and stress test the model: The model should possess the basic features of all derivatives models. A formal treatment of model risk should be built into the overall risk management procedures, along with a periodic evaluation of the models, and the parameters should be re-estimated using best-practice statistical procedures.
LTCM and Model Risk: How a Hedge Became Ineffective During a Liquidity Crisis
The failure of LTCM in 1998 shocked the financial community, not only because of the stature of LTCM’s principals but also because of the unprecedented amount of capital represented by the company’s positions. The LTCM crisis was triggered in 1998 when Russia devalued the ruble and declared a debt moratorium.
LTCM’s equity then fell in value by 44%. Given the magnitude of the hedge fund’s positions, the Federal Reserve Bank of New York took an unprecedented step, facilitating a bailout of the fund to avert the risk of a meltdown in global markets.
LTCM’s arbitrage strategy was based on market-neutral or relative-value trading, which involves buying one instrument and simultaneously selling another.
LTCM’s portfolios were positioned around particular bets that seemed quite safe at the time. The returns on such seemingly low-risk strategies tended to be small, and they grew smaller as more players entered the market to exploit the same opportunities. Hedge funds therefore used leverage aggressively to boost performance.
LTCM failed because both its trading models and its risk management models failed to anticipate the vicious cycle of losses in an extreme crisis.
Price relationships that hold during normal market conditions tend to collapse in times of market crisis. LTCM incurred huge losses because the correlation and volatility patterns observed in the past broke down.
During this market turmoil, a flight to quality and a disappearance of liquidity set two mechanisms in motion:
- Investors exited the stock market and purchased U.S. government bonds in a flight to quality; and
- Unwinding positions became impossible as liquidity dried up in many markets simultaneously.
LTCM consequently lost on most of its trading positions and faced the danger of insolvency. The company’s high leverage made matters even worse.
Risk Measurement Models and Stress Testing
LTCM’s risk control team relied on a VaR model. VaR represents the worst-case loss a company’s portfolio can suffer under normal market conditions, at a specified confidence level, over a given time horizon.
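A one-day parametric VaR of the kind such models produce can be sketched in a few lines; the position size and volatility below are illustrative figures, not LTCM’s:

```python
def parametric_var(position_value, sigma_daily, z=2.33, horizon_days=1):
    # One-day parametric (normal) VaR at roughly the 99% confidence level
    # (z = 2.33), assuming a zero mean return. Normality and stationarity
    # are exactly the assumptions that break down in a crisis.
    return position_value * z * sigma_daily * horizon_days ** 0.5

# Illustrative figures: a $10m book with 1.5% daily volatility
var_99 = parametric_var(10_000_000, 0.015)
```

The formula’s reassuring precision rests entirely on the normal, stationary-return assumptions listed below, which is why it understated LTCM’s true exposure.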
Some common assumptions in the computation of regulatory VaR are unrealistic for hedge funds:
- The time horizon for economic capital should be the time needed to raise new capital, or the period over which a crisis scenario unfolds;
- Traditional static VaR models do not cover liquidity risk; and
- Stress testing is the only way to capture correlation and volatility risks.
The following are frequently occurring errors in model building. Which of them best explains why there are always errors in the estimation of volatility?
- The assumption that perfect capital markets exist
- The assumption that the distribution of the underlying asset is stationary when it constantly varies
- Wrong assumption concerning the underlying asset price process
- Underestimating the number of risk factors taken into account by the model
The correct answer is B.
Market instruments are volatile, and assuming constant volatility will always wrong-foot a model. Assuming a constant distribution of the underlying asset is wrong because all variables change over time, and volatility is no exception.