After completing this reading, you should be able to:
When model risk management was new, a model was defined as a tool used for forecasting based on complex statistical techniques, what would today be called quantitative models. Given that these techniques were new and carried unknown risks, a definition limited to statistical models made sense at the time. As model risk management has evolved, however, the definition of a model has expanded to include a wider range of estimation techniques, including qualitative models.
According to the Fed, “the term model refers to a quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates. The definition of a model also covers quantitative approaches whose inputs are partially or wholly qualitative or based on expert judgment, provided that the outputs are quantitative in nature.”
Currently, the industry consensus is that a model is any estimation method based on data and a set of assumptions that generates an uncertain estimate. Rather than focusing on the method or technique used to estimate the forecast, the emphasis is on its uncertainty.
A firm can be exposed to two broad risks as a result of model risk:
Execution errors often seem too trivial to worry about. Over time, however, such seemingly insignificant errors, coupled with bad luck, can lead to material losses. Examples covered in this reading include the acquisition of Lehman Brothers' assets by Barclays and the NASA Mars orbiter case studies. Execution errors may stem from coding errors, implementation errors, or the use of wrong data, and tools that are not considered models can also suffer from them.
Unlike execution errors, conception errors are not always based on a matter of right or wrong, and therefore, they are difficult to identify. Different modelers may have different but valid opinions about a model assumption. In such cases, model risk management should ensure transparency. To ensure models are used appropriately, model users should be informed about the limitations of a model. A model could be “right” in a particular context but “wrong” in another. Modelers often have this knowledge and even discuss it in their documentation. However, it should not be assumed that model users know and understand this. And thus, these assumptions should be precisely explained to the model users. A good example is the CDO case study from the financial crisis of 2007–2009, which we will discuss later in this reading.
Typically, model risk management (MRM) comprises independent experts who are not involved in model development. The MRM function responsibilities cover all aspects of a model throughout its lifecycle. The MRM specifies model documentation standards, data quality expectations, and versioning criteria. Most importantly, MRM is responsible for reviewing and challenging models to minimize risks.
MRM functions assign models to different tiers based on the risk they pose to the firm to balance the cost of model validation and the necessity to ensure the model risk is sufficiently addressed. When assessing model tiering, the materiality of the output is usually a factor to consider. The model tier determines the frequency of its validation.
The validation team pays the most attention to models in the highest tier: performing a detailed review, comprehensive backtesting, and an assessment of the reliability of the model's output. High-tier models are also validated more frequently, usually every two or three years, while lower-tier models undergo full-scope validation less often, say every four or five years.
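The tiering logic described above can be sketched as follows. The materiality thresholds and validation cycle lengths are hypothetical; actual values vary by institution:

```python
# Illustrative sketch (not a regulatory standard): assign a model tier
# from the materiality of its output, then map the tier to a
# full-scope validation cycle. All thresholds are hypothetical.

def assign_tier(materiality_usd: float) -> int:
    """Return a model tier (1 = highest risk) from output materiality."""
    if materiality_usd >= 1_000_000_000:
        return 1
    if materiality_usd >= 100_000_000:
        return 2
    return 3

# Hypothetical full-scope validation cycle, in years, per tier:
# high-tier models every 2 years, lower tiers less often.
VALIDATION_CYCLE_YEARS = {1: 2, 2: 3, 3: 5}

def next_validation_due(tier: int, last_validation_year: int) -> int:
    """Year in which the next full-scope validation falls due."""
    return last_validation_year + VALIDATION_CYCLE_YEARS[tier]
```

The key design point is that validation effort scales with the risk a model poses, not with how sophisticated the model happens to be.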
Regardless of the tier, all models undergo an annual review of the environment, data, and other important elements to ensure no material changes have occurred since the last full-scope validation. The model can continue to be used if no changes are observed since the last full-scope validation.
Apart from these validations and reviews, the MRM function monitors models' performance through monitoring reports produced by model owners. These reports are produced at intervals that depend on how frequently each model is used, so different models attract different monitoring frequencies.
Model risk management should be perceived as a continuous process rather than a point-in-time validation and review exercise. Nevertheless, many firms focus on periodic rather than continuous validation, because periodic validation is more manageable and predictable, making staffing needs easier to forecast. It also allows banks to operate much smaller validation teams, because validators can move on to the next scheduled task once the task at hand is done.
The point-in-time model was widely adopted during the early days of model risk management. Regulatory models with long development and deployment cycles influenced this move.
As the reliance on models increases and the environments in which models are deployed become more dynamic, MRM functions should adopt a more continuous risk management approach.
The three lines of defense model applies in MRM: model developers and owners form the first line, the independent validation (MRM) function forms the second, and internal audit forms the third.
There is a risk that modeling teams become complacent about risk management practices because of the presence of large, competent validation teams. The second line is intended to serve as an independent backstop in case the first line fails to catch errors; however, its existence should not result in the first line abdicating its own responsibilities. In the context of model risk, model developers and model owners form the first line of defense: they generate the risk to which the organization is exposed. Consequently, the first line owns the risk and should take all necessary steps to mitigate it, while the second line independently assesses the risk and the risk management practices of the first line.
In more mature institutions, modeling teams typically include a dedicated quality control/quality assurance (QA/QC) function. These first-line QA/QC teams play a pivotal role in mitigating model risk, especially execution risk.
This case study focuses on the collapse of the CDO market in 2008. In the early 2000s, David X. Li published a paper showing how to price CDOs, i.e., pools of assets, without relying on historical default correlation data. Li's approach was based on the Gaussian copula, using CDS prices to infer the correlation of the assets.
Li's model used CDS prices rather than observed historical correlations to price CDOs. When people started using the pricing formula in the early 2000s, CDSs had been around for only about a decade, so the available sample period was relatively benign: housing prices rose consistently, and defaults were at an all-time low. Correlations implied by CDS prices in this environment were very low and extremely sensitive to the trajectory of house prices. When house prices reversed course, the correlations implied by CDS prices shot up, with dramatic consequences for CDO prices.
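The mechanics can be illustrated with a minimal one-factor Gaussian copula simulation, the structure underlying Li's model. All parameters (default probability, pool size, correlation values) are illustrative, not taken from the reading; the point is only that the probability of many simultaneous defaults, the scenario that destroys CDO tranches, depends dramatically on the assumed correlation:

```python
# Minimal one-factor Gaussian copula sketch: each obligor defaults when
# sqrt(rho)*M + sqrt(1-rho)*Z_i falls below a threshold set by its
# unconditional default probability, where M is a common (market)
# factor and Z_i is idiosyncratic. With low correlation, extreme
# joint-default scenarios look almost impossible; with high correlation
# they become plausible. All parameter values here are illustrative.
import random
from statistics import NormalDist

def prob_many_defaults(rho, p_default=0.05, n_names=100,
                       loss_frac=0.20, n_sims=5_000, seed=42):
    """P(more than loss_frac of the pool defaults) under correlation rho."""
    rng = random.Random(seed)
    threshold = NormalDist().inv_cdf(p_default)
    exceed = 0
    for _ in range(n_sims):
        m = rng.gauss(0, 1)  # common market factor for this scenario
        defaults = sum(
            (rho ** 0.5) * m + ((1 - rho) ** 0.5) * rng.gauss(0, 1) < threshold
            for _ in range(n_names)
        )
        if defaults > loss_frac * n_names:
            exceed += 1
    return exceed / n_sims

low = prob_many_defaults(rho=0.05)   # benign-period implied correlation
high = prob_many_defaults(rho=0.60)  # crisis-level correlation
```

With low correlation the tail probability is essentially zero, while with high correlation mass-default scenarios become material, which is why the jump in implied correlation was so devastating for CDO prices.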
Li’s pricing model was widely adopted, despite these limitations.
When signs of weakness started to materialize in 2008, the correlation implied by CDS and CDO prices increased dramatically, leading to the collapse of the CDO market.
The blame falls not on the formula but on those who applied it blindly: banks, which should have warned users, and users, who should have tried to understand the formula better before adopting it. Banks were also to blame because they did not update the copula model with the new correlation estimates implied by the higher CDS prices; instead, they continued pricing CDOs based on old assumptions.
In this context, the role of model risk management is to ensure transparency by informing users of the issues surrounding new models. MRM should assess a model to ensure it contains no coding errors and that it produces prices as intended. Most importantly, MRM should challenge the assumptions and ensure users understand the related limitations. Effective communication is key to handling this challenge successfully: most model users at banks do not have a quantitative background, depend on assumptions they don't quite understand, and assume it is the modelers' responsibility to ensure the model works accurately. A good MRM function helps minimize the misuse of models by helping users understand a model's limitations.
In September 2008, Lehman Brothers collapsed, sparking the 2008 global financial crisis. In one incident not known to many, Barclays Capital almost bought 179 trading contracts from Lehman Brothers by accident.
Lehman Brothers filed for bankruptcy on September 15, 2008. On September 18, 2008, Barclays Capital offered to acquire a portion of the assets of the US bank, including some of Lehman's trading positions. Barclays hired the law firm Cleary Gottlieb Steen & Hamilton to represent it and to submit the purchase offer via the website of the US Bankruptcy Court for the Southern District of New York by midnight on September 18. A few hours before the deadline, at 7:50 pm, Cleary Gottlieb received an Excel file from Barclays containing information on the assets Barclays wished to acquire. The spreadsheet had nearly 1,000 rows and some 24,000 cells, including rows listing the 179 trading contracts that Barclays did not want to buy. These rows had been hidden rather than deleted.
A junior law associate was asked to convert the Excel file into a PDF for uploading to the court's website. Unaware of the hidden rows and working on a tight schedule, he converted the file directly to PDF.
The mistake was identified later in the PDF files after the contract had already been approved. To exclude those contracts from the deal, Cleary Gottlieb had to file a legal motion.
The Excel error here is a good example of an implementation error. The spreadsheet is just a tool that does not qualify as a model; there is no uncertainty about the information it contains, since the list of assets and their values are known. Yet a simple mistake – forgetting to delete the hidden rows – almost cost Barclays millions of dollars. Even though the loss did not materialize in this case, it could in others. Thus, even tools and models that seem simple should be challenged and reviewed properly.
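The lesson can be illustrated with a small sketch. The data model below is hypothetical (a simplified stand-in for a spreadsheet, not the actual Barclays file), but it shows the kind of pre-export control that would have caught the hidden rows:

```python
# Illustrative pre-export check: fail loudly if any rows are hidden
# rather than deleted, so nothing is included in an export by accident.
# The Row class is a hypothetical stand-in for a spreadsheet row.
from dataclasses import dataclass

@dataclass
class Row:
    contract_id: str
    hidden: bool = False

def rows_to_export(rows):
    """Return contract IDs to export, refusing if hidden rows remain."""
    hidden = [r.contract_id for r in rows if r.hidden]
    if hidden:
        raise ValueError(f"Hidden rows present; resolve before export: {hidden}")
    return [r.contract_id for r in rows]
```

Real spreadsheet libraries expose similar flags (for example, openpyxl's `row_dimensions[i].hidden`); the broader point is that review controls should apply even to "simple" tools that do not qualify as models.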
For a key spacecraft operation, the Lockheed Martin engineering team used English (imperial) units of measurement while the agency's team used metric units, an inconsistency that cost NASA $125 million. It is hard to imagine that an error as simple as inconsistent units could result in the destruction of a multimillion-dollar spacecraft. This is an error related to model assumptions and, more specifically, the choice of units. In the financial world, analogous errors include the use of the wrong currency, the wrong discount factor, or other simple assumptions that are often taken for granted.
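A minimal illustration of the safeguard such an interface lacked is to make units explicit and convert at the boundary, instead of passing bare numbers between teams. The function and unit labels below are hypothetical; the conversion factor (1 lbf·s ≈ 4.44822 N·s) is standard:

```python
# Illustrative unit-safe interface: values crossing a system boundary
# carry an explicit unit label, and unknown units are rejected rather
# than silently interpreted. Function and labels are hypothetical.

LBF_S_TO_N_S = 4.44822  # 1 pound-force-second in newton-seconds

def impulse_si(value: float, unit: str) -> float:
    """Return an impulse in newton-seconds, converting or rejecting."""
    if unit == "N*s":
        return value
    if unit == "lbf*s":
        return value * LBF_S_TO_N_S
    raise ValueError(f"Unknown unit: {unit!r}")
```

The same discipline applies to the financial analogues above: a cash flow tagged with its currency, or a rate tagged with its compounding convention, cannot be misread the way a bare number can.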
The development of a robust MRM function that thoroughly reviews all models and double-checks all work is often perceived by executives as costly. Some mistakes are perceived as benign, and reviewing models just to catch them looks like a waste of resources. In most cases, these benign mistakes would indeed have resulted in benign losses, if any. However, a small subset of such mistakes can result in catastrophic losses, such as the loss of the Mars orbiter, and unfortunately it is not possible to distinguish in advance the mistakes that might cause catastrophic losses from those that will not.
In September 2008, Lehman Brothers collapsed, sparking the 2008 global financial crisis. In one incident not known to many, Barclays Capital almost bought 179 trading contracts from Lehman Brothers by accident. Which of the following lessons can be learned from this incident?
- A. MRM should challenge the assumptions and ensure users understand related limitations.
- B. Even tools and models that seem so simple should be challenged and reviewed properly.
- C. Even small errors, such as the use of wrong units, can lead to massive losses.
- D. A good MRM should help minimize the misuse of models by helping users understand the limitations accompanying a model.
The correct answer is B.
B is correct. A simple mistake – forgetting to delete the hidden rows – almost cost Barclays millions of dollars. Even though the loss did not materialize in this case, it could in others. Thus, even tools and models that seem simple should be challenged and reviewed properly.
A is incorrect. This is a lesson associated with the collapse of CDO markets in 2008, where users widely adopted Li’s model without assessing it.
C is incorrect. This lesson relates to the NASA Mars Orbiter incident, where the use of inconsistent units cost NASA $125 million.
D is incorrect. This lesson is drawn from the collapse of CDO markets in 2008, in which users blindly adopted Li’s pricing model.