Messages from the Academic Literature on Risk Management for the Trading Book

In this chapter, fundamental technical issues in current \(VaR\)-based approaches to risk management are addressed. We preview implementation questions such as whether time variation in volatility should be incorporated, the appropriate time horizon for \(VaR\), and how \(VaR\) models should be backtested. We also study the merits and demerits of \(VaR\) as a risk metric, together with alternative risk measures. Broader management aspects, such as aggregation across risk types and the borderline between the banking and trading books, are also tackled.

Selected lessons on \(VaR\) implementation

Overview

The main categories of implementation issues reviewed here are the time horizon for \(VaR\) estimation, how time-varying volatility of \(VaR\) risk factors is recognized, and \(VaR\) backtesting. The appropriate \(VaR\) horizon varies with the liquidity and the nature of the positions. Where a long horizon is appropriate, the common square-root-of-time scaling of a short-horizon \(VaR\) is likely to generate a biased long-horizon \(VaR\). If \(VaR\) models incorporate time-varying volatility, regulatory \(VaR\) can become unstable and pro-cyclical. Finally, sources on \(VaR\) backtesting and the related regulatory issues are surveyed.

Time Horizon for Regulatory \(VaR\)

The horizon over which \(VaR\) is calculated is a crucial issue affecting the use of \(VaR\) for regulatory capital. The regulator set the appropriate horizon at ten days and allows the ten-day \(VaR\) to be estimated by scaling the one-day \(VaR\) by the square root of time. However, it is clear that the horizon should depend on the characteristics of the position. A ten-day horizon at 99% confidence corresponds to an event that happens roughly 25 times in ten years, whereas a liquidity crisis is unlikely to happen even once within ten years; the probability is therefore mismatched with the problem it is meant to capture.

Computing \(VaR\) over longer horizons raises the question of how to account for time variation in portfolio composition. One solution is to sidestep changes in portfolio composition by computing a short-horizon \(VaR\) and scaling it up. Another is to calculate portfolio \(VaR\) directly over the relevant horizon. \(VaR\) models can also be extended to incorporate future trading activity.

With square-root-of-time scaling, long-horizon \(VaR\) can be calculated from the short-horizon \(VaR\), provided the object of interest is the unconditional \(VaR\). Unfortunately, the assumptions justifying square-root-of-time scaling are rarely confirmed at high frequencies. The accuracy of the scaling depends on the statistical properties of the data-generating process of the risk factors.
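As an illustration of the mechanics (not of its accuracy), the following minimal Python sketch scales a one-day \(VaR\) to a ten-day \(VaR\) by the square root of time; the numbers are hypothetical, and the i.i.d. assumption behind the scaling is exactly what the literature questions.

```python
import math

def scale_var(one_day_var: float, horizon_days: int) -> float:
    """Square-root-of-time scaling of a one-day VaR.

    Valid only if daily P&L is i.i.d. (no time-varying volatility,
    no autocorrelation, no jumps) -- assumptions that rarely hold.
    """
    return one_day_var * math.sqrt(horizon_days)

one_day_var = 1.5                       # hypothetical 99% one-day VaR, $ millions
ten_day_var = scale_var(one_day_var, 10)
print(f"Scaled 10-day VaR: {ten_day_var:.2f} $ millions")   # ~4.74
```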

Intra-horizon \(VaR\) is another important aspect. In this risk measure, \(VaR\) over the regulatory horizon is combined with short-term P&L fluctuations, with particular attention to models that incorporate jumps in the price process. Although daily \(VaR\) carries high-frequency P&L information, that information alone cannot determine the extent to which losses accumulate within the horizon.

Time-Varying Volatility in \(VaR\).

Accounting for time-varying volatility in \(VaR\) models is explored for large and complex trading portfolios. Using historical simulation \(VaR\) without incorporating time-varying volatility will underestimate risk when current volatility is elevated. Capturing time-varying volatility matters most for short \(VaR\) horizons; for long horizons it may be less important. Incorporating time-varying volatility is worthwhile only if it is strongly present in the financial risk factors, and \(VaR\) models should then be built to accommodate stochastic-volatility features.

Methods to Incorporate Time-Varying Volatility in \(VaR\) for Large and Complex Portfolios.

One industry standard for incorporating time-varying volatility in \(VaR\) is the Exponentially Weighted Moving Average (EWMA) approach, a constrained version of an IGARCH(1,1) model whose decay parameter is set to 0.97 in the RiskMetrics case. In another approach, due to Boudoukh, Richardson, and Whitelaw, historical observations are weighted according to:

$$ W\left( i \right) =\frac { { \theta }^{ i }\left( 1-\theta \right) }{ 1-{ \theta }^{ n } } ,0\le \theta \le 1 $$

where \(W(i)\) is the weight applied to an observation from \(i\) days ago, \(\theta\) controls the rate of memory decay, and \(n\) is the total number of days in the historical window.
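A minimal Python sketch of this weighting scheme, assuming the index \(i\) runs from 0 (most recent day) to \(n-1\) (oldest day) so that the weights sum to one; the window length and decay value are illustrative.

```python
import numpy as np

def brw_weights(theta: float, n: int) -> np.ndarray:
    """Weights W(i) = theta**i * (1 - theta) / (1 - theta**n).

    Assumes i = 0 is the most recent observation and i = n - 1 the
    oldest, in which case the weights sum to exactly one.
    """
    i = np.arange(n)
    return theta**i * (1.0 - theta) / (1.0 - theta**n)

w = brw_weights(theta=0.97, n=250)   # illustrative decay and window length
print(w[:3])                         # heaviest weights on the most recent days
print(w.sum())                       # 1.0 (up to floating-point error)
```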

Historical simulation can also be adapted to capture time-varying volatility. In Filtered Historical Simulation (FHS), risk-factor returns are first filtered through a GARCH model; the filtered (standardized) returns are then rescaled by the model's current volatility forecast to construct the \(VaR\).
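A minimal sketch of the FHS idea for a single risk factor, using an EWMA volatility filter as a simple stand-in for a fitted GARCH model; the decay parameter, window, and confidence level are illustrative assumptions.

```python
import numpy as np

def fhs_var(returns: np.ndarray, lam: float = 0.94, alpha: float = 0.99) -> float:
    """Filtered historical simulation VaR for one risk factor.

    1. Build an EWMA volatility path (stand-in for a GARCH filter).
    2. Standardize historical returns by their contemporaneous volatility.
    3. Rescale the standardized returns by the latest volatility forecast
       and take the empirical quantile of the implied losses.
    """
    sig2 = np.empty(len(returns) + 1)
    sig2[0] = returns.var()
    for t, r in enumerate(returns):
        sig2[t + 1] = lam * sig2[t] + (1.0 - lam) * r**2
    sigma = np.sqrt(sig2)
    z = returns / sigma[:-1]               # filtered (standardized) returns
    scenarios = z * sigma[-1]              # rescaled by current volatility forecast
    return np.quantile(-scenarios, alpha)  # loss quantile

rng = np.random.default_rng(0)
rets = 0.01 * rng.standard_normal(1000)    # hypothetical return history
print(f"99% FHS VaR: {fhs_var(rets):.4f}")
```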

For portfolios with large numbers of risk factors, the FHS approach entails filtering each risk factor separately and building a volatility forecast for each factor. The same weighting can then be applied under the assumption that the correlation structure across risk factors stays constant over time. Time-varying volatilities and correlations can also be estimated with multivariate GARCH models, although estimation becomes increasingly difficult as the number of risk factors grows.

As a result, industry players have resorted to less burdensome alternatives, such as applying simple weighting schemes to observations or shortening the data window used for \(VaR\) estimation. These alternatives are less accurate but remain tractable for large and complex portfolios.

Backtesting \(VaR\) models.

Inferences can be drawn from the number of backtesting exceptions, or from a \(VaR\) multiplier that depends on the number of exceptions experienced. To test whether a \(VaR\) model generates the right number of exceptions, the unconditional coverage likelihood-ratio test is applied. When few trading days are used in the evaluation, or when the confidence level is high, many of these tests have low power.
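A minimal Python sketch of an unconditional coverage (Kupiec-style) likelihood-ratio test, assuming a daily series of \(VaR\) forecasts and realized losses; the statistic is referred to a chi-square distribution with one degree of freedom.

```python
import numpy as np
from scipy.stats import chi2

def lr_unconditional_coverage(losses, var_forecasts, alpha=0.99):
    """Kupiec-style LR test that the exception rate equals 1 - alpha."""
    exceptions = np.asarray(losses) > np.asarray(var_forecasts)
    T, x = len(exceptions), exceptions.sum()
    p = 1.0 - alpha                      # expected exception probability
    pi_hat = x / T                       # observed exception rate
    pi_hat = min(max(pi_hat, 1e-12), 1 - 1e-12)   # guard against log(0)
    log_l0 = (T - x) * np.log(1 - p) + x * np.log(p)
    log_l1 = (T - x) * np.log(1 - pi_hat) + x * np.log(pi_hat)
    lr = -2.0 * (log_l0 - log_l1)
    return lr, chi2.sf(lr, df=1)         # statistic and p-value

rng = np.random.default_rng(1)
losses = rng.standard_normal(250)        # hypothetical daily losses
var_99 = np.full(250, 2.326)             # constant 99% VaR forecast
print(lr_unconditional_coverage(losses, var_99))
```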

A conditional backtesting test introduced by Christoffersen accounts for the timing as well as the number of exceptions. If the \(VaR\) model is accurate, the indicator variables that flag the exceptions are i.i.d. Bernoulli random variables.
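A minimal sketch of the independence component of a Christoffersen-style test, assuming a 0/1 exception series; it compares a constant exception probability against a first-order Markov alternative, with the statistic again referred to a chi-square with one degree of freedom.

```python
import numpy as np
from scipy.stats import chi2

def lr_independence(exceptions):
    """Christoffersen-style independence test on a 0/1 exception series."""
    e = np.asarray(exceptions, dtype=int)
    prev, curr = e[:-1], e[1:]
    n00 = np.sum((prev == 0) & (curr == 0))
    n01 = np.sum((prev == 0) & (curr == 1))
    n10 = np.sum((prev == 1) & (curr == 0))
    n11 = np.sum((prev == 1) & (curr == 1))
    pi = (n01 + n11) / (n00 + n01 + n10 + n11)    # unconditional exception rate
    pi01 = n01 / max(n00 + n01, 1)                # P(exception | no exception)
    pi11 = n11 / max(n10 + n11, 1)                # P(exception | exception)
    def ll(p, n_zero, n_one):
        p = min(max(p, 1e-12), 1 - 1e-12)         # guard against log(0)
        return n_zero * np.log(1 - p) + n_one * np.log(p)
    log_l0 = ll(pi, n00 + n10, n01 + n11)
    log_l1 = ll(pi01, n00, n01) + ll(pi11, n10, n11)
    lr = -2.0 * (log_l0 - log_l1)
    return lr, chi2.sf(lr, df=1)

ex = np.random.default_rng(2).random(250) < 0.01  # hypothetical exception series
print(lr_independence(ex))
```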

The magnitude of exceptions is another natural measure of \(VaR\) performance. One approach uses a quadratic loss function in which the squared difference between the actual P&L and the \(VaR\) constitutes the loss whenever an exception occurs. The mean squared error (MSE) is another measure of \(VaR\) performance in backtesting: based on a known data-generating model, the risk manager measures the MSE between the true \(VaR\) and the \(VaR\) estimate.

Before putting a \(VaR\) model into application, one defines the data-generating process, simulates the position's P&L enough times to create a P&L distribution, and determines the true \(VaR\) from this simulated distribution. Finally, the \(VaR\) model is applied to the generated data and compared with the true \(VaR\).
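A minimal sketch of this simulation-based evaluation, assuming an i.i.d. normal data-generating process so that the true 99% \(VaR\) is known in closed form; the window length and number of replications are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
sigma, alpha, window, n_rep = 0.01, 0.99, 250, 2000
true_var = sigma * norm.ppf(alpha)                 # true 99% VaR of the assumed DGP

estimates = np.empty(n_rep)
for k in range(n_rep):
    sample = sigma * rng.standard_normal(window)   # simulated P&L history
    estimates[k] = np.quantile(-sample, alpha)     # historical-simulation VaR

mse = np.mean((estimates - true_var) ** 2)
print(f"True VaR: {true_var:.4f}, MSE of HS estimator: {mse:.2e}")
```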

Backtesting Issues

Determining the P&L series to be compared with the \(VaR\), either actual or hypothetical, is a crucial backtesting issue. The appropriate backtesting horizon is another: banks typically assume that a well-performing one-day \(VaR\) validates the regulatory ten-day \(VaR\).

Conclusion

To make models more realistic, it is crucial to incorporate time-varying volatility in \(VaR\) measures. However, incorporating this feature may make \(VaR\) pro-cyclical and unstable because of estimation issues. Moreover, portfolio characteristics may also influence whether a \(VaR\) model should be evaluated against hypothetical or actual P&L in backtesting.

Incorporating Liquidity

Overview

Let us begin by differentiating between exogenous and endogenous liquidity. Exogenous liquidity is the transaction cost faced by average-sized traders. Endogenous liquidity, by contrast, is associated with the cost of unwinding portfolios large enough that the bid-ask spread can no longer be taken as given but is itself affected by the trades.

The exogenous liquidity component corresponds to the average transaction cost set by the market for standard transaction sizes. The endogenous component corresponds to the price impact of liquidating a position in a tight market and hence applies to orders large enough to move market prices. One way of incorporating liquidity risk into \(VaR\) measures is to introduce new risk factors that model liquidity.

Exogenous Liquidity

The relative spread, \(S={ \left( Ask-Bid \right) }/{ Mid }\) (mid price), has sample mean and variance \(\hat { \mu } \) and \(\widehat { { \sigma }^{ 2 } } \). Given that the 99% quantile of the normalized spread distribution is \(\widehat { { a }_{ 0.99 } } \), the cost of liquidity is:

$$ { COL }_{ t }={ P }_{ t }\left( \frac { \hat { \mu } +\widehat { { a }_{ 0.99 } } \hat { \sigma } }{ 2 } \right) $$

where \({ P }_{ t }\) is the value of the position as of today.

\({ COL }_{ t }\) is then added to \(VaR\) to form a liquidity-adjusted \(VaR\).
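A minimal sketch of this liquidity adjustment, assuming spread observations are available and that the 99% quantile of the normalized spread has already been estimated; the figures are illustrative.

```python
import numpy as np

def cost_of_liquidity(position_value, spreads, a_99):
    """Exogenous cost of liquidity: P_t * (mu_hat + a_99 * sigma_hat) / 2."""
    mu_hat = np.mean(spreads)
    sigma_hat = np.std(spreads, ddof=1)
    return position_value * (mu_hat + a_99 * sigma_hat) / 2.0

spreads = np.array([0.0010, 0.0012, 0.0015, 0.0011, 0.0014])  # relative spreads
p_t, var_t = 10_000_000, 250_000        # hypothetical position value and VaR
col = cost_of_liquidity(p_t, spreads, a_99=2.33)
print(f"COL: {col:,.0f}; liquidity-adjusted VaR: {var_t + col:,.0f}")
```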

Endogenous Liquidity: Motivation

When market makers' inventories become imbalanced, flight to liquid, high-quality assets can be compounded, reducing the probability of finding a counterparty. Assets with identical payoffs can differ in price depending on margin requirements and the opportunity cost of capital. Hedging also affects the dynamics of the underlying assets, which becomes important when:

  • The underlying asset is illiquid;
  • The investors' hedging positions are large relative to the market;
  • Many small traders follow the same hedging strategy;
  • Asymmetric information affects the market for the underlying of the derivative, magnifying price sensitivity to clusters of similar trades.

The relationship between implied volatilities and strike prices observed in options markets is known to be affected by derivative hedging.

Endogenous Liquidity and Market Risk for Trading portfolios

It has been suggested that endogenous liquidity costs be added to position returns before executing \(VaR\) calculations. The failure to apply most liquidity risk adjustments to trading books has been attributed largely to valuation methods that would not comply with current accounting standards. Moreover, the difficulty of estimating market liquidity, specifically in OTC markets, has been cited. Some authors have nevertheless integrated liquidity risk with market and credit risk.

Adjusting the \(VaR\) Time Horizon To Account for Liquidity Risk

Position size and the liquidity of the market should determine the time horizon, since applying a single horizon to all positions while ignoring their liquidity and size is undesirable. Liquidation horizons vary mainly because they lengthen during times of market stress. Apart from the size of the position relative to the market and transaction costs, a trade execution strategy depends on many other factors, such as risk aversion and expected price volatility.

Risk Measures

Overview

Risk measures are functions of random variables, which in most settings are portfolio losses or returns. Model risk in specifying these random variables is not attributed to the risk measure itself, because their probability distribution is specified in a preceding step; the analysis of risk measures does not depend on whether the random variables are specified correctly. We focus on alternatives to \(VaR\), in particular \(ES\) and spectral measures, given their relevance in the current industry.

VaR

Given a random loss \(L\) and a probability \(\alpha\), \({ VaR }_{ \alpha }\left( L \right) \) is the \(\alpha\)-quantile of \(L\). \(VaR\) is thus defined as the smallest value \(l\) whose cumulative probability reaches at least \(\alpha\):

$$ VaR_{ \alpha }\left( L \right) =\inf\left\{ l:{ F }_{ L }\left( l \right) \ge \alpha \right\} $$

However, \(VaR\) measures only a quantile of the losses and thereby ignores losses beyond that level. On the flip side, this makes backtesting easier, since empirical quantiles are robust to extreme outliers. In addition, critics cite \(VaR\)'s lack of an axiomatic foundation: it is not coherent; a numerical illustration of this follows the discussion of subadditivity below. Coherent risk measures adhere to the following axioms:

  • Subadditivity: \(R\left( { L }_{ 1 }+{ L }_{ 2 } \right) \le R\left( { L }_{ 1 } \right) +R\left( { L }_{ 2 } \right) \)
  • Positive homogeneity: \(R\left( \lambda L \right) =\lambda R\left( L \right) \quad \forall \lambda >0\)
  • Monotonicity: \(R\left( { L }_{ 1 } \right) \le R\left( { L }_{ 2 } \right) \) if \({ L }_{ 1 }\le { L }_{ 2 }\)
  • Translation property: \(R\left( L+a \right) =R\left( L \right) +a\) for any constant \(a\)

Subadditivity is important since:

  • It reflects the idea that diversification reduces risk.
  • If regulators use a non-subadditive risk measure to determine regulatory capital, an institution has an incentive to legally break itself up into various subsidiaries in order to lower its capital requirement.
  • It makes it possible to decentralize risk management systems.
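A minimal numerical illustration of the non-subadditivity of \(VaR\) (the figures are hypothetical): two independent loans each default with probability 4% and lose 100; the 95% \(VaR\) of each loan alone is 0, yet the 95% \(VaR\) of the combined portfolio is 100.

```python
from itertools import product

def var_discrete(outcomes, alpha):
    """VaR as inf{l : F(l) >= alpha} for a discrete loss distribution."""
    cum = 0.0
    for loss, prob in sorted(outcomes):
        cum += prob
        if cum >= alpha:
            return loss
    return max(loss for loss, _ in outcomes)

single = [(0.0, 0.96), (100.0, 0.04)]          # one loan: 4% default probability
portfolio = {}
for (l1, p1), (l2, p2) in product(single, single):   # two independent loans
    portfolio[l1 + l2] = portfolio.get(l1 + l2, 0.0) + p1 * p2

alpha = 0.95
print(var_discrete(single, alpha))                    # 0.0 for each loan
print(var_discrete(portfolio.items(), alpha))         # 100.0 > 0 + 0
```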

Expected Shortfall

Expected shortfall corrects \(VaR\)'s main shortcoming: its insensitivity to the severity of losses beyond the quantile. To define \(ES\), let \(L\) be a random loss with distribution function \({ F }_{ L }\) and \(\alpha \in \left( 0,1 \right) \) a confidence level. Recall that the \(\alpha\)-\(VaR\) is the \(\alpha\)-quantile of \({ F }_{ L }\). At level \(\alpha\), the \(ES\) is defined as:

$$ ES_{ \alpha }\left( L \right) \equiv \frac { 1 }{ 1-\alpha } \int _{ \alpha }^{ 1 }{ VaR_{ u }\left( L \right) du } $$

If the loss distribution is continuous, then:

$$ ES_{ \alpha }=E\left( L|L\ge VaR_{ \alpha } \right) $$

\(ES\) is then the expected loss among the \(100\left( 1-\alpha \right) \) percent worst losses and is often referred to as the Tail Conditional Expectation (\(TCE\)) or Conditional \(VaR\) (\(CVaR\)).
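A minimal sketch comparing \(VaR\) and \(ES\) for a continuous (normal) loss distribution, using the closed-form expression \(ES_{\alpha}=\mu+\sigma\,\phi(z_{\alpha})/(1-\alpha)\) for a normal loss; the parameters are illustrative.

```python
from scipy.stats import norm

def normal_var_es(mu, sigma, alpha):
    """VaR and ES of a normally distributed loss L ~ N(mu, sigma^2)."""
    z = norm.ppf(alpha)
    var = mu + sigma * z
    es = mu + sigma * norm.pdf(z) / (1.0 - alpha)   # E[L | L >= VaR_alpha]
    return var, es

var99, es99 = normal_var_es(mu=0.0, sigma=1.0, alpha=0.99)
print(f"99% VaR: {var99:.3f}, 99% ES: {es99:.3f}")   # 2.326 vs 2.665
```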

Backtesting ES

When an \(ES\) model is estimated, the \(VaR\) at the same \(\alpha\) is generated as a by-product with little additional effort. The test statistic for \(ES\) involves the calculation of squared losses in addition to the forecasted \(ES\) and \(VaR\). Backtest statistics for \(ES\) perform better than those for \(VaR\). However, a comparison between \(\alpha\)-\(VaR\) and \(\alpha\)-\(ES\) is unfair, since \(ES_{ \alpha }\ge VaR_{ \alpha }\) at the same confidence level \(\alpha\). It therefore becomes important to lower the confidence level to \({ \alpha }'\) such that \(ES\left( { \alpha }' \right) \approx VaR\left( \alpha \right) \).
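A minimal sketch, assuming normally distributed losses, of solving for the lower confidence level \(\alpha'\) at which \(ES(\alpha') \approx VaR(\alpha)\); for \(\alpha = 99\%\) the solution is close to 97.5%.

```python
from scipy.optimize import brentq
from scipy.stats import norm

alpha = 0.99
var_target = norm.ppf(alpha)                         # 99% VaR of a standard normal loss
es = lambda a: norm.pdf(norm.ppf(a)) / (1.0 - a)     # ES of a standard normal loss

alpha_prime = brentq(lambda a: es(a) - var_target, 0.90, 0.989)
print(f"alpha' such that ES(alpha') = VaR(0.99): {alpha_prime:.4f}")  # ~0.975
```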

Spectral Risk Measures (SRM)

Whereas \(\alpha\)-\(ES\) gives equal weight to all \(\beta\)-\(VaR\)s with \(\beta \ge \alpha\) and zero weight to all others, SRMs allow more freedom in the choice of these weights.

An SRM is defined by a weight function \(W:\left[ 0,1 \right] \rightarrow \left[ 0,\infty \right) \) that integrates to 1. Thus:

$$ SRM=\int _{ 0 }^{ 1 }{ W\left( u \right) VaR_{ u }\left( L \right) du }, $$

restricted to weight functions \(W\) that are increasing over \(\left[ 0,1 \right] \).

\(ES\) happens to be a special spectral measure as:

$$ W\left( u \right) ={ \left( 1-\alpha \right) }^{ -1 }{ 1 }_{ \left\{ \alpha \le u\le 1 \right\} } .$$
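A minimal sketch, assuming a standard normal loss, that evaluates the spectral integral numerically with the \(ES\) weight function and checks it against the closed-form normal \(ES\); the upper integration limit is truncated just below 1 to avoid the infinite quantile at \(u=1\).

```python
from scipy.stats import norm
from scipy.integrate import quad

alpha = 0.975
# ES weight function: W(u) = 1/(1 - alpha) for u >= alpha, and 0 otherwise,
# so the integral effectively runs from alpha to 1.
integrand = lambda u: norm.ppf(u) / (1.0 - alpha)    # W(u) * VaR_u, N(0,1) losses

srm, _ = quad(integrand, alpha, 1.0 - 1e-12)         # truncate just below u = 1
es_closed_form = norm.pdf(norm.ppf(alpha)) / (1.0 - alpha)
print(f"Spectral integral: {srm:.4f}, closed-form ES: {es_closed_form:.4f}")
```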

Distortion Risk Measures

Let \(D\) be a distribution function on \(\left[ 0,1 \right] \) with \(D\left( 0 \right) =0\) and \(D\left( 1 \right) =1\). This \(D\) is the distortion function. A distortion risk measure of \(L\) is defined as:

$$ DM\left( L \right) \equiv \int _{ 0 }^{ 1 }{ VaR_{ u }\left( L \right) dD\left( u \right) }. $$

As a distortion risk measure, \(VaR\) corresponds to the distortion function:

$$ { D }_{ VaR }\left( u \right) ={ 1 }_{ \left\{ u\ge \alpha \right\} } $$

The Wang transform

$$ { D }_{ \theta }^{ Wang }\left( u \right) =\phi \left( { \phi }^{ -1 }\left( u \right) +\log { \theta } \right) $$

where \({ \phi }\) is the Gaussian (standard normal) distribution function and \(\theta <1\).
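A minimal sketch, assuming a standard normal loss and an illustrative \(\theta\), that evaluates the Wang-transform distortion measure \(\int_0^1 VaR_u\,dD(u)\) numerically; under the distortion function written above, this integral for a normal loss should equal \(\mu - \sigma\log\theta\), which the sketch uses as a check.

```python
import math
from scipy.stats import norm
from scipy.integrate import quad

theta = 0.8                                        # illustrative parameter, theta < 1
c = math.log(theta)

# Density of the distortion D(u) = Phi(Phi^{-1}(u) + log(theta)):
d_prime = lambda u: norm.pdf(norm.ppf(u) + c) / norm.pdf(norm.ppf(u))

# Distortion measure of a standard normal loss: integral of VaR_u * D'(u).
dm, _ = quad(lambda u: norm.ppf(u) * d_prime(u), 1e-9, 1.0 - 1e-9)
print(f"Numerical distortion measure: {dm:.4f}")
print(f"Closed form mu - sigma*log(theta): {-c:.4f}")   # 0 - 1*log(0.8) = 0.2231
```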

Variance

Variance is a classical risk measure with many good analytical characteristics, but it has two limitations: it is appropriate only for symmetric distributions, since it penalizes upside and downside deviations alike, and it requires the second moment of the loss distribution to exist.

The mean deviation

The mean deviation is defined by \(MD\left( L \right) \equiv E|L-EL|\) and is associated with the same limitation for skewed distributions as the variance, but it is less amenable to analytical treatment, which is why it is seldom used as a risk measure.

Upper Partial Moments

With \({ F }_{ L }\) as a loss distribution function, exponent \(k\ge 0\) and \(q\) as a reference point the upper partial moment \(UPM\left( k,q \right) \) is defined as:

$$ UPM\left( k,q \right) =\int _{ q }^{ \infty }{ { \left( l-q \right) }^{ k }d{ F }_{ L }\left( l \right) } $$

For \(k>1\), losses beyond the threshold \(q\) receive increasing weight the further they exceed it.

A higher \(k\) implies a more conservative \(UPM\). For \(k = 1\) and a continuous loss distribution, the \(UPM\) is closely related to \(ES\):

$$ UPM\left( 1,{ VaR }_{ \alpha } \right) =\left( 1-\alpha \right) \left( { ES }_{ \alpha }-{ VaR }_{ \alpha } \right) $$
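A minimal sketch, assuming a standard normal loss, that verifies the identity \(UPM(1, VaR_{\alpha}) = (1-\alpha)(ES_{\alpha} - VaR_{\alpha})\) numerically.

```python
from scipy.stats import norm
from scipy.integrate import quad

alpha = 0.99
q = norm.ppf(alpha)                                   # VaR_alpha of a N(0,1) loss
es = norm.pdf(q) / (1.0 - alpha)                      # ES_alpha of a N(0,1) loss

upm1, _ = quad(lambda l: (l - q) * norm.pdf(l), q, 10.0)   # UPM(1, VaR_alpha)
print(f"UPM(1, VaR): {upm1:.6f}")
print(f"(1 - alpha)*(ES - VaR): {(1 - alpha) * (es - q):.6f}")   # should match
```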

Left-tail Measure (LTM)

\(LTM\) is the conditional standard deviation of the \(VaR\) exceedances:

$$ LTM\equiv \sqrt { E\left\{ { \left[ L-E\left( L|L\ge Va{ R }_{ \alpha } \right) \right] }^{ 2 }|L\ge Va{ R }_{ \alpha } \right\} } $$
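A minimal Monte Carlo sketch of the left-tail measure for a normally distributed loss; the sample size and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
alpha = 0.99
losses = rng.standard_normal(1_000_000)             # hypothetical loss distribution

var_alpha = np.quantile(losses, alpha)               # empirical VaR_alpha
tail = losses[losses >= var_alpha]                   # exceedances of VaR_alpha
ltm = np.sqrt(np.mean((tail - tail.mean()) ** 2))    # conditional std of exceedances
print(f"VaR: {var_alpha:.3f}, LTM: {ltm:.3f}")
```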

Stress Testing Practices for Market Risk

A stress test is a risk management tool for evaluating the potential impact on portfolio values of unlikely but plausible events in a set of variables. Such exercises are typically designed on an ad hoc, compartmentalized basis, with stress results left unintegrated with other risk measures. Ideally, the resulting risk estimates would incorporate traditional market risk estimates and stress test outcomes, together with the probabilities of each.

Incorporating Stress Testing Into Market-Risk Modeling

Traditional stress testing exercises differ mainly in how their scenarios are constructed, namely through:

  • Historical scenarios
  • Set-piece scenarios
  • Mechanical search stress test

All of these depend on the choice of scenarios. A unified, coherent risk measurement system is obtained once the scenarios are expressed in probabilistic form. In this method, the plausibility of a scenario is quantified by the distance of the stress scenario from an average scenario. When determining the P&L impact of stress shocks to the factors, the general assumption has been that the shock occurs instantaneously.

Stressed \(VaR\)

One specification of stressed \(VaR\) is interpreted as increasing the assumed volatilities of the securities in the portfolio. Stressing the correlation matrix used in the \(VaR\) methodology is another important step when calculating stressed \(VaR\). In the conditional stress approach, the distribution of the risk factors is conditioned on an extreme realization of one or more of them.

The conditional covariance matrix will then exhibit higher correlations among the remaining factors, even though the underlying unconditional correlations are unchanged. Alternatively, the unconditional correlation matrix of the risk factors can be stressed directly, while a more sophisticated approach uses not only linear transforms of the multivariate risk factors but also fat-tailed distributions to model extreme loss events more accurately.
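A minimal sketch of stressing a correlation matrix in a parametric (variance-covariance) \(VaR\) calculation: the off-diagonal correlations are pushed toward one by blending with the all-ones matrix, which keeps the matrix positive semi-definite; the portfolio, volatilities, and stress weight are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def parametric_var(weights, vols, corr, alpha=0.99):
    """Variance-covariance VaR of a linear portfolio."""
    cov = np.outer(vols, vols) * corr
    return norm.ppf(alpha) * np.sqrt(weights @ cov @ weights)

w = np.array([0.6, 0.4])                     # hypothetical portfolio weights
vols = np.array([0.02, 0.03])                # hypothetical daily volatilities
corr = np.array([[1.0, 0.3], [0.3, 1.0]])

s = 0.5                                      # stress weight toward perfect correlation
stressed_corr = (1 - s) * corr + s * np.ones_like(corr)

print(f"Base VaR:     {parametric_var(w, vols, corr):.4f}")
print(f"Stressed VaR: {parametric_var(w, vols, stressed_corr):.4f}")
```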

Unified Versus Compartmentalized Risk Measurement.

To study the implications of modeling the aggregate risk present across a bank's trading and banking books, we use either a compartmentalized approach, the sum of risks measured separately, or a unified approach, which explicitly considers the interactions between these risks. In many banks, aggregate economic capital needs are determined by a two-step process: first, capital is calculated for each individual risk type; then the stand-alone economic capital requirements are added up to obtain the bank's overall capital requirement.

Aggregation of Risk: Diversification Versus Compounding Effects

Diversification refers to holding a variety of investments within a portfolio. The diversification intuition suggests that analyzing risk at the sub-portfolio level and summing the resulting risk measures should yield a conservative risk measure for the bank.

For a portfolio of foreign currency loans, however, the credit risk is a function of the market risk: each position carries market and credit risk components simultaneously. While the diversification intuition is appealing, it ignores the interaction between the banking and trading books, since in practice there is no sub-portfolio bearing purely market, credit, or operational risk.

To estimate the quantitative dimension, we turn our attention to papers working with a bottom-up approach.

Papers Using the Bottom-Up approach

The assumption that market and credit risks are separable and can be addressed independently is common to most current risk measurement models, yet it lacks support. A study by Jobst et al. proposes a simulation model in which the risk underlying the future value of a bond portfolio is decomposed into:

  • The risk that the borrower's rating changes;
  • The risk that credit spreads change; and
  • The risk that the risk-free interest rate changes.

To concentrate on the pure credit-risk contribution to portfolio losses, the authors first simulate only rating migrations, default events, and recovery rates. Future credit spreads and future interest rates are then also allowed to be stochastic.

Papers Using the Top-Down Approach

Aggregating the risks calculated for different business lines or risk types using a top-down approach is another way of finding total firm risk, particularly for enterprise risk management. The difference between the top-down and bottom-up approaches is that in the top-down approach the reference point is always the institution as a whole, whereas in the bottom-up approach it ranges from the portfolio level up to the institution level.

In the top-down approach, the assumption is that risks are separable and can, in some way, be aggregated after being measured. Moreover, there is no requirement for a common scenario across risk types. Since the correct form of aggregation is unknown, the top-down approach lacks the logical coherence of the bottom-up approach.

Risk Management and Value at Risk in a Systemic Context

We now examine the research literature on the systemic consequences of individual risk management systems and of regulatory capital charges that rely on them. A conceptual innovation of the new regulations was that the notion of risk on which they relied was much closer to the notions used in the financial, economic, and statistical research literature.

Intermediation, Leverage and Value at Risk: Empirical formula

Models of risk and economic capital require balance sheet adjustments in response to changes in financial market prices and measured risk. Because an intermediary's balance sheet is actively managed in this way, leverage becomes procyclical.

Leverage is the assets to equity ratio. Total leverage is given by:

$$ L=\frac { Assets }{ Equity } $$

Hence, for a given amount of debt, the relationship between the market value of total assets and leverage is inverse.

Relation to \(VaR\) Regulation:

Take the future bank assets value to be random variable \(A\), then the confidence level \(c\) of \(VaR\) is defined as:

$$ Pr\left( A<{ A }_{ 0 }-VaR \right) \le 1-c $$

If the bank adjusts its balance sheet to target a fixed ratio of \(VaR\) to economic capital, then the capital held to meet the \(VaR\) requirement is:

$$ K=\lambda \times VaR $$

where \(\lambda \) is the proportion of capital held per unit of total \(VaR\).

Since this proportion may vary with time, leverage \(L\) becomes:

$$ L=\frac { A }{ K } =\frac { 1 }{ \lambda } \times \frac { A }{ VaR } , $$

is procyclical since \(VaR\) per asset value is countercyclical.
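A minimal numerical sketch of this relationship: holding \(\lambda\) fixed, a fall in measured \(VaR\) per dollar of assets (as in a boom) mechanically raises the leverage the bank can support; all figures are hypothetical.

```python
def leverage(assets: float, lam: float, var: float) -> float:
    """Leverage L = A / K with K = lambda * VaR."""
    return assets / (lam * var)

assets, lam = 100.0, 2.0                 # hypothetical assets and capital multiplier
for var in (4.0, 2.0, 1.0):              # measured VaR falling, e.g. in a boom
    print(f"VaR = {var:>4.1f}  ->  leverage = {leverage(assets, lam, var):5.1f}")
```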

Practice Questions

1) Let a balance sheet for an institution be given such that the liabilities side is $1.2 billion in debts and $82 million in equity. Calculate the total leverage.

  1. 14.63
  2. 0.014
  3. 15.63
  4. 0.016

The correct answer is C.

Recall that leverage is the total assets to equity ratio. This is given by the expression

$$ L=\frac { Assets }{ Equity } $$

But from the accounting equation:

$$ Assets=Total\quad Equity+Total\quad Liabilities $$

$$ \Rightarrow Assets=$1,200\quad million+$82\quad million=$1,282\quad million $$

$$ \Rightarrow L=\frac { 1,282 }{ 82 } =15.63 $$

2) Find the weight of an observation from 20 days ago if the total number of days in the historical window is 5 and the memory-decay parameter \(\theta\) is 0.9.

  1. 0.2969
  2. 0.2198
  3. 0.1216
  4. 0.0297

The correct answer is D.

The weight of observation \(i\)-days ago is given by:

$$ W\left( i \right) =\frac { { \theta }^{ i }\left( 1-\theta \right) }{ 1-{ \theta }^{ n } } $$

Where \(n\) is the number of days in historical window and \(\theta \) is the control rate of memory decay,

Therefore:

$$ W\left( 20 \right) =\frac { { 0.9 }^{ 20 }\left( 1-0.9 \right) }{ 1-{ 0.9 }^{ 5 } } =\frac { 0.0122 }{ 0.4095 } =0.0297 $$

3) Albatros Bank holds capital in the proportion \(\lambda = 4.3\) per unit of total \(VaR\), and the future value of its assets is $87 million. If the Value at Risk is $0.8 million, the leverage for Albatros Bank is closest to:

  1. 25.29
  2. 467.63
  3. 0.0021
  4. 0.0395

The correct answer is A.

We were given that Leverage \(L\):

$$ L=\frac { A }{ K } =\frac { 1 }{ \lambda } \times \frac { A }{ VaR } $$

where \(A\) is the future value of assets and \(\lambda \) is the proportion of capital to be held per total \(VaR\).

$$ \Rightarrow L=\frac { 1 }{ 4.3 } \times \frac { 87 }{ 0.8 } =25.29 $$

4) The relative spread \(S\) of a portfolio has a sample mean of 20.6 and a variance of 40.2. If the 99% quantile of the normalized distribution is 0.50, find the expression for the cost of liquidity of the portfolio, where \({ P }_{ t }\) is today's position value.

  1. \({ P }_{ t }23.77\)
  2. \({ P }_{ t }11.89\)
  3. \({ P }_{ t }20.35\)
  4. \({ P }_{ t }40.7\)

The correct answer is B.

Remember that:

$$ { COL }_{ t }={ P }_{ t }\left( \frac { \hat { \mu } +\widehat { { a }_{ 0.99 } } \hat { \sigma } }{ 2 } \right) $$

Since the variance is \(\hat { { \sigma }^{ 2 } } =40.2\), we have \(\hat { \sigma } =\sqrt { 40.2 } =6.3403\).

Therefore:

$$ { COL }_{ t }={ P }_{ t }\left( \frac { 20.6+0.50\times 6.3403 }{ 2 } \right) ={ P }_{ t }\left( \frac { 23.77 }{ 2 } \right) ={ P }_{ t }11.89 $$

