### Backtesting VaR

In this chapter, the accuracy of $$VaR$$ models is verified by backtesting techniques. Backtesting is a formal statistical framework for verifying that actual losses are in line with projected losses. This is achieved by systematically comparing the history of $$VaR$$ forecasts with the associated portfolio returns. Risk managers and $$VaR$$ users find these procedures, also called reality checks, essential for verifying that their $$VaR$$ forecasts are well calibrated.

# Setup for Backtesting

To ensure that $$VaR$$ models are reasonably accurate, we systematically check the validity of the underlying valuation and risk models by comparing actual to predicted levels of losses.

The number of observations falling outside $$VaR$$ should be in line with the confidence level if the model is calibrated perfectly. Too many exceptions indicate that the model underestimates risk. This is a problem, since too little capital may then be allocated to risk-taking units and penalties may be imposed by the regulator.

## Model Backtesting With Exceptions

To backtest a model, historical $$VaR$$ measures should be systematically compared with the subsequent returns. Since $$VaR$$ is reported at a specific confidence level, however, we expect the figure to be exceeded in some instances.

## Model Verification Based on Failure Rates

To verify the accuracy of a model, we simply record the failure rate, which gives the proportion of times $$VaR$$ is exceeded in a given sample. Imagine an institution that provides a $$VaR$$ figure at the 1% tail level $$\left( p=1-c \right)$$ over $$T$$ days. Define $$N$$ as the number of exceptions and $$N/T$$ as the failure rate, which should be an unbiased measure of $$p$$. In a sample of size $$T$$, we seek to discover, at some confidence level, whether $$N$$ is excessively large or small under the null hypothesis that $$p = 0.01$$. Under the null hypothesis that the model is correctly calibrated, the number of exceptions $$x$$ follows a binomial distribution:

$$f\left( x \right) =\left( \begin{matrix} T \\ x \end{matrix} \right) { p }^{ x }{ \left( 1-p \right) }^{ T-x }$$

Moreover, $$E\left( x \right) =pT$$ and $$V\left( x \right) =p\left( 1-p \right) T$$. For large $$T$$, we approximate the binomial distribution by the normal distribution:

$$Z=\frac { x-pT }{ \sqrt { p(1-p)T } } \approx N\left( 0,1 \right)$$

For a decision rule defined at the two-tailed 95% test confidence level, the cutoff is $$\left| z \right| =1.96$$. The following backtesting table reports the 95% nonrejection test confidence regions.
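This decision rule is easy to compute directly. Below is a minimal sketch in Python; the function name and the 20-exceptions scenario are illustrative assumptions, not figures from the text:

```python
import math

def exception_z_score(n_exceptions, t_days, p):
    """Normal-approximation z-score for the number of VaR exceptions."""
    expected = p * t_days                       # E(x) = pT
    std_dev = math.sqrt(p * (1 - p) * t_days)   # sqrt(V(x)) = sqrt(p(1-p)T)
    return (n_exceptions - expected) / std_dev

# Hypothetical example: 20 exceptions of a 99% VaR over 1000 days
z = exception_z_score(20, 1000, 0.01)
reject = abs(z) > 1.96  # two-tailed 95% decision rule
print(round(z, 2), reject)  # → 3.18 True
```

With 20 exceptions against an expected 10, the z-score falls well outside the 1.96 cutoff, so the model would be rejected as underestimating risk.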

$$\begin{array}{c} Nonrejection \quad Regions \quad for \quad Number \quad of \quad Failures \quad N \\ \hline \end{array}$$

$$\begin{array}{|c|c|c|c|c|} \hline Probability \quad Level \quad p & VaR\quad Confidence\quad Level\quad c & T=252 \quad Days & T=510\quad Days & T=1000\quad Days \\ \hline 0.01 & 99\% & N<7 & 1 <N <11 & 4 <N <17 \\ 0.025 & 97.5\% & 2< N<12 & 6 <N <21 & 15 <N <36 \\ 0.05 & 95\% & 6 <N <20 & 16 <N <36 & 37 <N <65 \\ 0.075 & 92.5\% & 11 <N <28 & 27 <N <51 & 59 <N <92 \\ 0.10 & 90\% & 16 <N <36 & 38 <N <65 & 81 <N <120 \\ \hline \end{array}$$

These regions are defined by the tail points of the log-likelihood ratio:

$$LR_{ uc }=-2ln\left[ { \left( 1-p \right) }^{ T-N }{ p }^{ N } \right] +2ln\left\{ { \left[ 1-\left( { N }/{ T } \right) \right] }^{ T-N }{ \left( { N }/{ T } \right) }^{ N } \right\}$$

The ratio is asymptotically distributed chi-square with one degree of freedom under the null hypothesis that $$p$$ is the true probability; the null is rejected if $$LR>3.841$$. Values of $$N$$ that are too low indicate that the $$VaR$$ model is overly conservative. Expressed as a proportion $${ N }/{ T }$$, this interval shrinks as the sample size increases. For small values of the $$VaR$$ parameter $$p$$, it grows increasingly difficult to confirm deviations.
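The $$LR_{uc}$$ statistic can be evaluated directly from $$N$$, $$T$$, and $$p$$. A minimal sketch, with an illustrative 20-exceptions scenario of my own choosing:

```python
import math

def lr_uc(n, t, p):
    """Unconditional-coverage likelihood ratio (requires 0 < n < t)."""
    rate = n / t  # observed failure rate N/T
    log_null = (t - n) * math.log(1 - p) + n * math.log(p)
    log_alt = (t - n) * math.log(1 - rate) + n * math.log(rate)
    return -2 * log_null + 2 * log_alt

# Hypothetical example: 20 exceptions of a 99% VaR over 1000 days
stat = lr_uc(20, 1000, 0.01)
reject = stat > 3.841  # chi-square(1) cutoff at the 95% level
```

Here the statistic is about 7.83, which exceeds 3.841, consistent with $$N=20$$ falling outside the $$4<N<17$$ nonrejection region for $$T=1000$$ in the table above.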

# The Basel Rules

In this section, we analyze in detail the Basel Committee rules for backtesting. The Basel rules for backtesting the internal-models approach are directly derived from the failure rate test. For this test, one first chooses the type 1 error rate, which is the probability of rejecting the model when it is correct. The type 1 error rate chosen for the test should be low, e.g., 5%.

The verification procedure involves recording exceptions of the 99% $$VaR$$ over the last year. On average, 2.5 exceptions are expected over the year. Up to four exceptions is acceptable and defines the green zone; above that, the bank falls into the yellow or red zone.

The Basel Committee uses the following categories of causes when deciding on yellow-zone penalties.

1. Basic integrity of the model: deviations occurring as a result of wrongly stated positions or an error in the programming code.
2. Model accuracy could be improved: the model fails to measure risk with enough precision, leading to deviations.
3. Intraday trading: positions changed during the day.
4. Bad luck: markets were particularly volatile or correlations changed.

The following table displays the probabilities of obtaining a given number of exceptions for a correct and an incorrect model:

$$\begin{array}{c} Model \quad is \quad correct: \quad Coverage=99\% \qquad\qquad Model \quad is \quad incorrect: \quad Coverage=97\% \\ \hline \end{array}$$

$$\begin{array}{|c|c|c|c|c|c|c|} \hline Zone & Number \quad of & Probability & Cumulative & Probability & Cumulative & Power \\ {} & Exceptions & P\left( X=N \right) & (Type \quad 1) & P\left( X=N \right) & (Type \quad 2) & (Reject) \\ {} & N & {} & (Reject) & {} & (Do\quad not\quad Reject) & P\left( X\ge N \right) \\ {} & {} & {} & P\left( X\ge N \right) & {} & P\left( X<N \right) & {} \\ \hline Green & 0 & 8.1 & 100.0 & 0.0 & 0.0 & 100.0 \\ {} & 1 & 20.5 & 91.9 & 0.4 & 0.0 & 100.0 \\ {} & 2 & 25.7 & 71.4 & 1.5 & 0.4 & 99.6 \\ {} & 3 & 21.5 & 45.7 & 3.8 & 1.9 & 98.1 \\ Green & 4 & 13.4 & 24.2 & 7.2 & 5.7 & 94.3 \\ Yellow & 5 & 6.7 & 10.8 & 10.9 & 12.8 & 87.2 \\ {} & 6 & 2.7 & 4.1 & 13.8 & 23.7 & 76.3 \\ {} & 7 & 1.0 & 1.4 & 14.9 & 37.5 & 62.5 \\ {} & 8 & 0.3 & 0.4 & 14.0 & 52.4 & 47.6 \\ Yellow & 9 & 0.1 & 0.1 & 11.6 & 66.3 & 33.7 \\ Red & 10 & 0.0 & 0.0 & 8.6 & 77.9 & 22.1 \\ {} & 11 & 0.0 & 0.0 & 5.8 & 86.6 & 13.4 \\ \hline \end{array}$$
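The correct-model column of this table follows from the binomial formula with $$p=0.01$$. A minimal sketch, assuming roughly 250 trading days in the backtesting year (the function name is mine):

```python
from math import comb

def exception_probability(n, t=250, p=0.01):
    """Binomial P(X = n) exceptions for a correctly calibrated 99% VaR."""
    return comb(t, n) * p**n * (1 - p)**(t - n)

# Probability of landing in the green zone (0 to 4 exceptions) in 250 days
green = sum(exception_probability(n) for n in range(5))
print(round(100 * green, 1))  # → 89.2
```

A correct 99% model therefore stays in the green zone about 89% of the time, matching the 10.8% cumulative type 1 probability at $$N=5$$ in the table.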

# Conditional Coverage Models

Every year, with a 95% $$VaR$$, we expect about 13 exceptions, and these should be evenly spread over time. A valid model should exhibit proper conditional coverage. Christoffersen developed a test that is set up as follows:

• Set a deviation indicator to 0 on each day $$VaR$$ is not exceeded, and to 1 otherwise;
• Define $${ T }_{ ij }$$ as the number of days on which state $$j$$ occurred while state $$i$$ occurred the previous day, and $${ \pi }_{ i }$$ as the probability of observing an exception conditional on state $$i$$ the previous day.

The relevant test statistic is:

$${ LR }_{ ind }=-2ln\left[ { \left( 1-\pi \right) }^{ \left( { T }_{ 00 }+{ T }_{ 10 } \right) }{ \pi }^{ \left( { T }_{ 01 }+{ T }_{ 11 } \right) } \right] +2ln\left[ { \left( 1-{ \pi }_{ 0 } \right) }^{ { T }_{ 00 } }{ \pi }_{ 0 }^{ { T }_{ 01 } }{ \left( 1-{ \pi }_{ 1 } \right) }^{ { T }_{ 10 } }{ \pi }_{ 1 }^{ { T }_{ 11 } } \right]$$

Note,

$$\pi ={ \pi }_{ 0 }={ \pi }_{ 1 }={ \left( { T }_{ 01 }+{ T }_{ 11 } \right) }/{ T }$$

Therefore, the combined test statistic for the conditional coverage is:

$${ LR }_{ cc }={ LR }_{ uc }+{ LR }_{ ind }$$

Each component is independently distributed as $${ \chi }^{ 2 }\left( 1 \right)$$, and the sum is distributed as $${ \chi }^{ 2 }\left( 2 \right)$$.
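The independence component can be computed from a daily 0/1 exception series. A minimal sketch (the function name is mine; it assumes all four transition counts are positive so every logarithm is defined):

```python
import math

def lr_ind(hits):
    """Independence LR statistic from a list of 0/1 daily exception flags."""
    counts = {(i, j): 0 for i in (0, 1) for j in (0, 1)}
    for prev, cur in zip(hits, hits[1:]):   # count day-to-day transitions
        counts[(prev, cur)] += 1
    t00, t01 = counts[(0, 0)], counts[(0, 1)]
    t10, t11 = counts[(1, 0)], counts[(1, 1)]
    pi = (t01 + t11) / (t00 + t01 + t10 + t11)  # unconditional exception rate
    pi0 = t01 / (t00 + t01)   # P(exception | no exception yesterday)
    pi1 = t11 / (t10 + t11)   # P(exception | exception yesterday)
    log_null = (t00 + t10) * math.log(1 - pi) + (t01 + t11) * math.log(pi)
    log_alt = (t00 * math.log(1 - pi0) + t01 * math.log(pi0)
               + t10 * math.log(1 - pi1) + t11 * math.log(pi1))
    return -2 * log_null + 2 * log_alt
```

The result is compared against the $${ \chi }^{ 2 }\left( 1 \right)$$ cutoff of 3.841; strongly clustered exceptions drive the statistic up. Adding $$LR_{uc}$$ gives $$LR_{cc}$$, compared against the $${ \chi }^{ 2 }\left( 2 \right)$$ cutoff of 5.991.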

## Extensions

When the $$VaR$$ confidence level is high and the number of observations is low, the standard exception tests often lack power. Statistical decision theory nonetheless shows that the exception test is the most powerful in its class. Lastly, backtests may make use of parametric information, although often the only available data are the daily $$VaR$$ numbers.

## Applications

The first empirical study of the accuracy of $$VaR$$ models used data reported to U.S. regulators. The study describes the distributions of $${ P }/{ L }$$, which are compared with the $$VaR$$ estimates. The $${ P }/{ L }$$ distributions are generally asymmetric and display abnormally fat tails. The $$VaR$$ measures are conservative, that is, too large relative to actual risks. U.S. banks prefer to report high $$VaR$$s to avoid regulatory intrusion, since the amount of economic capital they hold exceeds their regulatory capital.

## Conclusion

Backtested $$VaR$$ figures provide users with valuable feedback on the credibility of their models and can point out possible improvements. When choosing the $$VaR$$ quantitative parameters, the horizon should be short so as to increase the number of observations and to mitigate the effect of changes in the composition of the portfolio.

Verification tests are based on exception counts, checking whether the count is aligned with the selected $$VaR$$ confidence level. In practice, trading portfolios do change over the horizon, and models themselves evolve over time as risk managers improve their techniques.

# Practice Questions

1) A risk manager observed the following pattern of exceptions in a particular year: 23 exceptions over 252 days, giving a fraction $$\pi$$. Of these, 7 exceptions occurred on a day following an exception the previous day, while 16 exceptions occurred when there was none the previous day. Write an expression for the relevant test statistic $${ LR }_{ ind }$$.

1. $${ LR }_{ ind }=-2ln\left\{ { \left( 1-0.091 \right) }^{ \left( { T }_{ 00 }+{ T }_{ 10 } \right) }{ \pi }^{ \left( { T }_{ 01 }+{ T }_{ 11 } \right) } \right\} +2ln\left\{ { \left( 1-0.0635 \right) }^{ { T }_{ 00 } }{ 0.0635 }^{ { T }_{ 01 } }{ \left( 1-0.304 \right) }^{ { T }_{ 10 } }{ 0.304 }^{ { T }_{ 11 } } \right\}$$
2. $${ LR }_{ ind }=-2ln\left\{ { \left( 1-0.091 \right) }^{ \left( { T }_{ 00 }+{ T }_{ 10 } \right) }{ \pi }^{ \left( { T }_{ 01 }+{ T }_{ 11 } \right) } \right\} +2ln\left\{ { \left( 1-0.0699 \right) }^{ { T }_{ 00 } }{ 0.0699 }^{ { T }_{ 01 } }{ \left( 1-0.304 \right) }^{ { T }_{ 10 } }{ 0.304 }^{ { T }_{ 11 } } \right\}$$
3. $${ LR }_{ ind }=-2ln\left\{ { \left( 1-0.091 \right) }^{ \left( { T }_{ 00 }+{ T }_{ 10 } \right) }{ \pi }^{ \left( { T }_{ 01 }+{ T }_{ 11 } \right) } \right\} +2ln\left\{ { \left( 1-0.0635 \right) }^{ { T }_{ 00 } }{ 0.0635 }^{ { T }_{ 01 } }{ \left( 1-0.438 \right) }^{ { T }_{ 10 } }{ 0.438 }^{ { T }_{ 11 } } \right\}$$
4. $${ LR }_{ ind }=-2ln\left\{ { \left( 1-0.091 \right) }^{ \left( { T }_{ 00 }+{ T }_{ 10 } \right) }{ \pi }^{ \left( { T }_{ 01 }+{ T }_{ 11 } \right) } \right\} +2ln\left\{ { \left( 1-0.0699 \right) }^{ { T }_{ 00 } }{ 0.0699 }^{ { T }_{ 01 } }{ \left( 1-0.438 \right) }^{ { T }_{ 10 } }{ 0.438 }^{ { T }_{ 11 } } \right\}$$

Solution: Recall that the relevant test statistic is:

$${ LR }_{ ind }=-2ln\left[ { \left( 1-\pi \right) }^{ \left( { T }_{ 00 }+{ T }_{ 10 } \right) }{ \pi }^{ \left( { T }_{ 01 }+{ T }_{ 11 } \right) } \right] +2ln\left[ { \left( 1-{ \pi }_{ 0 } \right) }^{ { T }_{ 00 } }{ \pi }_{ 0 }^{ { T }_{ 01 } }{ \left( 1-{ \pi }_{ 1 } \right) }^{ { T }_{ 10 } }{ \pi }_{ 1 }^{ { T }_{ 11 } } \right]$$

Note,

$$\pi ={ \pi }_{ 0 }={ \pi }_{ 1 }={ \left( { T }_{ 01 }+{ T }_{ 11 } \right) }/{ T }$$

Therefore:

$$\pi ={ \left( { T }_{ 01 }+{ T }_{ 11 } \right) }/{ T }={ \left( 7+16 \right) }/{ 252 }=0.091$$

And:

$${ \pi }_{ 0 }={ 16 }/{ 229 }=0.0699$$ which is $$6.99$$ percent,

$${ \pi }_{ 1 }={ 7 }/{ 23 }=0.304$$ which is $$30.4$$ percent.

$$\Rightarrow { LR }_{ ind }=-2ln\left\{ { \left( 1-0.091 \right) }^{ \left( { T }_{ 00 }+{ T }_{ 10 } \right) }{ \pi }^{ \left( { T }_{ 01 }+{ T }_{ 11 } \right) } \right\} +2ln\left\{ { \left( 1-0.0699 \right) }^{ { T }_{ 00 } }{ 0.0699 }^{ { T }_{ 01 } }{ \left( 1-0.304 \right) }^{ { T }_{ 10 } }{ 0.304 }^{ { T }_{ 11 } } \right\}$$

This corresponds to choice 2.
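As a numerical check, the transition counts implied by the question are $$T_{01}=16$$ and $$T_{11}=7$$; inferring $$T_{10}=23-7=16$$ and $$T_{00}=229-16=213$$ (my inference, assuming every exception day is followed by another observed day), the statistic can be evaluated:

```python
import math

# Transition counts implied by the question (T10 and T00 inferred)
t00, t01, t10, t11 = 213, 16, 16, 7

pi = (t01 + t11) / (t00 + t01 + t10 + t11)  # 23/252 ≈ 0.091
pi0 = t01 / (t00 + t01)                     # 16/229 ≈ 0.0699
pi1 = t11 / (t10 + t11)                     # 7/23  ≈ 0.304

log_null = (t00 + t10) * math.log(1 - pi) + (t01 + t11) * math.log(pi)
log_alt = (t00 * math.log(1 - pi0) + t01 * math.log(pi0)
           + t10 * math.log(1 - pi1) + t11 * math.log(pi1))
lr_ind = -2 * log_null + 2 * log_alt
print(round(lr_ind, 2))  # → 9.68, above the 3.841 chi-square(1) cutoff
```

Under these counts the exceptions are strongly clustered ($${ \pi }_{ 1 }\approx 30\%$$ versus $${ \pi }_{ 0 }\approx 7\%$$), so the independence hypothesis is rejected.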