### Alpha (and the Low-Risk Anomaly)

Alpha tells us more about the set of factors used to construct a benchmark than about the skill involved in beating it. An alpha that is positive under one set of factors can turn negative under another. Statistically, alpha is hard to detect, whatever the benchmark.

# Active Management

Alpha is defined as the average return in excess of a benchmark and is often interpreted as a measure of skill. The excess return, $${ r }_{ t }^{ ex }$$, is an asset’s return in excess of a benchmark, as given in equation $$\left( i \right)$$, where $${ r }_{ t }$$ is the asset’s return and $${ r }_{ t }^{ bmk }$$ is the benchmark return.

$${ r }_{ t }^{ ex }={ r }_{ t }-{ r }_{ t }^{ bmk }\quad \quad \quad \quad \quad \left( i \right)$$

The excess return is also referred to as the active return. The terminology assumes that the benchmark is passive: it can be produced without investment knowledge or even human intervention.

To compute alpha, take the average of the excess returns in equation $$\left( i \right)$$:

$$\alpha =\frac { 1 }{ T } { \Sigma }_{ t=1 }^{ T }{ r }_{ t }^{ ex }\quad \quad \quad \quad \quad \left( ii \right)$$

Where $$T$$ is the number of observations in the sample.

The standard deviation of the excess return is called the tracking error, and it measures how dispersed a manager’s returns are relative to the benchmark:

$$Tracking\quad Error=\bar { \sigma } =stdev\left( { r }_{ t }^{ ex } \right) \quad \quad \quad \quad \quad \left( iii \right)$$

Tracking error constraints are imposed to ensure that a manager does not stray too far from the benchmark. The freedom of the manager increases with the size of the allowed tracking error. Academics refer to tracking error as idiosyncratic volatility when the benchmark is risk-adjusted. The ratio of the alpha to the tracking error is called the information ratio and is defined as:

$$Information\quad Ratio=IR=\frac { \alpha }{ \bar { \sigma } } \quad \quad \quad \quad \left( iv \right)$$

A manager can produce a large alpha simply by taking large amounts of risk. The information ratio divides the alpha by the risk taken, so it measures the average excess return per unit of risk.
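The quantities in equations $$\left( i \right)$$ through $$\left( iv \right)$$ can be computed in a few lines. A minimal sketch on simulated monthly data (the fund and benchmark returns below are made up for illustration, not taken from the text):

```python
import numpy as np

# Hypothetical monthly returns for a fund and its benchmark (simulated data).
np.random.seed(42)
r_bmk = 0.005 + 0.04 * np.random.randn(120)            # benchmark returns
r_fund = r_bmk + 0.002 + 0.01 * np.random.randn(120)   # fund: benchmark plus a small edge plus noise

r_ex = r_fund - r_bmk               # excess (active) returns, equation (i)
alpha = r_ex.mean()                 # alpha, equation (ii)
tracking_error = r_ex.std(ddof=1)   # tracking error, equation (iii)
ir = alpha / tracking_error         # information ratio, equation (iv)

print(f"alpha (monthly):   {alpha:.4f}")
print(f"tracking error:    {tracking_error:.4f}")
print(f"information ratio: {ir:.3f}")
```

Note that a monthly information ratio is often annualized by multiplying by $$\sqrt{12}$$, since alpha scales with time while tracking error scales with the square root of time.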

When the benchmark is the risk-free rate, $${ r }_{ t }^{ f }$$, which is known at the beginning of the period and applies from $$t-1$$ to $$t$$, a special case of equation $$\left( iv \right)$$ arises in which alpha is the average return in excess of the risk-free rate:

$$\alpha =\overline { { r }_{ t }-{ r }_{ ft } }$$

And the information ratio coincides with the Sharpe ratio:

$$Sharpe\quad Ratio=SR=\frac { \overline { { r }_{ t }-{ r }_{ ft } } }{ \sigma }$$

Where the asset’s volatility is given as $$\sigma$$.

## Benchmarks Matter

Martingale’s low-volatility strategy is based on the Russell 1000 universe of large stocks, so Jacques naturally took the Russell 1000 as the benchmark for his experiment. The active strategy run by Jacques has relatively high fees, so his volatility strategy must offer compelling results relative to the Russell 1000 to attract investors.

A benchmark can also be a combination of assets or asset classes. Relative to the Russell 1000, the volatility strategy has a high tracking error of 6.16%. A regression of the fund’s excess returns on the excess returns of the Russell 1000 estimates the beta:

$${ r }_{ t }-{ r }_{ t }^{ f }=0.0344+0.7272\left( { r }_{ t }^{ R1000 }-{ r }_{ t }^{ f } \right) +{ \varepsilon }_{ t }$$

The return of Russell 1000 is $${ r }_{ t }^{ R1000 }$$, and the residual of the regression is given as $${ \varepsilon }_{ t }$$.

The CAPM regression can also be rewritten using a benchmark portfolio consisting of the risk-free asset and 0.7272 of the Russell 1000:

$${ r }_{ t }=0.0344+{ 0.2728r }_{ t }^{ f }+0.7272{ r }_{ t }^{ R1000 }+{ \varepsilon }_{ t }$$

Where

$${ r }_{ t }^{ bmk }=0.2728{ r }_{ t }^{ f }+{ 0.7272r }_{ t }^{ R1000 }$$

Now assume a naive benchmark of the Russell 1000 only, so the beta of the low-volatility strategy is set to one. The Russell 1000 benchmark then gives:

$${ r }_{ t }=0.0150+{ r }_{ t }^{ R1000 }+{ \varepsilon }_{ t }$$

And alpha is therefore 1.5% p.a.
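The two regressions are mutually consistent only for a particular average Russell 1000 excess return, which we can back out from the document’s own numbers. A quick check, assuming (as the “p.a.” in the text suggests) that both intercepts are quoted per annum:

```python
# Consistency check between the two benchmarks (figures per annum, from the text).
alpha_capm = 0.0344    # alpha against the 0.73-beta CAPM benchmark
alpha_naive = 0.0150   # alpha against the naive Russell 1000 benchmark
beta = 0.7272

# Moving from beta = 0.7272 to beta = 1 shifts (1 - beta) of the market's
# average excess return out of the alpha and into the benchmark, so the two
# alphas jointly imply an average Russell 1000 excess return of:
premium = (alpha_capm - alpha_naive) / (1 - beta)
print(f"implied Russell 1000 excess return: {premium:.4f} p.a.")  # ~7.1% p.a.
```

The lesson is that alpha is not an absolute quantity: changing the benchmark from the 0.73-beta portfolio to the raw Russell 1000 moves the measured alpha from 3.44% to 1.5% per annum with no change in the fund’s returns.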

## Ideal Benchmarks

The following are characteristics of a sound benchmark:

1. Well defined: The benchmark should be verifiable and free of ambiguity about its contents, so that the market portfolio it represents can be clearly identified.
2. Tradeable: Alpha should be measured relative to tradable benchmarks, or else the calculated alphas will not represent implementable returns on investment strategies.
3. Replicable: Both the asset owner and the fund manager should be able to replicate the benchmark. Nonreplicable benchmarks are poor choices because it is difficult to evaluate how much value a portfolio manager has added when the asset owner can hardly achieve the benchmark itself.
4. Adjusted for risk: Most benchmarks applied in the funds management business are not risk-adjusted, but they should be.

## Creating Alpha

A portfolio manager creates alpha relative to a benchmark by making bets that deviate from that benchmark. The more successful these bets, the higher the alpha.

According to Grinold’s fundamental law of active management, the maximum attainable information ratio is approximately:

$$IR\approx IC\times \sqrt { BR }$$

Where $$IC$$ is the information coefficient, the correlation of the manager’s forecasts with the actual returns, and $$BR$$ is the breadth of the strategy, the number of independent bets taken.

Since the fundamental law offers a guideline as to how good asset managers should be at forecasting and the bets they should make, the fundamental law has been quite influential in active quantitative portfolio management.
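The law’s message is that breadth can substitute for forecasting skill. A small illustration with hypothetical managers (the numbers are made up, not from the text): a market timer with strong insight but few independent calls per year can have the same information ratio as a stock picker with a tiny edge applied across many bets.

```python
import math

def fundamental_law_ir(ic: float, br: int) -> float:
    """Grinold's approximation: IR is about IC * sqrt(BR)."""
    return ic * math.sqrt(br)

# A market timer making 4 independent calls a year with good insight,
# versus a stock picker making 400 nearly independent bets with a small edge.
print(fundamental_law_ir(ic=0.20, br=4))    # 0.4
print(fundamental_law_ir(ic=0.02, br=400))  # 0.4
```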

Since it is derived under mean-variance utility, Grinold’s fundamental law inherits all the shortcomings of mean-variance utility. It is nevertheless useful for framing the active management process.

Alpha begins with raw information, which is processed into forecasts; portfolios are then constructed optimally and efficiently to balance the return forecasts against risk.

The two most crucial limitations of the fundamental law are:

1. The assumption that the ICs are constant across the BR. In practice, as assets under management increase, the ability to generate high ICs diminishes.
2. Truly independent forecasts are hard to find. Managers’ decisions tend to be correlated, and correlated bets reduce the effective BR.

Sadly, Grinold’s fundamental law makes only scant appearances in the academic literature despite its influence in the industry.

# Factor Benchmarks

Consider the following CAPM applicable to asset, strategy, or fund $$i$$:

$$E\left( { r }_{ i } \right) -{ r }_{ f }={ \beta }_{ i }\left( E\left( { r }_{ m } \right) -{ r }_{ f } \right)$$

Where $$E\left( { r }_{ i } \right)$$, is the expected return of asset $$i$$, $${ r }_{ f }$$ is the risk-free rate, the beta of asset $$i$$ is $${ \beta }_{ i }$$, and $$E\left( { r }_{ m } \right)$$ is the expected return of the market portfolio.

Assuming that $${ \beta }_{ i }=1.3$$, then:

$$E\left( { r }_{ i } \right) =\left( 1.3E\left( { r }_{ m } \right) -0.3{ r }_{ f } \right)$$

The asset’s alpha is any expected return generated in excess of a position that is short 30% in T-bills and long 130% in the market portfolio:

$$E\left( { r }_{ i } \right) ={ \alpha }_{ i }+\left( 1.3E\left( { r }_{ m } \right) -0.3{ r }_{ f } \right)$$

In this situation, the benchmark consists of the risk-adjusted amount that is held in equities and the risk-free rate:

$${ r }^{ bmk }=-0.3{ r }_{ f }+1.3{ r }_{ m }$$

Because the benchmark in this scenario is drawn from the CAPM, $$\alpha$$ is the average return in excess of the return predicted by the CAPM.

## Factor Regressions

Factor regressions can be applied in the estimation of the risk-adjusted benchmark, or equivalently the mimicking portfolio.

## CAPM Benchmark

An investor takes monthly returns on Berkshire Hathaway over a period of time and runs the following CAPM regression:

$${ r }_{ it }-{ r }_{ ft }=\alpha +\beta \left( { r }_{ mt }-{ r }_{ ft } \right) +{ \varepsilon }_{ it }$$

Ordinary least squares assumes that the residuals $${ \varepsilon }_{ it }$$ are uncorrelated with the market factor.

The alpha is also statistically significant, with a $$t$$-statistic above two. Relative to the CAPM risk-adjusted benchmark, the excess return that Berkshire Hathaway generates is:

$${ r }^{ ex }={ r }_{ i }-{ r }^{ bmk }$$

$$={ r }_{ i }-\left( 0.49{ r }_{ f }+0.51{ r }_{ m } \right)$$

The average of the excess returns is:

$$\alpha =E\left( { r }^{ ex } \right) =0.72\%\quad per\quad month$$
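The estimation above can be sketched in code. This is a minimal illustration on simulated data (the returns below are made up, not Berkshire Hathaway’s actual returns): it recovers alpha and beta by ordinary least squares, then forms the mimicking benchmark with weight $$1-\beta$$ in the risk-free asset and $$\beta$$ in the market, mirroring the 0.49/0.51 split in the text.

```python
import numpy as np

# Estimate the CAPM regression  r_i - r_f = alpha + beta (r_m - r_f) + eps
# on simulated monthly data.
np.random.seed(0)
T = 240
mkt_ex = 0.006 + 0.045 * np.random.randn(T)                    # market excess returns
fund_ex = 0.0072 + 0.51 * mkt_ex + 0.05 * np.random.randn(T)   # fund excess returns

X = np.column_stack([np.ones(T), mkt_ex])      # regressors: constant and market factor
(alpha, beta), *_ = np.linalg.lstsq(X, fund_ex, rcond=None)

# The risk-adjusted benchmark puts beta in the market and (1 - beta) in the
# risk-free asset, so the weights sum to one.
w_rf, w_mkt = 1 - beta, beta
print(f"alpha = {alpha:.4f} per month, beta = {beta:.2f}")
print(f"benchmark = {w_rf:.2f} r_f + {w_mkt:.2f} r_m")
```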

### Size and Value-Growth Benchmarks

In 1993, Eugene Fama and Kenneth French introduced a benchmark extending the CAPM with factors capturing a size effect and a value-growth effect. These factors were labeled SMB and HML, for “Small stocks Minus Big stocks” and “High book-to-market stocks Minus Low book-to-market stocks.”

These two factors, SMB and HML, are long-short factors. They mimic portfolios made up of simultaneous $1 long and $1 short positions in different stocks:

$$SMB=1\quad in\quad small\quad caps-1\quad in\quad large\quad caps$$

$$HML=1\quad in\quad value\quad stocks-1\quad in\quad growth\quad stocks$$

This benchmark comprises positions in the $$SMB$$ and $$HML$$ factor portfolios alongside a position in the market portfolio, as in the traditional CAPM.

The following regression is run to estimate the Fama-French benchmark:

$${ r }_{ it }-{ r }_{ ft }=\alpha +\beta \left( { r }_{ mt }-{ r }_{ ft } \right) +s{ SMB }_{ t }+hHM{ L }_{ t }+{ \varepsilon }_{ it }\quad ,$$

This adds the SMB and HML factors to the standard market factor. The coefficients $$s$$ and $$h$$ give the SMB and HML factor loadings, respectively. Running a factor regression always assumes that a factor benchmark portfolio can be created.

The momentum effect can be added to the factor benchmark. Momentum is a systematic factor observed in most asset classes. A momentum factor, $$UMD$$ – constructed by going long stocks that have gone Up and short stocks that have gone Down (“Up Minus Down”) – is added to the Fama-French benchmark:

$${ r }_{ it }-{ r }_{ ft }=\alpha +\beta \left( { r }_{ mt }-{ r }_{ ft } \right) +s{ SMB }_{ t }+hHM{ L }_{ t }+u{ UMD }_{ t }+{ \varepsilon }_{ it }$$

The loading on the $$UMD$$ factor is $$u$$.

# Doing Without Risk-Free Assets

Risk-free assets are not necessarily included in benchmark portfolios.

## CalPERS

CalPERS’ benchmark regression can be written as:

$${ r }_{ it }=\alpha +{ \beta }_{ s }{ r }_{ st }+{ \beta }_{ b }{ r }_{ bt }+{ \varepsilon }_{ it }$$

Where $${ r }_{ it }$$ is the return of CalPERS, $${ r }_{ st }$$ is the S&P 500 equity market return, and $${ r }_{ bt }$$ is a bond portfolio return.

The following restriction is a requirement to obtain a benchmark portfolio:

$${ \beta }_{ s }+{ \beta }_{ b }=1$$

## Real Estate

Real estate returns are complicated by the fact that direct real estate is not tradable. We can, however, take quarterly real estate returns over a period of time and run factor benchmark regressions using the FTSE NAREIT index returns, bond returns, and S&P 500 stock returns:

$${ r }_{ it }=\alpha +{ \beta }_{ REIT }{ REIT }_{ t }+{ \beta }_{ b }{ r }_{ bt }+{ \varepsilon }_{ it } \quad ,$$

$${ r }_{ it }=\alpha +{ \beta }_{ b }{ r }_{ bt }+{ \beta }_{ s }{ r }_{ st }+{ \varepsilon }_{ it }\quad ,$$

$${ r }_{ it }=\alpha +{ \beta }_{ REIT }{ REIT }_{ t }+{ \beta }_{ b }{ r }_{ bt }+{ \beta }_{ s }{ r }_{ st }+{ \varepsilon }_{ it }\quad ,$$

Where $${ REIT }_{ t }$$ is the return to the NAREIT portfolio of traded REITs, $${ r }_{ bt }$$ is the bond return, and $${ r }_{ st }$$ is the stock return, with factor loadings $${ \beta }_{ REIT }$$, $${ \beta }_{ b }$$, and $${ \beta }_{ s }$$, respectively.

For the factor loadings to be interpreted as a factor portfolio benchmark, they must always sum to one.
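The sum-to-one restriction is easy to impose in a regression by substitution: with two factors and $${ \beta }_{ s }+{ \beta }_{ b }=1$$, substituting $${ \beta }_{ b }=1-{ \beta }_{ s }$$ turns the constrained problem into an ordinary regression of $${ r }_{ it }-{ r }_{ bt }$$ on $${ r }_{ st }-{ r }_{ bt }$$. A sketch on simulated data (the true weights 0.7/0.3 are hypothetical):

```python
import numpy as np

# Benchmark regression with the restriction beta_s + beta_b = 1, imposed by
# substituting beta_b = 1 - beta_s:  r_i - r_b = alpha + beta_s (r_s - r_b) + eps.
np.random.seed(1)
T = 200
r_s = 0.007 + 0.04 * np.random.randn(T)    # stock returns
r_b = 0.004 + 0.01 * np.random.randn(T)    # bond returns
r_i = 0.001 + 0.7 * r_s + 0.3 * r_b + 0.005 * np.random.randn(T)  # fund returns

X = np.column_stack([np.ones(T), r_s - r_b])
(alpha, beta_s), *_ = np.linalg.lstsq(X, r_i - r_b, rcond=None)
beta_b = 1 - beta_s   # the restriction holds by construction

print(f"alpha = {alpha:.4f}, beta_s = {beta_s:.2f}, beta_b = {beta_b:.2f}")
```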

## Time-Varying Factor Exposures

Style analysis, introduced by William Sharpe in 1992, is a powerful framework to handle time-varying benchmarks. Style analysis is a factor benchmark in which the factor exposures evolve through time.

The following two potential shortcomings of our analysis can be rectified by style analysis:

1. The Fama-French portfolios are not tradable; and
2. Factor exposures may change over time.

## Style Analysis with no Shorting

Style analysis tries to replicate a fund by investing passively in low-cost index funds. The collection of index-fund weights that replicates the fund is called the style weights.

The following is our benchmark factor regression for fund $$i$$:

$${ r }_{ t+1 }={ \alpha }_{ t }+{ \beta }_{ SPY,t }{ SPY }_{ t+1 }+{ \beta }_{ SPYV,t }{ SPYV }_{ t+1 }+{ \beta }_{ SPYG,t }{ SPYG }_{ t+1 }+{ \varepsilon }_{ t+1 }\quad ,$$

The following restriction is imposed:

$${ \beta }_{ SPY,t }+{ \beta }_{ SPYV,t }+{ \beta }_{ SPYG,t }=1$$

The factor weights, or loadings, must sum to one. The main idea of style analysis is that actual tradable funds are used in the factor benchmark, and the weights can change over time.
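With a no-shorting constraint added, the style weights solve a least-squares problem subject to the weights being nonnegative and summing to one. There are several ways to solve this; a minimal sketch using projected gradient descent on simulated data (the “SPY/SPYV/SPYG” proxies and the 60/40 true mix are hypothetical):

```python
import numpy as np

def project_to_simplex(w):
    """Euclidean projection onto {w : w >= 0, sum(w) = 1}."""
    u = np.sort(w)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(w)) + 1) > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(w + theta, 0)

def style_weights(fund, factors, steps=2000, lr=10.0):
    """No-shorting style analysis: minimize ||fund - factors @ w||^2 / T
    subject to w >= 0 and sum(w) = 1, via projected gradient descent."""
    T, k = factors.shape
    w = np.full(k, 1.0 / k)                    # start from equal weights
    for _ in range(steps):
        grad = 2 * factors.T @ (factors @ w - fund) / T
        w = project_to_simplex(w - lr * grad)  # gradient step, then project
    return w

# Simulated fund that is 60% "SPYV"-like and 40% "SPYG"-like.
np.random.seed(2)
T = 300
F = 0.005 + 0.04 * np.random.randn(T, 3)       # SPY, SPYV, SPYG proxy returns
fund = F @ np.array([0.0, 0.6, 0.4]) + 0.002 * np.random.randn(T)

w = style_weights(fund, F)
print("style weights:", np.round(w, 2))  # close to [0.0, 0.6, 0.4]
```

In practice rolling windows are used, so the weights $${ \beta }_{ \cdot ,t }$$ are re-estimated each period and the benchmark evolves through time.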

# Non-Linear Payoffs

Alphas and information ratios can make a manager appear to have skills which they may lack in reality. Alphas are usually calculated in a linear framework.

Many non-linear strategies exist that can masquerade as alpha. Dynamic, nonlinear strategies yield false alpha measures because buying and selling options changes the distribution of returns. Static measures like alpha, information ratios, and Sharpe ratios capture only particular components of the whole return distribution; in particular, alphas and information ratios do not account for skewness.

Non-linear payoffs can be accounted for in the following two ways:

1. Include tradable nonlinear factors: An easy way to account for short volatility strategies is to include volatility risk factors. Including other nonlinear factors in factor benchmarks is also a solution. The assumption is that asset owners can trade these nonlinear factors by themselves.
2. Examine non-tradable nonlinearities: Including nonlinear terms on the right-hand side of the factor regressions is an easy way to evaluate whether fund returns exhibit nonlinear patterns. However, if we want evaluation measures that are robust to dynamic manipulation, the following measure introduced by Goetzmann et al. should be applied: $$\frac { 1 }{ 1-\gamma } ln\left( \cfrac { 1 }{ T } { \Sigma }_{ t=1 }^{ T }{ \left( 1+{ r }_{ t }-{ r }_{ ft } \right) }^{ 1-\gamma } \right) ,$$ Where $$\gamma$$ is set to three. According to Goetzmann et al., Morningstar applies a variant of the following measure: $${ \left( \cfrac { 1 }{ T } \sum _{ t=1 }^{ T }{ \frac { 1 }{ { \left( 1+{ r }_{ t }-{ r }_{ ft } \right) }^{ 2 } } } \right) }^{ -0.5 }-1$$ Which corresponds to a constant relative risk aversion (CRRA) utility function with risk aversion $$\gamma=2$$.
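Both measures above are direct to compute from a series of excess returns. A sketch on simulated data (the return series is made up for illustration), using the Goetzmann et al. measure with $$\gamma=3$$ and the Morningstar-style variant with $$\gamma=2$$:

```python
import numpy as np

# Manipulation-robust performance measures on simulated monthly excess returns.
np.random.seed(3)
ex = 0.004 + 0.03 * np.random.randn(240)   # r_t - r_ft, simulated

gamma = 3
# Goetzmann et al. measure: (1/(1-gamma)) * ln( (1/T) sum (1 + ex)^(1-gamma) )
mppm = np.log(np.mean((1 + ex) ** (1 - gamma))) / (1 - gamma)

# Morningstar-style variant (CRRA, gamma = 2):
# ( (1/T) sum (1 + ex)^(-2) )^(-1/2) - 1
mrar = np.mean((1 + ex) ** (-2)) ** (-0.5) - 1

print(f"Goetzmann et al. (gamma=3): {mppm:.4f} per month")
print(f"Morningstar (gamma=2):      {mrar:.4f} per month")
```

Both measures are certainty-equivalent-style returns: each penalizes volatility and negative skewness, and the penalty grows with $$\gamma$$, so the $$\gamma=3$$ measure is never above the $$\gamma=2$$ variant for the same return series.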

# Low-Risk Anomaly

A low-risk anomaly combines the following three effects, where the third effect is a consequence of the first two:

1. There is a negative relation between volatility and future returns;
2. There is a negative relation between realized beta and future returns; which implies
3. The market performs poorly as compared to minimum variance portfolios.

In the risk anomaly, risk, whether measured by market beta or by volatility, is negatively related to returns.

## Volatility Anomaly

It was discovered that the returns of high-volatility stocks were abysmally low, so low that their average returns were zero. According to the CAPM and any multifactor extension, stock return volatility itself should not matter: expected returns are determined by how assets covary with factor risks.

## Lagged Volatility and Future Returns

To observe the volatility anomaly, U.S. stocks from September 1963 to December 2011 are sorted into quantile portfolios, rebalanced quarterly. Returns are constructed at the monthly frequency, and the sort is on idiosyncratic volatility measured relative to the Fama-French factors using daily data over the past quarter. Stocks are market weighted within each quantile, similar to Ang et al.

## Contemporaneous Volatility and Returns

To determine whether stocks with high volatilities also have high returns over the same period used to measure those volatilities, we form portfolios at the end of the period based on realized idiosyncratic volatilities. The realized returns are then measured over that same period. These are not, however, tradable portfolios.

## Beta Anomaly

In 1992, Fama and French wrote a major paper striking at the core of the CAPM: beta shows no ability to explain average returns. Their main results showed that, in individual stocks, beta is dominated by size and value effects.

## Lagged Beta and Future Returns

The beta anomaly is that stocks with high betas tend to have lower risk-adjusted returns. The beta anomaly is not that stocks with high betas have low returns.

Stocks with high betas usually also have high volatilities, which makes the Sharpe ratios of high-beta stocks lower than those of low-beta stocks.

The beta estimated over the previous three months, known as the pre-ranking beta, is used to rank stocks into the portfolios. The realized beta over the three months after portfolio formation is referred to as the post-ranking beta.

## Contemporaneous Beta and Returns

The CAPM does not predict that lagged betas should lead to higher returns. Rather, the CAPM predicts a contemporaneous relation between beta and expected returns: stocks with higher betas over a given period should exhibit higher average returns over that same period.

To reconcile the negative relation with past betas and the positive contemporaneous relation between beta and realized returns, we need to predict future betas, since under the CAPM it is future betas that line up with future returns.

Studies that estimate betas using other information tend to find positive risk-return relations. The real mystery in the low-beta anomaly is thus not so much that beta fails to work, but rather our difficulty in predicting future betas, particularly with past betas.

## Risk Anomaly Factors

A straightforward extension of these portfolio results is to construct a benchmark factor for the risk anomaly.

## Betting against Beta

In 2010, Frazzini and Pedersen created a betting-against-beta (BAB) factor that goes long low-beta stocks and short high-beta stocks.

According to Frazzini and Pedersen:

$${ BAB }_{ t+1 }=\frac { { r }_{ L,t+1 }-{ r }_{ f } }{ { \beta }_{ L,t } } -\frac { { r }_{ H,t+1 }-{ r }_{ f } }{ { \beta }_{ H,t } }$$

The return of the low-beta portfolio is given by $${ r }_{ L,t+1 }$$, and the return of the high-beta portfolio is given by $${ r }_{ H,t+1 }$$. Their betas at the start of the period are $${ \beta }_{ L,t }$$ and $${ \beta }_{ H,t }$$, respectively.

Frazzini and Pedersen use just two beta portfolios to construct their $$BAB$$ factor, so one is forced to use very small numbers of portfolios in $$BAB$$ factors (at most two or three). One advantage of the volatility portfolios is that they can be traded directly, dispensing with the risk-free asset, because there are pronounced differences in expected returns, not only volatilities, across the volatility quantiles.

### Volatility Factor

The following is a created volatility factor, $$VOL$$, that is similar to Frazzini and Pedersen’s $$BAB$$:

$${ VOL }_{ t+1 }={ \sigma }_{ target }\times \left( \frac { { r }_{ L,t+1 }-{ r }_{ f } }{ { \sigma }_{ L,t } } -\frac { { r }_{ H,t+1 }-{ r }_{ f } }{ { \sigma }_{ H,t } } \right)$$

Where $${ \sigma }_{ L,t }$$ and $${ \sigma }_{ H,t }$$ are the pre-ranking volatilities of the low- and high-volatility portfolios, respectively. The $$VOL$$ factor scales to a target volatility, while the $$BAB$$ factor scales to unit betas.
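The two constructions can be compared side by side for a single period. A sketch with hypothetical portfolio numbers (the returns, betas, volatilities, and volatility target below are made up for illustration):

```python
# One-period BAB and VOL factor returns from hypothetical portfolio data.
r_f = 0.002                      # risk-free rate
r_L, r_H = 0.010, 0.014          # low- and high-risk portfolio returns
beta_L, beta_H = 0.7, 1.4        # pre-ranking betas
sigma_L, sigma_H = 0.10, 0.30    # pre-ranking volatilities
sigma_target = 0.15              # VOL's target volatility

# BAB: lever the low-beta leg up and the high-beta leg down to unit beta.
bab = (r_L - r_f) / beta_L - (r_H - r_f) / beta_H

# VOL: scale each leg by its pre-ranking volatility, then to the target.
vol = sigma_target * ((r_L - r_f) / sigma_L - (r_H - r_f) / sigma_H)

print(f"BAB return: {bab:.4f}")
print(f"VOL return: {vol:.4f}")
```

Note that in both factors the high-risk leg earns the larger raw return here, yet both factor returns come out positive, because each leg is scaled by its risk before the legs are differenced.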

### Betting-against-Beta and Volatility Factors

A comparison between the $$BAB$$ and $$VOL$$ factors from 1963 to 2011 indicates that the $$VOL$$ factor has higher cumulative returns than $$BAB$$, and the Sharpe ratio of the volatility factor is slightly higher.

However, to a large extent, the two factors are comparable. Surprisingly, the correlation between the beta and volatility factors is very low, implying that the volatility and beta anomalies are distinct.

# Explanations

## Data Mining

The results show some sensitivity to different portfolio weighting schemes and to the effects of illiquidity. For the most part, however, the low-risk anomaly is fairly robust. The idiosyncratic volatility phenomenon is common and is not a result of microstructure or liquidity biases.

The best argument against data mining is that the low-risk effect is witnessed in many other contexts. As shown by Ang et al., the effect appears during recessions and expansions, in periods of stability and volatility, and in international stock markets.

As shown by Frazzini and Pedersen, high Sharpe ratios are witnessed in low-beta portfolios in U.S. stocks, international stocks, Treasury bonds, corporate bonds, Forex, and credit derivatives markets.

## Leverage Constraints

Many investors wish to take on more risk but cannot use leverage; they are leverage constrained. Since they cannot borrow, they instead hold stocks with built-in leverage, that is, high-beta stocks.

Investors bid up the prices of high-beta stocks until the shares are overpriced and deliver low returns. In CAPM parlance, the voracious demand of leverage-constrained investors for high-beta stocks flattens the security market line.

Leverage-constrained institutions should be attracted to high-risk stocks, but in reality institutional investors tend to underweight them. Stocks with high volatility are predominantly held, and their trading often done, by retail investors.

## Agency Problems

Most institutional managers can’t or won’t play the risk anomaly. In particular, the use of market-weighted benchmarks itself may cause the low-volatility anomaly.

## Preferences

Asset owners may have a preference for certain stocks precisely because of their high volatility and high beta, bidding up their prices until their returns are lowered.

On the other hand, these investors may shun safe stocks with low volatilities and low betas, causing the prices of these stocks to fall and their returns to rise.

# Practice Questions

1) Barbara Flemings is a retail investor who expects her stock portfolio to return 11% in the following year. If the portfolio carries a standard deviation of 0.07, and the returns on risk-free Treasury notes are 3%, then which of the following is closest to the Sharpe ratio of Flemings’ portfolio?

1. 1.14
2. 0.08
3. 0.0056
4. 0.98

Recall that the Sharpe ratio is given as:

$$Sharpe\quad Ratio=SR=\frac { \overline { { r }_{ t }-{ r }_{ ft } } }{ \sigma }$$

$${ r }_{ t }=0.11$$,

$${ r }_{ ft } =0.03$$,

$$\sigma =0.07$$

Therefore:

$$Sharpe\quad Ratio=SR=\frac { 0.11-0.03 }{ 0.07 }$$

$$SR=1.14$$

This means that for every unit of risk, Flemings earns 1.14 “units” of excess return.