Implementation is the efficient translation of research into portfolios; it includes both portfolio construction and trading. In this chapter, we take a manager's investment constraints as given and construct the best possible portfolio subject to those limitations.

Several inputs are necessary for portfolio construction; they include the current portfolio, covariance estimates, alphas, transaction cost estimates, and active risk aversion. Only the current portfolio can be measured with near certainty. All others are subject to error.

Handling perfect data is the easier problem. The main focus of this chapter is how to handle less-than-perfect data; in fact, most portfolio construction procedures are indirect methods of coping with noisy data.

The following points emerge in the course of this chapter:

- Implementation schemes are, in part, safeguards against poor research;
- Alphas can be adjusted to align with the manager's desired level of risk control and anticipated sources of added value;
- Portfolio construction techniques include screening, stratified sampling, linear programming, and quadratic programming;
- For most active institutional portfolio managers, using alternative risk measures in portfolio construction greatly increases the effort without greatly affecting the results; and
- Managers running separate accounts for multiple clients can control dispersion but not eliminate it.

## Alphas and Portfolio Construction

With the correct alphas, active management would be an easy task. In practice, a manager usually adds his or her own restrictions to the process; for example, the manager may wish to avoid any position based on a forecast of the benchmark portfolio's performance. Managers usually apply such restrictions to make the portfolio construction process more robust.

The same final portfolio can also be reached by simply adjusting the inputs. Any sophisticated portfolio construction procedure that leads to active holdings \({ h }_{ PA }^{ \ast }\), active risk \({ \psi }_{ P }^{ \ast }\), and an ex-ante information ratio IR can be replaced by a direct unconstrained mean/variance optimization using a set of modified alphas and the appropriate level of active risk aversion. The modified alphas are:

$$ { \alpha }^{ \prime }=\left( \frac { IR }{ { \psi }_{ P }^{ \ast } } \right) \cdot V\cdot { h }_{ PA }^{ \ast } $$

And the following equation is the appropriate active risk aversion:

$$ { \lambda }_{ A }^{ \prime }=\frac { IR }{ 2\cdot { \psi }_{ P }^{ \ast } } $$

Regardless of its sophistication, any portfolio construction process can therefore be replaced by a process that first refines the alphas and then applies a simple unconstrained mean/variance optimization to determine the active positions. This preprocessing of the alphas is discussed in the next section.

# Alpha Analysis

The implementation procedure can be greatly simplified by ensuring that our alphas are consistent with our goals and beliefs. The following sections outline procedures for refining alphas that simplify implementation and explicitly link the refinements to the desired properties of the resulting portfolios.

## Scale the Alphas

Alphas have a natural structure that includes a natural scale:

$$ \alpha =volatility\times IC\times Score $$

where \(volatility\) is the stock's residual risk, \(IC\) is the information coefficient, and \(Score\) is standardized to have a mean of zero and a standard deviation of 1 across the set of alphas.

An IC of 0.05 and a typical residual volatility of 30% lead to an alpha scale of 1.5%. The mean alpha would be zero, roughly two-thirds of the stocks would have alphas between -1.5% and +1.5%, and roughly 5% of the stocks would have alphas larger than +3.0% or smaller than -3.0%.

The manager’s information coefficient will determine the scale of the alphas.
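This scaling can be sketched numerically; the IC and volatility figures follow the text, and the simulated scores are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_stocks = 500

# Standardize scores to mean 0, standard deviation 1 across the set.
score = rng.standard_normal(n_stocks)
score = (score - score.mean()) / score.std()

ic = 0.05          # information coefficient
volatility = 0.30  # typical residual volatility

alpha = volatility * ic * score

# The scale (standard deviation) of the alphas is volatility * IC = 1.5%.
print(round(float(alpha.std()), 4))   # 0.015
```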

## Trim Alpha Outliers

The second refinement of the alphas is trimming extreme values. Very large positive or negative alphas can have undue influence. Any alpha greater in magnitude than three times the scale of the alphas should be closely examined.

A second, more extreme approach to trimming forces the alphas into a normal distribution with a benchmark alpha of zero and the required scale factor. This approach is extreme because it uses only the ranking information in the alphas and ignores their sizes. Benchmark neutrality and scaling must be rechecked after such a transformation.
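The milder form of trimming can be sketched as clipping any alpha beyond three times the scale; the example alphas below are hypothetical:

```python
import numpy as np

def trim_alphas(alpha, scale):
    """Clip any alpha whose magnitude exceeds three times the alpha scale."""
    limit = 3.0 * scale
    return np.clip(alpha, -limit, limit)

alpha = np.array([0.010, -0.055, 0.032, 0.002, 0.080])
trimmed = trim_alphas(alpha, scale=0.015)   # band is +/- 4.5%
# -0.055 and 0.080 are pulled back to -0.045 and 0.045;
# the alphas inside the band are untouched.
```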

## Neutralization

Beyond scaling and trimming, biases and undesirable bets can be removed from our alphas. This process is called neutralization, and it has implications in terms of both alphas and portfolios.

Benchmark neutralization implies that the alpha of the benchmark is zero. If our initial alphas imply a nonzero alpha for the benchmark, the neutralization process re-centers the alphas to remove it.

Even benchmark neutrality can be achieved in more than one way. Seen from the portfolio perspective, many different portfolios could be chosen to hedge out any active beta.

As a general principle, we should decide a priori how our alphas will be neutralized. The choices include benchmark, cash, industry, and factor neutralization. This approach works better than simply trying all the possibilities and choosing the best performer.

## Benchmark and Cash-Neutral Alphas

The first and simplest neutralization is to make the alphas benchmark-neutral. Although the benchmark may experience exceptional returns, the benchmark portfolio, by definition, has an alpha of zero. Setting the benchmark alpha to zero makes the alphas benchmark-neutral and avoids benchmark timing.

Making the alphas cash-neutral is also worth considering: the alphas will then not lead to any active cash position. It is also possible to make the alphas both cash- and benchmark-neutral.
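Benchmark neutralization can be sketched as re-centering the alphas so their benchmark-weighted average is zero. This is a simplified version that assumes unit betas and benchmark weights summing to 1; with a risk model, the beta-weighted benchmark alpha would be subtracted instead:

```python
import numpy as np

def benchmark_neutralize(alpha, bench_weight):
    """Re-center alphas so the benchmark's weighted-average alpha is zero.

    Assumes unit betas and benchmark weights that sum to 1.
    """
    return alpha - bench_weight @ alpha

alpha   = np.array([0.02, -0.01, 0.03, 0.00])   # hypothetical alphas
w_bench = np.array([0.40, 0.30, 0.20, 0.10])    # hypothetical benchmark
neutral = benchmark_neutralize(alpha, w_bench)

# The benchmark alpha implied by the neutralized alphas is now zero.
print(round(float(w_bench @ neutral), 10))   # 0.0
```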

## Risk-Factor-Neutral Alphas

The multiple-factor approach to portfolio analysis separates return along several dimensions. The manager can identify each of these dimensions either as a source of risk or as a source of value added.

The manager should then neutralize the alphas against the risk factors. The neutralized alphas will include only information on the factors the manager intends to forecast, along with specific asset information. After neutralization, the alphas of the risk factors will be zero.

The alphas can also be modified to isolate the part of the alpha that does not influence the common-factor positions, while still achieving the desired active common-factor positions.

## Transaction Costs

Accurate estimation of transaction costs is as important as accurate forecasts of returns. Beyond complicating the portfolio construction problem, transaction costs present challenges of their own.

When only alphas and active risk enter the portfolio construction process, any difficulty in setting the scale of the alphas can be offset by increasing or decreasing the active risk aversion: finding the correct trade-off between alpha and active risk is a one-dimensional problem, and the right balance can be found by turning a single knob. Transaction costs make this a two-dimensional problem.

The objective in portfolio construction is to maximize risk-adjusted annual active returns. Rebalancing incurs transaction costs at the time of the trade, so a rule for allocating those costs over the year is needed to compare transaction costs incurred now with the alphas and active risk expected over the next year.

The transaction costs must be amortized so that they can be compared to the annual rate of gain from the alpha and the annual rate of loss from the active risk. The amortization rate depends on the anticipated holding period.
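The amortization rule can be sketched as dividing a round-trip cost by the anticipated holding period; the figures below are purely illustrative:

```python
def amortized_cost(round_trip_cost, holding_period_years):
    """Annualize a transaction cost over the anticipated holding period."""
    return round_trip_cost / holding_period_years

# A 1% round-trip cost held for six months costs 2% per year;
# held for two years, it costs only 0.5% per year.
print(amortized_cost(0.01, 0.5))   # 0.02
print(amortized_cost(0.01, 2.0))   # 0.005
```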

## Practical Details

The following equation establishes an optimality relationship between the information ratio, the optimal active risk, and the risk aversion:

$$ { \lambda }_{ A }=\frac { IR }{ 2\times { \psi }_{ P } } $$

The above equation can be applied to back out an appropriate level of risk aversion. Care should be taken to verify whether the optimizer expects percentages or decimals. A second practical matter concerns aversion to specific risk as opposed to common-factor risk. Several commercial optimizers apply this risk decomposition to allow differing aversions to the different risk sources:

$$ U={ \alpha }_{ P }-\left( { \lambda }_{ A,CF }\times { \psi }_{ P,CF }^{ 2 }+{ \lambda }_{ A,SP }\times { \psi }_{ P,SP }^{ 2 } \right) $$

There are two reasons to consider implementing a higher risk aversion to specific risk:

- A higher risk aversion to specific risk reduces bets on any one stock, because specific risk arises from bets on specific assets; and
- For managers running multiple portfolios, an aversion to specific risk can reduce dispersion by pushing all the portfolios toward holding the same names.
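Backing out the risk aversion from the optimality relationship above is a one-line calculation; note the percent-versus-decimal caution:

```python
def implied_risk_aversion(information_ratio, active_risk):
    """Back out lambda_A = IR / (2 * psi_P)."""
    return information_ratio / (2.0 * active_risk)

# An IR of 0.5 with 5% active risk implies lambda_A = 5 when risk is a
# decimal, but 0.05 when risk is expressed in percent -- check which
# convention the optimizer expects.
print(implied_risk_aversion(0.5, 0.05))   # 5.0
print(implied_risk_aversion(0.5, 5.0))    # 0.05
```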

If \({ N }_{ 1 }\) represents the set of stocks with forecasts and \({ N }_{ 0 }\) the set of stocks without forecasts, then the value-weighted fraction of stocks with forecasts is:

$$ H\left\{ { N }_{ 1 } \right\} =\sum _{ n\in { N }_{ 1 } }{ { h }_{ B,n } } $$

The average alpha for the group \({ N }_{ 1 }\) is given by:

$$ \alpha \left\{ { N }_{ 1 } \right\} =\frac { \sum _{ n\in { N }_{ 1 } }{ { h }_{ B,n }\times { \alpha }_{ n } } }{ H\left\{ { N }_{ 1 } \right\} } $$

To round out the set of forecasts, we set \({ \alpha }_{ n }^{ \ast }={ \alpha }_{ n }-\alpha \left\{ { N }_{ 1 } \right\}\) for stocks in \({ N }_{ 1 }\), and \({ \alpha }_{ n }^{ \ast }=0\) for stocks in \({ N }_{ 0 }\).
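This renormalization can be sketched as follows; the weights and alphas are hypothetical, and NaN marks a stock without a forecast:

```python
import numpy as np

# Benchmark weights and raw alphas; NaN marks a stock without a forecast.
h_bench = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
alpha   = np.array([0.02, -0.01, 0.03, np.nan, np.nan])

in_N1 = ~np.isnan(alpha)                                 # stocks with forecasts
H_N1 = h_bench[in_N1].sum()                              # value-weighted fraction
alpha_N1 = (h_bench[in_N1] * alpha[in_N1]).sum() / H_N1  # average alpha on N1

# alpha* = alpha - alpha{N1} for stocks in N1, and 0 for stocks in N0.
alpha_star = np.where(in_N1, alpha - alpha_N1, 0.0)

# The weighted average of the adjusted alphas is now zero
# (up to floating-point error).
print(h_bench @ alpha_star)
```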

# Portfolio Revisions

A portfolio should be revised whenever new information arrives. Frequent revision poses no challenge to a manager who can correctly trade off expected active return, active risk, and transaction costs.

Suppose a manager underestimates transaction costs, makes frequent large changes in alpha estimates, and revises the portfolio daily. The portfolio will be churned, incurring higher-than-expected transaction costs with lower-than-expected alphas, so the manager should revise the portfolio less frequently.

Even with accurate transaction cost estimates, alphas contain larger amounts of noise as the horizon of the expected alpha shrinks: shorter-horizon returns are noisier. Rebalancing over very short horizons would mean frequently reacting to noise rather than signal.

Because the horizon matters so much, this trade-off between alpha, risk, and costs is difficult to analyze. The effect of new information can be captured, and the decision of whether to trade made, by comparing the marginal contribution to value added for stock \(n\), \({ MCVA }_{ n }\), to the transaction costs.

A stock's marginal contribution to active risk, \(MCA{ R }_{ n }\), measures the rate of change of active risk as more of stock \(n\) is added; it is proportional to the loss added by the change in the level of active risk. The marginal contribution to value added for stock \(n\) is determined by its alpha and its marginal contribution to active risk:

$$ MCV{ A }_{ n }={ \alpha }_{ n }-2\times { \lambda }_{ A }\times \psi \times MCA{ R }_{ n } \quad \quad\quad\quad \left( I \right) $$

Suppose the purchase cost and the sale cost for stock \(n\) are \({ PC }_{ n }=0.50\% \) and \({ SC }_{ n }=0.75\%\), respectively. If the current portfolio is optimal, the marginal contribution to value added for stock \(n\) must be less than the purchase cost; otherwise, buying more of the stock would add value.

If, instead, \(MCV{ A }_{ n }\) were 0.8%, purchasing stock \(n\) would yield a net benefit of \(0.8\% - 0.5\% = 0.3\%\). This observation puts a band around the alpha of each stock within which the portfolio remains optimal; only an increase in alpha beyond the band triggers further purchases. Before the arrival of new information, the situation is:

$$ -{ SC }_{ n }\le MCV{ A }_{ n }\le { PC }_{ n }\quad \quad \quad \quad \quad \left( II \right) $$

Or, using equation \(\left( I \right)\):

$$ 2\times { \lambda }_{ A }\times \psi \times MCA{ R }_{ n }-{ SC }_{ n }\le { \alpha }_{ n }\le { PC }_{ n }+2\times { \lambda }_{ A }\times \psi \times MCA{ R }_{ n }\quad \quad \quad \left( III \right) $$
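Equations \(\left( I \right)\) through \(\left( III \right)\) amount to a simple no-trade band, which can be sketched as follows (the parameter values are purely illustrative):

```python
def trade_decision(alpha, mcar, lambda_a, psi, purchase_cost, sale_cost):
    """Apply the no-trade band of equations (I)-(III) to a single stock."""
    mcva = alpha - 2.0 * lambda_a * psi * mcar   # equation (I)
    if mcva > purchase_cost:
        return "buy"
    if mcva < -sale_cost:
        return "sell"
    return "hold"

# Hypothetical inputs: MCVA works out to 0.8%, above the 0.5% purchase
# cost, so buying yields a 0.3% net benefit.
print(trade_decision(alpha=0.012, mcar=0.04, lambda_a=5.0, psi=0.01,
                     purchase_cost=0.005, sale_cost=0.0075))   # buy
```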

One approach to the dynamic problem is information horizon analysis. In the dynamic case of trading a single position, trading rules like equation \(\left( III \right)\) can be applied using data characterized by an information horizon.

# Techniques for Portfolio Construction

There are as many techniques for portfolio construction as there are portfolio managers, with each manager adding a special twist. The following four generic classes of procedures cover the vast majority of institutional portfolio management applications:

- Screens
- Stratification
- Linear programming
- Quadratic programming

Since we want high alpha, low active risk, and low transaction costs, our figure of merit is value added minus transaction costs:

$$ { \alpha }_{ P }-{ \lambda }_{ A }\times { \psi }_{ P }^{ 2 }-TC $$

## Screens

The screen recipe for constructing a portfolio from scratch is given as follows:

- Rank the stocks by alpha;
- Choose the first fifty stocks; and
- Equal-weight the chosen stocks.
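The recipe can be sketched as follows; the alphas are hypothetical, and \(n\) is reduced to 3 for the example:

```python
import pandas as pd

def screen_portfolio(alphas: pd.Series, n: int = 50) -> pd.Series:
    """Rank by alpha, keep the top n stocks, and equal-weight them."""
    chosen = alphas.sort_values(ascending=False).head(n)
    return pd.Series(1.0 / n, index=chosen.index)

# Hypothetical alphas for five stocks.
alphas = pd.Series({"A": 0.02, "B": -0.01, "C": 0.03, "D": 0.005, "E": 0.01})
weights = screen_portfolio(alphas, n=3)   # C, A, E at one-third each
```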

Screens are also applicable in rebalancing. Screens have several attractive properties: they are simple, easily understood, and easily computerized.

Screens enhance alphas by concentrating the portfolio in high-alpha stocks. Transaction costs are limited by controlling turnover through judicious choice of the buy, sell, and hold lists.

Screens also have several shortcomings:

- They ignore all information in the alphas apart from the rankings;
- They offer no protection against biases in the alphas; and
- Entire categories of stocks can be excluded; for example, no utility stocks will be included if utility stocks all have low alphas.

## Stratification

Stratification can be termed glorified screening. By breaking the population into distinct subpopulations and ensuring that the sample represents each one, stratification guards against sample bias.

The key to stratification is splitting the list of followed stocks into mutually exclusive categories. Risk control is obtained by ensuring that the portfolio has a representative holding in each category.

Within each category, the screening exercise is mimicked: stocks are ranked by alpha and placed into buy, hold, and sell groups in a way that keeps turnover reasonable.

The stocks are then weighted so that the portfolio's weight in each category matches the benchmark's weight in that category. Through stratification, the portfolio matches the benchmark along these important dimensions.
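A minimal sketch of stratified construction, assuming equal weighting within each category; the stocks, categories, and weights below are hypothetical:

```python
import pandas as pd

def stratified_portfolio(alphas, categories, bench_weights, j=2):
    """Keep the top-j alphas per category; give each category its
    benchmark weight, split equally among the chosen stocks."""
    df = pd.DataFrame({"alpha": alphas, "cat": categories, "w_b": bench_weights})
    weights = {}
    for _, grp in df.groupby("cat"):
        chosen = grp.nlargest(j, "alpha")
        for stock in chosen.index:
            weights[stock] = grp["w_b"].sum() / len(chosen)
    return pd.Series(weights)

alphas = pd.Series({"A": 0.02, "B": 0.01, "C": -0.01, "D": 0.03, "E": 0.00})
cats   = pd.Series({"A": "tech", "B": "tech", "C": "tech", "D": "util", "E": "util"})
w_b    = pd.Series({"A": 0.25, "B": 0.20, "C": 0.15, "D": 0.25, "E": 0.15})

weights = stratified_portfolio(alphas, cats, w_b, j=2)
# tech (60% of the benchmark) -> A, B at 30% each;
# util (40%) -> D, E at 20% each.
```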

Stratification retains some of the shortcomings of a screen. It ignores some information; for example, it never considers slightly over-weighting one category and under-weighting another.

## Linear Programming

A linear program (LP) is space-age stratification. In the linear programming approach, stocks are characterized along dimensions of risk, and the LP attempts to construct portfolios that are reasonably close to the benchmark portfolio along all of the risk control dimensions.

The method sets up a linear program with explicit transaction costs, a limit on turnover, and upper and lower position limits on each stock. The LP's objective is to maximize the portfolio's alpha minus transaction costs while remaining close to the benchmark portfolio in the risk control dimensions.

All the information about alpha is taken into account by the LP, and the risks are controlled by keeping the portfolio’s characteristics close to the benchmark characteristics.
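A minimal sketch of such an LP, using `scipy.optimize.linprog` with one hypothetical risk-control dimension (industry exposure held within a tolerance of the benchmark); transaction costs and turnover limits are omitted for brevity:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: four stocks with alphas, and one risk-control
# dimension -- membership in an industry whose portfolio exposure must
# stay within a tolerance of the benchmark's 50% weight.
alpha = np.array([0.02, 0.01, -0.01, 0.03])
industry = np.array([1.0, 1.0, 0.0, 0.0])   # stocks 0 and 1 are in it
bench_exposure, tol = 0.50, 0.05

res = linprog(
    c=-alpha,                                # linprog minimizes, so negate
    A_ub=np.vstack([industry, -industry]),   # exposure upper/lower bounds
    b_ub=[bench_exposure + tol, -(bench_exposure - tol)],
    A_eq=[np.ones(4)],                       # fully invested
    b_eq=[1.0],
    bounds=[(0.0, 0.40)] * 4,                # position limits per stock
)
print(np.round(res.x, 3))   # optimal weights
```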

## Quadratic Programming (QP)

QP is the ultimate method of portfolio construction. The quadratic program explicitly considers alpha, risk, and transaction costs, and since the linear program is a special case of the quadratic program, all the constraints and limitations of a linear program can be included. However, the quadratic program requires many more inputs than the other portfolio construction techniques, and more inputs mean more opportunity for noise.

Suppose we consider a simple cash-versus-market trade-off, where \(\zeta \) is the actual volatility of the market and \(\sigma\) is the perceived volatility. If \({ VA }^{ \ast }\) is the optimal value added obtained with the correct risk estimate \(\zeta \), then the loss from using the estimate \(\sigma\) is:

$$ Loss={ VA }^{ \ast }{ \left[ 1-{ \left( \frac { \zeta }{ \sigma } \right) }^{ 2 } \right] }^{ 2 } $$
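The loss formula can be evaluated directly; the volatilities below are illustrative:

```python
def loss_from_risk_error(va_star, sigma_true, sigma_perceived):
    """Value-added loss from optimizing with a mis-estimated volatility."""
    return va_star * (1.0 - (sigma_true / sigma_perceived) ** 2) ** 2

# Perceiving 25% volatility when the true figure is 20% forfeits about
# 13% of the optimal value added; a small 1-point error costs very little.
print(round(loss_from_risk_error(1.0, 0.20, 0.25), 4))   # 0.1296
print(round(loss_from_risk_error(1.0, 0.20, 0.21), 4))   # 0.0086
```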

# Tests of Portfolio Construction Methods

The effectiveness of these portfolio construction procedures can be tested by placing them on an equal footing and judging the performance of their outputs. Identical alphas are input to the following four procedures, and transaction costs are ignored:

- Screen i: take the \(N\) stocks with the highest alphas and equal-weight them. For low, medium, and high risk aversion, use \(N=50\), \(100\), and \(150\), respectively.
- Screen ii: take the \(N\) stocks with the highest alphas and capitalization-weight them.
- Strat: take the \(J\) stocks with the highest alphas in each of the 55 BARRA industry categories.
- QP: choose the portfolios that maximize value added, assuming low, medium, and high risk aversion parameters.

## Alternatives to Mean/Variance Optimization

Semivariance, downside risk, and shortfall probability are alternatives to the standard deviation as a risk measure. The utility function requires forecasts of future risk. For asset selection, mean/variance analysis can rely on sophisticated modeling techniques to forecast risk accurately.

Forecasts of the alternative risk measures, by contrast, rely on historical returns-based analysis. Higher moments of asset and asset-class return distributions exhibit very little predictability. Return kurtosis is predictable only in the sense that most return distributions exhibit positive kurtosis most of the time.

The empirical result is that most alternative risk measures reduce to a standard deviation forecast based only on history.

Another approach is first to adjust the returns-based analysis to the institutional context, and then compare it to mean/variance analysis under the assumptions that the benchmark holds no options and all options are fairly priced.

Under those assumptions, the portfolios constructed using returns-based analysis will be very close to mean/variance portfolios, although their construction requires much more effort.

Some active institutional investors do buy options typically to evade restrictions on leverage or short selling, or due to illiquidity concerns.

## Dispersion

Every manager who runs separate accounts for multiple clients faces dispersion. Each account sees the same alphas, the same benchmark, and the same investment process, yet the portfolio returns are not identical.

Dispersion is defined as the difference between the maximum and minimum returns across these separate account portfolios. If the holdings in each account were identical, dispersion would disappear. Dispersion measures how an individual client's portfolio can differ from the manager's reported composite returns.

Dispersion can be enormous in practice, and it can be classified by its sources. The first type is client-driven dispersion: portfolios may differ because of the different constraints imposed by individual clients.

Managers can control the other forms of dispersion, which often arise from lack of attention. However, the cost of holding exactly the same assets in every account often exceeds any benefit from reducing dispersion.

## Managing Dispersion

In general, convergence depends on the type of alphas in the strategy, the transaction costs, and possibly the portfolio construction methodology. Even if alphas and risk are absolutely constant over time, dispersion need never disappear: transaction costs can leave each account inside its no-trade region indefinitely.

The following relationship shows that the remaining tracking error is bounded on the basis of transaction costs and the risk aversion of the manager:

$$ { \psi }^{ 2 }\le \frac { TC }{ 2\times { \lambda }_{ A } } $$

\(TC\) measures the cost of trading from the initial portfolio to the zero-transaction-cost optimal portfolio, say portfolio \(Q\); the tracking error and risk aversion are measured relative to \(Q\).

Because tracking error is bounded, dispersion is bounded as well. Dispersion is proportional to tracking error, with a constant of proportionality that depends on the number of portfolios managed:

$$ E\left\{ { r }_{ PA,max }-{ r }_{ PA,min } \right\} =2\times { \Phi }^{ -1 }\left\{ { \left( \frac { 1 }{ 2 } \right) }^{ \frac { 1 }{ N } } \right\} \times \psi $$

The constant of proportionality involves \({ \Phi }^{ -1 }\), the inverse of the cumulative normal distribution function, and \(\psi\) is the tracking error of each portfolio relative to the composite.
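The expected dispersion formula can be evaluated with the standard library's inverse normal CDF; the number of accounts and tracking error below are illustrative:

```python
from statistics import NormalDist

def expected_dispersion(n_portfolios, tracking_error):
    """Expected max-minus-min active return across N separate accounts."""
    z = NormalDist().inv_cdf(0.5 ** (1.0 / n_portfolios))
    return 2.0 * z * tracking_error

# Illustrative inputs: 20 accounts, each with 1% tracking error relative
# to the composite, give an expected dispersion of roughly 3.6%.
print(round(expected_dispersion(20, 0.01), 3))   # 0.036
```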