Operational Risk

After completing this reading, you should be able to:

  • Describe the different categories of operational risk and explain how each type of risk can arise.
  • Compare the basic indicator approach, the standardized approach and the advanced measurement approach for calculating operational risk regulatory capital.
  • Describe the standardized measurement approach and explain the reasons for its introduction by the Basel Committee.
  • Explain how a loss distribution is derived from an appropriate loss frequency distribution and loss severity distribution using Monte Carlo simulations.
  • Describe the common data issues that can introduce inaccuracies and biases in the estimation of loss frequency and severity distributions.
  • Describe how to use scenario analysis in instances when data is scarce.
  • Describe how to identify causal relationships and how to use Risk and Control Self-Assessment (RCSA) and Key Risk Indicators (KRIs) to measure and manage operational risks.
  • Describe the allocation of operational risk capital to business units.
  • Explain how to use the power law to measure operational risk.
  • Explain the risks of moral hazard and adverse selection when using insurance to mitigate operational risks.

Comparing the Three Approaches for Calculating Regulatory Capital

According to the Basel Committee, operational risk is “the risk of direct and indirect loss resulting from inadequate or failed internal processes, people, and systems or from external events.”

Operational risk emanates from internal functions or processes, systems, infrastructural flaws, human factors, and outside events. It includes legal risk but leaves out reputational and strategic risks in part because they can be difficult to measure quantitatively.

The Basel Committee recommends three approaches that firms can adopt to build a capital buffer against operational risk losses. These are:

  1. Basic indicator approach
  2. Standardized approach
  3. Advanced measurement approach

Under the basic indicator approach, the amount of capital required to protect against operational risk losses is set at 15% of the bank’s average annual gross income over the previous three years. Gross income comprises net interest income and noninterest income.
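A minimal sketch of this calculation, using hypothetical gross income figures (in $ millions):

```python
# Basic indicator approach: capital = 15% of average annual gross income
# over the previous three years (income figures below are hypothetical).

ALPHA = 0.15  # multiplier set by the Basel Committee

def basic_indicator_capital(gross_incomes):
    """Capital charge from the last three years' gross income."""
    return ALPHA * sum(gross_incomes) / len(gross_incomes)

print(basic_indicator_capital([90, 100, 110]))  # prints 15.0
```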

To determine the total capital required under the standardized approach, a bank’s activities are classified into eight distinct business lines, each with its own beta factor. The average annual gross income for each business line is multiplied by the line’s beta factor, and the resulting capital charges for all eight business lines are summed.

Below are the eight business lines and their beta factors:

Business line              Beta factor
Corporate finance              18%
Retail banking                 12%
Trading and sales              18%
Commercial banking             15%
Agency services                15%
Retail brokerage               12%
Asset management               12%
Payment and settlement         18%
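As an illustrative sketch, the standardized-approach charge can be computed by weighting each line’s average gross income by its beta factor (the income figures below are hypothetical, in $ millions):

```python
# Standardized approach: multiply each business line's average gross
# income by its beta factor, then sum across lines.

BETAS = {
    "Corporate finance": 0.18,
    "Retail banking": 0.12,
    "Trading and sales": 0.18,
    "Commercial banking": 0.15,
    "Agency services": 0.15,
    "Retail brokerage": 0.12,
    "Asset management": 0.12,
    "Payment and settlement": 0.18,
}

def standardized_capital(avg_gross_income):
    """avg_gross_income maps business line -> 3-year average gross income."""
    return sum(BETAS[line] * income for line, income in avg_gross_income.items())

# Two-line example: 0.18 * 200 + 0.12 * 300 = 36 + 36 = 72
print(standardized_capital({"Corporate finance": 200, "Retail banking": 300}))
```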

To use the standardized approach, a bank has to satisfy several requirements. The bank must:

  1. Have an operational risk management function tasked with identification, assessment, monitoring, and control of operational risk.
  2. Consistently keep records of losses incurred in each business line.
  3. Regularly report operational risk losses incurred in all business lines.
  4. Install an operational risk management system that’s well documented.
  5. Regularly subject its operational risk management processes to independent reviews by both internal and external auditors.

Under the advanced measurement approach – often abbreviated as AMA – a bank can hold less operational risk capital, provided that it invests in risk assessment and management technologies.

To use the AMA method, a bank has to satisfy all the requirements under the standardized approach. Also, the bank must:

  1. Be able to estimate unexpected losses, guided by use of both external and internal data.
  2. Have a system capable of allocating economic capital for operational risk across all business lines in a way that creates incentives for these business lines to manage operational risk better.

Combining the seven categories of risk (seen in the next section) with the eight business lines gives a total of 7 x 8 = 56 potential sources of operational risk for a bank.

Banks must estimate one-year 99.9% VaRs for each combination and then aggregate them to determine a single one-year 99.9% operational risk VaR measure.

The Basel Committee’s Seven Categories of Operational Risk

  1. Internal fraud: Internal fraud encompasses acts committed internally that diverge from a firm’s interests. These include forgery, bribes, tax non-compliance, mismanagement of assets, and theft.
  2. External fraud: External fraud encompasses acts committed by third parties. Commonly encountered practices include theft, cheque fraud, hacking, and unauthorized access to information.
  3. Clients, products and business practices: This category has much to do with intentional and unintentional practices that fail to meet a professional obligation to clients. That includes issues such as fiduciary breaches, improper trading, misuse of confidential client data, and money laundering.
  4. Employment practices and workplace safety: These are acts that violate laws put in place to safeguard the health, safety, and general well-being of both employees and customers. Issues covered include wrongful termination, discrimination, and forcing employees to use defective protective equipment.
  5. Damage to physical assets: These are losses due either to natural phenomena like earthquakes or to human-made events like terrorism and vandalism.
  6. Business disruption and system failures: This category includes supply-chain disruptions and system failures like power outages, software crashes, and hardware malfunctions.
  7. Execution, delivery, and process management: This describes the failure to execute transactions and manage processes correctly. Issues such as data entry errors and incomplete legal documents can cause significant losses.

Loss Frequency And Loss Severity

Losses resulting from operational risk can be viewed in two dimensions: loss frequency and loss severity.

The term “loss frequency” refers to the number of losses incurred over a specified time period, say, one year. The loss frequency distribution describes how the number of losses varies from period to period, and is characterized by its mean and variance.

The Poisson distribution is used to model the number of losses. Among other assumptions, this distribution assumes that losses occur singly, at random, and independently of one another, with mean and variance both equal to a parameter \(\lambda \) per year. In any short time period \(\Delta t\), there is a probability of approximately \(\lambda \Delta t\) that a loss will be observed. The probability of \(n\) losses in \(T\) years is given by:

$$ Pr\left( n \right) ={ e }^{ -\lambda T }\frac { { \left( \lambda T \right) }^{ n } }{ n! } $$

If 20 losses are registered over a 10-year period, \(\lambda =\frac { 20 }{ 10 } =2\) losses per year.
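The Poisson formula above can be evaluated directly; for instance, with \(\lambda = 2\), the probability of exactly 3 losses next year:

```python
import math

def poisson_prob(n, lam, T=1.0):
    """Probability of exactly n losses in T years, given lam losses per year."""
    mean = lam * T
    return math.exp(-mean) * mean ** n / math.factorial(n)

# lambda = 2 (20 losses over 10 years); probability of 3 losses in one year
print(round(poisson_prob(3, lam=2), 4))  # prints 0.1804
```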

The term “loss severity” refers to the distribution of the size of a loss, given that a loss has occurred. Loss severity is assumed to be independent of loss frequency, meaning that the size of a loss does not depend on how many losses occur.

Loss severity is analyzed using the lognormal distribution, which is asymmetrical and has fatter tails than the normal distribution, indicating a higher probability of extreme losses. The parameters of the lognormal distribution are the mean and standard deviation of the logarithm of the loss size.

To determine the distribution of total losses, loss frequency and loss severity are combined using Monte Carlo simulation. On each of many (thousands of) trials, a number of losses is sampled from the frequency distribution, a loss size is sampled from the severity distribution for each loss, and the loss sizes are summed to give the total loss for that trial.
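A minimal sketch of this simulation, using only the Python standard library and assumed parameters (\(\lambda = 3\) losses per year; log-loss mean 0 and standard deviation 1, in $ millions — both chosen purely for illustration):

```python
import math
import random

random.seed(42)  # reproducible illustration

def sample_poisson(lam):
    """Draw from a Poisson distribution (Knuth's multiplication method)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def simulate_total_losses(lam, mu, sigma, trials=10_000):
    """One total-loss sample per trial: sum of lognormal severities."""
    totals = []
    for _ in range(trials):
        n = sample_poisson(lam)  # number of losses this year
        totals.append(sum(random.lognormvariate(mu, sigma) for _ in range(n)))
    return totals

totals = simulate_total_losses(lam=3, mu=0.0, sigma=1.0)
totals.sort()
print(f"mean total loss: {sum(totals) / len(totals):.2f}")
print(f"99.9% quantile:  {totals[int(0.999 * len(totals))]:.2f}")
```

The sorted simulated totals approximate the loss distribution; a high quantile of this distribution is the basis of an operational risk VaR estimate.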

Common Data Issues That Can Introduce Inaccuracies And Biases In The Estimation of Loss Frequency And Severity Distributions

  1. Inadequate historical records: The data available for operational risk losses – including loss frequency and loss amounts – is grossly inadequate, especially when compared to credit risk data. This inadequacy creates problems when trying to model the loss distribution of expected losses.
  2. Inflation: When modeling the loss distribution using both external and internal data, an adjustment must be made for inflation. The purchasing power of money keeps on changing so that a $10,000 loss recorded today would not have the same effect as a similar loss recorded, say, ten years ago.
  3. Firm-specific adjustments: No two firms are the same in terms of size, financial structure, and operational risk management. As such, when using external data, it’s important to make adjustments to the data in cognizance of the different characteristics of the source and your bank. A simple proportional adjustment can either underestimate or overestimate the potential loss.

    The generally accepted scale adjustment for firm size is as follows:

    $$ \text{Estimated loss}_{\text{bank A}} = \text{External loss}_{\text{bank B}} \times \left( \frac{\text{Revenue}_{\text{bank A}}}{\text{Revenue}_{\text{bank B}}} \right)^{0.23} $$
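The scale adjustment is a one-line calculation; the revenue and loss figures below are hypothetical:

```python
def scale_loss(external_loss, revenue_a, revenue_b):
    """Scale bank B's observed loss to bank A's size (exponent 0.23)."""
    return external_loss * (revenue_a / revenue_b) ** 0.23

# Bank B (revenue $40bn) suffered a $10m loss; estimate for bank A (revenue $5bn):
print(round(scale_loss(10.0, 5, 40), 2))  # prints 6.2
```

Note that the exponent of 0.23 makes the adjustment much gentler than simple proportional scaling, which here would have given $1.25 million.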

Scenario Analysis in Instances When Data is Scarce

Scenario analysis aims at estimating how a firm would fare in a range of scenarios, some of which have not occurred in the past. It’s particularly important when modeling low-frequency, high-severity losses. For instance, it would be unwise to exclude the possibility of an extreme loss of, say, $50 million just because such a loss occurs only once every 100 years. Scenarios that are considered come from:

  • The firm’s own experience
  • The experience of other firms
  • Market analysts and consultants
  • The risk management unit in liaison with senior management

The Basel Committee recommends the formation of an operational risk committee to execute a scenario analysis. Such a committee should be made up of members of the risk management division and senior management. Among the committee’s preliminary tasks is the need to estimate the parameters for loss frequency and loss severity, taking into account firm-specific conditions and controls.

Causal Relationships, Risk and Control Self-assessment, and Key Risk Indicators

Causal Relationship:

Causal relationships describe the search for a correlation between firm actions and operational risk losses. It’s an attempt to identify firm-specific practices that can be linked to both past and future operational risk losses. For example, if the use of new computer software coincides with losses, it’s only wise to investigate the matter in a bid to establish whether the two events are linked in any way.

Once a causal relationship has been identified, the firm should then decide whether or not to act on it. This should be done by conducting a cost-benefit analysis of such a move.

Risk and control self-assessment:

Risk and control self-assessment involves asking departmental heads and managers to single out the operational risks in their jurisdiction. The underlying argument is that unit managers are the focal point of the flow of information and correspondence within a unit. As such, they are the persons best placed to understand the risks pertinent to their operations.

The problem with this approach is that managers may not divulge information freely if they feel they are culpable or the risk is out of control. Also, a manager’s perception of a risk and its potential rewards may not conform to the firm-wide assessment. For these reasons, there’s a need for independent review.

Key risk indicators:

Key risk indicators seek to identify firm-specific conditions that could expose the firm to operational risk. KRIs are meant to provide firms with a system capable of predicting losses, giving the firm ample time to make the necessary adjustments. Examples of KRIs include:

  • Staff turnover
  • Number of vacant positions
  • Number of failed transactions over a specified time period
  • Percentage of employees that take up the maximum leave days on offer.

The hope is that key risk indicators can identify potential problems and allow remedial action to be taken before losses are incurred.

Allocation of Operational Risk Capital and the Use of Score Cards

The amount of capital allocated to a particular business unit should be commensurate with its operational risk. In subsequent periods, the capital allocated can be adjusted upwards (downwards) to reflect increased (reduced) risk. How then does capital allocation act as an incentive to unit managers?

Return on capital employed (ROCE) is widely used to assess a unit’s performance relative to others. Reduced capital needs arising from better risk management improve the ROCE. This, in turn, improves the manager’s profile. However, firms must proceed with caution on matters to do with capital allocation because a reduced allocation may not necessarily be the best thing for the firm. Sometimes the risks of such a move may outweigh the benefits.

The scorecard capital allocation method:

Under this approach, each unit manager completes a survey containing questions about risk. Each manager’s responses are converted into a quantitative measure and combined into an overall score. This total score represents the unit’s exposure to risk.

Possible questions may include:

  • What is the ratio of supervisors to staff?
  • How many positions, on average, were open in the business unit over a specified period?

The scorecard approach encourages unit managers to develop an in-depth understanding of the risks affecting their jurisdictions. It also encourages senior management to be more active in risk management across the organization.

The Power Law

The power law states that the probability of a random variable \(x\) exceeding a value \(v\) is given by:

$$ P\left( x>v \right) =K{ v }^{ -\alpha } $$

where:

\(K\) is a constant, and

\(\alpha\) is the power law parameter.

The power law presents a simple but powerful tool for the measurement of operational risk.


A risk manager has established that there’s a 95% probability that losses over the next year will not exceed $50 million. Given that the power law parameter is 0.7, calculate the probability of the loss exceeding:

  1. $20 million
  2. $70 million
  3. $100 million


According to the power law,

$$ P\left( x>v \right) =K{ v }^{ -\alpha } $$

$$ 0.05=K{ \left( 50 \right) }^{ -\left( 0.7 \right) } $$

$$ K=0.7731 $$


$$ P\left( x>v \right) =0.7731{ v }^{ -0.7 } $$

when \(v=20\),

$$ probability=0.7731\times { 20 }^{ -0.7 }=0.09495 $$

when \(v=70\),

$$ probability=0.7731\times { 70 }^{ -0.7 }=0.03951 $$

when \(v=100\),

$$ probability=0.7731\times { 100 }^{ -0.7 }=0.03078 $$
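The worked example can be checked in a few lines of code; rounding \(K\) to four decimal places, as in the text, reproduces the quoted probabilities:

```python
# Calibrate K from P(loss > 50) = 0.05 with alpha = 0.7, then evaluate
# the tail probability at other loss levels (in $ millions).

ALPHA = 0.7
K = round(0.05 * 50 ** ALPHA, 4)   # from 0.05 = K * 50**(-ALPHA) -> K = 0.7731

def tail_prob(v):
    """P(loss > v) under the power law."""
    return K * v ** (-ALPHA)

for v in (20, 70, 100):
    print(f"P(loss > {v}) = {tail_prob(v):.5f}")
# prints 0.09495, 0.03951, 0.03078
```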

The Risks of Moral Hazard and Adverse Selection When Using Insurance to Mitigate Operational Risks

Earlier in the reading, we saw that a bank using the AMA approach can reduce its capital charge, subject to extensive investment in operational risk management. One of the ways through which a bank can achieve this is by taking out insurance cover. That way, the firm is eligible for compensation if it suffers a loss emanating from a covered risk.

For all its advantages, taking an insurance policy comes with two problems:

  1. Moral Hazard: Moral hazard describes the observation that an insured firm is likely to act differently once it has insurance cover. In particular, traders might increasingly take high-risk positions in the knowledge that they are well protected from heavy losses. Without such an insurance policy, the traders would be more cautious and restrained in their trading behavior.

    In a bid to tame the moral hazard problem, insurers use a range of tactics. These may include deductibles, coinsurance, and policy limits. Stiff penalties may also be imposed in case there’s indisputable evidence of reckless, unrestricted behavior.

    A firm can intentionally keep its insurance cover private so that its traders do not take unduly high-risk positions.

  2. Adverse Selection: Adverse selection describes a situation where the risk seller has more information than the buyer about a product, putting the buyer at a disadvantage. For example, a company providing life assurance may unknowingly attract heavy smokers, or even individuals suffering from terminal illnesses. If this happens, the company effectively takes on many high-risk persons but very few low-risk individuals. This may result in a claim experience that’s worse than initially anticipated.

    In the context of trading, firms with poor internal controls are more likely to take up insurance policies than firms with robust risk management frameworks. To combat adverse selection, an insurer has to go to great lengths to understand a firm’s internal risk controls. The premium payable can then be adjusted to reflect the risk of the policy.


Question 1

According to the Basel Committee, a bank has to satisfy certain qualitative standards to be allowed to use the advanced measurement approach when computing the economic capital required. Which of the following options is NOT one of the standards?

  1. The bank must have a system capable of allocating economic capital for operational risk across all business lines in a way that creates incentives for these business lines to manage operational risk better.
  2. Internal and external auditors must regularly and independently review all operational risk management processes. The review must include the policy development process and independent scrutiny of the risk management function.
  3. The bank’s operational risk measurement system should only make use of internally generated data to avoid the bias associated with external data.
  4. The bank must have an operational risk management function tasked with identification, assessment, monitoring, and control of operational risk.

The correct answer is C.

The Basel Committee does not rule out the use of external data by banks. In fact, the committee recommends the use of a combination of both external and internal data to estimate unexpected losses. External data may not conform to a particular firm, but firms are allowed to scale the data to fit their profiles. In some cases, internal data may be either insufficient or entirely unavailable, forcing the firm to look elsewhere.


Question 2

Melissa Roberts, FRM, has observed 12 losses in her portfolio over the last four years. She believes the frequency of losses follows a Poisson distribution with a parameter \(\lambda \). The probability that she will observe a total of 4 losses over the next year is closest to:

  1. 17%
  2. 16%
  3. 20%
  4. 0.53%

The correct answer is A.

$$ \lambda \left( 1 \right) =\frac { 12 \quad losses }{ 4 \quad years } =3 \quad losses \quad  per \quad year$$

The probability of \(n\) losses in \(T\) years can be given by:

$$ Pr\left( n \right) ={ e }^{ -\lambda T }\frac { { \left( \lambda T \right) }^{ n } }{ n! } $$

$$ Pr\left( n=4 \right) ={ e }^{ -3 }\frac { { 3 }^{ 4 } }{ 4! } $$

$$ =0.168 $$
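The calculation can be checked directly:

```python
import math

lam = 12 / 4          # 3 losses per year
n, T = 4, 1           # probability of 4 losses over the next year
prob = math.exp(-lam * T) * (lam * T) ** n / math.factorial(n)
print(round(prob, 3))  # prints 0.168
```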

