Rating Assignment Methodologies

Introduction

Credit pricing and capital provisions against unexpected credit losses depend on ratings. As an ordinal measure of the likelihood of default over a specified period, and thanks to its measurability, objectivity, and homogeneity, a rating allows counterparties and segments of the credit portfolio to be compared properly.

Rating systems should be measurable and verifiable, objective and homogeneous, and specific. This chapter gives an in-depth analysis of the following:

  1. Experts-based approaches;
  2. Statistical-based approaches; and
  3. Heuristic and numerical approaches.

Experts-Based Approaches

Structured Experts-Based Systems

Drawing on skills accumulated over a long and devoted career, a credit analyst can expertly weigh intuitions and perceptions. Wilcox applied the gambler's ruin theory to accounting data to explain business failures: at the end of each betting cycle there is a net cash inflow or outflow, and the game ends only when the player runs out of cash. Wilcox's model is built on the following quantities:

  1. The likelihood of default, \({ P }_{ default }\);
  2. The likelihood of gains, \(f\), and of losses, \(1-f\);
  3. The initial capital endowment of the organization, \(CN\); and
  4. The profit \(U\) generated in each cycle of the business game.

The probability of default is then given by:

$$ { P }_{ default }={ \left( \frac { 1-f }{ f } \right) }^{ { CN }/{ U } } $$

The inverse of \({ CN }/{ U }\) gives the return on equity ratio. The model was important because:

  1. It was the first intrinsically probabilistic model describing corporate default;
  2. The default event is embedded in the approach and stems from the organization’s profile, rather than being exogenously given; and
  3. Business risk is related to the financial explanatory variables via the likelihood \(f\).
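As a minimal numeric sketch of Wilcox's relation, with purely hypothetical values for \(f\), \(CN\), and \(U\):

```python
# Minimal sketch of Wilcox's gambler's ruin default probability.
# Hypothetical inputs: f = likelihood of a gain per cycle,
# CN = initial capital endowment, U = profit per cycle.
f = 0.55          # likelihood of gains per betting cycle
CN = 10_000_000   # initial capital endowment
U = 2_000_000     # profit per cycle, so CN / U = 5 (i.e., ROE = U / CN = 20%)

pd_wilcox = ((1 - f) / f) ** (CN / U)
print(f"P_default = {pd_wilcox:.4f}")   # (0.45 / 0.55) ** 5 ≈ 0.3666
```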

The point of no return theory sets a threshold beyond which one proceeds on the current course of action, since turning back is impossible or very costly. Mathematically:

$$ \frac { \partial EBIT }{ \partial T } \ge \left( \frac { \partial \left( OF+\Delta D \right) }{ \partial T } \right) $$

This implies that the organization’s survival is possible if the operational flow of funds covers interest charges and principal repayments; otherwise, the company accumulates new debt to meet existing obligations and faces failure.

The purpose of credit quality analysis is to differentiate obligors' default risk by concentrating on a set of classification types.

Agencies’ Ratings

Rating agencies aim to run systematic surveys of all default risk determinants. Most of their revenues come from fees charged to the rated counterparties, with a small portion coming from the sale of information to market participants and investors.

With asymmetric information in the market, the quality of goods can only be assured by reputable external appraisers. As a result, obtaining counterparties’ privileged data on budgeting, strategy and management visions is possible and essential to the business model of a reliable rating agency.

Rating agencies' assignment procedures are distinguished by the nature of the counterparties and products involved. A rating agency can apply financial ratios such as profitability ratios, coverage ratios, and quick and current liquidity ratios. Large operational cash flow margins imply safer financial structures and, consequently, better credit ratings for borrowers.

In addition to the traditional analytical areas (reputation of the management, reliability, experience, and past performance), new areas of analysis were introduced to account for new risk sources: the quality of internal governance, compliance, sustainability of the technology and production processes, environmental risks, hidden and potential legal liabilities, and exposure to institutional and political risks.

The rating process is structured into: preliminary analysis; meetings with the counterparty; preparation and submission of the rating dossier to the rating committee by the analysis team (performing a new detailed analysis where necessary); the rating committee's final approval; official communication to the counterparty; and, when needed, a new approval process and re-submission of the rating to the committee.

From Borrower Ratings to Default Probabilities

Default likelihoods can be inferred by studying each rating class's average actual default frequencies, averaging observations over time. The availability of agencies' data makes it possible to compute marginal frequencies.

Fixed income market participants use measures based on: \(Names\), denoting the number of issuers; \(Def\), the number of defaulting names in a given timeframe; and \(PD\), the likelihood of default.

The default frequency over horizon \(k\), i.e., over \(\left[ t,\left( t+k \right) \right] \), is defined as:

$$ { PD }_{ time\quad horizon\quad k }=\frac { { Def }_{ t }^{ t+k } }{ { Names }_{ t } } $$

At horizon \(k\), the cumulated default frequency is:

$$ { PD }_{ time\quad horizon\quad k }^{ Cumulated }=\frac { \sum _{ i=t }^{ t+k }{ { Def }_{ i } } }{ { Names }_{ t } } $$

On horizon \(\left[ t,\left( t+k \right) \right] \), the marginal default rate is:

$$ { PD }_{ k }^{ marg }={ PD }_{ t+k }^{ cumulated }-{ PD }_{ t }^{ cumulated } $$

Over a future time horizon \(k\), the forward probability of default, i.e., the probability contingent on survival up to time \(t\), is:

$$ { PD }_{ t;t+k }^{ Forw }=\frac { \left( { Def }_{ t+k }-{ Def }_{ t } \right) }{ { Names\quad Survived }_{ t } } =\frac { \left( { PD }_{ t+k }^{ cumulated }-{ PD }_{ t }^{ cumulated } \right) }{ 1-{ PD }_{ t }^{ cumulated } } $$

Defining the forward survival rate as:

$$ { SR }_{ t;t+k }^{ Forw }=\left( 1-{ PD }_{ t;t+k }^{ Forw } \right) $$

Then:

$$ { PD }_{ t }^{ cumulated }=1-\left[ \left( 1-{ PD }_{ 1 }^{ Forw } \right) \times \left( 1-{ PD }_{ 2 }^{ Forw } \right) \times \dots \times \left( 1-{ PD }_{ t }^{ Forw } \right) \right] $$

Therefore:

$$ { PD }_{ t }^{ cumulated }=1-{ \Pi }_{ i=1 }^{ t }{ SR }_{ i }^{ Forw }\quad and\quad \left( 1-{ PD }_{ t }^{ cumulated } \right) ={ \Pi }_{ i=1 }^{ t }{ SR }_{ i }^{ Forw } $$

The Annualized Default Rate (\(ADR\)) can be computed from the equation:

$$ \left( 1-{ PD }_{ t }^{ cumulated } \right) ={ \overset { t }{ \underset { i=1 }{ \Pi } } }{ SR }_{ i }^{ Forw }={ \left( 1-{ ADR }_{ t } \right) }^{ t } $$

$$ \Rightarrow { ADR }_{ t }=1-\sqrt [ t ]{ { \Pi }_{ i=1 }^{ t }{ SR }_{ i }^{ Forw } } =1-\sqrt [ t ]{ \left( 1-{ PD }_{ t }^{ cumulated } \right) } $$

The continuous \(ADR\) is:

$$ 1-{ PD }_{ t }^{ cumulated }={ e }^{ -AD{ R }_{ t }\times t } $$

$$ \Rightarrow { ADR }_{ t }=-\frac { ln\left( 1-{ PD }_{ t }^{ cumulated } \right) }{ t } $$
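A short sketch of these relations, assuming a hypothetical cohort of names observed at time \(t\) and yearly default counts:

```python
import numpy as np

# Hypothetical cohort: 1,000 names at time t, defaults observed per year.
names_t = 1000
defaults_per_year = np.array([8, 10, 12, 9, 11])             # Def in years 1..5

cum_defaults = np.cumsum(defaults_per_year)
pd_cumulated = cum_defaults / names_t                        # cumulated PD at horizons 1..5
pd_marginal = np.diff(np.concatenate(([0.0], pd_cumulated))) # marginal PD per year

# Forward PD for year k, contingent on survival up to the start of the year.
pd_cum_prev = np.concatenate(([0.0], pd_cumulated[:-1]))
pd_forward = (pd_cumulated - pd_cum_prev) / (1 - pd_cum_prev)
sr_forward = 1 - pd_forward

# Annualized default rate (discrete and continuous) at the 5-year horizon.
t = len(defaults_per_year)
adr_discrete = 1 - (1 - pd_cumulated[-1]) ** (1 / t)
adr_continuous = -np.log(1 - pd_cumulated[-1]) / t

print(pd_cumulated[-1], round(adr_discrete, 6), round(adr_continuous, 6))
```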

Default frequencies are affected by methodological differences across rating agencies for the following reasons:

  1. Different agencies use different default definitions, so dissimilar events are recorded;
  2. The observed frequencies are generated by different populations;
  3. The rated amounts differ; and
  4. Different agencies may release different initial ratings for the same counterparts.

Experts-Based Internal Ratings Used by Banks

Banks' internal classification methods differ in background from the assignment processes of rating agencies. The approaches adopted by banks are usually more formalized for many borrower segments. Neither expert-based nor formal approaches have proven superior to the other. Judgment-based schemes make consistency difficult to achieve due to:

  • Patterns of companies being intrinsically dynamic;
  • Blending different credit portfolios, credit approval procedures, etc. by mergers and acquisitions; and
  • Changes in organization’s traditions, experts’ skills, and analytical frameworks over time.

These uncertainties undermine the delicate and sophisticated management structures on which internal rating systems in modern banking and financial companies are based.

Statistical-Based Models

Financial models are generally based on simplifying assumptions about the phenomenon being predicted. Quantitative financial models therefore embody a mixture of statistics, behavioral psychology, and numerical procedures.

These models are based on low-frequency, non-publicly available data and on variables that mix quantitative and qualitative information. The first step in describing alternative models is to understand the difference between structural and reduced form approaches.

Structural Approaches

Structural approaches are based on economic and financial theoretical assumptions describing the path to default. They are the exact opposite of reduced form approaches, in which the final solution is reached using the most statistically suitable set of variables rather than a theory of default.

The probability of default based on the Black-Scholes-Merton formula is given by:

$$ PD=N\left( \frac { ln\left( F \right) -ln\left( { V }_{ A } \right) -\mu T+{ 1 }/{ 2 }{ \sigma }_{ A }^{ 2 }\times T }{ { \sigma }_{ A }\sqrt { T } } \right) $$

Where \(F\) is the face value of the debt, \({ V }_{ A }\) is the firm’s asset value, \(\mu \) is the expected return in the risky (real) world, \(T\) is the remaining time to maturity, \({ \sigma }_{ A }\) is the instantaneous asset volatility, and \(N\) is the cumulative normal distribution operator.

Equity and asset values are related, based on their volatilities, through the hedge ratio \(N\left( { d }_{ 1 } \right)\) of the Black-Scholes-Merton formula:

$$ { \sigma }_{ equity }{ E }_{ 0 }=N\left( { d }_{ 1 } \right) { \sigma }_{ Asset\quad Value }{ V }_{ 0 } $$

Assuming \(T = 1\), the Distance to Default (\(DtD\)) is given by:

$$ DtD=\frac { ln{ V }_{ A }-lnF+\left( { \mu }_{ risky }-{ { \sigma }_{ A }^{ 2 } }/{ 2 } \right) -other\quad payouts }{ { \sigma }_{ A } } \cong \frac { lnV-lnF }{ { \sigma }_{ A } } $$

Calibrating \(DtD\) to real historical defaults enables real-life solutions to be determined. \(DtD\) is an effective indicator when used as an explicit variable in econometric or statistical models that predict the likelihood of default.
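A brief sketch of the PD and DtD calculations under the Black-Scholes-Merton setup, using purely illustrative inputs and SciPy's normal CDF for \(N(\cdot)\):

```python
import numpy as np
from scipy.stats import norm

# Illustrative inputs (assumed, not taken from the text).
V_A = 120.0      # firm's asset value
F = 100.0        # face value of debt
mu = 0.08        # expected return in the risky (real) world
sigma_A = 0.25   # instantaneous asset volatility
T = 1.0          # remaining time to maturity

# Probability of default: probability that assets end below the debt face value.
pd = norm.cdf((np.log(F) - np.log(V_A) - mu * T + 0.5 * sigma_A**2 * T)
              / (sigma_A * np.sqrt(T)))

# Distance to default (T = 1, ignoring other payouts).
dtd = (np.log(V_A) - np.log(F) + mu - 0.5 * sigma_A**2) / sigma_A

print(f"PD = {pd:.4f}, DtD = {dtd:.4f}")
```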

Reduced Form Approaches

Contrary to structural approaches, no ex-ante assumptions about the causal drivers of default are made in this approach. The model's relationships are estimated so as to maximize its predictive power, and the occurrence of default is exogenously given.

A clear model risk is present in this approach. This implies that a good degree of homogeneity is required between the development sample and the population to which the model will be applied for the generalization of results to be possible.

For the benefits to be exploited, a clear strategic vision and structural design must be in place when starting a model-building project. Reduced form approaches naturally impose the integration of statistics and quantitative methods with professional experience and qualitative data from credit analysts.

Reduced form credit risk models can be classified as either statistical or numerical based. These classifications differ from those based on the aggregation of various counterparties into homogeneous segments. Scoring models are an important family of statistical tools, developed from qualitative and quantitative empirical data.

Statistical Methods: Linear Discriminant Analysis (LDA)

\(LDA\)-based models are reduced form because their solutions depend on the exogenous selection of variables, group composition, and default definition. The analysis produces a linear function of the variables known as the scoring function. Each variable's contribution to the scoring function represents the weight of that ratio in the overall score (the Z-score, or \(Z\)).

Once a good discriminant function has been estimated from historical information on defaulted and performing borrowers, new borrowers can be assigned to the pre-defined groups according to the score produced by the function.

Applying \(LDA\) generates (\(k – 1\)) discriminant functions, where \(k\) is the number of groups. Suppose a population of organizations is observed over a given timeframe and two groups emerge: insolvent (defaulting) firms and performing firms. At an earlier time (\(t – k\)), it is possible to predict which firms will default and which will perform based on each organization's profile.

At time \(t – k\), a Z-score is assigned to each organization based on the data available about the firms. In the Altman model, the dataset comprised 33 defaulted firms and 33 non-defaulted firms, each described by 22 financial ratios, from which five discriminant variables and their optimal discriminant coefficients were selected:

$$ Z=1.21{ x }_{ 1 }+1.40{ x }_{ 2 }+3.30{ x }_{ 3 }+0.6{ x }_{ 4 }+0.999{ x }_{ 5 } $$

Where

$$ { x }_{ 1 }=\frac { working\quad capital }{ total\quad assets } ,{ x }_{ 2 }=\frac { accrued\quad capital\quad reserves }{ total\quad assets } ,{ x }_{ 3 }=\frac { EBIT }{ total\quad assets } , $$

$$ { x }_{ 4 }=\frac { equity\quad market\quad value }{ face\quad value\quad of\quad term\quad debt } ,{ x }_{ 5 }=\frac { Sales }{ total\quad assets } $$
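A minimal sketch that scores a hypothetical firm with the coefficients above (the ratio values are assumed for illustration):

```python
# Altman Z-score for a hypothetical firm (ratio values are illustrative).
coefficients = [1.21, 1.40, 3.30, 0.6, 0.999]
ratios = [0.15,   # x1: working capital / total assets
          0.20,   # x2: accrued capital reserves / total assets
          0.10,   # x3: EBIT / total assets
          0.90,   # x4: equity market value / face value of term debt
          1.30]   # x5: sales / total assets

z = sum(b * x for b, x in zip(coefficients, ratios))
print(f"Z = {z:.3f}")   # a higher Z indicates a safer borrower in this formulation
```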

Coefficient Estimation in LDA

Let a dataset contain \(n\) borrowers described by \(q\) variables and categorized into two groups: performing and defaulted borrowers. We want to determine a discriminant function that assigns a new borrower \(k\), described by its profile \({ x }_{ k }\) of \(q\) variables, to the solvent or insolvent group by maximizing a measure of group homogeneity.

In each category, the means of the variables, collected in the vectors \(\overline { { x }_{ solvent } } \) and \(\overline { { x }_{ insolvent } } \) and called group centroids, can be computed. Observation \(k\) is then assigned to one group or the other based on a minimization criterion:

$$ min\left\{ \sum _{ i=1 }^{ q }{ { \left( { x }_{ i;k }-\overline { { x }_{ { i;solv }/{ insolv } } } \right) }^{ 2 } } \right\} $$

Or in the notation of matrix algebra:

$$ min\left\{ { \left( { x }_{ k }-\overline { { x }_{ { solv }/{ insolv } } } \right) }^{ \prime }\left( { x }_{ k }-\overline { { x }_{ { solv }/{ insolv } } } \right) \right\} $$

This is the Euclidean distance of the new observation \(k\) to the two centroids in a hyperspace with \(q\) dimensions. With the Mahalanobis generalized distance \(D\), the attribution criterion of the \(k\) borrower becomes:

$$ min\left( { D }_{ k }^{ 2 } \right) =min\left\{ { \left( { x }_{ k }-\overline { { x }_{ { solv }/{ insolv } } } \right) }^{ \prime }\times { C }^{ -1 }\times \left( { x }_{ k }-\overline { { x }_{ { solv }/{ insolv } } } \right) \right\} $$

Where \(C\) is the variance/covariance matrix of the \(q\) variables considered in the development of the model. The discriminant function is then:

$$ { Z }_{ k }=\sum _{ j=1 }^{ q }{ { \beta }_{ j }{ x }_{ k,j } } $$

Where \(\beta ={ \left( { \bar { x } }_{ insolv }-{ \bar { x } }_{ solv } \right) }^{ \prime }{ C }^{ -1 }\) and \({ \bar { x } }_{ insolv }-{ \bar { x } }_{ solv }\) is the difference between the centroids of the two groups.

The \(Z\) values corresponding to the two centroids, \(\overline { { Z }_{ solv } } \) and \(\overline { { Z }_{ insolv } } \), can then be computed, and the optimal discriminant threshold is given by:

$$ { Z }_{ cut-off }=\frac { \overline { { Z }_{ solv } } +\overline { { Z }_{ insolv } } }{ 2 } $$
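A compact NumPy sketch of these steps on a small synthetic sample (the data are assumed; the pooled within-group covariance is used for \(C\)):

```python
import numpy as np

# Synthetic example: q = 2 ratios for a few solvent and insolvent firms.
x_solv = np.array([[0.30, 0.12], [0.25, 0.10], [0.35, 0.15], [0.28, 0.11]])
x_insolv = np.array([[0.10, 0.03], [0.05, 0.01], [0.12, 0.04], [0.08, 0.02]])

mean_solv = x_solv.mean(axis=0)      # group centroid, solvent
mean_insolv = x_insolv.mean(axis=0)  # group centroid, insolvent

# Pooled within-group variance/covariance matrix C.
n1, n2 = len(x_solv), len(x_insolv)
C = ((n1 - 1) * np.cov(x_solv, rowvar=False)
     + (n2 - 1) * np.cov(x_insolv, rowvar=False)) / (n1 + n2 - 2)

# Discriminant coefficients: beta = C^{-1} (x_bar_insolv - x_bar_solv).
beta = np.linalg.solve(C, mean_insolv - mean_solv)

# Z-scores of the two centroids and the mid-point cut-off.
z_solv, z_insolv = beta @ mean_solv, beta @ mean_insolv
z_cutoff = (z_solv + z_insolv) / 2

# A new borrower is assigned to the group whose centroid score is closer.
x_new = np.array([0.18, 0.06])
z_new = beta @ x_new
group = "insolvent" if abs(z_new - z_insolv) < abs(z_new - z_solv) else "solvent"
print(round(z_new, 4), round(z_cutoff, 4), group)
```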

To avoid inaccuracies and instability, the following \(LDA\) statistical requirements should be met:

  • Independent variables should be normally distributed;
  • The values on the diagonal of matrix \(C\) should be similar, i.e., no heteroscedasticity;
  • Multicollinearity among independent variables should be low; and
  • The independent variables should be homogeneous around the group centroids.

Model Calibration and the Cost of Errors

The Basel Committee defines calibration as the quantification of the likelihood of default. The calibration process determines the probability of default for a population, starting from the outputs of statistical-based rating systems and accounting for the difference between the default rates of development samples and the default rates of actual populations.

Two cases must be distinguished to understand this concept: first, the model's only task is to accept or reject credit applications, with no need for multiple rating classes or an approximation of the likelihood of default per rating class; second, the model uses different rating classes to categorize borrowers and their default likelihoods.

Model Calibration: Z-score Cut-Off Adjustment

Defaulted firms are usually overrepresented in the estimation sample; hence, there is a risk of over-predicting defaults when model results are applied to real populations.

When a model based on discriminant analysis is calibrated and applied solely for classification, Bayes' theorem expresses a hypothesis's posterior probability in terms of the hypothesis's prior probability and the likelihood of the evidence given the hypothesis.

Suppose a Z-score summarizes the \({ i }^{ th }\) borrower, described by its profile of variables given in the vector \(X\). Let prior probabilities be denoted by \(q\) and posterior probabilities by \(p\). Then:

  • \({ q }_{ insolv }+{ q }_{ solv }=1\)
  • \( { p }_{ insolv }\left( X|insolv \right) \) and \( { p }_{ solv }\left( X|solv \right) \) are the conditional probabilities of observing profile \(X\) in the defaulting and performing groups, respectively; they are generated by the model from a given sample and are used to attribute the \({ i }^{ th }\) new observation.

Hence, \(p\left( X \right) \), the simple (marginal) probability, is:

$$ p\left( X \right) ={ q }_{ insolv }\times { p }_{ insolv }\left( X|insolv \right) +{ q }_{ solv }\times { p }_{ solv }\left( X|solv \right) $$

This is the likelihood of observing the profile \(X\) of variable values in the sample considered, accounting for both defaulting and performing borrowers. Applying Bayes' formula:

$$ p\left( insolv|X \right) =\frac { { q }_{ insolv }\times { p }_{ insolv }\left( X|insolv \right) }{ p\left( X \right) } $$

$$ p\left( solv|X \right) =\frac { { q }_{ solv }\times { p }_{ solv }\left( X|solv \right) }{ p\left( X \right) } $$

The condition to assign the new unit \(i\) to the insolvent group is:

$$ p\left( insolv|X \right) >p\left( solv|X \right) $$

$$ \Rightarrow { { q }_{ insolv }\times { p }_{ insolv }\left( X|insolv \right) }>{ q }_{ solv }\times { p }_{ solv }\left( X|solv \right) $$

$$ \Rightarrow \frac { { p }_{ insolv }\left( X|insolv \right) }{ { p }_{ solv }\left( X|solv \right) } >\frac { { q }_{ solv } }{ { q }_{ insolv } } $$

Cost of Misclassification

No model splits the two categories of solvent and insolvent organizations perfectly. Some obligors will be categorized as potentially defaulting and rejected even though they would have been solvent, creating a lost-opportunity cost for the firm, while others will be accepted as performing and later default, generating credit losses.

These two types of errors do not have equal costs once the potential losses arising from them are considered. Let \(COS{ T }_{ insolv/solv }\) be the cost of a false performing firm, which generates defaults and credit losses once accepted, and \(COS{ T }_{ solv/insolv }\) be the cost of a false defaulting firm.

The optimal cut-off condition therefore changes as follows:

$$ \frac { { p }_{ insolv }\left( X|insolv \right) }{ { p }_{ solv }\left( X|solv \right) } >\frac { { q }_{ solv }\times COS{ T }_{ solv/insolv } }{ { q }_{ insolv }\times COS{ T }_{ insolv/solv } } $$

The following relation gives an amount that should then be added to the original cut-off:

$$ \left[ ln\frac { { q }_{ solv }\times COS{ T }_{ solv/insolv } }{ { q }_{ insolv }\times COS{ T }_{ insolv/solv } } \right] $$
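A small numeric sketch of how priors and misclassification costs shift the decision threshold (all values are hypothetical):

```python
import math

# Hypothetical priors and misclassification costs.
q_insolv, q_solv = 0.02, 0.98   # prior probabilities of default / non-default
cost_insolv_as_solv = 0.45      # loss when a defaulting firm is accepted
cost_solv_as_insolv = 0.02      # lost margin when a solvent firm is rejected

# Likelihood-ratio threshold for assigning a borrower to the insolvent group.
threshold = (q_solv * cost_solv_as_insolv) / (q_insolv * cost_insolv_as_solv)

# Equivalent amount added to the original Z cut-off (log of the threshold).
cutoff_adjustment = math.log(threshold)

print(f"likelihood-ratio threshold = {threshold:.3f}")
print(f"cut-off adjustment = {cutoff_adjustment:.3f}")
```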

From Discriminant Scores to Default Probabilities

The main purpose of \(LDA\) is to provide a taxonomic categorization of credit quality, splitting borrowers and transactions into potentially performing and potentially defaulting firms using a set of pre-defined variables.

Unlike probabilities, which fall between zero and one, Z-scores have no lower or upper limits. According to Bayes' theorem:

$$ p\left( insolv|X \right) =\frac { { { q }_{ insolv }\times { p }_{ insolv }\left( X|insolv \right) } }{ { q }_{ insolv }\times { p }_{ insolv }\left( X|insolv \right) +{ { q }_{ solv }\times { p }_{ solv }\left( X|solv \right) } } $$

We can, therefore, achieve a logistic function like:

$$ p\left( insolv|X \right) =\frac { 1 }{ 1+{ e }^{ \alpha +\beta X } } $$

$$ \alpha =ln\left( \frac { { q }_{ solv } }{ { q }_{ insolv } } \right) -\frac { 1 }{ 2 } { \left( \overline { { x }_{ solv } } -\overline { { x }_{ insolv } } \right) }^{ \prime }{ C }^{ -1 }\left( \overline { { x }_{ solv } } -\overline { { x }_{ insolv } } \right) $$

$$ =ln\left( \frac { { q }_{ solv } }{ { q }_{ insolv } } \right) -{ Z }_{ cut-off } $$

$$ \beta ={ C }^{ -1 }{ \left( \overline { { x }_{ solv } } -\overline { { x }_{ insolv } } \right) } $$

The logistic transformation can, therefore, be given as:

$$ p\left( insolv|X \right) =\frac { 1 }{ 1+{ e }^{ ln\left( \frac { { q }_{ solv } }{ { q }_{ insolv } } \right) -{ Z }_{ cut-off }+Z } } $$
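A brief sketch of this transformation for a given Z-score, with hypothetical priors and cut-off:

```python
import math

def z_to_pd(z, z_cutoff, q_solv, q_insolv):
    """Map a discriminant Z-score to a default probability via the logistic
    transformation above (priors and cut-off are assumed inputs)."""
    alpha = math.log(q_solv / q_insolv) - z_cutoff
    return 1.0 / (1.0 + math.exp(alpha + z))

# Illustrative values: 2% prior default rate, cut-off of 1.8.
for z in (0.5, 1.8, 3.0):
    print(z, round(z_to_pd(z, z_cutoff=1.8, q_solv=0.98, q_insolv=0.02), 4))
```

At \(Z = Z_{cut-off}\), the transformed probability equals the prior \(q_{insolv}\); higher Z-scores map to lower default probabilities under this parameterization.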

Statistical Methods: Logistic Regression (LOGIT)

\(LOGIT\) models are based on the analysis of dependence among variables and belong to a class of statistical models known as \(GLM\)s (Generalized Linear Models), which extend the classical linear models applied in dependence analysis.

\(GLM\)s are characterized by three elements: a random component, a systematic component, and a link function. These generalize the corresponding characteristics of linear regression models for modeling default risk.

Suppose a random binary variable takes the value of one if a given event happens, with probability \(\pi \), and the value of zero otherwise.

The traits of \(Y\), a Bernoulli-distributed variable (\(Y\sim Bern\left( \pi \right) \), where \(\pi \) is the distribution’s unknown parameter), are as listed:

  • \(P\left( Y=1 \right) =\pi ,\quad P\left( Y=0 \right) =1-\pi \)
  • \(E\left( Y \right) =\pi ,\quad Variance\left( Y \right) =\pi \left( 1-\pi \right) \)
  • \( f\left( y;\pi \right) ={ \pi }^{ y }{ \left( 1-\pi \right) }^{ 1-y } \) for \( y\in \left\{ 0,1 \right\} \) and \( 0\le \pi \le 1 \)

Suppose we have a set of variables \({ x }_{ 1 },{ x }_{ 2 },\dots ,{ x }_{ p }\), \(p + 1\) coefficients \({ \beta }_{ 0 },{ \beta }_{ 1 },\dots ,{ \beta }_{ p }\), and a link function \(g\left( \cdot \right) \). Then:

$$ g\left( { \pi }_{ i } \right) ={ \beta }_{ 0 }+{ \beta }_{ 1 }\times { x }_{ i1 }+{ \beta }_{ 2 }\times { x }_{ i2 }\dots +{ \beta }_{ p }\times { x }_{ ip }={ \beta }_{ 0 }+\sum _{ j=1 }^{ p }{ { \beta }_{ j }\times { x }_{ ij } } \quad \quad \quad i=1,\dots ,n $$

It can be shown that:

$$ g\left( { \pi }_{ i } \right) =\log { \frac { { \pi }_{ i } }{ 1-{ \pi }_{ i } } } =b_{ 0 }+\sum _{ j=1 }^{ p }{ { b }_{ j }\times { x }_{ ij } } \quad \quad \quad i=1,\dots ,n $$

Consequently, the link function, defined as the logarithm of the ratio between the default probability and the likelihood of being a performing borrower, i.e., the logarithm of the odds, is:

$$ logit\left( { \pi }_{ i } \right) =log\frac { { \pi }_{ i } }{ 1-{ \pi }_{ i } } $$

Inverting the logit, the default probability \({ \pi }_{ i }\) can be expressed as:

$$ { \pi }_{ i }=\frac { 1 }{ 1+{ e }^{ -\left( { b }_{ 0 }+{ \Sigma }_{ j=1 }^{ p }{ b }_{ j }\times { x }_{ ij } \right) } } \quad \quad \quad i=1,\dots ,n $$

With one explanatory variable, the \(LOGIT\) function becomes:

$$ \frac { { \pi }_{ i } }{ 1-{ \pi }_{ i } } ={ e }^{ \left( { \beta }_{ 0 }+{ \beta }_{ 1 }{ x }_{ i1 } \right) }={ e }^{ { \beta }_{ 0 } }\times { \left( { e }^{ { \beta }_{ 1 } } \right) }^{ { x }_{ i1 } } $$

$$ { e }^{ { \beta }_{ 1 } }=\frac { Odds\quad after\quad a\quad unit\quad change\quad in\quad the\quad predictor }{ Original\quad odds } $$
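A minimal sketch of the inverse logit and the odds-ratio interpretation of \(e^{\beta_1}\), using hypothetical coefficients:

```python
import math

# Hypothetical coefficients of a one-variable logit model.
b0, b1 = -4.0, 2.5

def pd_logit(x):
    """Default probability from the inverse logit (logistic) function."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))

# Odds before and after a one-unit change in the predictor.
x = 0.8
odds_before = pd_logit(x) / (1 - pd_logit(x))
odds_after = pd_logit(x + 1) / (1 - pd_logit(x + 1))

print(round(odds_after / odds_before, 4), round(math.exp(b1), 4))  # both equal e^{b1}
```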

The results of the logistic regression are rescaled through the following steps (a short sketch follows the list):

  • Computing the average default rate produced by the logistic regression on the development sample (\(\pi\));
  • Converting this sample average default rate into the sample's average odds: \(Odds=\frac { \pi }{ 1-\pi } \);
  • Computing the average default rate of the population and the corresponding population odds;
  • Computing unscaled odds from each borrower's logistic regression default likelihood;
  • Multiplying the unscaled odds by the scaling factor: \(Scaled\quad Odds=Unscaled\quad Odds\times \frac { Pop\quad Odds }{ Sample\quad Odds } \); and
  • Converting the scaled odds into scaled default probabilities (\({ \pi }_{ s }\)): \({ \pi }_{ s }=\frac { Scaled\quad Odds }{ 1+Scaled\quad Odds } \)
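A sketch of these rescaling steps applied to a vector of model PDs, assuming sample and population default rates for illustration:

```python
import numpy as np

# Unscaled PDs produced by the logistic regression on the development sample (assumed).
pd_unscaled = np.array([0.01, 0.05, 0.20, 0.50])

sample_dr = 0.30       # average default rate in the (oversampled) development sample
population_dr = 0.02   # average default rate in the target population

sample_odds = sample_dr / (1 - sample_dr)
population_odds = population_dr / (1 - population_dr)

# Rescale borrower-level odds by the ratio of population odds to sample odds.
odds_unscaled = pd_unscaled / (1 - pd_unscaled)
odds_scaled = odds_unscaled * (population_odds / sample_odds)

# Convert the scaled odds back into scaled default probabilities.
pd_scaled = odds_scaled / (1 + odds_scaled)
print(np.round(pd_scaled, 4))
```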

Unsupervised Techniques for Variance Reduction and Variables’ Association

Unsupervised statistical techniques are crucial when segmenting portfolios and when exploring borrowers' preliminary statistical characteristics and the properties of variables.

For a database with observations in rows and variables in columns:

  • Cluster analysis operates on the rows, aggregating obligors based on the profile of their variables.
  • Principal component analysis, canonical correlation analysis, and factor analysis operate on the columns, optimally transforming a set of variables into a smaller set of significant ones.

Cluster Analysis

The aim of cluster analysis is to explore whether groups of similar cases are observable, based on distance measures applied to the features of the observations. To be applicable, clusters depend on the algorithms used to define them and on the economic meaning of the extracted aggregations. There are two approaches to clustering:

Hierarchical Clustering:

Creates a hierarchy of clusters, aggregating them case by case to form a tree-like structure in which the root represents the entire population and clusters are formed from the leaves.

Divisive Clustering:

This approach works in the opposite direction: it starts from the root and divides the population into clusters using algorithms that assign each observation to the cluster with the nearest centroid.
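A brief sketch of both approaches on synthetic data using scikit-learn (an assumed choice of library; `AgglomerativeClustering` for the hierarchical case and `KMeans` for centroid-based partitioning):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans

# Synthetic borrower features (rows = obligors, columns = variables).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(30, 3)),
               rng.normal(3.0, 1.0, size=(30, 3))])

# Hierarchical (agglomerative) clustering: builds the tree from the leaves up.
hier_labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)

# Centroid-based partitioning: assigns each observation to the nearest centroid.
kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

print(np.bincount(hier_labels), np.bincount(kmeans_labels))
```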

Cash Flow Simulations

The simulation of an organization's cash flows sits midway between structural and reduced form models. It is based on forecasting a firm's pro-forma financial reports and observing the volatility of future performances. The model is partly statistical and partly based on numerical simulation, which implies that, depending on the model's aim and design, the definition of default can be provided either endogenously or exogenously.

These models are based on codified steps producing inter-temporal specifications of future pro-forma financial reports. The scenarios consider, first, the amount of cash flows generated by operations and applied to financial obligations and other investments, together with their determinants, and second, complete future pro-forma specifications.

The likelihood of default can be obtained through either a scenario approach or a numerical simulation model. In the scenario approach, likelihoods are attached to pre-defined discrete cases. In the numerical simulation model, a large number of model iterations describes different scenarios; default or non-default is determined in each iteration, and the relative frequency of the different states is then computed.
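A minimal numerical simulation sketch in which default is flagged whenever simulated operating cash flow falls short of debt service (all parameters are assumed):

```python
import numpy as np

rng = np.random.default_rng(42)

n_scenarios = 100_000
horizon = 3                     # years of pro-forma projections
debt_service = 8.0              # annual interest plus principal repayment (assumed)

# Operating cash flow: assumed normal around a base level with some volatility.
base_cash_flow, volatility = 10.0, 3.0
cash_flows = rng.normal(base_cash_flow, volatility, size=(n_scenarios, horizon))

# A scenario defaults if cash flow falls below debt service in any year.
defaulted = (cash_flows < debt_service).any(axis=1)

pd_simulated = defaulted.mean()  # relative frequency of default scenarios
print(f"simulated PD over {horizon} years = {pd_simulated:.4f}")
```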

Heuristic and Numerical Approaches

In credit management, two main approaches are used:

  1. Heuristic methods: Human decision-making procedures are imitated, and properly calibrated rules are applied to determine solutions in sophisticated environments.
  2. Numerical methods: The objective is to reach optimal solutions by adopting algorithms trained to take decisions in highly sophisticated environments characterized by inefficient, redundant, and fuzzy information.

Expert Systems

These are software solutions that attempt to provide an answer in cases where human experts would otherwise need to be consulted. Building these systems requires:

  • The creation of a knowledge base; and
  • Ensuring that knowledge is collected and coded according to a defined framework.

Typically, the components of this system are:

  1. The knowledge base: similar to a database containing facts, measures, and rules for the decision-making process; it is based on production rules, whose hierarchical items are often integrated with probabilities \(p\) and utilities \(u\).
  2. The working memory: contains data on the problem to be solved; it is the virtual space where rules are combined and final solutions are determined.
  3. The inferential engine: understanding inference rules is crucial to comprehending how expert systems work and are used; these rules were designed to substitute human-based processes through the application of mechanical and automatic tools.
  4. The user's communication and interface.

Neural Networks

They comprise interconnected artificial neurons that imitate features of real biological neurons. These hierarchical nodes are joined into a network by mathematical models that exploit the connections, performing a mathematical transformation of the data at each node and adopting a fuzzy logic approach.

Here, the input data \({ x }_{ j }\) are multiplied by weights, and the sum of these products:

  • becomes the argument of a flexible mathematical function; and
  • may enter the specific computation of some nodes while being ignored by others.

Each neuron node is connected to the previous one and, with no feedback, delivers its input to the node that follows in a continuous, ordered flow. The nonlinear, weighted sum of inputs is therefore defined as:

$$ f\left( x \right) =k\left( \sum _{ i }^{ }{ { w }_{ i }{ g }_{ i }\left( x \right) } \right) $$

where \(k\) is a pre-defined (activation) function.
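A toy sketch of this weighted-sum-and-transform idea, using a logistic activation as the pre-defined function \(k\) and arbitrary weights:

```python
import numpy as np

def k(z):
    """Pre-defined activation function (logistic, chosen here for illustration)."""
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer: each hidden node transforms a weighted sum of the inputs,
# and the output node combines the hidden outputs, f(x) = k(sum_i w_i * g_i(x)).
W_hidden = np.array([[0.4, -0.2, 0.1],
                     [0.3,  0.5, -0.4]])   # 2 hidden nodes, 3 inputs
w_output = np.array([0.7, -0.6])

def network(x):
    g = k(W_hidden @ x)       # g_i(x): hidden node outputs
    return k(w_output @ g)    # f(x): output of the network

x = np.array([0.2, 1.0, -0.5])  # a borrower's input variables (illustrative)
print(round(float(network(x)), 4))
```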

Practice Questions

1) Suppose that Armenia Markets has $230 million in assets and was issued a 10-year loan three years ago. The face value of the debt is $700 million. The expected return in the risky world is 11%, and the instantaneous asset volatility is 18%. Compute the default probability following the Merton approach, applying the Black-Scholes-Merton formula.

  A. 0.6028
  B. 0.1822
  C. 0.8311
  D. 0.2438

The correct answer is C.

$$ PD=N\left( \frac { ln\left( F \right) -ln\left( { V }_{ A } \right) -\mu T+\frac { 1 }{ 2 } { \sigma }_{ A }^{ 2 }\times T }{ { \sigma }_{ A }\sqrt { T } } \right) $$

From the question, we have that:

\(F=\$700,000,000\);

\({ V }_{ A }=\$230,000,000\);

\(T=7\);

\({ \sigma }_{ A }=0.18\); and

\(\mu =0.11\).

Therefore:

$$ PD=N\left( \frac { ln700,000,000-ln230,000,000-7\times 0.11+{ 1 }/{ 2 }\times { 0.18 }^{ 2 }\times 7 }{ 0.18\times \sqrt { 7 } } \right) $$

$$ \Rightarrow N(0.9583) = P(Z < 0.9583) = 0.8311 $$

