
Explain and calculate variance, standard deviation, and coefficient of variation

Variance of a Discrete Random Variable

The variance of a discrete random variable is the sum of each squared value the variable can take times the probability of that value occurring, minus the square of the sum of each value times its probability, as shown in the formula below:

$$ Var\left( X \right) =\sum { { x }^{ 2 }p\left( x \right) -{ \left[ \sum { { x }p\left( x \right) } \right] }^{ 2 } } $$

Or written in another way, as a function of \(E(X)\), then:

$$ Var\left( X \right) =E\left( { X }^{ 2 } \right) -{ \left[ E\left( X \right) \right] }^{ 2 } $$

Often \(E(X)\) is written as \(\mu\) and therefore variance can also be shown as in the formula below:

$$ Var\left( X \right) =\sum { { \left( x-\mu \right) }^{ 2 }p\left( x \right) } $$

Note that given the constants \(\alpha\) and \(\beta\), we have:

$$Var(\alpha X)=\alpha^2 \bullet Var(X)$$

and

$$Var(\alpha X+\beta)=\alpha ^2 \bullet Var(X)$$

since the variance of a constant is 0.

Example: Calculating the Variance of a Discrete Random Distribution

Given the experiment of rolling a single fair six-sided die, calculate \(Var(X)\), where \(X\) is the number rolled.

Solution

We know that:

$$ \text{Var}\left( \text{X} \right) = \text{E} \left({ \text{X} }^{2} \right) - { \left[ \text{E}\left( \text{X} \right) \right] }^{2} $$

\( E(X) = 1 \bullet ({1}/{6})+2 \bullet ({1}/{6})+ 3 \bullet ({1}/{6})+ 4 \bullet ({1}/{6})+ 5 \bullet ({1}/{6})+ 6 \bullet ({1}/{6}) = 3.5 \)

\( E(X^2) = 1 \bullet ({1}/{6})+2^2 \bullet ({1}/{6})+ 3^2 \bullet ({1}/{6})+ 4^2 \bullet ({1}/{6})+ 5^2 \bullet ({1}/{6})+ 6^2 \bullet ({1}/{6}) = {91}/{6} \)

\( Var \left(X\right) = \left(91/6 \right) - \left(3.5 \right)^2 = {35}/{12} = 2.92 \)

We could also calculate \(Var(X)\) as:

$$ \text{Var}\left( \text{X} \right)=\sum _{ \forall \text{x} }^{ }{ \left(\text{x}-{ \mu } \right)^{2} \text{p}\left( \text{x} \right) } $$

where \( E \left(X \right) = \mu \).

\begin{align*}
Var \left(X \right) = & (1-3.5)^2 \bullet (1/6) + (2-3.5)^2 \bullet (1/6)+(3-3.5)^2 \bullet (1/6)+(4-3.5)^2 \bullet (1/6)+ \\
& (5-3.5)^2 \bullet (1/6)+(6-3.5)^2 \bullet (1/6) = 2.92 \\
\end{align*}
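As a quick numerical check of this example (a minimal sketch, assuming Python with NumPy is available; the variable names are illustrative), both formulas return the same value:

```python
import numpy as np

x = np.arange(1, 7)           # faces of a fair six-sided die
p = np.full(6, 1/6)           # each face is equally likely

mean = np.sum(x * p)                       # E(X) = 3.5
var_shortcut = np.sum(x**2 * p) - mean**2  # E(X^2) - [E(X)]^2
var_central = np.sum((x - mean)**2 * p)    # sum of (x - mu)^2 p(x)

print(mean, var_shortcut, var_central)     # 3.5 2.9166... 2.9166...
```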

Variance of a Continuous Random Variable

The variance of a continuous random variable is shown in the formula below:

$$ Var\left( X \right) =\int _{ -\infty }^{ \infty }{ { x }^{ 2 }f\left( x \right) dx } -{ \left[ \int _{ -\infty }^{ \infty }{ { x }f\left( x \right) dx } \right] }^{ 2 } $$

where \(f(x)\) is the probability density function of \(x\).

As in the discrete case, it can also be written as:

$$ Var\left( X \right) =E\left( { X }^{ 2 } \right) -{ \left[ E\left( X \right) \right] }^{ 2 } $$

And also,

$$ Var\left( X \right) =\int _{ -\infty }^{ \infty }{ { \left( x-\mu \right) }^{ 2 }f\left( x \right) dx } $$

Example: Calculating Variance of a Continuous Random Variable

Given the following probability density function of a continuous random variable:

$$ f\left( x \right) =\begin{cases} \frac { x }{ 2 } , & 0 < x < 2 \\ 0, & otherwise \end{cases} $$

Calculate \(Var(X)\).

Solution

We know that:

$$ \text{Var}\left( \text{X} \right) = \text{E} \left({ \text{X} }^{2} \right) - { \left[ \text{E}\left( \text{X} \right) \right] }^{2} $$

Therefore,

$$
\begin{align*}
& E\left( X \right) =\int _{ -\infty }^{ \infty }{ xf\left( x \right) dx= } \int _{ 0 }^{ 2 }{ x\bullet \frac { x }{ 2 } \bullet dx } ={ \left[ \frac { { x }^{ 3 } }{ 6 } \right] }_{ x=0 }^{ x=2 }=\frac { 8 }{ 6 } =\frac { 4 }{ 3 } \\
& E\left( X^2 \right) =\int _{ -\infty }^{ \infty }{ x^2 f\left( x \right) dx= } \int _{ 0 }^{ 2 }{ x^2 \bullet \frac { x }{ 2 } \bullet dx } ={ \left[ \frac { { x }^{ 4 } }{ 8 } \right] }_{ x=0 }^{ x=2 }=2 \\
& Var \left(X \right) = 2 - {\left({4}/{3}\right)}^{2} = {2}/{9} \\
\end{align*}
$$

Alternatively,

$$ Var\left( X \right) =\int _{ -\infty }^{ \infty }{ { \left( x-\mu \right) }^{ 2 }f\left( x \right) dx= } \int _{ 0 }^{ 2 }{ { \left( x-\frac { 4 }{ 3 } \right) }^{ 2 }\bullet \frac { x }{ 2 } \bullet dx } ={ \left[ \frac { { x }^{ 4 } }{ 8 } -\frac { 4 }{ 9 } { x }^{ 3 }+\frac { 4 }{ 9 } { x }^{ 2 } \right] }_{ x=0 }^{ x=2 }=\frac { 2 }{ 9 } $$
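Both routes can also be verified numerically. The sketch below (assuming Python with SciPy installed; illustrative only) integrates the density with `scipy.integrate.quad`:

```python
from scipy.integrate import quad

f = lambda x: x / 2                         # density on (0, 2)

mean, _ = quad(lambda x: x * f(x), 0, 2)    # E(X) = 4/3
ex2, _ = quad(lambda x: x**2 * f(x), 0, 2)  # E(X^2) = 2

print(ex2 - mean**2)                        # Var(X) = 0.2222... = 2/9
```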

Standard Deviation

The standard deviation, often written as \(\sigma\), of either a discrete or continuous random variable, can be defined as:

$$ S.D.\left( X \right) =\sigma =\sqrt { Var\left( X \right) } $$

Example: Calculating the Standard Deviation

Using the example above, we found that:

$$ \text{Var}\left( \text{X} \right)= \cfrac{2}{9} \Rightarrow \text{S.D.}\left( \text{X} \right) = { \sigma }= \sqrt {\cfrac{2}{9}} = 0.4714 $$

Coefficient of Variation

The coefficient of variation of a random variable can be defined as the standard deviation divided by the mean (or expected value) of \(X\), as shown in the formula below:

$$ C.V.= {\frac {\sigma}{\mu}} $$

Example: Calculating Coefficient of Variation

Using the example above, we have:

\({ \sigma }= \sqrt{\cfrac{2}{9}} = 0.4714 \)

\({ \mu }= \cfrac{4}{3}\)

Thus,

$$ \text{C.V.}=\cfrac{ \sigma }{ \mu }= \cfrac{ 0.4714 }{ 4/3 } = \cfrac{ \sqrt{2} }{ 4 } \approx 0.3536 $$

Skewness and Kurtosis

The variance of \(X\) is often referred to as the second moment of \(X\) about the mean, or the second central moment. The third central moment of \(X\) is referred to as the skewness, and the fourth is called the kurtosis. In general, the \(\text{m}^{th}\) central moment of \(X\) can be calculated from the following formula:

$$ \text{m}^{th} \text{ moment} \left( \text{X} \right)=\int _{ -\infty }^{ \infty }{ \left(\text{x}-{ \mu } \right)^{ \text{m}} \text{f}\left( \text{x} \right) \text{dx} } $$

Example: Calculating the Skewness

Given the following probability density function of a continuous random variable:

$$ f\left( x \right) =\begin{cases} \frac { x }{ 2 } , & 0 < x < 2 \\ 0, & otherwise \end{cases} $$

Calculate the skewness.

Solution

The skewness is calculated as the third moment of \(X\) about the mean. More specifically,

$$ \text{Skewness} \left( \text{X} \right)=\int _{ -\infty }^{ \infty }{ \left(\text{x}-{ \mu } \right)^{ 3 } \text{f}\left( \text{x} \right) \text{dx} } $$

Now,

$$ \mu =\int _{ -\infty }^{ \infty }{ \text{x}\text{f}\left( \text{x} \right) \text{dx} } =\int _{ 0 }^{ 2 }{ \text{x}\bullet \frac { \text{x} }{ 2 } \text{dx} } ={ \left[ \frac { { \text{x} }^{ 3 } }{ 6 } \right] }_{ \text{x}=0 }^{ \text{x}=2 }=\frac { 8 }{ 6 } =\frac { 4 }{ 3 } $$

$$ \begin{align*} \text{Skewness} \left( \text{X} \right)&=\int _{ -\infty }^{ \infty }{ \left(\text{x}-{ \mu } \right)^{ 3 } \text{f}\left( \text{x} \right) \text{dx} } \\ &=\int _{ 0 }^{ 2 }{ \left(\text{x} -\cfrac{4}{3}\right)^{3} }\times \cfrac{\text{x}}{2} \ \text{dx} \\ &=\int _{ 0 }^{ 2 }{\left(\cfrac{ { \text{x} }^{4}}{2}-2{ \text{x} }^{3}+ \cfrac{8{ \text{x} }^{2}}{3}-\cfrac{32 \text{x}}{27} \right)} \text{dx} \\ &=\left[\cfrac{{\text{x}}^{5}}{10}-\cfrac{{\text{x}}^{4}}{2}+\cfrac{{8\text{x}}^{3}}{9}-\cfrac{{16\text{x}}^{2}}{27}\right]_{\text{x}=0}^{\text{x}=2} \\ &=-\cfrac{8}{135} \end{align*} $$
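As a sanity check (again a sketch assuming Python with SciPy), numerical integration reproduces \(-8/135 \approx -0.0593\):

```python
from scipy.integrate import quad

f = lambda x: x / 2      # density on (0, 2)
mu = 4 / 3               # E(X), computed above

# third central moment: integral of (x - mu)^3 f(x) over the support
skew, _ = quad(lambda x: (x - mu)**3 * f(x), 0, 2)
print(skew)              # -0.05925... = -8/135
```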

Discrete Probability Distributions

Bernoulli Distribution

A random variable ­\(X\) is said to be a Bernoulli random variable if its probability mass function is given by:

$$ \text{f}\left( \text{x} \right)=\begin{cases} \text{p}, & {\text{X}}=1 \\ 1-\text{p}, & {\text{X}}=0 \end{cases} $$

Where \(0\le{\text{p}}\le1\).

A Bernoulli random variable describes an experiment whose result can be either a success (when \(X=1\)), which occurs with probability \(p\), or a failure (when \(X=0\)), which occurs with probability \(1-p\).

The expected value and the variance of a Bernoulli random variable are given below:

$$ \text{E}\left( \text{X} \right)=\text{p} $$

And

$$ \text{Var}\left( \text{X} \right)=\text{p}\left(1-\text{p}\right) $$

Binomial Distribution

A binomial random variable counts the number of successes in a collection of Bernoulli random variables. A binomial experiment is performed \(n\) times, each trial with a probability of success, \(p\), and of failure, \(1-p\).

The probability of success, \(p\), is the same throughout the experiment, and the trials are independent.

The probability mass function of a binomial distribution is given by:

$$ p\left( x \right) =\left( \begin{matrix} n \\ x \end{matrix} \right) { p }^{ x }{ \left( 1-p \right) }^{ n-x } $$

The mean and variance of a binomial distribution are given by:

$$ \begin{align*} E\left( X \right) & =np \\ Var\left( X \right) & =np\left( 1-p \right) \\ \end{align*} $$

The binomial distribution can thus be seen as the sum of \(n\) independent Bernoulli trials.

Example: Calculating the Variance of a Binomial Distribution

One out of 6 students at a local training institute says they skip lunch during the lunch break. Find the variance of the number of students who skip lunch if 10 students are randomly selected.

Solution

$$ \text{Var}\left( \text{X} \right)={ \sigma }^{2}=\text{np}\left(1-\text{p}\right) $$

\( \text{n}= 10 \)

\( \text{p}= \cfrac{1}{6}=0.1667 \)

$$ 1-\text{p}= \left(1-\cfrac{1}{6}\right) = 0.8333 $$

$$ \text{Var}\left( \text{X} \right)={ \sigma }^{2}=\left(10\right)\left(\cfrac{1}{6}\right)\left(\cfrac{5}{6}\right)=\cfrac{25}{18} $$
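A quick check with SciPy's `binom` (an illustrative sketch, not part of the syllabus):

```python
from scipy.stats import binom

dist = binom(n=10, p=1/6)
print(dist.mean())   # 1.6666... = np
print(dist.var())    # 1.3888... = 25/18 = np(1-p)
```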

Negative Binomial Distribution

The negative binomial distribution consists of independent trials, each having a probability \(p\) (where \(0<\text{p}<1\)) of being a success, performed until \(r\) successes are accumulated. If we denote the number of trials needed by \(X\), then the PMF of a negative binomial distribution is given by:

$$ P\left( X=x \right) =\left( \begin{matrix} x-1 \\ r-1 \end{matrix} \right) { p }^{ r }{ \left( 1-p \right) }^{ x-r },\quad x=r,r+1,\ldots $$

The mean and variance of a negative binomial distribution are given by:

$$ E\left( X \right) =\frac { r }{ p } $$

And

$$ Var\left( X \right) =\frac { r\left( 1-p \right) }{ p^{ 2 } } $$

A negative binomial experiment has the following properties:

  • The experiment entails \(X\) repeated trials.
  • Each trial can result in just two possible outcomes; success or failure.
  • The probability of success, denoted by \(p\), is the same on every trial.
  • The trials are independent. As such, the outcome of one trial does not affect the outcomes of other trials.
  • The experiment continues until \(r\) successes are observed, where \(r\) is specified in advance.

Thus, a negative binomial random variable refers to the number \(X\) of repeated trials such that \(r\) successes are produced.

For instance, if we flip a coin repeatedly and count the number of tosses until it lands on heads two times, we are conducting a negative binomial experiment.

Example: Negative Binomial Distribution

A person conducting telephone surveys must get 5 more completed surveys before their job is finished. There is a 15% chance of reaching an adult who will complete the survey on each randomly dialed call.

Find the probability that the 5th completed survey occurs on the 12th call, then find the mean and the variance of the distribution.

Solution

In this case:

\(p= 0.15\)

\(r= 5\)

We want \(\text{P}(\text{X}=12)\).

Now using the PMF formula:

$$ \begin{align*} \text{p}\left( \text{x} \right)&=\left(\begin{matrix} \text{x}-1 \\ \text{r}-1 \end{matrix} \right) {\left(1-\text{p}\right)}^{\text{x}-\text{r}} {\text{p}}^{\text{r}} \\ \text{p}\left( \text{x} \right)&=\left(\begin{matrix} 12-1 \\ 5-1 \end{matrix} \right) {\left(1-0.15\right)}^{12-5} {0.15}^{5} \\ \text{p}\left( \text{x} \right)&=\left(\begin{matrix} 11 \\ 4 \end{matrix} \right) \left(0.85\right)^{7} \left(0.15\right)^{5} \\ \text{P}\left( \text{X}=12\right)&=0.0080334615\approx0.0080 \end{align*} $$

Now finding the mean:

$$ \begin{align*} \text{E}\left( \text{X} \right)&={ \mu }=\cfrac{\text{r}}{\text{p}} \\ { \mu }&=\cfrac{5}{0.15}=33 \cfrac{1}{3} \end{align*} $$

And now finding the variance:

$$ \begin{align*} \text{Var}\left( \text{X} \right) & =\cfrac{\text{r}{\left(1-\text{p}\right)}}{{\text{p}}^{2}} \\ & =\cfrac{5\left(1-0.15\right)}{{0.15}^{2}} =\cfrac{4.25}{0.0225}=188.89 \end{align*} $$
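These results can be reproduced with SciPy's `nbinom`, with the caveat that SciPy counts failures before the \(r\)-th success rather than total trials, so the inputs must be shifted (a hedged sketch, assuming SciPy is available):

```python
from scipy.stats import nbinom

r, p = 5, 0.15
# SciPy's nbinom counts failures before the r-th success,
# so 12 total calls correspond to 12 - r = 7 failures.
print(nbinom.pmf(12 - r, r, p))   # 0.008033... = P(X = 12)
print(nbinom.mean(r, p) + r)      # 33.33... = r/p (failure mean plus r successes)
print(nbinom.var(r, p))           # 188.88... = r(1-p)/p^2
```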

Geometric Distribution

A geometric distribution is a special case of the negative binomial distribution that deals with the number of trials, \(X\), needed for a single success. Therefore, the geometric distribution is a negative binomial distribution where the number of successes, \(r\), is equal to 1. The PMF of a geometric random variable is given by:

$$ P\left( \text{X} =\text{n}\right)={\left(1-\text{p}\right)}^{\left(\text{n}-1\right)} \text{p}, \ \text{n}=1,2,… $$

The mean and variance of a geometric variable are:

$$ \text{E}\left( \text{X} \right)=\cfrac{1}{\text{p}} $$

And

$$ \text{Var}\left( \text{X} \right)=\cfrac{\left(1-\text{p}\right)}{{\text{p}}^{2}} $$

An example would be a case where we are tossing a coin until it lands on heads.
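A minimal SciPy sketch for that coin-tossing case (assuming a fair coin, so \(p = 0.5\); illustrative only):

```python
from scipy.stats import geom

p = 0.5                # fair coin tossed until it lands on heads
print(geom.pmf(3, p))  # 0.125 = (1-p)^2 p, first heads on toss 3
print(geom.mean(p))    # 2.0 = 1/p
print(geom.var(p))     # 2.0 = (1-p)/p^2
```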

Hypergeometric Distribution

A hypergeometric experiment is a statistical experiment with the following properties:

  1. A sample of size \(n\) is randomly selected without replacement from a population of \(N\) items.
  2. In the population, \(M\) items can be classified as successes; \(\text{N}-\text{M}\) items can be classified as failures.

For example, in an experiment, assume you have a basket containing 10 red balls and 10 white balls. You randomly select 6 balls without replacement and count the number of white balls selected. Such a scenario would be a hypergeometric experiment.

The PMF of a hypergeometric distribution is given by:

$$ \begin{align*}
& p\left( x \right) =\frac { \left( \begin{matrix} M \\ x \end{matrix} \right) \left( \begin{matrix} N-M \\ n-x \end{matrix} \right) }{ \left( \begin{matrix} N \\ n \end{matrix} \right) } \\
\end{align*}
$$

The mean and variance of a hypergeometric distribution are given by:

$$ E\left(X \right)={\frac {nM}{N}} $$

And

$$ Var \left(X \right)=\frac{nM \left(N-M \right)\left(N-n\right)}{N^{2} \left(N-1 \right)} $$

Example: Hypergeometric Distribution

In an ordinary deck of playing cards, out of the total cards in a deck, there are 26 red cards. Suppose we randomly select 7 cards without replacement.

Find the probability of getting exactly 3 red cards (that is, diamonds or hearts), as well as the mean and variance.

Solution

\(\text{N}\)= 52; (there are a total of 52 cards in a deck)

\(\text{M}\)= 26 red cards

\(\text{n}\)= 7 (there are 7 randomly selected cards from the deck)

\(\text{x}\)=3 (out of the randomly selected cards, 3 are red)

We know that:

$$ \begin{align*} \text{p}\left( \text{x} \right)&=\cfrac{ \left(\begin{matrix} \text{M} \\ \text{x} \end{matrix}\right)\left(\begin{matrix} \text{N}-\text{M} \\ \text{n}-\text{x} \end{matrix}\right)}{\left(\begin{matrix} \text{N} \\ \text{n} \end{matrix}\right)} \\ \Rightarrow \text{p}\left( \text{x}=3 \right)&=\cfrac{ \left(\begin{matrix} 26 \\ 3 \end{matrix}\right)\left(\begin{matrix} 52-26 \\ 7-3 \end{matrix}\right)}{\left(\begin{matrix} 52 \\ 7 \end{matrix}\right)} \\ &=0.29054\approx0.2905 \end{align*} $$

For the mean,

$$ \text{E}\left( \text{X} \right)={ \mu }=\cfrac{\text{nM}}{\text{N}} = \cfrac{7\times26}{52}=3.5 $$

For the variance, we have:

$$ \begin{align*} \text{Var}\left( \text{X} \right)&=\cfrac{7\times26\left(52-26\right)\left(52-7\right)}{{52}^{2} \left(52-1\right)} \\ &=1.54411764706\approx1.5441 \end{align*} $$
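SciPy's `hypergeom` confirms these values; note that its argument order is population size, number of success states, then number of draws (an illustrative sketch):

```python
from scipy.stats import hypergeom

# SciPy convention: hypergeom(population size, success states, draws)
dist = hypergeom(52, 26, 7)
print(dist.pmf(3))    # 0.2905... = P(exactly 3 red cards)
print(dist.mean())    # 3.5
print(dist.var())     # 1.5441...
```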

Poisson Distribution

A Poisson random variable can be described as the number of events occurring in a fixed time period if the events occur at a known constant rate, \(\lambda\).

It has the following characteristics:

  • It is a discrete distribution.
  • Each occurrence is independent of the other occurrence.
  • Discrete occurrences are described over an interval.
  • The occurrences in each interval range from zero to infinity.
  • The mean number of occurrences must remain constant throughout the experiment.

The Poisson Formula is given as:

$$ \begin{align*}
& p \left(x \right)={\frac {{e}^{-\lambda} {\lambda}^{x}}{x!}} \\
\end{align*}
$$

Where:

\(\text{x}\) = 0, 1, 2, 3, …

\({ \lambda }\) = mean number of occurrences in the interval

\(\text{e}\) = Euler’s number \(\approx\) 2.71828

The mean and variance of a Poisson Random Variable are given by:

$$ \begin{align*}
& E(X) = \lambda \\
& Var \left(X \right) = \lambda \\
\end{align*}
$$

Example: Poisson Distribution

Suppose a filling station can expect two customers every four minutes, on average. What is the probability that four or fewer customers will enter the filling station in 16 minutes?

Solution

$$ \lambda=2\times\cfrac{16}{4}=8 \\ \text{P}\left(\text{X} = \text{x} \right)=\cfrac{{\lambda}^{\text{x}} {\text{e}}^{- \lambda} }{\text{x}!} $$

Therefore,

$$ \begin{align*} \text{P}\left(\text{X} \le 4 \right)&=\text{P}\left(0;8\right)+\text{P}\left(1;8\right)+\text{P}\left(2;8\right)+\text{P}\left(3;8\right)+\text{P}\left(4;8\right) \\ &=\cfrac{{8}^{0}\times{\text{e}}^{-8}}{0!}+ \cfrac{{8}^{1}\times{\text{e}}^{-8}}{1!}+ \cfrac{{8}^{2}\times{\text{e}}^{-8}}{2!}+ \cfrac{{8}^{3}\times{\text{e}}^{-8}}{3!}+ \cfrac{{8}^{4}\times{\text{e}}^{-8}}{4!} \\ &=0.00033546+0.00268370+0.01073480+0.02862614+0.05725229 \\ &=0.09963239\approx0.0996 \\ \\  \text{E}\left( \text{X} \right)&=\lambda=8 \\ \text{Var}\left( \text{X} \right)&=\lambda=8 \end{align*} $$
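The cumulative probability can be confirmed with SciPy's `poisson` (illustrative sketch):

```python
from scipy.stats import poisson

lam = 8
print(poisson.cdf(4, lam))   # 0.0996... = P(X <= 4)
print(poisson.mean(lam))     # 8.0
print(poisson.var(lam))      # 8.0
```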

Uniform Discrete Distribution

A random variable \(X\) has a discrete uniform distribution if each of the \(\text{n}\) values in its range, say \({\text{x}}_{1},{\text{x}}_{2},…,{\text{x}}_{\text{n}} \), has an equal probability, so that the probability mass function (PMF) is \( \text{f}\left({ \text{x} }_{ \text{i} }\right)=\cfrac{1}{\text{n}} \).

Suppose X represents a random variable taking on values of {0,1,2,3,4,5,6,7,8,9}, where each possible value has equal probability. This is a discrete uniform distribution, and each of the 10 possible values has probability \( \text{P}\left( \text{X}={ \text{x} }_{ \text{i} } \right)=\text{f}\left({ \text{x} }_{ \text{i} } \right)=\cfrac{1}{10}=0.10\).

Mean and Variance for a Discrete Uniform Distribution

Given that \(X\) is a discrete uniform random variable on the consecutive integers \( \text{a}, \text{a}+1, \text{a}+2, …, \text{b} \), for \( \text{a} \le \text{b} \), its PMF is:

$$ \text{P}\left( \text{x} \right)=\cfrac{1}{\text{b}-\text{a}+1}=\cfrac{1}{\text{N}} $$

Where \(\text{N}= \text{b}-\text{a}+1\).

Then, the mean of X is:

$$ {\mu}=\text{E}\left( \text{X} \right)=\cfrac{\text{b}+\text{a}}{2} $$

(When \(\text{a}=1\), this equals \(\cfrac{\text{N}+1}{2}\).)

And the variance of \(X\) is:

$$ \text{Var}\left( \text{X} \right)=\cfrac{(\text{b}-\text{a}+2)\left(\text{b}-\text{a}\right)}{12}=\cfrac{(\text{N}^{2}-1)}{12} $$

Example: Uniform Distribution

Suppose a standard six-sided die undergoes a uniform distribution in a statistical experiment of rolling one die. Find its probability, mean, and variance.

Solution

$$ \begin{align*} \text{P}\left( \text{X} \right)&=\cfrac{1}{ \text{N} }=\cfrac{1}{6} \\ \text{E}\left( \text{X} \right)&={ \mu }=\cfrac{\text{N}+1}{2}=\cfrac{6+1}{2}=3.5 \\ \text{Var}\left( \text{X} \right)&=\cfrac{{\text{N}}^{2}-1}{12}=\cfrac{{6}^{2}-1}{12}=2.92 \end{align*} $$
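SciPy's `randint` models this die directly; note that its upper bound is exclusive (illustrative sketch):

```python
from scipy.stats import randint

die = randint(1, 7)    # discrete uniform on {1, ..., 6}; upper bound exclusive
print(die.pmf(3))      # 0.1666... = 1/6
print(die.mean())      # 3.5
print(die.var())       # 2.9166... = (6^2 - 1)/12
```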

Continuous Probability Distributions

Uniform Continuous Distribution

The uniform distribution is a continuous probability distribution concerned with events that are equally likely to occur. If the continuous random variable \(X\) is uniformly distributed, or has a rectangular distribution, on the interval \([a, b]\), we write \(\text{X}∼\text{U}\left(\text{a},\text{b}\right)\), and its probability density function is given by:

$$ \text{f}\left( \text{x} \right)=\begin{cases} \cfrac{1}{\text{b}-\text{a} },\text{ when } \text{a}\le\text{x}\le\text{b} \\ 0, \text{ elsewhere } \end{cases}\\ $$

For the uniform distribution, \(\text{f}\left( \text{x} \right)\) is constant over the possible values of \(x\). Thus,

$$ \text{f}\left( \text{x} \right)=\cfrac{1}{\text{b}-\text{a} } \\ \text{E}\left( \text{X} \right)=\cfrac{\text{b}+\text{a} }{2} \\ \text{Var}\left( \text{X} \right)=\cfrac{ { \left( \text{b}-\text{a} \right) }^{2} }{12} $$

Example: Uniform Continuous Distribution

Assume that the time taken to dial, in seconds, follows a uniform distribution between 5 and 30 seconds inclusive. Calculate the density, the mean, and the variance.

Solution

Its uniform distribution notation is \( \text{X}\sim\text{U}\left(\text{a},\text{b}\right)=\text{X}\sim\text{U}\left(5,30\right) \).

$$ \Rightarrow \text{f}\left( \text{x} \right)=\cfrac{1}{30-5}=\cfrac{1}{25} $$

Therefore,

$$ \text{E}\left( \text{X} \right)=\cfrac{ \text{b}+\text{a} }{2}=\cfrac{30+5}{2}=17.5 $$

And

$$ \text{Var}\left( \text{X} \right)=\cfrac{ { \left( \text{b}-\text{a} \right) }^{2} }{12}=\cfrac{{\left(30-5\right)}^{2}}{12}=52.0833 $$
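SciPy's `uniform` confirms these values; note it is parameterized by `loc = a` and `scale = b - a` (illustrative sketch):

```python
from scipy.stats import uniform

# SciPy parameterizes U(a, b) as uniform(loc=a, scale=b-a)
dist = uniform(loc=5, scale=25)
print(dist.pdf(10))    # 0.04 = 1/25
print(dist.mean())     # 17.5 = (a+b)/2
print(dist.var())      # 52.0833... = (b-a)^2/12
```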

Exponential Distribution

An exponential distribution is often associated with the amount of time until some specific event occurs. For instance, the amount of time (starting now) until a volcanic eruption occurs has an exponential distribution.

A continuous random variable \(X\) follows an exponential distribution if its probability density function is:

$$ \text{f}\left( \text{x} \right)=\begin{cases} \lambda {\text{e}}^{ -\lambda \text{x} } \text{ for }\text{x}>0 \\ 0\text{ for }\text{x} \le 0 \end{cases} $$

Where \( \lambda>0 \) is referred to as the rate of the distribution.

The mean of an exponential random variable is given by:

$$ \text{E}\left( \text{X} \right)=\cfrac{1}{\lambda} $$

And the variance is given by:

$$ \text{Var}\left( \text{X} \right)=\cfrac{1}{{\lambda}^{2}} $$
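A quick check with SciPy's `expon`, which is parameterized by `scale` \(= 1/\lambda\) (an illustrative sketch with an assumed rate of 0.5):

```python
from scipy.stats import expon

lam = 0.5                   # illustrative rate
dist = expon(scale=1/lam)   # SciPy uses scale = 1/lambda
print(dist.mean())          # 2.0 = 1/lambda
print(dist.var())           # 4.0 = 1/lambda^2
```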

Gamma Distribution

The gamma function, denoted by \(\mathrm{\Gamma}\left(\text{x}\right)\), is an extension of the factorial function to real (and complex) numbers. Specifically, for \(\text{n}\in\left\{1,2,3,\ldots\right\}\):

$$ \mathrm{\Gamma}\left(\text{n}\right)=\left(\text{n}-1\right)! $$

Generally, for any positive real number \(\alpha\), \(\mathrm{\Gamma}\left(\alpha\right)\) is defined as:

$$ \mathrm{\Gamma}\left(\alpha\right)=\int_{0}^{\infty}{\text{x}^{\alpha-1}\text{e}^{-\text{x}}\text{dx}},\ \ \ \ \text{ for } \ \alpha>0 $$

For \(\alpha=1\), we have:

$$ \mathrm{\Gamma}\left(1\right)=\int_{0}^{\infty}{\text{e}^{-\text{x}}\text{dx}}=1 $$

By using the change of variable \(\text{x}=\lambda\ \text{y}\), we can show the equation below that is often useful when working with the gamma distribution:

$$ \mathrm{\Gamma}(\alpha)=\lambda^\alpha\int_{0}^{\infty}{\text{y}^{\left(\alpha-1\right)}\text{e}^{-\lambda \text{y}}}\text{dy}\ \ \ \ \ \text{ for }\ \alpha,\ \lambda>0 $$

By using integration by parts, it can be shown that:

$$ \mathrm{\Gamma}\left(\alpha+1\right)=\alpha\Gamma\left(\alpha\right),\ \ \ \ \ \text{ for }\ \alpha>0 $$

If \(\alpha=\text{n}\), where \(\text{n}\) is a positive integer, the equation above reduces to:

$$ \text{n}!=\text{n}\times\left(\text{n}-1\right)! $$

Properties of the Gamma Function

For any positive real number \(\alpha\):

  1. The gamma function, \(\mathrm{\Gamma}(\alpha)\), for \(\alpha>0\), is defined as:

    $$ {\Gamma}\left(\alpha\right)=\int_{0}^{\infty}{\text{x}^{\alpha-1}\text{e}^{-\text{x}}\text{dx}}; $$

  2. It satisfies the recursive property:

    $$ {\Gamma}\left(\alpha\right)=\left(\alpha-1\right)\mathrm{\Gamma}\left(\alpha-1\right) $$

  3. It is related to the factorial function when \(\alpha\)=n and n is a positive integer such that:

    $$ \mathrm{\Gamma}\left(\text{n}\right)=\left(\text{n}-1\right)! $$

  4. For specific values of \(\alpha\), exact values of \(\mathrm{\Gamma}(\alpha)\) exist. For positive integers, \({\Gamma}\left(\text{n}\right)\) is given by property 3: \(\mathrm{\Gamma}\left(\text{n}\right)=\left(\text{n}-1\right)!\). The gamma function evaluated at \(\alpha\ =\frac{1}{2}\) is:

    $$ \mathrm{\Gamma}\left(\frac{1}{2}\right)=\sqrt\pi $$
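As a quick numerical check of properties 2 through 4 (a sketch assuming only Python's standard library; `math.gamma` implements \(\Gamma\)):

```python
import math

print(math.gamma(5))                        # 24.0 = 4!  (property 3)
print(math.gamma(4.5))                      # equals 3.5 * gamma(3.5) ...
print(3.5 * math.gamma(3.5))                # ... (property 2)
print(math.gamma(0.5), math.sqrt(math.pi))  # both 1.7724... (property 4)
```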

PDF of the Gamma Distribution

The probability density function is such that:

$$ \text{f}\left(\text{x}\right)=\frac{\beta^\alpha \text{x}^{\alpha-1}\text{e}^{-\beta \text{x}}}{\mathrm{\Gamma}\left(\alpha\right)},\quad \text{x}>0 $$

Mean

Given that \(\text{X}\sim \mathrm{\Gamma}(\alpha,\ \beta) \) for some \(\alpha,\ \beta>0\), where \(\mathrm{\Gamma}\) is the gamma distribution, the expectation of X (mean \({\mu}\)) is given by:

$$ \text{E}\left(\text{X}\right)=\frac{\alpha}{\beta} $$

Proof:

The gamma distribution has a probability density function of:

$$ \text{f}\left(\text{x}\right)=\frac{\beta^\alpha \text{x}^{\alpha-1}\text{e}^{-\beta \text{x}}}{\mathrm{\Gamma}\left(\alpha\right)} $$

The expected value of a continuous random variable supported on \(\left(0,\infty\right)\) is defined as:

$$ \text{E}\left(\text{X}\right)=\int_{0}^{\infty}{\text{xf}_\text{X}\left(\text{x}\right)\text{dx}}\ $$

Therefore,

$$ \text{E}\left(\text{X}\right)=\frac{\beta^\alpha}{\mathrm{\Gamma}\left(\alpha\right)}\int_{0}^{\infty}{\text{x}^\alpha \text{e}^{-\beta \text{x}}\text{dx}} $$

Let \(t={\beta}x\) so that:

$$ \begin{align*} \text{E}\left(\text{X}\right)&=\frac{\beta^\alpha}{\mathrm{\Gamma}\left(\alpha\right)}\int_{0}^{\infty}{\left(\frac{\text{t}}{\beta}\right)^\alpha \text{e}^{-\text{t}}\frac{\text{dt}}{\beta}} \\ &=\frac{\beta^\alpha}{\beta^{\alpha+1}\mathrm{\Gamma}\left(\alpha\right)}\int_{0}^{\infty}{\text{t}^\alpha \text{e}^{-\text{t}}\text{dt}} \end{align*} $$

By the definition of the gamma function,

$$ \begin{align*} &=\frac{\beta^\alpha}{\beta^{\alpha+1}\mathrm{\Gamma}\left(\alpha\right)}\int_{0}^{\infty}{\text{t}^\alpha \text{e}^{-\text{t}}\text{dt}\ }=\frac{\mathrm{\Gamma}\left(\alpha+1\right)}{\beta\Gamma\left(\alpha\right)}\ \ \ \ \ \\ &=\frac{\alpha\Gamma\left(\alpha\right)}{\beta\Gamma\left(\alpha\right)} \\ &=\frac{\alpha}{\beta} \end{align*} $$

Variance

Given that \( \text{X}\sim {\Gamma}\left({\alpha},{\beta} \right)\) for some \({\alpha},{\beta}>0\), where \({\Gamma}\) is the gamma distribution, the variance of \(X\) is given by:

$$ \text{Var}\left(\text{X}\right)=\frac{\alpha}{\beta^2} $$

Proof:

$$ \text{Var}\left(\text{X}\right)=\text{E}\left(\text{X}^2\right)-\left(\text{E}\left(\text{X}\right)\right)^2=\int_{0}^{\infty}{\text{x}^2\text{f}_\text{X}\left(\text{x}\right)\text{dx}-\left(\text{E}\left(\text{X}\right)\right)^2} $$

Recall that for the gamma distribution, \( \text{E}\left(\text{X}\right)=\frac{\alpha}{\beta}\), so that:

$$ \text{Var}\left(\text{X}\right)=\frac{\beta^\alpha}{\mathrm{\Gamma}\left(\alpha\right)}\int_{0}^{\infty}{\text{x}^{\alpha+1}\text{e}^{-\beta \text{x}}\text{dx}}-\left(\frac{\alpha}{\beta}\right)^2 $$

Now, let \( \text{t}={\beta} \text{x} \) so that:

$$ \begin{align*} &=\frac{\beta^\alpha}{\mathrm{\Gamma}\left(\alpha\right)}\int_{0}^{\infty}{\left(\frac{\text{t}}{\beta}\right)^{\alpha+1}\text{e}^{-\text{t}}\frac{\text{dt}}{\beta}}-\frac{\alpha^2}{\beta^2} \\ &=\frac{\beta^\alpha}{\beta^{\alpha+2}\mathrm{\Gamma}\left(\alpha\right)}\int_{0}^{\infty}{\text{t}^{\alpha+1}\text{e}^{-\text{t}}\text{dt}}-\frac{\alpha^2}{\beta^2} \end{align*} $$

Using the definition of the gamma function, we have:

$$ \begin{align*} &=\frac{\beta^\alpha}{\beta^{\alpha+2}\mathrm{\Gamma}\left(\alpha\right)}\int_{0}^{\infty}{\text{t}^{\alpha+1}\text{e}^{-\text{t}}\text{dt}}-\frac{\alpha^2}{\beta^2}=\frac{\mathrm{\Gamma}\left(\alpha+2\right)}{\beta^2\mathrm{\Gamma}\left(\alpha\right)}-\frac{\alpha^2}{\beta^2} \\ &=\frac{\mathrm{\Gamma}\left(\alpha+2\right)-\alpha^2\mathrm{\Gamma}\left(\alpha\right)}{\beta^2\mathrm{\Gamma}(\alpha)} \\ &=\frac{\alpha\left(\alpha+1\right)\mathrm{\Gamma}\left(\alpha\right)-\alpha^2\mathrm{\Gamma}\left(\alpha\right)}{\beta^2\mathrm{\Gamma}\left(\alpha\right)} \\ &=\frac{\alpha\mathrm{\Gamma}\left(\alpha\right)\left(\alpha+1-\alpha\right)}{\beta^2\mathrm{\Gamma}\left(\alpha\right)} \\ &=\frac{\alpha}{\beta^2} \end{align*} $$
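Both moments can be checked against SciPy's `gamma`, which takes the shape \(\alpha\) and `scale` \(= 1/\beta\) (an illustrative sketch with assumed parameter values):

```python
from scipy.stats import gamma

alpha, beta = 3.0, 2.0                 # illustrative shape and rate
dist = gamma(a=alpha, scale=1/beta)    # SciPy uses scale = 1/beta
print(dist.mean())                     # 1.5 = alpha/beta
print(dist.var())                      # 0.75 = alpha/beta^2
```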

Normal Distribution

A random variable ­\(X\) is said to be a normal random variable with parameters \({ \mu }\) and \({ \sigma }^{2}\) if the density function is given by:

$$ \text{f}\left(\text{x};\mu,\sigma^2\right)=\frac{1}{\sqrt{2\pi\sigma^2}}\text{e}^\frac{{-\left(\text{x}-\mu\right)}^2}{2\sigma^2},\ -\infty<\text{x}<\infty\ $$

Characteristics of a Normal Distribution

  1. It is symmetric, taking a bell-shaped curve.
  2. It is continuous for all values of \(X\) between \(-\infty \) and \(\infty \), so each conceivable interval of real numbers has a probability other than zero.
  3. \(-\infty\le\ \text{X}\le\infty\ \)
  4. It uses two parameters, \({ \mu }\) and \( \sigma \). The \(\text{N}\left(\mu,\ \sigma^2\right) \) notation means normally distributed with mean \( {\mu} \) and variance \(\sigma^2\); writing \( \text{X}\sim \text{N}(\mu,\ \sigma^2) \) means that \(X\) follows this distribution.

Standard Normal Distribution

If a random variable \(X\) is normally distributed with parameters \({ \mu }\) and \({ \sigma^2 }\), then:

$$ \text{Z}=\frac{\text{X}-\mu}{\sigma} $$

is normally distributed with the mean and standard deviation of 0 and 1, respectively, where Z is a standard normal variable.

The density function of a standard normal variable is given by:

$$ \text{f}\left(\text{x}\right)=\frac{1}{\sqrt{2\pi}}\text{e}^\frac{{-\text{x}}^2}{2}\ $$

The cumulative distribution function of a standard normal random variable is usually denoted by \( \Phi\left(\text{x}\right) \). That is:

$$ \Phi\left(\text{x}\right)=\int_{-\infty\ }^{\text{x}}{\ \frac{1}{\sqrt{2\pi}}\text{e}^\frac{{-\text{y}}^2}{2}\ \text{dy}\ } $$

Non-negative values of \( \Phi\left(\text{x}\right) \) are usually tabulated, while values for negative \(\text{x}\) can be obtained by using the following formula:

$$ \ \Phi\left(-\text{x}\right)=1-\ \Phi\left(\text{x}\right),\ \ \ -\infty<\text{x}<\infty $$

Note that the formula above follows from the symmetry property of the standard normal density.

Given that \( \text{Z}=\frac{\text{X}-\mu}{\sigma}\ \) is a standard normal random variable, if \(X\) is normally distributed with parameters \({ \mu }\) and \( \sigma^2 \), then the distribution function of \(X\) can be expressed as:

$$ \text{F}_\text{X}\left(\text{a}\right)=\text{P}\left( \text{X}\le \text{a}\right)=\text{P}\left(\frac{\text{X}-\mu}{\sigma}\le\frac{\text{a}-\mu}{\sigma}\right)=\text{P}\left(\text{Z}\le\frac{\text{a}-\mu}{\sigma}\right)=\Phi\left(\frac{\text{a}-\mu}{\sigma}\right) $$

Example: Normal Distribution Probabilities

If \(X\) is a normal random variable with parameters \({ \mu }=5\) and \(\sigma^2=16\), calculate \( \text{P}\left(3<\text{X}<6\right)\).

Solution

We need to express \(\text{P}\left(3<\text{X}<6\right)\) in terms of the standard normal variable \(Z\):

$$ \begin{align*} \text{P}\left(3<\text{X}<6\right)&=\text{P}\left(\frac{3-5}{4}<\frac{\text{X}-5}{4}<\frac{6-5}{4}\right) \\ &=\text{P}\left(-0.5<\text{Z}<0.25\right) \\ &=\Phi\left(0.25\right)-\Phi(-0.5) \\ &=\Phi\left(0.25\right)-\left[1-\Phi\left(0.5\right)\right] \end{align*} $$

Now using a standard normal table, we have:

$$ =\Phi\left(0.25\right)-\left[1-\Phi\left(0.5\right)\right]=0.5987-\left[1-0.6915\right]=0.2902 $$
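The same probability can be confirmed with SciPy's `norm` (illustrative sketch):

```python
from scipy.stats import norm

mu, sigma = 5, 4    # sigma^2 = 16, so sigma = 4
print(norm.cdf(6, mu, sigma) - norm.cdf(3, mu, sigma))   # 0.2901...
```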

Learning Outcome

Topic 2.d: Univariate Random Variables – Explain and calculate variance, standard deviation, and coefficient of variation.
