
A point estimator (PE) is a sample statistic used to estimate an unknown population parameter. Because it is computed from sample data, it is a random variable and therefore varies from sample to sample. A good example of an estimator is the sample mean, \(\bar{x}\), which helps statisticians estimate the population mean, \(\mu\). There are three desirable properties every good estimator should possess. These are:

- Unbiasedness.
- Efficiency.
- Consistency.

Let us now look at each property in detail.

We say that a PE \(\hat{\beta}_j\) is an unbiased estimator of the true population parameter \(\beta_j\) if the expected value of \(\hat{\beta}_j\) is equal to the true \(\beta_j\). Putting this in standard mathematical notation, an estimator is unbiased if:

\(E(\hat{\beta}_j) = \beta_j\) for any sample size *n*.

Bias is the difference between the expected value of the estimator and the true value of the parameter: \(\text{Bias}(\hat{\beta}_j) = E(\hat{\beta}_j) - \beta_j\). For an unbiased estimator this difference is zero, so a non-zero difference indicates bias. A biased estimator can overstate or understate the true parameter, giving rise to positive or negative bias, respectively.
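The point can be checked with a short simulation (a hypothetical sketch, not part of the original text; the uniform population, sample size, and trial count below are illustrative choices). Averaging the sample mean over many repeated samples should land very close to the true population mean, i.e., the estimated bias should be near zero:

```python
import random

# Illustrative population: uniform on [0, 10], so the true mean is mu = 5.
random.seed(42)
mu = 5.0
n = 30          # sample size for each draw
trials = 20000  # number of repeated samples

estimates = []
for _ in range(trials):
    sample = [random.uniform(0, 10) for _ in range(n)]
    estimates.append(sum(sample) / n)  # sample mean, x-bar

# Bias = E(estimator) - true parameter; close to zero for an unbiased estimator.
bias = sum(estimates) / trials - mu
print(abs(bias) < 0.1)
```

The same loop with a deliberately biased statistic (say, dividing by \(n + 5\) instead of \(n\)) would produce a clearly non-zero bias.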

Suppose we have two unbiased estimators, \(\hat{\beta}_{j1}\) and \(\hat{\beta}_{j2}\), of the population parameter \(\beta_j\):

\(E(\hat{\beta}_{j1}) = \beta_j\) and \(E(\hat{\beta}_{j2}) = \beta_j\)

We say that \(\hat{\beta}_{j1}\) is more efficient than \(\hat{\beta}_{j2}\) if the variance of the sampling distribution of \(\hat{\beta}_{j1}\) is less than that of \(\hat{\beta}_{j2}\) for all finite sample sizes.

In short, if we have two unbiased estimators, we prefer the one with the smaller variance because it is more precise in statistical terms. It is also important to note that efficiency is only assessed in the presence of unbiasedness: comparing variances is meaningful only between estimators that are already unbiased.

Let \(\hat{\beta}_j(n)\) denote an estimator of \(\beta_j\), where *n* represents the sample size. We would consider \(\hat{\beta}_j(n)\) a consistent point estimator of \(\beta_j\) if it **converges in probability to** the true value of the population parameter \(\beta_j\) as *n* tends to infinity.

Intuitively, this means that if a PE is consistent, its sampling distribution becomes more concentrated around the true value of the population parameter as *n* grows. Equivalently, as *n* increases, the probability that the estimator falls within any fixed distance of the true parameter value approaches 1.
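Consistency can be made concrete with a small simulation (a hypothetical sketch; the population, tolerance \(\varepsilon\), and sample sizes below are illustrative). The fraction of sample means landing within \(\varepsilon\) of the true mean should rise toward 1 as the sample size grows:

```python
import random

random.seed(1)
mu = 5.0        # true mean of a uniform [0, 10] population
eps = 0.5       # tolerance around the true value
trials = 4000   # repeated samples per sample size

def hit_rate(n):
    """Fraction of sample means within eps of the true mean, for sample size n."""
    hits = 0
    for _ in range(trials):
        xbar = sum(random.uniform(0, 10) for _ in range(n)) / n
        hits += abs(xbar - mu) < eps
    return hits / trials

small, large = hit_rate(10), hit_rate(200)
print(small < large)  # concentration around mu improves as n grows
```

With these numbers, the hit rate climbs from well under one-half at \(n = 10\) to nearly 1 at \(n = 200\), which is exactly the 'closing in' behavior described above.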