A point estimator (P.E) is a sample statistic used to estimate an unknown population parameter. Because it is computed from a random sample, it is itself a random variable and therefore varies from sample to sample. A good example of an estimator is the sample mean X̄, which statisticians use to estimate the population mean, μ. There are three desirable properties every good estimator should possess. These are:

- Unbiasedness
- Efficiency
- Consistency

Let’s now look at each property in detail:

**Unbiasedness**

We say that the P.E β’_{j} is an unbiased estimator of the true population parameter β_{j} if the expected value of β’_{j} is equal to the true β_{j}. Putting this in standard mathematical notation, an estimator is unbiased if:

E(β’_{j}) = β_{j}, and this must hold for any finite sample size *n*. (Unbiasedness is a finite-sample property; it does not rely on *n* becoming large.)

The bias of an estimator is the difference between its expected value and the true value of the parameter, so this difference is zero precisely when the estimator is unbiased, i.e.

E(β’_{j}) – β_{j} = 0 if β’_{j} is unbiased, while a non-zero difference indicates bias. A biased estimator can overshoot or undershoot the true parameter, giving rise to positive or negative bias respectively.

We can prove that the sample mean X̄ is an unbiased estimator of the population mean μ.
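The proof takes one line. Assuming the observations X_1, …, X_N are drawn from a population with mean μ, linearity of expectation gives:

```latex
E(\bar{X})
= E\!\left(\frac{1}{N}\sum_{i=1}^{N} X_i\right)
= \frac{1}{N}\sum_{i=1}^{N} E(X_i)
= \frac{1}{N}\, N\mu
= \mu
```

Note that nothing here depends on the sample size: X̄ is unbiased for any N.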

**Efficiency**

Suppose we have two unbiased estimators, β’_{j1} and β’_{j2}, of the population parameter β_{j}, i.e.

E(β’_{j1}) = β_{j} and E(β’_{j2}) = β_{j}

We say that β’_{j1} is more efficient than β’_{j2} if, for every finite sample size, the variance of the sampling distribution of β’_{j1} is less than that of β’_{j2}.

In short, if we have two unbiased estimators, we prefer the estimator with a smaller variance because this means it’s more precise in statistical terms. It’s also important to note that the property of efficiency only applies in the presence of unbiasedness since we only consider the variances of unbiased estimators.
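To make this concrete, here is a small Monte Carlo sketch (my own illustration, not part of the reading) that assumes NumPy is available. For normally distributed data, the sample mean and the sample median are both unbiased estimators of μ, but the mean has the smaller sampling variance, so it is the more efficient of the two:

```python
import numpy as np

rng = np.random.default_rng(42)

# Draw many samples of size n from a normal population with mean mu
mu, sigma, n, trials = 5.0, 2.0, 30, 20_000
samples = rng.normal(mu, sigma, size=(trials, n))

means = samples.mean(axis=1)          # sample mean of each trial
medians = np.median(samples, axis=1)  # sample median of each trial

# Both estimators are unbiased: their averages sit close to mu = 5.0
print(means.mean(), medians.mean())

# ...but the mean has the smaller sampling variance, so it is more efficient
print(means.var() < medians.var())  # True
```

For a normal population the variance of the sample mean is σ²/n while that of the sample median is roughly πσ²/(2n), which is why the mean wins here; for other population shapes the ranking can differ.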

**Consistency of an Estimator**

Let β’_{j}(N) denote an estimator of β_{j}, where N represents the sample size. We would consider β’_{j}(N) a consistent P.E of β_{j} if its sampling distribution **converges to** or **collapses on** the true value of the population parameter β_{j} as N tends to infinity.

This intuitively means that if a P.E is consistent, its distribution becomes more and more concentrated around the real value of the population parameter involved. We could say that as N increases, the probability that the estimator ‘closes in’ on the actual value of the parameter approaches 1.
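We can watch this happen in a short simulation (my own sketch, assuming NumPy is available): for increasing sample sizes N, it estimates the probability that the sample mean X̄ lands within 0.1 of the true mean μ = 5. That probability climbs toward 1, which is exactly the 'closing in' described above:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, trials = 5.0, 2.0, 2_000

# Estimate P(|sample mean - mu| < 0.1) for increasing sample sizes N
probs = []
for n in (10, 100, 1_000):
    means = rng.normal(mu, sigma, size=(trials, n)).mean(axis=1)
    probs.append(float(np.mean(np.abs(means - mu) < 0.1)))
    print(f"N={n:>5}: P(|mean - mu| < 0.1) ~ {probs[-1]:.3f}")

# The probability rises toward 1 as N grows -- the hallmark of consistency
```

Since the variance of X̄ is σ²/N, the sampling distribution tightens around μ as N grows, so the probability of falling inside any fixed band around μ approaches 1.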

*Reading 11 LOS 11g:*

*Identify and describe desirable properties of an estimator.*