A point estimator (PE) is a sample statistic used to estimate an unknown population parameter. Because it is computed from a sample, it is a random variable and varies from sample to sample. A good example of an estimator is the sample mean, x̄, which statisticians use to estimate the population mean, μ. There are three desirable properties every good estimator should possess. These are:

- unbiasedness;
- efficiency; and
- consistency.

Let’s now look at each property in detail.

We say that the PE β’_{j} is an unbiased estimator of the true population parameter β_{j} if the expected value of β’_{j} is equal to the true β_{j}. Putting this in standard mathematical notation, an estimator is unbiased if:

E(β’_{j}) = β_{j} for any given (finite) sample size *n*.

Bias is the difference between the expected value of the estimator and the true value of the parameter. For an unbiased estimator, this difference is zero; a non-zero difference indicates bias. A biased estimator can, on average, overestimate or underestimate the true parameter, giving rise to positive or negative bias, respectively.
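The definition above can be checked by simulation: average the estimator over many samples and compare that average to the true parameter. This is a minimal sketch, assuming a hypothetical normal population with true mean μ = 10 and using the sample mean as the estimator (the values of μ, σ, n, and the trial count are illustrative, not from the text):

```python
import random
import statistics

random.seed(42)

# Assumed population: normal with true mean mu and std dev sigma
mu, sigma = 10.0, 2.0
n = 30          # sample size
trials = 20_000 # number of repeated samples

# Each trial draws a sample and records its sample mean x̄
sample_means = [
    statistics.mean(random.gauss(mu, sigma) for _ in range(n))
    for _ in range(trials)
]

# The average of the sample means approximates E(x̄);
# for an unbiased estimator this should sit very close to mu
estimated_expectation = statistics.mean(sample_means)
bias = estimated_expectation - mu
```

With enough trials, `bias` hovers near zero, which is exactly the E(β’_{j}) = β_{j} condition in simulated form.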

Suppose we have two unbiased estimators – β’_{j1} and β’_{j2} – of the population parameter β_{j}:

E(β’_{j1}) = β_{j} and E(β’_{j2}) = β_{j}

We say that β’_{j1} is more efficient relative to β’_{j2} if the variance of the sampling distribution of β’_{j1} is less than that of β’_{j2} for all finite sample sizes.

In short, if we have two unbiased estimators, we should give preference to the estimator with a smaller variance because this means it’s more precise in statistical terms. It’s also important to note that the property of efficiency only applies in the presence of unbiasedness since we only consider the variances of unbiased estimators.
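One classic illustration of relative efficiency compares the sample mean and the sample median, both of which are unbiased estimators of the center of a normal population. A minimal sketch, assuming a hypothetical standard normal population (the parameter values are illustrative):

```python
import random
import statistics

random.seed(0)

# Assumed population: standard normal, so both the sample mean
# and the sample median are unbiased estimators of mu = 0
mu, sigma = 0.0, 1.0
n = 25          # sample size
trials = 10_000 # number of repeated samples

means, medians = [], []
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    means.append(statistics.mean(sample))
    medians.append(statistics.median(sample))

# Compare the variances of the two sampling distributions:
# the estimator with the smaller variance is the more efficient one
var_mean = statistics.pvariance(means)
var_median = statistics.pvariance(medians)
```

For normal data the sample mean turns out to have the smaller variance (theory puts the median's variance at roughly π/2 times the mean's), so by the efficiency criterion we would prefer the sample mean.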

Let β’_{j}(N) denote an estimator of β_{j} where N represents the sample size. We would consider β’_{j}(N) a consistent point estimator of β_{j} if its sampling distribution **converges to** or **collapses on** the true value of the population parameter β_{j} as N tends to infinity.

This intuitively means that if a PE is consistent, its distribution becomes more and more concentrated around the real value of the population parameter involved. Therefore, we could say that as N increases, the probability that the estimator ‘closes in’ on the actual value of the parameter approaches 1.
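The 'closing in' behavior can also be seen numerically: the spread of the sampling distribution of x̄ should shrink as N grows. A minimal sketch, again assuming a hypothetical normal population (μ, σ, and the sample sizes are illustrative):

```python
import random
import statistics

random.seed(1)

# Assumed population: normal with true mean mu and std dev sigma
mu, sigma = 5.0, 3.0
trials = 2_000  # repeated samples per sample size


def spread_of_sample_mean(n):
    """Std dev of the sampling distribution of x̄ for sample size n."""
    means = [
        statistics.mean(random.gauss(mu, sigma) for _ in range(n))
        for _ in range(trials)
    ]
    return statistics.pstdev(means)


# As N increases, the distribution of x̄ concentrates around mu
# (theory: the std dev of x̄ falls like sigma / sqrt(N))
spreads = {n: spread_of_sample_mean(n) for n in (10, 100, 1000)}
```

The measured spreads decrease monotonically with N, which is the consistency property: the probability mass of the estimator piles up ever closer to the true parameter.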

*Reading 10 LOS 10g:*

*Identify and describe desirable properties of an estimator.*